Python password manager
Project description
Pysafe is a terminal-based password manager with a very easy to use CLI.
Features:
- Generate a unique encryption key to be used as a master password.
- Generate new passwords for new login entries.
- Encrypt all saved passwords using the generated encryption key.
- Show all your saved login credentials at the click of a button.
- A ready-to-export CSV file for an easy backup experience.
- Master reset to delete the saved encryption key and all saved login entries.
How to install:
On Mac:
Open Terminal and type:
python3 -m pip install pysafe
On Windows:
Open PowerShell and type:
python3 -m pip install pysafe
How to use:
On Mac:
Open Terminal and type:
python3
import pysafe
On Windows:
Open CMD and type:
python3
import pysafe
General usage:
- Generate a new encryption key: You only need to do this once. Save your encryption key somewhere safe, as it will be used as your master password.
Note: Encrypted credentials are linked to the generated encryption key. Avoid generating a new key while you have existing login credentials, as this would result in losing the ability to decrypt them.
- Use previous encryption key: Here you can use a previously generated encryption key. The encryption key is 44 characters long.
- Save a new login: After saving your encryption key, you can enter as many login entries as you desire, with the ability to generate new passwords along the way or enter a password of your choosing. The program will save them for you in a CSV file. The passwords in that CSV file are encrypted and can only be viewed using the generated encryption key mentioned earlier.
- Show saved logins: The program will print out the saved logins using the saved encryption key and saved CSV file, so you can see them in plain text.
- Generate a new password: This gives you the ability to generate a new password without having an encryption key or entering any additional information such as a website title or link. You only need to enter the length of the password and the program will do the rest.
- Show locations of saved files: This shows you the absolute paths of the files generated by the program.
- Delete all data: Use this in case you wish to have a fresh start, without any encryption key or login credentials file.
Note: This will permanently delete your encryption key and saved login credentials, so only do it if you know what you are doing.
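A note on the key format: 44 characters is exactly the length of 32 random bytes after urlsafe base64 encoding, the key format used by common symmetric-encryption schemes such as Fernet. The sketch below only illustrates where such a key length comes from; this page does not show pysafe's actual key generation.

```python
import base64
import os

# Illustrative sketch: 32 random bytes, urlsafe-base64 encoded,
# yield a 44-character string -- the key length pysafe reports.
key = base64.urlsafe_b64encode(os.urandom(32)).decode("ascii")
print(len(key))  # 44
```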
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
(Source: https://pypi.org/project/pysafe/)
Opened 11 years ago
Closed 6 years ago
#7670 closed defect (invalid)
notebook -- evidently only the first 6 characters are significant???
Description
Hi, There is a password issue with sage notebook account. Please read below: Sameer

On Fri, Dec 11, 2009 at 1:22 PM, Sameer Regmi <> wrote:
> On Fri, Dec 11, 2009 at 1:16 PM, Ondrej Certik <> wrote:
>> On Fri, Dec 11, 2009 at 1:12 PM, Sameer <> wrote:
>>> Hi I have found a weird issue with FEMhub online lab account. Let's
>>> say my password is "nevada". Then whenever I enter any text (in
>>> password field) with nevada as the prefix it will login. That means if
>>> I enter nevada123 (or whatever as the suffix) it will login.
>>
>> Seems like a bug in the Sage notebook. Could you please try to verify
>> this against sagenb.org and if the problem is in there as well,
>> could you please report it to the sage notebook list?
>
> Exactly! Its the bug in Sage notebook. The issue is there in sagenb.org too.
> I even can login with "nevad" if the password is of nevada. I am
> reporting to sage notebook list
>
> Sameer
Change History (9)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
But crypt supports whatever the OS's underlying crypt(3) supports. We could instead do, e.g.,
import crypt as c, random as r
salt = repr(r.random())[2:]
# '77551456940940877'
c.crypt('abcdefgh', '$6$' + salt + '$')
# '$6$7755145694094087$uW0RGjvJG3I.BDFKIAieUTPZkD4IGI6b8RtLt1fZ9czR0TefjriLwRGPItgPyZogDFsy.YorN24v2GM4YrBwK0'
c.crypt('abcdefghi', '$6$' + salt + '$')
# '$6$7755145694094087$txEQuYAJlZ.042gqmPTeLSczXBv1sI6kSjzpbmU7o89rh.Tk7qUGHhLHtL1GIrVXmUdFrQBuIefktTTptuEq31'
If Linux and Mac OS X, at least, both support SHA-512, I suggest we use it by default. Should we generate each user's pseudo-random "salt" --- used to avoid clustering --- differently than above?
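Whatever crypt variant is chosen, the essential property is that the full password is hashed with a per-user salt and the complete digests are compared. Below is a portable sketch of that idea using hashlib in place of crypt(3); the function names are mine, not sagenb's, and the `$6$` framing only mimics the format discussed above.

```python
import hashlib
import secrets

def hash_password(password, salt=None):
    # Hash the *full* password together with a per-user random salt.
    salt = salt or secrets.token_hex(8)
    digest = hashlib.sha512((salt + password).encode()).hexdigest()
    return "$6$" + salt + "$" + digest

def check_password(password, stored):
    # Re-hash with the stored salt and compare complete digests,
    # so no mere prefix of the real password can match.
    _, _, salt, _ = stored.split("$")
    return hash_password(password, salt) == stored

stored = hash_password("nevada")
print(check_password("nevada", stored))     # True
print(check_password("nevada123", stored))  # False
```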
- Milestone changed from sage-6.4 to sage-duplicate/invalid/wontfix
- Reviewers set to Karl-Dieter Crisman
- Status changed from new to needs_review
I cannot replicate this, and it is so old I am going to ask to close this.
comment:8 Changed 6 years ago by
- Status changed from needs_review to positive_review
comment:9 Changed 6 years ago by
- Resolution set to invalid
- Status changed from positive_review to closed
Note: See TracTickets for help on using tickets.
Could the problem be sagenb.notebook.user.User's use of crypt?
(Source: https://trac.sagemath.org/ticket/7670)
Each restaurant or business has its own opening hours. To store that information and display it for each one on a website, there are several techniques, including using custom fields. In this tutorial, we're going to add an Opening Hours section to the single restaurant page, as in this example:
Before Getting Started
To store extra information such as time slots for each day, we'll use custom fields. So we need the Meta Box plugin, which provides a framework to create them. It's free and available on wordpress.org. Furthermore, we also need some advanced features that are available in the following Meta Box extensions:
- MB Views: a premium extension of Meta Box. It helps you create templates and display the custom fields’ value on the frontend without touching the theme’s files;
- Meta Box Builder: this premium extension of Meta Box provides an intuitive UI in the back end to create and configure custom fields without using code;
- MB Group: this extension helps to arrange fields into groups;
- MB Conditional Logic: this premium extension allows you to create rules to control the display of the fields.
After installing and activating all these plugins, let’s move to the how-to parts right now.
Video Tutorial
Step 1: Create Custom Fields
Normally, we would create a new custom post type for the restaurants first. I skipped that step since I already had it on my website. If you don't have one yet, create it first, then come back to this step.
Go to Meta Box > Custom Fields > Add New to create a new field group. Then, add custom fields with the structure like this:
As you saw in the table above, the first field is a select field and there are 3 options that I filled in the Choice box as below.
These 3 options will be used to set the display rule of other group fields.
To set conditions to groups, for example, with the All days have the same opening hours group. Go to the Advanced tab, and then set the rule in the Conditional Logic section like this:
This rule means this group will be displayed when the Choose an Option field is set to the All days have the same opening hours option, which has the value all_days_are_the_same.
Similarly, if the Choose an Option field is set to the Difference between weekdays and weekend option, all the subfields of the Weekdays and Weekend group will be displayed. Or, if the Choose an Option field is set to the Custom option, the group fields for every day of the week will show up. That's the concept of how to create the fields.
In each group field, I also have subfields with the same structure.
The Type of Opening Hours field is the select field. I have 4 options in the Choice box:
- Open All day;
- Close All day;
- By Appointment Only;
- Enter Hours.
If the restaurant has different opening hours, you can choose Enter Hours to display Start Time and End Time field in the Choose Time Slots group. So, I will set the rule for this group like this:
In case the restaurant opens in multiple time slots, we’ll need this group to be cloneable. So, I tick this box as below:
After creating all the fields, go to the Settings tab of the field group, choose Location as Post Type, and select Restaurant to apply these fields to this post type.
Publish this field group and go to the post editor in Restaurant, you will see the custom fields here.
They work exactly like the rule we set.
Step 2: Display the Fields’ Value on the Frontend
In this step, we’ll get the input data from those custom fields to display into the Restaurant single page. To do it, we’ll use MB Views in order not to touch the theme’s file.
If you’ve had a template created by MB Views for the Restaurant single page already, just go to it and edit. If not, create one. Then, add code to the view.
The code is quite long, so I put it on my GitHub here. You can refer to it for more details. The code is divided into several parts to get the corresponding group data. Because all parts follow the same concept, I'll explain a typical part to make the logic clearer.
<div class="opening-hours">
    <h2> Opening Hours </h2>
    <div class="detail-schedule">
        {% set options = post.choose_an_options.value %}
        {% if (options == "all_days_are_the_same") %}
            {% set same_days = post.all_days_have_the_same_opening_hours.type_of_opening_hours.value %}
            {% if ( same_days == "enter_hours") %}
                <div class="date-time">
                    <div class="date"> Mon - Sun </div>
                    <div class="hour">
                        {% for clone in post.all_days_have_the_same_opening_hours.choose_time_slots %}
                            <div class="time-slot">
                                <p class="starting-hour"> {{ clone.start_time | date( 'h:i A' ) }} </p>
                                <p class="ending-hour"> {{ clone.end_time | date( 'h:i A' ) }} </p>
                            </div>
                        {% endfor %}
                    </div>
                </div>
            {% else %}
                <div class="date-time">
                    <div class="date"> Mon - Sun </div>
                    <div class="hour"> {{ post.all_days_have_the_same_opening_hours.type_of_opening_hours.label }} </div>
                </div>
            {% endif %}
Here I created a variable named options to hold the value from the Choose an Option field. Then I set a rule based on its value to display values from the subfields in the corresponding group. It's much the same as the rules we created when configuring the fields.
First, if its value is all_days_are_the_same, the values of the fields in the All days have the same opening hours group will be displayed.
I set a variable named same_days to hold the value from the Type of Opening Hours field. If it is enter_hours, the Choose Time Slots group will be read and its data displayed. As we set before, this group is cloneable, so there is a loop here.
After adding code, depending on if the view is new or not, choose the right Type and Location for the view in the Settings section.
Go to the single Restaurant page and see how the opening hours section displays.
All the time slots are shown exactly as I input in the backend. But this section doesn't look as good as I want. So, I will add some CSS to style this section in the next step.
Step 3: Style the Opening Hours Section
Still, in the view of the single Restaurant page, go to the CSS tab.
And then, add the code below.
.container.detail-restaurant .opening-hours {
    max-width: 300px;
    margin: 0 auto;
    width: 100%;
}
.container.detail-restaurant .opening-hours h2 {
    margin: 0 0 10px;
    text-align: center;
    border: 4px solid rgba(231,57,72,.85);
    border-right: 0;
    border-left: 0;
}
.container.detail-restaurant .detail-schedule .date-time {
    display: flex;
    width: 100%;
    justify-content: space-between;
    flex-wrap: wrap;
    align-items: baseline;
}
.container.detail-restaurant .detail-schedule .date-time .hour .starting-hour:after {
    content: '-';
    margin: 0 5px;
}
.container.detail-restaurant .detail-schedule .date-time .hour .time-slot {
    display: flex;
}
.container.detail-restaurant .detail-schedule .date-time .date {
    font-weight: 700;
    font-size: 15px;
}
.container.detail-restaurant .detail-schedule {
    display: flex;
    flex-wrap: wrap;
    justify-content: center;
    border-bottom: 4px solid rgba(231,57,72,.85);
    padding-bottom: 10px;
}
Now, go back to the single restaurant page, you’ll see the Opening Hours section looks much more beautiful.
Last Words
If you’re running an online food order platform or own a business that has several branches with different time zones, this tutorial may make sense to you. Try it out and share the results in the comment section. For other page builders, we’ll have some upcoming tutorials. By the way, if you have any idea about any other tutorials, feel free to leave a comment. Thank you for reading. Good luck!
(Source: https://practicaldev-herokuapp-com.global.ssl.fastly.net/wpmetabox/how-to-display-opening-hours-for-restaurants-p1-using-meta-box-gutenberg-4a4i)
Answering my own question here. It looks like the right approach would be for my application to function as a PowerShell Host, where use of PowerShell V2 objects is defined.
You can use the :! vim command. For example, to echo 'Hello, World!' from inside vim (and therefore from within a vim script, also), type

:! echo 'Hello, World\!'

in vim. Or, in a vim script, you can put just

! echo 'Hello, World\!'

The reason you need the \ before the ! is that vim performs special handling of ! characters in the argument of a ! command. If you were running a command that does not include any ! character, then you would not need to escape it.
If you want to read more in depth about this, you can type
in vim, as @FDinoff said.
Variables are supposed to contain data, and bash treats them as data. This
means that shell meta-characters like quotes are also treated as data.
See this article for a complete discussion on the topic.
The short answer is to use arrays instead:
ASCIIDOC_OPTS=( --asciidoc-opts='-a lang=en -v -b docbook -d book' )
DBLATEX_OPTS=( --dblatex-opts='-V -T db2latex' )
cmd=(a2x -v -f pdf -L "${ASCIIDOC_OPTS[@]}" "${DBLATEX_OPTS[@]}" "$1".asciidoc)

# Print command in pastable format:
printf '%q ' "${cmd[@]}"
printf '\n'

# Execute it
"${cmd[@]}"
Make sure not to use eval:
eval "$cmd" #noooo
This will appear to work with your code as you posted it, but has caveats
and security problems.
Are you sure DB2 is blocking? Did you put a semicolon between the commands?

db2 CONNECT TO ktest4 ; db2 -v -f /tmp/sql/application_system/opmdb2_privilege_remove.sql.5342

To trace the execution, I advise you to add some output so you can detect where it blocks:

date ; db2 -r /tmp/output.log CONNECT TO ktest4 ; db2 -r /tmp/output.log values current timestamp ; db2 -r /tmp/output.log -v -f /tmp/sql/application_system ; db2 -r /tmp/output.log values current timestamp ; db2 -r /tmp/output.log terminate

With a command like this, you will save all the output, and then you can check where the error is.
Since I'm running a shell script, it uses a sub-shell so it cannot access
the parent shell's environment.
Problem solved running the shell script in this way:
. ./script.sh.
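The difference can be demonstrated with a throwaway script (the file name /tmp/demo_env.sh is arbitrary):

```shell
unset FOO
echo 'export FOO=bar' > /tmp/demo_env.sh

# Running the script in a child process cannot change this shell:
bash /tmp/demo_env.sh
echo "after bash: ${FOO:-unset}"

# Sourcing runs it in the current shell, so FOO becomes visible:
. /tmp/demo_env.sh
echo "after source: $FOO"
```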
For an SQL script you must ..
either have the data inlined in your script (in the same file).
or you need to utilize COPY to import the data into Postgres.
I suppose you use a temporary staging table, since the format doesn't seem
to fit the target tables. Code example:
How to bulk insert only new rows in PostgreSQL
There are other options like pg_read_file(). But:
Use of these functions is restricted to superusers.
Intended for special purposes.
ideally just put the commands in quotes like this:
ssh admin@10.x.x.x '/sbin/arp -an | grep lanx' > lanx
or
ssh admin@10.x.x.x '/sbin/arp -an' | grep lanx > lanx
The other problem might be that the user admin on your machine does not have arp in PATH (is he root? arp is usually in /sbin/, and /sbin/ is usually not in the PATH of a regular user).
Well, in my experience, Java is not very fond of being run by means of a script.
I did a quick google search. Try ProcessBuilder; it looks perfect for this situation, in my opinion.
I hope this helps you! :)
You are on the right track, and almost got it. You just need to use "$@"
instead of $@.
Here's a summary of what $* and $@ do, with and without quotes:
- $* and $@ paste in the positional arguments, then tokenise them (using $IFS) into separate strings.
- "$*" pastes in the positional arguments as one string, with the first character of $IFS (usually a space) inserted between each.
- "$@" pastes in the positional arguments, as one string for each argument.
Examples:
$ set foo bar "foo bar:baz"
$ printf "%s\n" $*
foo
bar
foo
bar:baz
$ printf "%s\n" $@
foo
bar
foo
bar:baz
$ printf "%s\n" "$*"
foo bar foo bar:baz
$ printf "%s\n" "$@"
foo
bar
foo bar:baz
Here's what changes when you set $IFS:

$ IFS=:
$ printf "%s\n" $*
foo
bar
foo bar
baz
$ printf "%s\n" $@
foo
bar
foo bar
baz
$ printf "%s\n" "$*"
foo:bar:foo bar:baz
You have to escape your double quotes in the string:

exec("awk -F: '{ printf \"\n\", $1, $2 }' < /opt/lampp/htdocs/$filename > /opt/lampp/htdocs/2.txt");
PuTTY is an interactive command line. Try the below; bash variables can be used.

#!/bin/bash
su - mqm -c "echo 'DISPLAY QLOCAL (<QUEUENAME>) CURDEPTH' | runmqsc QUEUEMANAGER"
To store results in a string array, you can use this sample block of code in place of the commented code, adjusting the SQL statement accordingly. You can also store the results in a DataTable or in a multidimensional array.

sqlComm = New System.Data.SqlClient.SqlCommand("select Column1 from table", sqlConn)
Dim sqlReader As System.Data.SqlClient.SqlDataReader
Dim arrList As New ArrayList()
sqlReader = sqlComm.ExecuteReader
Do While sqlReader.Read
    arrList.Add(sqlReader.GetString(0))
Loop
Sudo normally requires an interactive shell to enter your password. That's
obviously not going to happen in a PHP script. If you're sure you know what
you're doing and you've got your security issues covered, try allowing the
Apache user to run sudo without a password, but only for certain commands.
For example, adding the following line in your sudoers file will allow
Apache to run sudo without a password, only for the gzip command.
nobody ALL=NOPASSWD: gzip
Adjust the path and add any arguments to suit your needs.
Caution:
There might still be complications due to the way PHP calls shell
commands.
Remember that it's very risky to allow the web server to
run commands as root!
Another alternative:
Write a shell script with the suid bit to make it run as root no matter who calls it.
Yes. See the documentation, and there specifically use
mysql_query()
instead of
mysql_subquery()
You should use PDO or mysqli instead of mysql; the mysql extension is already deprecated.
I think it is a little bit overcomplicated. Actually, you do not need to start bash, as it is started when you start ./executeServer.sh. Add executable rights to ./executeServer.sh and start it as it is. And you do not need to start it in the background, as it already starts Server in the background in the script.
This tcl script uses regex parsing to extract pieces of the commandline,
transforming your third argument into a list.
Splitting is done on whitespaces - depending on where you want to use this
may or may not be sufficient.
#!/usr/bin/env tclsh
#
# Sample arguments: 1 2 {element1 element2} 4
# Split the commandline arguments:
# - tcl will represent the curly brackets as \{ which makes the regex a bit
#   ugly as we have to escape this
# - we use '->' to catch the full regex match as we are not interested
#   in the value and it looks good
# - we are splitting on white spaces here
# - the content between the curly braces is extracted
regexp {(.*?)\s(.*?)\s\{(.*?)\}\s(.*?)$} $::argv -> first second third fourth

puts "Argument extraction:"
puts "argv: $::argv"
puts "arg1: $first"
puts "arg2: $second"
I finally figured this out and thought it would be a good idea to come back
and share the answer. First here is the file that works.
# Required Blender information.
bl_info = {
    "name": "My Exporter",
    "author": "",
    "version": (1, 0),
    "blender": (2, 65, 0),
    "location": "File > Export > Test (.tst)",
    "description": "",
    "warning": "",
    "wiki_url": "",
    "tracker_url": "",
    "category": "Import-Export"
}

# Import the Blender required namespaces.
import sys, getopt
import bpy
from bpy_extras.io_utils import ExportHelper

# The main exporter class.
class MyExporter(bpy.types.Operator, ExportHelper):
    bl_idname = "export_scene.my_exporter"
    bl_label = "My
Here is a simple Python script that pipes the output of a locally executed
command (dir on a Windows computer in this case) via a web request (using
the excellent web.py library):
import web
from subprocess import check_output

urls = (
    '/', 'index'
)
app = web.application(urls, globals())

class index:
    def GET(self):
        return '<pre>' + check_output('dir', shell=True) + '</pre>'

if __name__ == "__main__":
    app.run()
Some links in Facebook call ajax functions, so most of the time clicking a link on Facebook doesn't change the page; it just loads content via ajax, and your content script executes only once.
There are two possible solutions:
You can add a DOM change listener like DOMSubtreeModified and run the code every time the content changes.
Or you can add CSS code to the page, so the code doesn't need to be re-run even when the page is changed via ajax.
Like this,
manifest.json
{
"name": "NewsBlock",
"description": "NewsBlock - Facebook Newsfeed Hider",
"version": "1.0.0",
"background": {
"scripts": ["background.js"]
},
"permissions": [
"tabs", "http://*/*", "https://*/*", "storage"
],
"browser_action": {
"default_title": "NewsBlock",
"default_icon": "icon.png"
Your tcl script should write on stdout:
set a 10
set b 20
set c [expr {$a + $b}]
puts $c
If your tcl script is supposed to output multiple lines:
e.g.:
puts $c
puts $c
puts $c
you can capture them in a PHP array:
e.g.:
$array = array();
function print_procedure($arg) {
    echo exec("/opt/tcl8.5/bin/tclsh test.tcl", $array);
    //print_r($array);
}
I wouldn't try to use sed to edit XML. Unless there is some constant to match on (that lets you decide if you are on a /1 or /2), it'll be hard (I would say impossible, but some sed guru will chime in and prove me wrong...).
If you can't install anything, something like an awk or perl script that lets you keep a minimum of state would work better than sed.
First you should check that the argument exists, by checking argc. Then simply assign it to a char * if you can't use argv[x] directly (which you should be able to do).
Take a look at /apps/ins/.profile and the files it executes - such files
usually have a conditional that exits the script early if it's not run in a
terminal.
For example:
[ -z "$PS1" ] && return
This may change the behavior of your script (or even skip it, if exit is
used in place of return). At the very least, you will miss aliases,
possible PATH changes, and other things set up in the .profile script,
which will affect if and how your main script is run.
Try to comment the line . /apps/ins/.profile in the script above, and see
if it still runs in the terminal. If it does, run it like that from crontab
and see if it fixes your problem.
Usually, a process sent to the background with & goes into the stopped state when it waits for input from the terminal.
E.g., take a bash script valecho:

#!/bin/sh
read val
echo $val

Running it as:

./valecho &

the script will stop. When you run it as:

echo hello | ./valecho &

it will correctly run and finish.
So, check your php - it probably wants some input from stdin.
Edit - based on comment:
I'm not a php developer, but I just tried the next script (p.php):

<?php
while (true) {
    echo 'hi ' . time() . "\n";
    sleep(3);
}
?>

with the command:

php -f p.php &

and it runs nicely... so... sorry for the confusion...
You can use subprocess module.
subprocess is a newer way to spawn processes rather than using os.spawn*()
or os.system().
In your case:
import subprocess
subprocess.Popen(["ds9"])
This should run ds9 in background.
See the documentation here.
You can use echo, but in a pipeline.
echo 1000 | java myProgram
If you want to send a file, you can use cat:
cat file.txt | java myProgram
(Source: http://www.w3hello.com/questions/-Execute-Command-Line-Script-from-ASP-Page-)
Can someone tell me whats wrong with my view.xml???
<?xml version="1.0" encoding="utf-8"?>
<openerp>
    <data>
        <record id="products_tree" model="ir.ui.view">
            <field name="name">prods.prods.name</field>
            <field name="model">prods.prods</field>
            <field name="arch" type="xml">
                <tree string="prods_tree" version="7.0">
                    <field name="name" />
                    <field name="code" />
                    <field name="sku" />
                    <field name="description" />
                    <field name="salepricewithvat" />
                    <field name="group" />
                    <field name="id" />
                    <field name="applicationid" />
                </tree>
            </field>
        </record>
        <record model="ir.actions.act_window" id="action">
            <field name="name">prods.prods.action</field>
            <field name="res_model">prods.prods</field>
            <field name="view_type">tree</field>
            <field name="view_mode">tree</field>
        </record>
        <menuitem id="main_item" name="List Of Products" icon="terp-partner"/>
        <menuitem id="main_item_child" name="Products" parent="main_item"/>
        <menuitem id="main_item_option" name="Products List" parent="main_item_child" action="action" />
    </data>
</openerp>
I keep getting a validation error, but it looks okay to me: Error occurred while validating the field(s) arch: Invalid XML for View Architecture!
what is with this framework. i found a deprecated variable self._table in orm.py. next to it it says something like # this is deprecated use _model instead but _table is still used in the class?
The log file will give you more specific details about the error - it should point you to a tag mismatch or an unknown field attribute.
File "C:\OpenERP 7.0-20140221-003045\Server\server\openerp\addons\base\ir\ir_ui_view.py", line 126, in _check_render_view
AttributeError: 'NoneType' object has no attribute 'fields_view_get'
2014-03-23 18:51:43,832 7956 ERROR absyla openerp.tools.convert: Parse error in:
<record id="products_tree" model="ir.ui.view">
i dunno been at this the last few days. a basic module shouldnt involve this much time. its ridiculous
In addition to basic syntax checks, OpenERP also checks that all fields in your view are in your model definition - is that the case - you haven't shared your Python code.
Please, keep on with the same question like in, if you want to find a solution.
i will upload the python this evening but iam ppretty sure the field names are the exact same as what I have in the _columns dictionary
from openerp.osv import osv, fields
class Products(osv.Model):
    _name = 'prods.prods'
    _columns = {
        'name': fields.text('Prod Name', size=100),
        'code': fields.text('Code'),
        'sku': fields.text('SKU', size=30),
        'description': fields.text('Description'),
        'salepricewithvat': fields.float('Sale Price + VAT'),
        'group': fields.float('Group'),
        'id': fields.text('ID'),
        'applicationid': fields.text('App ID'),
    }
    _table = 'prods_prods'
that is my model class above
i honestly cant see whats wrong with this
"Naming Convention" problems for module name and Model Name !!!
cool. thanks! Ill have a look at that in the morning see. if it resolves my issue
yeah that doesnt make a difference really. my modules all lowercase one word. meplugin. might just try unistall the openerp server.
Changed the names of the files all to lower case and got past this error.
(Source: https://www.odoo.com/forum/help-1/question/can-someone-tell-me-whats-wrong-with-my-view-xml-47205)
Quote:Ok, well, i needed a 3 syllable word beginning with p- I'm sure there's better ones but damned if I could think of them
public class SanderRossel : Lazy<Person>
{
    public void DoWork()
    {
        throw new NotSupportedException();
    }
}
Mark_Wallace wrote:I find that thinking is completely unnecessary, when posting to the Lounge.
RyanDev wrote:I think you're right
Sander Rossel wrote:Does one really have to think to post a thought of the day?
Manfred R. Bihy wrote:is a process that takes 5+ years as you have to work
RyanDev wrote:If you mix in manure you can get good soil in a matter of months actually
RyanDev wrote:If you mix in manure you can get good soil in a matter of months actually.
Manfred R. Bihy wrote:No, believe me if you start with no topsoil left you will not get it back that quickly.
Quote:Note: It is best to let a sheet mulch sit at least a few weeks before you plant it or even a winter season if you can. Sheet mulch will be a lot more productive in the second and third year than in the first.
RyanDev wrote:I've done it. It works.
Manfred R. Bihy wrote: It wouldn't have worked for us.
(Source: http://www.codeproject.com/Lounge.aspx?fid=1159&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=4435451&fr=1)
In Google Kubernetes Engine, when you create an Ingress object, the GKE ingress controller creates a Google Cloud Platform HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.
GCP network services
- An Ingress can configure GCP features such as Google-managed SSL certificates (Beta), Google Cloud Armor, Cloud CDN, and Cloud IAP.
- You can provision your own SSL certificate and create a certificate resource in your GCP project.
Limitations
The total length of the namespace and name of an Ingress must not exceed 55 characters. Because readiness probes are used as health checks, the Pods for an Ingress must exist at the time of Ingress creation. If your replicas are scaled to 0, the default health check will apply. For more information, see this issue comment.
Changes to a Pod's
readinessProbe do not affect the Ingress after it is
created.
Multiple backend services
An HTTP(S) load balancer provides one stable IP address that you can use to route requests to a variety of backend services.
In the Service manifest, notice that the type is NodePort. This is the required type for an Ingress that is used to configure an HTTP(S) load balancer.
GCP backend service
A Kubernetes Service and a GCP backend service are different things. There is a strong relationship between the two, but the relationship is not necessarily one to one. The GKE ingress controller creates a GCP backend service for each (serviceName, servicePort) pair in an Ingress manifest. So it is possible for one Kubernetes Service object to be related to several GCP backend services.
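As an illustration, an Ingress with two (serviceName, servicePort) pairs would get two GCP backend services. The names below are made up, and the apiVersion reflects the pre-networking.k8s.io era this page describes:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /web
        backend:
          serviceName: web-service   # a NodePort Service
          servicePort: 8080
      - path: /api
        backend:
          serviceName: api-service   # another NodePort Service
          servicePort: 9090
```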
Support for GCP features
You can use a BackendConfig to configure an HTTP(S) load balancer to use features like Google Cloud Armor, Cloud CDN, and Cloud IAP.
BackendConfig is a custom resource that holds configuration information for GCP features.
For examples, see Ingress manifests and Configure Domain Names with Static IP Addresses.
Setting up HTTPS (TLS) between client and load balancer
An HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, the load balancer needs a certificate so it can prove its identity to them. To provide the certificate and key to the load balancer, you can specify the name of a Kubernetes Secret in the tls field of your Ingress manifest. The Secret, which you created previously, holds the certificate and key. You can also enable Cloud IAP for your workloads.
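A sketch of such a manifest (the Secret and Service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-tls-ingress
spec:
  tls:
  - secretName: my-certs   # Secret created earlier, holding tls.crt and tls.key
  backend:
    serviceName: web-service
    servicePort: 8080
```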
(Source: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress?hl=ar)
How to properly document a S3 method of a generic from a different package, using Roxygen?
I am writing a package that defines a new class, surveyor, and a
print method for it, print.surveyor. My code works fine and I use roxygen for inline documentation. But
R CMD check issues a warning:
Functions/methods with usage in documentation object 'print.surveyor' but not in code: print
I have used the following two pages, written by Hadley, as inspiration:
Namespaces and Documenting functions, both of which states that the correct syntax is
@method function-name class
So my question is: What is the correct way of documenting the print.surveyor method?
Here is my code: (The commented-out documentation indicates attempts at fixing this, none of which worked.)
#' Prints surveyor object.
#'
#' Prints surveyor object
#'
## #' @usage print(x, ...)
## #' @aliases print print.surveyor
#' @param x surveyor object
#' @param ... ignored
#' @S3method print surveyor
print.surveyor <- function(x, ...){
    cat("Surveyor\n\n")
    print.listof(x)
}
And the roxygenized output, i.e.
print.surveyor.Rd:
\name{print.surveyor}
\title{Prints surveyor object.}
\usage{print(x, ...) #'}
\description{Prints surveyor object.}
\details{Prints surveyor object #'}
\alias{print}
\alias{print.surveyor}
\arguments{\item{x}{surveyor object}
\item{...}{ignored}}
As of roxygen2 > 3.0.0, you only need
@export because roxygen can figure out that
print.surveyor is an S3 method. This means that you now only need
#' Prints surveyor object.
#'
#' @param x surveyor object
#' @param ... ignored
#' @export
print.surveyor <- function(x, ...){
    cat("Surveyor\n\n")
    print.listof(x)
}
However, in this case, since the documentation isn't very useful, it'd probably be better to just do:
#' @export
print.surveyor <- function(x, ...){
    cat("Surveyor\n\n")
    print.listof(x)
}
Update
As of roxygen2 > 3.0.0 the package has gotten a lot smarter at figuring all this out for you. You now just need the
@export tag and roxygen will work out what kind of thing you are documenting and do the appropriate thing when writing the
NAMESPACE etc during conversion.
There are exceptions where you may need to help out roxygen. An example that Hadley Wickham uses in his R Packages book is
all.equal.data.frame. There is ambiguity in that function name as to what is the class and what is the generic function (
all,
all.equal, or
all.equal.data)?
In such cases, you can help roxygen out by explicitly informing it of the generic and class/method, e.g.
@method all.equal data.frame
The original answer below explains more about the older behaviour if you need to explicitly use
@method.
Original
The function should be documented with the
@method tag:
#' @method print surveyor
On initial reading, @hadley's document was a little confusing for me as I am not familiar with roxygen, but after several readings of the section, I think I understand the reason why you need
@method.
You are writing full documentation for the print method.
@S3method is related to the
NAMESPACE and arranges for the method to be exported.
@S3method is not meant for documenting a method.
Your Rd file should have the following in the
usage section:
\method{print}{surveyor}(x, ...)
This should work correctly, as that is the correct way to document S3 methods in Rd files.
@export only works if the generic is loaded. If the generic is in another package you need to import the generic. With current roxygen this is solved with a block like
#' @importFrom tibble data_frame
#' @export
tibble::data_frame
taken from dplyr/R/reexport-tibble.r. In this example, the
data_frame function is imported from the tibble package, and tibble::data_frame is exported. Such re-exported objects are then documented in a
reexports.Rd file that - needless to say - satisfies R CMD check.
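Putting the pieces together for the question as asked: an S3 method for a generic owned by another package needs the generic imported, the method registered, and (usually) the method exported. A sketch, where the generics package and its tidy generic stand in for whatever package owns your generic:

```r
#' Tidy a surveyor object.
#'
#' @param x A surveyor object.
#' @param ... Ignored.
#' @importFrom generics tidy
#' @method tidy surveyor
#' @export
tidy.surveyor <- function(x, ...) {
    print.listof(x)
}
```

With roxygen2 > 3.0.0 the @method tag is usually inferred from the function name, but stating it explicitly resolves ambiguous names like all.equal.data.frame.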
- Neat. Thank you. I'll try this immediately!
- Hmm. I’d like to register an S3 method without exporting it. Unfortunately that generates a deprecation warning. Is there a way around this?
- @KonradRudolph what does it mean to register an S3 method without exporting it?
- @hadley I wanted to avoid using @export because then R CMD check complained about undocumented parameters (I hadn’t documented the method since it’s supposed to be invisible anyway). However, I cannot reproduce this now: no .Rd file is generated and no R CMD check warning is displayed for missing parameters. I probably had some empty line in the doc comment beforehand, causing an incomplete documentation to be generated for this method.
- Thanks, been struggling with this too, but how do you use it with a class-specific [-function, e.g. [.myClass? This solution passes R CMD check without warnings but the resulting Rd tag is a mess. @method [ myClass becomes \method{[}{myClass}(x, i, j, ...).
- @Backlin What's wrong with that? That looks like the proper way to mark up an S3 method in Rd. It will render properly in the Usage section but with ## S3 method blah blah above the usage code.
- Oh no, my bad! I was looking at the documentation in an old version of R but it renders correctly in the newer versions. Hope I didn't waste too much of your time with it.
- @Backlin - no worries. Glad the Q&A was useful to you.
- @cboettig Yes, @method is the tag for the documentation (the .Rd file) and @S3method is for the namespace mechanism. You need both these days if you have your own NAMESPACE file. You only need @export if you really want to make the method visible outside the namespace of the package.
Source: http://thetopsites.net/article/58635509.shtml
This page summarizes the key design principles that FIDL has adopted over time.
Priority of constituencies
FIDL aims to respect the following priority of constituencies:
- Users (using a Fuchsia product)
- End-developers (using FIDL bindings)
- Fuchsia contributors (using FIDL bindings)
- API designers (authoring FIDL libraries)
- Fuchsia FIDL Team members
This list was adapted from that of the API Council Charter.
ABI first, API second
From RFC-0050: Syntax revamp:
FIDL is primarily concerned with defining Application Binary Interface (ABI) concerns, and second with Application Programming Interface (API) concerns.
Binary wire format first
From RFC-0050: Syntax revamp:
While many formats can represent FIDL messages, the FIDL Wire Format (or "FIDL Binary Wire Format") is the one which has preferential treatment, and is catered to first ... we choose to over rotate towards the binary ABI format when making syntax choices.
Fewest features
From RFC-0050: Syntax revamp:
We strive to have the fewest features and rules, and aim to combine features to achieve use cases. In practice, when considering new features, we should first try to adapt or generalize other existing features rather than introduce new features.
You only pay for what you use
From RFC-0027: You only pay for what you use.
For example, RFC-0047: Tables followed this principle by adding tables to the language rather than replacing structs:
Tables are necessarily more complicated than structs, and so processing them will be slower and serializing them will use more space. As such, it's preferred to keep structs as is and introduce something new.
In contrast, RFC-0061: Extensible unions reached the opposite decision of replacing static unions with extensible unions, but only after a careful analysis of the tradeoffs. Unlike with tables, the extra cost imposed by extensible unions is marginal or nonexistent in most cases.
Solve real problems
We design FIDL to solve real problems and address real needs, not imagined ones. We avoid designing a "solution in search of a problem".
For example, FIDL initially did not support empty structs because it was unclear how to represent them in C/C++. In RFC-0056: Empty structs, we saw users were employing workarounds and recognized the need for an official solution. Only then did we add empty structs to the language.
Optimize based on data
Optimizing without data is useless at best and dangerous at worst. When designing optimizations (e.g. performance, binary size), we follow the data.
For example, RFC-0032: Efficient envelopes was initially accepted, but later rejected. In hindsight, it should never have been accepted because there was no data to back it up.
No breakage at a distance
We strive to avoid breakage at a distance. Changes in one place should not
cause surprising breakage in a faraway place. For example, it would be
surprising if adding a struct named
Foo to a FIDL file broke compilation
because another FIDL file in a completely different part of the codebase already
had a type named
Foo. This is why FIDL, like most programming languages, uses
namespaces to limit the scope of name collisions.
RFC-0029: Increasing method ordinals discusses this problem as it relates to protocol composition. RFC-0048: Explicit union ordinals revisits the topic, explaining why FIDL only uses hashing for protocols.
RFC-0057: Default no handles introduced a distinction between value
and resource types. One motivation for this was providing the
Clone trait in Rust for types without handles without breakage at a distance:
Although FIDL bindings can conditionally enable code based on the presence of handles, doing so is undesirable because it breaks evolvability guarantees. For example, adding a field to a table is normally safe, but adding a handle field would become source-breaking — not only for that table, but for all types transitively containing it.
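As a hedged illustration of that value/resource distinction (hypothetical types, written in the newer FIDL syntax):

```fidl
library example.hedged; // illustrative library name

using zx;

// A value type: contains no handles, so bindings (e.g. Rust) can offer Clone.
type Point = struct {
    x int32;
    y int32;
};

// A resource type: may contain handles, so Clone cannot be offered, and
// adding a handle field later is not a surprise breakage at a distance.
type Connection = resource struct {
    socket zx.Handle:SOCKET;
};
```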
Liberal syntax, idiomatic style
We do not rigidly adhere to a "one way to do it" philosophy. When we are
concerned that users will waste time deciding between trivial alternatives, we
introduce restrictions in
fidl-lint or
fidl-format rather than in
fidlc.
For example, FIDL accepts modifier keywords in any order, but we intend to enforce a consistent ordering in the linter.
As another example, RFC-0040: Identifier uniqueness fixed the
problem of identifiers colliding after case transformation by having
fidlc
report an error if any two identifiers have the same canonical form. A simpler
alternative would have been to enforce FIDL naming conventions in the compiler.
However, this goes a step too far. There are valid reasons for using different
naming styles, for example in describing the Kernel API, where
snake_case
methods are strongly preferred.
Canonical representation
The FIDL wire format is canonical: there is exactly one encoding for a given message. As a corollary, every byte is accounted for: there is no byte that can be changed without altering the message's meaning.
For example, the specification requires that all padding bytes are zero. Similarly, RFC-0047: Tables disallows storing extraneous empty envelopes to ensure a canonical representation.
A canonical representation makes FIDL simpler and more secure. For example,
allowing nonzero padding could result in FIDL messages leaking sensitive
information that happened to occupy those bytes in memory. Allowing multiple
representations for a given message also leads to rarely executed code paths
that can hide bugs, e.g. the "extra empty envelopes" code path. A canonical
representation also makes it easy to compare messages for equality without
knowing the schema: for value types, it is a simple
memcmp.
No allocations required
From the wire format specification:
FIDL is designed such that encoding and decoding of messages can occur in place in memory.
This requirement significantly influences the design of the wire format: you must be able to decode in place using only the stack. It is the reason the wire format uses presence indicators and a depth-first traversal order rather than, for example, an offset-based format that requires auxiliary data structures to keep track of information while decoding.
This principle is related to "You only pay for what you use",
in that it caters to very low-level uses of FIDL where
malloc may not yet
exist, or is prohibitively expensive.
Transport generality
While the binary wire format comes first, this does not mean FIDL should be tightly coupled to the Zircon channel transport. There are other important use cases to consider, such as describing the Kernel API, in-process messaging, and persistence.
RFC-0050: Syntax revamp describes the future direction for transport generalization.
RFC-0062: Method impossible was rejected in part because it coupled FIDL too closely to the Zircon channel transport.
Source: https://fuchsia.dev/fuchsia-src/concepts/fidl/design-principles
Handling Events in Qt
Overview
In this article we will show how to handle events by using the specialized event handler timerEvent()
Basic Idea
We will implement a widget that displays the clock time in the local display format and also the current date in the appropriate format, alternating every ten seconds. The display itself should update every second.
The two screenshots below were recorded at an interval of ten seconds
Class Definition
We implement the clock in a class called ClockWidget, which we derive from QLCDNumber, a Qt class that provides an imitation of an LCD display.
#ifndef CLOCKWIDGET_H
#define CLOCKWIDGET_H
#include <QMainWindow>
#include <QLCDNumber>
namespace Ui {
class ClockWidget;
}
class QTimerEvent;
class ClockWidget : public QLCDNumber
{
Q_OBJECT
public:
explicit ClockWidget(QWidget *parent = 0);
~ClockWidget();
private:
Ui::ClockWidget *ui;
protected:
void timerEvent(QTimerEvent *e);
private:
int updateTimer, switchTimer;
bool showClock;
};
#endif // CLOCKWIDGET_H
Here we are particularly interested in, besides the constructor, the specialized event handler timerEvent(), which will update the clock time. In the updateTimer and switchTimer member variables we save numbers that serve as identifiers for the timers. The showClock status flag determines whether the clock time (showClock=true) or the date (showClock=false) appears on the widget.
Class Implementation
The implementation in clockwidget.cpp begins by specifying the form of the display. Usually QLCDNumber shows a frame around the digital display. This behavior, inherited from QFrame, is disabled by the QFrame::NoFrame frame style. In addition we dissuade the widget from drawing the LCD elements with shadows and a border, by passing on QLCDNumber::Flat to the widget’s setSegmentStyle() method.
#include "clockwidget.h"
#include <QtGui>
ClockWidget::ClockWidget(QWidget *parent) :
QLCDNumber(parent), showClock(true)
{
setFrameShape(QFrame::NoFrame);
setSegmentStyle(QLCDNumber::Flat);
updateTimer = startTimer(1000);
switchTimer = startTimer(10000);
QTimerEvent *e = new QTimerEvent(updateTimer);
QCoreApplication::postEvent(this, e);
}
Now we need two timers. Each QObject can start a timer using the startTimer() method. As an argument startTimer() expects the number of milliseconds that must pass before it triggers a QTimerEvent, which is addressed to the current widget. Each QTimerEvent in turn contains an identification number, which is returned by the invocation of startTimer() that originates it. We manually send a timer event with the ID of the updateTimer, using the postEvent() method of QCoreApplication, so that we do not have to wait for a second to elapse before the time appears on the widget's display.
In the timerEvent() method we first check whether the pointer to the event really is valid—just to be on the safe side. Next, if the event contains the switchTimer ID, this only toggles the showClock variable. The actual work awaits in the last conditional statement, which is triggered by an event containing the updateTimer ID.
void ClockWidget::timerEvent(QTimerEvent *e)
{
if (!e) return;
if (e->timerId() == switchTimer)
showClock = !showClock;
if (e->timerId() == updateTimer) {
if (showClock) {
QTime time = QTime::currentTime();
QString str = time.toString(Qt::LocalDate);
setNumDigits(str.length());
display(str);
} else {
QDate date = QDate::currentDate();
QString str = date.toString(Qt::LocalDate);
setNumDigits(str.length());
display(str);
}
}
}
If the widget is supposed to display the time, then we first determine the current time. In Qt, the QTime class is responsible for handling time. Its static currentTime() method provides the current system time in a QTime object. This time is converted by toString() into a QString. Qt::LocalDate instructs the method to take into account the country settings (locales) of the user. Finally we must inform the display how many LCD digit positions are required. We deduce this from the string length and display the string with display().
Note that QLCDNumber can display only a limited set of characters; setNumDigits() simply reserves enough digit positions for the string. On the other hand, if showClock is set to false, which means that the widget should display just the date, we proceed in the same way with the QDate class, which in Qt is responsible for managing date specifications, and whose API corresponds almost exactly to that of QTime.
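The article does not show it, but a minimal main() to run the widget could look like this (a sketch; it assumes the clockwidget.h header above):

```cpp
#include <QApplication>
#include "clockwidget.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);   // every Qt GUI program needs one QApplication
    ClockWidget clock;
    clock.show();                   // the timers started in the constructor drive updates
    return app.exec();              // enter the event loop so timerEvent() gets delivered
}
```

Without the event loop started by app.exec(), the QTimerEvents would never be delivered to timerEvent().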
Source Code
The full source code presented in this article is available here File:ClockWidgetQt.zip
Reference Articles
--somnathbanik 12:49, 5 May 2011 (UTC)
Source: http://developer.nokia.com/community/wiki/Handling_Events_in_Qt
Update 2/26/2020 – due to the comments I’ve posted a github project that you can check out to help.
0365
Seems like there is flickering before hiding.
Tom Daly
you have some trade-offs here due to when app customizers load, and caching: CSS vs JS. Just remember all Microsoft code loads first, then it allows app customizers / web parts.
The least amount of flicker is to hide the bar with CSS and then show it with JavaScript. The CSS will cache, but there is always a flicker.
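A minimal sketch of that CSS-then-JS approach. The #SuiteNavWrapper selector is a guess at the suite bar's container id, not something stated in the post; verify it in your own tenant:

```javascript
// Build the CSS rule that hides the suite bar (pure function, easy to test).
// The default selector is a hypothetical id for the suite bar container.
function hideSuiteBarCss(selector = '#SuiteNavWrapper') {
  return `${selector} { display: none; }`;
}

// Inject the CSS into a document as early as possible so the rule is cached
// and applied before the bar ever paints (this is the "hide with CSS" half).
function injectCss(doc, cssText) {
  const style = doc.createElement('style');
  style.textContent = cssText;
  doc.head.appendChild(style);
  return style;
}

// The "show with JavaScript" half would later remove the injected <style>
// element (or flip display back) once the customizer decides the user
// should see the bar.
```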
Jake
That would be great, thanks very much for the response Tom.
Tom Daly
Just posted an update to the article w/ a link to a sample github project. let me know if you have any other questions!
Jake
Thanks so much Tom, this works great. Your help is much appreciated. 🙂
William
Hello, I am new to SharePoint and trying to apply your tutorial. I did all the MS SharePoint tutorials you advised (create an extension with top and bottom placeholders) but I am stuck with the following errors when running ‘gulp serve’:
– ERROR TS2304: Cannot find name ‘IHeaderProps’
– ERROR TS2552: Cannot find name ‘Header’. Did you mean ‘Headers’?
I assume these need to be declared but how and where?
For the React and the ReactDom I have added:
import * as React from “react”;
import * as ReactDom from “react-dom”;
Do I need to understand that a file Header.tsx needs to be created ? If so, where to place this ?
Thank you for your help.
William
Hello,
We resolved this issue, no further action needed.
Thank you and regards,
W.
Jake
Hi William,
I’m not sure whether you will see this comment but I’m in similar position to yourself, could you perhaps expand on what you had to do to resolve these errors? Any help would be greatly appreciated.
Thanks
Tom Daly
Thanks for the feedback. The header is if you had a component you were creating in the application customizer. This was a snippet from my original POC. Tomorrow I’ll post a link to a sample github project with the complete solution.
Jake
Hi William,
I don’t know whether you will see this but I am in the same situation as yourself. I was wondering if you could perhaps elaborate on how you were able to resolve these errors?
Thanks
Source: http://thomasdaly.net/2018/05/14/trimming-the-suite-bar-ribbon-on-modern-sharepoint-sites-in-office-365/
I'm new to C++; if anyone can help me, I'll be grateful.
The question is a follow:
User will be asked to key in the number of players (maximum 5 players) that challenge each other for top ranking. Their name will then be taken in one by one. Assume the names will not exceed 29 characters including spaces.
My question is how to store the names into an array. I only know how to store one player's name in it. And why does it always start with the 2nd participant instead of the first one?
Here is my code:
Code:
#include <iostream>
using namespace std;

int main(void)
{
    int no_participants;
    int crtl_1;
    char name_participants[30];

    cout << "Please enter the number of participants (1-5):";
    cin >> no_participants;

    for (crtl_1 = 1; crtl_1 <= no_participants; crtl_1++)
    {
        cout << "Please enter the name of participants " << crtl_1 << ":";
        cin.getline(name_participants, 30);
        cout << endl;
    }
    cout << name_participants;
    cout << endl;
    return 0;
}
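One way to fix both problems, sketched as a testable function: use std::vector<std::string> instead of a single fixed char buffer so every name is kept, and discard the newline that operator>> leaves in the stream, which is why the loop appeared to skip the first participant:

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Read the participant count, then one name per line (spaces allowed).
std::vector<std::string> readNames(std::istream& in) {
    int count = 0;
    in >> count;
    // operator>> stops before the newline; without this ignore(), the first
    // getline() reads that leftover newline as an empty name.
    in.ignore(10000, '\n');

    std::vector<std::string> names;
    for (int i = 0; i < count; ++i) {
        std::string name;
        std::getline(in, name);   // std::string grows as needed, no 30-char limit
        names.push_back(name);
    }
    return names;
}
```

In the program itself you would call readNames(std::cin) and then loop over the returned vector to print all participants, not just the last one.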
Source: http://cboard.cprogramming.com/cplusplus-programming/140355-help-array.html
[Music] [Silence] Dave Abrahams: Hi, everybody. My name is Dave Abrahams, and I'm the technical lead for the Swift standard library, and it is truly my privilege to be with you here today. It is great to see all of you in this room. The next 40 minutes are about putting aside your usual way of thinking about programming. What we're going to do together here won't necessarily be easy, but I promise you if you stick with me, that it'll be worth your time. I'm here to talk to you about themes at the heart of Swift's design, and introduce you to a way of programming that has the potential to change everything. But first, let me introduce you to a friend of mine. This is Crusty.
Now you've probably all worked with some version of this guy. Crusty is that old-school programmer who doesn't trust debuggers, doesn't use IDEs.
No, he favors an 80 x 24 terminal window and plain text, thank you very much. And he takes a dim view of the latest programming fads.
Now I've learned to expect Crusty to be a little bit cynical and grumpy, but even so it sometimes takes me by surprise.
Like last month we were talking about app development, and he said flat out, 'I don't do object-oriented.' I could hardly believe my ears.
I mean, object-oriented programming has been around since the 1970s, so it's not exactly some new-fangled programming fad.
And, furthermore, lots of the amazing things that we've all built together, you and I and the engineers on whose shoulders we stand, were built with objects.
'Come on,' I said to him as I walked over to his old-school chalkboard. 'OOP is awesome.
Look what you can do with classes.' [Silence] Yes. So first you can group related data and operations.
And then we can build walls to separate the inside of our code from the outside, and that's what lets us maintain invariants.
Then we use classes to represent relatable ideas, like window or communication channel. They give us a namespace, which helps prevent collisions as our software grows. They have amazing expressive syntax. So we can write method calls and properties and chain them together. We can make subscripts. We can even make properties that do computation.
Last, classes are open for extensibility. So if a class author leaves something out that I need, well, I can come along and add it later.
And, furthermore, together, these things, these things let us manage complexity and that's really the main challenge in programming.
These properties, they directly address the problems we're trying to solve in software development. At that point, I had gotten myself pretty inspired, but Crusty just snorted and [sighed]. [hiss sound] He let all the air out of my balloon. And if that wasn't bad enough, a moment later he finished the sentence. [Laughter] Because it's true, in Swift, any type you can name is a first class citizen and it's able to take advantage of all these capabilities. So I took a step back and tried to figure out what core capability enables everything we've accomplished with object-oriented programming. Obviously, it has to come from something that you can only do with classes, like inheritance. And this got me thinking specifically about how these structures enable both code sharing and fine-grained customization. So, for example, a superclass can define a substantial method with complex logic, and subclasses get all of the work done by the superclass for free. They just inherit it. But the real magic happens when the superclass author breaks out a tiny part of that operation into a separate customization point that the subclass can override, and this customization is overlaid on the inherited implementation. That allows the difficult logic to be reused while enabling open-ended flexibility and specific variations. And now, I was sure, I had him.
'Ha,' I said to Crusty. 'Obviously, now you have to bow down before the power of the class.' 'Hold on just a darn tootin' minute,' he replied. 'First of all, I do customization whatchamacallit all the time with structs, and second, yes, classes are powerful but let's talk about the costs. I have got three major beefs with classes,' said Crusty.
And he started in on his list of complaints. 'First, you got your automatic sharing.' Now you all know what this looks like.
A hands B some piece of perfectly sober looking data, and B thinks, 'Great, conversation over.' But now we've got a situation where A and B each have their own very reasonable view of the world that just happens to be wrong. Because this is the reality: eventually A gets tired of serious data and decides he likes ponies instead, and who doesn't love a good pony? This is totally fine until B digs up this data later, much later, that she got from A and there's been a surprise mutation. B wants her data, not A's ponies.
Well, Crusty has a whole rant about how this plays out. 'First,' he says, 'you start copying everything like crazy to squash the bugs in your code.
But now you're making too many copies, which slows the code down. And then one day you handle something on a dispatch queue and suddenly you've got a race condition because threads are sharing a mutable state, so you start adding locks to protect your invariants.
But the locks slow the code down some more and might even lead to deadlock. And all of this is added complexity, whose effects can be summed up in one word, bugs.' But none of this is news to Cocoa programmers. [Laughter] It's not news. We've been applying a combination of language features like @property(copy) and coding conventions over the years to handle this. And we still get bitten.
For example, there's this warning in the Cocoa documentation about modifying a mutable collection while you're iterating through it. Right? And this is all due to implicit sharing of mutable state, which is inherent to classes.
But this doesn't apply to Swift. Why not? It's because Swift collections are all value types, so the one you're iterating and the one you're modifying are distinct. Okay, number two on Crusty's list, class inheritance is too intrusive.
First of all, it's monolithic. You get one and only one superclass. So what if you need to model multiple abstractions? Can you be a collection and be serialized? Well, not if collection and serialized are classes. And because class inheritance is single inheritance, classes get bloated as everything that might be related gets thrown together. You also have to choose your superclass at the moment you define your class, not later in some extension. Next, if your superclass had stored properties, well, you have to accept them. You don't get a choice. And then because it has stored properties, you have to initialize it. And as Crusty says, 'designated convenience required, oh, my.' So you also have to make sure that you understand how to interact with your superclass without breaking its invariants. Right? And, finally, it's natural for class authors to write their code as though they know what their methods are going to do, without using final and without accounting for the chance that the methods might get overridden. So, there's often a crucial but unwritten contract about which things you're allowed to actually override and, like, do you have to chain to the superclass method? And if you're going to chain to the superclass method, is it at the beginning of your method, or at the end, or in the middle somewhere? So, again, not news to Cocoa programmers, right? This is exactly why we use the delegate pattern all over the place in Cocoa.
Okay, last on Crusty's list, classes just turn out to be a really bad fit for problems where type relationships matter.
So if you've ever tried to use classes to represent a symmetric operation, like Comparison, you know what I mean.
For example, if you want to write a generalized sort or binary search like this, you need a way to compare two elements.
And with classes, you end up with something like this. Of course, you can't just write Ordered this way, because Swift demands a method body for precedes.
So, what can we put there? Remember, we don't know anything about an arbitrary instance of Ordered yet.
So if the method isn't implemented by a subclass, well, there's really nothing we can do other than trap. Now, this is the first sign that we're fighting the type system.
And if we fail to recognize that, it's also where we start lying to ourselves, because we brush the issue aside, telling ourselves that as long as each subclass of Ordered implements precedes, we'll be okay. Right? Make it the subclasser's problem.
So we press ahead and implement an example of Ordered. So, here's a subclass. It's got a double value and we override precedes to do the comparison. Right? Except, of course, it doesn't work. See, "other" is just some arbitrary Ordered and not a number, so we don't know that "other" has a value property. In fact, it might turn out to be a label, which has a text property. So, now we need to down-cast just to get to the right type. But, wait a sec, suppose that "other" turns out to be a label? Now, we're going to trap. Right? So, this is starting to smell a lot like the problem we had when writing the body for precedes in the superclass, and we don't have a better answer now than we did before. This is a static type safety hole.
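Roughly what that class-based version looks like — a sketch reconstructed from the narration in Swift 2-era syntax, not quoted from the slides:

```swift
// Class-based abstraction: precedes has no sensible default body,
// so the best the base class can do is trap at runtime.
class Ordered {
    func precedes(other: Ordered) -> Bool {
        fatalError("implement me!")
    }
}

class Number: Ordered {
    var value: Double = 0
    override func precedes(other: Ordered) -> Bool {
        // "other" is just some arbitrary Ordered, so we are forced to
        // down-cast. If other turns out to be a Label, this traps.
        return value < (other as! Number).value
    }
}
```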
Why did it happen? Well, it's because classes don't let us express this crucial type relationship between the type of self and the type of other.
In fact, you can use this as a "code smell." So, any time you see a forced down-cast in your code, it's a good sign that some important type relationship has been lost, and often that's due to using classes for abstraction. Okay, clearly what we need is a better abstraction mechanism, one that doesn't force us to accept implicit sharing, or lost type relationships, or force us to choose just one abstraction and do it at the time we define our types; one that doesn't force us to accept unwanted instance data or the associated initialization complexity.
And, finally, one that doesn't leave ambiguity about what I need to override. Of course, I'm talking about protocols.
Protocols have all these advantages, and that's why, when we made Swift, we made the first protocol-oriented programming language.
So, yes, Swift is great for object-oriented programming, but from the way for loops and string literals work to the emphasis in the standard library on generics, at its heart, Swift is protocol-oriented. And, hopefully, by the time you leave here, you'll be a little more protocol-oriented yourself. So, to get you started off on the right foot, we have a saying in Swift.
Don't start with a class. Start with a protocol. So let's do that with our last example.
Okay, first, we need a protocol, and right away Swift complains that we can't put a method body here, which is actually pretty good because it means that we're going to trade that dynamic runtime check for a static check, right, that precedes as implemented. Okay, next, it complains that we're not overriding anything.
Well, of course we're not. We don't have a baseclass anymore, right? No superclass, no override.
And we probably didn't even want Number to be a class in the first place, because we want it to act like a number. Right? So, let's just do two things at once and make that a struct. Okay, I want to stop for a moment here and appreciate where we are, because this is all valid code again.
Okay, the protocol is playing exactly the same role that the class did in our first version of this example. It's definitely a bit better.
I mean, we don't have that fatal error anymore, but we're not addressing the underlying static type safety hole, because we still need that forced down-cast because "other" is still some arbitrary Ordered. Okay. So, let's make it a number instead, and drop the type cast. Well, now Swift is going to complain that the signatures don't match up. To fix this, we need to replace Ordered in the protocol signature with Self.
This is called a Self-requirement. So when you see Self in a protocol, it's a placeholder for the type that's going to conform to that protocol, the model type. So, now we have valid code again. Now, let's take a look at how you use this protocol.
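The protocol version with its Self-requirement, sketched standalone (again reconstructed from the narration):

```swift
// Self is a placeholder for whatever concrete type conforms.
protocol Ordered {
    func precedes(other: Self) -> Bool
}

struct Number: Ordered {
    var value: Double = 0
    // Self resolves to Number here, so no down-cast is needed,
    // and the mismatched-type case simply cannot be expressed.
    func precedes(other: Number) -> Bool {
        return value < other.value
    }
}
```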
So, this is the binary search that worked when Ordered was a class. And it also worked perfectly before we added that Self-requirement to Ordered. And this array of Ordered here is a claim. It's a claim that we're going to handle a heterogeneous array of Ordered. So, this array could contain numbers and labels mixed together, right? Now that we've made this change to Ordered and added the Self-requirement, the compiler is going to force us to make this homogeneous, like this.
This one says, 'I work on a homogeneous array of any single Ordered type T.' Now, you might think that forcing the array to be homogeneous is too restrictive or, like, a loss of functionality or flexibility or something. But if you think about it, the original signature was really a lie. I mean, we never really handled the heterogeneous case other than by trapping.
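The homogeneous signature being described might look like this; the body is a standard lower-bound loop added for completeness, not quoted from the slides:

```swift
// Works on a homogeneous array of any single Ordered type T.
func binarySearch<T: Ordered>(sortedKeys: [T], forKey k: T) -> Int {
    var lo = 0
    var hi = sortedKeys.count
    while hi > lo {
        let mid = lo + (hi - lo) / 2
        if sortedKeys[mid].precedes(k) {
            lo = mid + 1    // everything up to mid precedes k
        } else {
            hi = mid        // k belongs at mid or earlier
        }
    }
    return lo
}
```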
Right? A homogeneous array is what we want. So, once you add a Self-requirement to a protocol, it moves the protocol into a very different world, where the capabilities have a lot less overlap with classes. It stops being usable as a type. Collections become homogeneous instead of heterogeneous.
An interaction between instances no longer implies an interaction between all model types. We trade dynamic polymorphism for static polymorphism, but, in return for that extra type information we're giving the compiler, it's more optimizable. So, two worlds.
Later in the talk, I'll show you how to build a bridge between them, at least one way. Okay. So, I understood how the static aspect of protocols worked, but I wasn't sure whether to believe Crusty that protocols could really replace classes and so I set him a challenge, to build something for which we'd normally use OOP, but using protocols. I had in mind a little diagramming app where you could drag and drop shapes on a drawing surface and then interact with them. And so I asked Crusty to build the document and display model. And here's what he came up with.
First, he built some drawing primitives. Now, as you might imagine, Crusty doesn't really do GUIs.
He's more of a text man. So his primitives just print out the drawing commands you issue, right? I grudgingly admitted that this was probably enough to prove his point, and then he created a Drawable protocol to provide a common interface for all of our drawing elements.
Okay, this is pretty straightforward. And then he started building shapes like Polygon. Now, the first thing to notice here about Polygon is it's a value type, built out of other value types. It's just a struct that contains an array of points.
And to draw a polygon, we move to the last corner and then we cycle through all the corners, drawing lines. Okay, and here's a Circle.
Again, Circle is a value type, built out of other value types. It's just a struct that contains a center point and a radius. Now to draw a Circle, we make an arc that sweeps all the way from zero to two pi radians. So, now we can build a diagram out of circles and polygons. 'Okay,' said Crusty, 'let's take her for a spin.' So, he did. This is a diagram. A diagram is just a Drawable.
It's another value type. Why is it a value type? Because all Drawables are value types, and so an array of Drawables is also a value type. Let's go back to that. Wow. Okay, there.
An array of Drawables is also a value type and, therefore, since that's the only thing in my Diagram, the Diagram is also a value type.
So, to draw it, we just loop through all of the elements and draw each one. Okay, now let's take her for a spin.
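A minimal sketch of the value-typed Diagram just described (the real playground has more, but the shape is this):

```swift
protocol Drawable {
    func draw()
}

// Diagram is a value type because it is built entirely out of value
// types: an array of Drawables is itself a value.
struct Diagram: Drawable {
    var elements: [Drawable] = []

    func draw() {
        // Drawing a diagram is just drawing each element in turn.
        for figure in elements {
            figure.draw()
        }
    }
}
```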
So, we're going to test it. So, Crusty created a Circle with curiously specific center and radius.
And then, with uncanny Spock-like precision, he added a Triangle. And finally, he built a Diagram around them, and told it to draw. 'Voila,' said Crusty, triumphantly. 'As you can plainly see, this is an equilateral triangle inscribed inside a circle.' Well, maybe I'm just not as good at doing trigonometry in my head, as Crusty is, but, 'No, Crusty,' I said, 'I can't plainly see that, and I'd find this demo a whole lot more compelling if it was doing something actually useful for our app like, you know, drawing to the screen.' After I got over my annoyance, I decided to rewrite his Renderer to use CoreGraphics.
And I told him I was going to do this and he said, 'Hang on just a minute there, monkey boy. If you do that, how am I going to test my code?' And then he laid out a pretty compelling case for the use of plaintext in testing. If something changes in what we're doing, we'll immediately see it in the output. Instead, he suggested we do a little protocol-oriented programming.
So he copied his Renderer and made the copy into a protocol. Yeah, and then you have to delete the bodies, okay. There it is.
And then he renamed the original Renderer and made it conform. Now, all of this refactoring was making me impatient, like, I really want to see this stuff on the screen.
I wanted to rush on and implement a Renderer for CoreGraphics, but I had to wait until Crusty tested his code again.
And when he was finally satisfied, he said to me, 'Okay, what are you going to put in your Renderer?' And I said, 'Well, a CGContext.
CGContext has basically everything a Renderer needs. In fact, within the limits of its plain C interface, it basically is a Renderer.'
'Great,' said Crusty. 'Gimme that keyboard.' And he snatched the keyboard away from me and did something so quickly I barely saw it. 'Wait a second,' I said. 'Did you just make every CGContext into a Renderer?' He had. I mean, it didn't do anything yet, but this was kind of amazing. I didn't even have to add a new type.
'What are you waiting for?' said Crusty. 'Fill in those braces.' So, I poured in the necessary CoreGraphics goop, and threw it all into a playground, and there it is. Now, you can download this playground, which demonstrates everything I'm talking about here in the talk, after we're done. But back to our example.
Just to mess with me, Crusty then did this. Now, it took me a second to realize why Drawing wasn't going into an infinite recursion at this point, and if you want to know more about that, you should go to this session, on Friday. But it also didn't change the display at all.
Eventually, Crusty decided to show me what was happening in his plaintext output. So it turns out that it was just repeating the same drawing commands, twice. So, being more of a graphics-oriented guy, I really wanted to see the results.
So, I built a little scaling adapter and wrapped it around the Diagram and this is the result. And you can see this in the playground, so I'm not going to go into the scaling adapter here. But that's kind of a demonstration that with protocols, we can do all the same kinds of things that we're used to doing with classes. Adapters, usual design patterns. Okay, now I'd like to just reflect a second on what Crusty did with TestRenderer though, because it's actually kind of brilliant. See, by decoupling the document model from a specific Renderer, he's able to plug in an instrumented component that reveals everything that we do, that our code does, in detail.
And we've since applied this approach throughout our code. We find that, the more we decouple things with protocols, the more testable everything gets.
This kind of testing is really similar to what you get with mocks, but it's so much better. See, mocks are inherently fragile, right? You have to couple your testing code to the implementation details of the code under test. And because of that fragility, they don't play well with Swift's strong static type system. See, protocols give us a principled interface that we can use, that's enforced by the language, but still gives us the hooks to plug in all of the instrumentation we need. Okay, back to our example, because now we seriously need to talk about bubbles. Okay. We wanted this diagramming app to be popular with the kids, and the kids love bubbles, of course.
So, in a Diagram, a bubble is just an inner circle offset around the center of the outer circle that you use to represent a highlight.
So, you have two circles. Just like that. And when I put this code in context though, Crusty started getting really agitated. All the code repetition was making him ornery, and if Crusty ain't happy, ain't nobody happy.
[Laughter] 'Look, they're all complete circles,' he shouted. 'I just want to write this.' I said, 'Calm down, Crusty. Calm down. We can do that.
All we need to do is add another requirement to the protocol. All right? Then of course we update our models to supply it.
There's TestRenderer. And then the CGContext.' Now, at this point Crusty's got his boot off and he's beating it on the desk, because here we were again, repeating code. He snatched the keyboard back from me, muttering something about having to do everything his own self, and he proceeded to school me using a new feature in Swift. This is a protocol extension. This says 'all models of Renderer have this implementation of circleAt.' Now we have an implementation that is shared among all of the models of Renderer.
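The protocol extension being described, sketched with plain Double coordinates so it stands alone without CoreGraphics (the talk's actual code uses CGPoint and CGFloat):

```swift
import Foundation  // for M_PI

protocol Renderer {
    func arcAt(center: (x: Double, y: Double), radius: Double,
               startAngle: Double, endAngle: Double)
    // Still listed as a requirement, which makes it a customization point.
    func circleAt(center: (x: Double, y: Double), radius: Double)
}

extension Renderer {
    // One shared implementation for every model of Renderer:
    // a complete circle is just an arc from 0 to 2π.
    func circleAt(center: (x: Double, y: Double), radius: Double) {
        arcAt(center, radius: radius, startAngle: 0, endAngle: 2 * M_PI)
    }
}
```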
So, notice that we still have this circleAt requirement up there. You might ask, 'what does it mean to have a requirement that's also fulfilled immediately in an extension?' Good question.
The answer is that a protocol requirement creates a customization point. To see how this plays out, let's collapse this method body and add another method to the extension. One that isn't backed by a requirement. And now we can extend Crusty's TestRenderer to implement both of these methods. And then we'll just call them. Okay. Now, what happens here is totally unsurprising.
We're directly calling the implementations in TestRenderer and the protocol isn't even involved, right? We'd get the same result if we removed that conformance.
But now, let's change the context so Swift only knows it has a Renderer, not a TestRenderer. And here's what happens.
So because circleAt is a requirement, our model gets the privilege of customizing it, and the customization gets called.
That one. But rectangleAt isn't a requirement, so the implementation in TestRenderer, it only shadows the one in the protocol and in this context, where you only know you have a Renderer and not a TestRenderer, the protocol implementation is called. Which is kind of weird, right? So, does this mean that rectangleAt should have been a requirement? Maybe, in this case, it should, because some Renderers are highly likely to have a more efficient way to draw rectangles, say, aligned with a coordinate system.
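The requirement-versus-shadowing behavior can be sketched like this (simplified signatures; the printed strings are illustrative, not from the talk):

```swift
protocol Renderer {
    func circleAt(radius: Double)            // a requirement
}

extension Renderer {
    func circleAt(radius: Double)   { print("protocol circleAt") }
    // rectangleAt is NOT a requirement — no customization point.
    func rectangleAt(width: Double) { print("protocol rectangleAt") }
}

struct TestRenderer: Renderer {
    func circleAt(radius: Double)   { print("TestRenderer circleAt") }
    func rectangleAt(width: Double) { print("TestRenderer rectangleAt") }
}

let r: Renderer = TestRenderer()
r.circleAt(1)      // TestRenderer's version: requirement, dynamically dispatched
r.rectangleAt(1)   // protocol's version: only shadowed, statically dispatched
```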
But, should everything in your protocol extension also be backed by a requirement? Not necessarily.
I mean, some APIs are just not intended as customization points. So, sometimes the right fix is to just not shadow the requirement in the model, not shadow the method in the model. Okay. So, this new feature, incidentally, it's revolutionized our work on the Swift Standard Library. Sometimes what we can do with protocol extensions, it just feels like magic.
I really hope that you'll enjoy working with the latest library as much as we've enjoyed applying this to it and updating it.
And I want to put our story aside for a second, so I can show you some things that we did in the Standard Library with protocol extensions, and a few other tricks besides.
So, first, there's a new indexOf method. So, this just walks through the indices of the collection until it finds an element that's equal to what we're looking for and it returns that index. And if it doesn't find one, it returns nil. Simple enough, right? But if we write it this way, we have a problem. See the elements of an arbitrary collection can't be compared with equal-equal.
So, to fix that, we can constrain the extension. This is another aspect of this new feature. So, by saying this extension applies when the element type of the collection is Equatable, we've given Swift the information it needs to allow that equality comparison.
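In the Swift 2 syntax of the talk (where today's Collection was CollectionType and Element was Generator.Element), the constrained extension reads roughly:

```swift
extension CollectionType where Generator.Element: Equatable {
    // Only available when the elements can actually be compared with ==.
    func indexOf(element: Generator.Element) -> Index? {
        for i in self.indices {
            if self[i] == element {
                return i
            }
        }
        return nil
    }
}
```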
And now that we've seen a simple example of a constrained extension, let's revisit our binary search. And let's use it on an array of Int.
Hmm. Okay, Int doesn't conform to Ordered. Well that's a simple fix, right? We'll just add a conformance.
Okay, now what about Strings? Well, of course, this doesn't work for Strings, so we do it again.
Now before Crusty starts banging on his desk, we really want to factor this stuff out, right? The less-than operator is present in the Comparable protocol, so we could do this with an extension to comparable. Like this.
Now we're providing the precedes for those conformances. So, on the one hand, this is really nice, right? When I want a binary search for Doubles, all I have to do is add this conformance and I can do it. On the other hand, it's kind of icky, because even if I take away the conformance, I still have this precedes function that's been glommed onto Doubles, which already have enough of an interface, right? We maybe would like to be a little bit more selective about adding stuff to Double. And even though I have that precedes function, without the conformance I can't binarySearch with it.
So, really, that precedes function buys me nothing. Fortunately, I can be more selective about what gets a precedes API, by using a constrained extension on Ordered. So, this says that a type that is Comparable and is declared to be Ordered will automatically be able to satisfy the precedes requirement, which is exactly what we want. I'm sorry, but I think that's just really cool.
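That constrained extension, sketched out — the one-line conformance declarations are what opt each type in:

```swift
protocol Ordered {
    func precedes(other: Self) -> Bool
}

// Only types that are Comparable AND declared Ordered get this default;
// plain Comparable types are left untouched.
extension Ordered where Self: Comparable {
    func precedes(other: Self) -> Bool {
        return self < other
    }
}

// Opting in is now a one-liner per type.
extension Int: Ordered {}
extension String: Ordered {}
extension Double: Ordered {}
```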
We've got the same abstraction. The same logical abstraction coming from two different places, and we've just made them interoperate seamlessly. Thank you for the applause, but I just, I think it's cool. Okay, ready for a palate cleanser? That's just showing it work. Okay. This is the signature of a fully generalized binarySearch that works on any Collection with the appropriate Index and Element types. Now, I can already hear you guys getting uncomfortable out there. I'm not going to write the body out here, because this is already pretty awful to look at, right? Swift 1 had lots of generic free functions like this. In Swift 2, we used protocol extensions to make them into methods like this, which is awesome, right? Now, everybody focuses on the improvement this makes at the call site, which is now clearly chock full of method-y goodness, right? But as the guy writing binarySearch, I love what it did for the signature.
By separating the conditions under which this method applies from the rest of the declaration, which now just reads like a regular method.
No more angle bracket blindness. Thank you very much.
Okay, last trick before we go back to our story. This is a playground containing a minimal model of Swift's new OptionSetType protocol.
It's just a struct with a read-only Int property, called rawValue. Now take a look at the broad Set-like interface you actually get for free once you've done that. All of this comes from protocol extensions. And when you get a chance, I invite you to take a look at how those extensions are declared in the Standard Library, because several layers are working together to provide this rich API.
Okay, so those are some of the cool things that you can do with protocol extensions. Now, for the pièce de résistance, I'd like to return to our diagramming example. Always make value types Equatable. Why? Because I said so.
Also, eat your vegetables. No, actually, if you want to know why, go to this session on Friday, which I told you about already.
It's a really cool talk and they're going to discuss this issue in detail. Anyway, Equatable is easy for most types, right? You just compare corresponding parts for equality, like this. But, now, let's see what happens with Diagram. Uh-oh. We can't compare two arrays of Drawable for equality.
All right, maybe we can do it by comparing the individual elements, which looks something like this.
Okay, I'll go through it for you. First, you make sure they have the same number of elements, then you zip the two arrays together.
If they do have the same number of elements, then you look for one where you have a pair that's not equal. All right, you can take my word for it.
This isn't the interesting part of the problem. Oops, right? This is, the whole reason we couldn't compare the arrays is because Drawables aren't equatable, right? So, we didn't have an equality operator for the arrays. We don't have an equality operator for the underlying Drawables. So, can we just make all Drawables Equatable? We change our design like this.
Well, the problem with this is that Equatable has Self-requirements, which means that Drawable now has Self-requirements.
And a Self-requirement puts Drawable squarely in the homogeneous, statically dispatched world, right? But Diagram really needs a heterogeneous array of Drawables, right? So we can put polygons and circles in the same Diagram. So Drawable has to stay in the heterogeneous, dynamically dispatched world. And we've got a contradiction. Making Drawable equatable is not going to work.
We'll need to do something like this, which means adding a new isEqualTo requirement to Drawable.
But, oh, no, we can't use Self, right? Because we need to stay heterogeneous. And without Self, this is just like implementing Ordered with classes was. We're now going to force all Drawables to handle the heterogeneous comparison case.
Fortunately, there's a way out this time. Unlike most symmetric operations, equality is special because there's an obvious, default answer when the types don't match up, right? We can say if you have two different types, they're not equal.
With that insight, we can implement isEqualTo for all Drawables when they're Equatable. Like this.
So, let me walk you through it. The extension is just what we said. It's for all Drawables that are Equatable.
Okay, first we conditionally down-cast other to the Self type. Right? And if that succeeds, then we can go ahead and use equality comparison, because we have an Equatable conformance. Otherwise, the instances are deemed unequal.
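Put together, the bridge looks something like this (reconstructed; the isEqualTo name matches the narration):

```swift
protocol Drawable {
    func isEqualTo(other: Drawable) -> Bool
    func draw()
}

extension Drawable where Self: Equatable {
    // Default heterogeneous comparison: equal only when the dynamic
    // types match and the homogeneous == agrees.
    func isEqualTo(other: Drawable) -> Bool {
        if let o = other as? Self {
            return self == o
        }
        return false   // different types are simply unequal
    }
}
```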
Okay, so, big picture, what just happened here? We made a deal with the implementers of Drawable. We said, 'If you really want to handle the heterogeneous case, be my guest. Go and implement isEqualTo. But if you just want to use the regular way we express homogeneous comparison, we'll handle all the burdens of the heterogeneous comparison for you.' So, building bridges between the static and dynamic worlds is a fascinating design space, and I encourage you to look into it more. This particular problem we solved using a special property of equality, but the problems aren't all like that, and there's lots of really cool stuff you can do. So, that property of equality doesn't necessarily apply, but what does apply almost universally? Protocol-based design. Okay, so, I want to say a few words before we wrap up about when to use classes, because they do have their place. Okay? There are times when you really do want implicit sharing, for example, when the fundamental operations of a value type don't make any sense, like copying this thing. What would a copy mean? If you can't figure out what that means, then maybe you really do want it to be a reference type. Or a comparison. The same thing.
That's another fundamental part of being a value. So, for example, a Window. What would it mean to copy a Window? Would you actually want to see, you know, a new graphical Window? What, right on top of the other one? I don't know. It wouldn't be part of your view hierarchy. Doesn't make sense.
So, another case where the lifetime of your instance is tied to some external side effect, like files appearing on your disk.
Part of this is because values get created very liberally by the compiler, and created and destroyed, and we try to optimize that as well as possible.
It's the reference types that have this stable identity, so if you're going to make something that corresponds to an external entity, you might want to make it a reference type. A class. Another case is where the instances of the abstraction are just "sinks." Like, our Renderers, for example. So, we're just pumping, we're just pumping information into that thing, into that Renderer, right? We tell it to draw a line. So, for example, if you wanted to make a TestRenderer that accumulated the text to output of these commands into a String instead of just dumping them to the console, you might do it like this. But notice a couple of things about this.
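Such an accumulating sink might be sketched like this (the lineTo name and string format are illustrative, not from the slides):

```swift
protocol Renderer {
    func lineTo(p: (x: Double, y: Double))
}

// Commands flow in; the accumulated text is the only observable state,
// so reference semantics are a natural fit — but the abstraction stays
// a protocol, and the class is final.
final class TestRenderer: Renderer {
    var result = ""
    func lineTo(p: (x: Double, y: Double)) {
        result += "lineTo(\(p.x), \(p.y))\n"
    }
}
```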
First, it's final, right? Second, it doesn't have a base class. That's still a protocol.
I'm using the protocol for the abstraction. Okay, a couple of more cases. So, we live in an object-oriented world, right? Cocoa and Cocoa Touch deal in objects. They're going to give you baseclasses and expect you to subclass them.
They're going to expect objects in their APIs. Don't fight the system, okay? That would just be futile.
But, at the same time, be circumspect about it. You know, nothing in your program should ever get too big, and that goes for classes just as well as anything else.
So, when you're refactoring and factoring something out of a class, consider using a value type instead. Okay, to sum up.
Protocols, much greater than superclasses for abstraction. Second, protocol extensions, this new feature to let you do almost magic things.
Third, did I mention you should go see this talk on Friday? Go see this talk on Friday. Eat your vegetables.
Be like Crusty. Thank you very much. [Silence]
https://developer.apple.com/videos/play/wwdc2015/408
From: Toon Knapen (toon.knapen_at_[hidden])
Date: 2004-08-16 04:17:57
I added an extension module 'f.jam' to be able to compile fortran
sources with bjam version 2. Now I also tried to add requirements for
include files but I did not find a way to make it work. What I did in my
f.jam is:
import toolset : flags ;
flags f.compile INCLUDES <include> ;
and then use the $(INCLUDES) variable on the command-line where the
fortran compiler is invoked. But the INCLUDES variable remains empty.
Any ideas how I can solve this?
(my project is in attachment)
--------------010901080909080104070303 Content-Type: application/zip;
name="fortranproject.zip"
Content-Transfer-Encoding: base64
Content-Disposition: inline;
filename="fortranproject.zip"
[Attachment content not displayed.] --------------010901080909080104070303--
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/boost-build/2004/08/6987.php
The Allegro licence is absolutely clear unless you are a lawyer.
However, it's written quite informally so this mini-FAQ tries to clarify
some things.
As the error message suggests, you need to provide more memory for the
compiler to use. The go32-v2 program will tell you how much is
currently available. If you are running under DOS, try to free up more
disk space for use as a swapfile. When using win95, increase the DPMI
memory limit in the properties for your DOS session to 65535 (you'll
have to type this in by hand, because the pulldown list doesn't go
above 16384).
As the error message suggests, there is a conflict between newer
versions of gcc and older versions of the libc. You must upgrade your
djdevxxx.zip package to the latest update (timestamp 11 August 2002)
of DJGPP-2.03 or above.
You haven't read the docs, have you? :-) You need to link your program
with the library file, liballeg.a. First, make sure you have installed
everything properly (running make install should do this for you).
Second, if you are compiling from the command prompt or with a makefile,
add -lalleg to the end of your gcc command line, or if you are using Rhide,
go to the Options/Libraries menu, type alleg into the first empty field,
and make sure the box next to it is checked.
No, sorry. For starters, liballeg.a is about 450k, but you'd probably
also want various utilities like the grabber, sound setup, etc. And
what about all the example programs? If we included compiled versions
of everything, a binary distribution would be over 7 megs: way too big
to be practical! More seriously though, there just isn't any reason
why you can't build it yourself. A compiler is a deterministic
process: given a particular input file and command line, it will always
produce the same output. If this isn't working, you either have the
wrong input files (ie. your copy of Allegro is broken in some way), or
the wrong command line (it is hard to see how that could happen, since
all you have to do is type make...) or your compiler is broken, ie. you
didn't install djgpp properly. You need to find and fix the problem,
not just sweep it under the carpet by getting someone else to compile
Allegro for you...
GNU tools write their error messages to the error stream, stderr.
Unfortunately command.com is too stupid to know how to redirect this,
but fortunately DJ was smart enough to work around that, so you can
use his redir program to capture the output messages, for example
redir -eo make > logfile.txt
Your mileage may vary. Some people have reported problems, while
others say that it works fine. Use the Windows version of Allegro
if you want to make Windows programs. If you want to run DOS
programs, use DOS!
Yes, but with some caveats. If you are using the OpenDOS version of
EMM386, you must disable their DPMI implementation (specify DPMI=OFF
on the EMM386.EXE device line in your config.sys). You should also
make sure the PIC=OFF flag is set, but this is the default so it won't
be a problem unless you have specifically enabled it.
You can't. The limit is imposed by the VGA hardware, not by Allegro.
To access more than 256k of video memory you need to use an SVGA mode,
which means either switching to a higher resolution or getting a copy
of the SciTech Display Doctor, which provides several low resolution
SVGA modes.
It does for some people, but not for others. The problem is that
Creative Labs refuse to release any specs, so we don't know how to
write a driver for it. Complain to them, or buy a different card from
a more reasonable manufacturer.
This might be because you have the volume set very low: try changing
this in the setup program. Also, Allegro is mixing several sounds into
a single output buffer, unlike the Windows sound player that only
plays one sample at a time, so each individual sound can only get a
smaller percentage of the total output volume. This is just the price
you pay for multiple output channels. If you don't like it, use the
setup program to tweak the number of channels: this can be any power
of two less than or equal to 64, and the smaller you make it, the
louder the sound will be. Alternatively, use set_volume_per_voice(),
described in the docs. This will enable you to adjust the overall
volume of Allegro's digital sound output.
Try using a FreeBE/AF driver (),
or the commercial SciTech Display Doctor ().
If it still doesn't work, post a description of your problem to the
Allegro mailing list, along with a copy of the output from running the
afinfo and vesainfo programs.
The VBE/AF interface already provides this for the video drivers: see
the FreeBE/AF project on.
For more general things like the sound, VESA, and mode-X routines,
this would be very difficult to do because the drivers depend on a lot
of helper functions from the core lib. The djgpp DXE format is nowhere
near flexible enough to support this, and we don't want to make Allegro
dependent on any other dynamic linking packages.
Well duh, you need to increase the size of your environment then :-)
You can do this by changing the settings for your DOS box (click the
system menu and select "properties"), or at startup in your config.sys
file (eg. shell=c:\command.com /e:8192 /p).
Make sure that you don't have a semi-colon appended to your MSVCDIR
variable if you are using MSVC, to your MINGDIR variable if you are
using MinGW or to your BCC32DIR variable if you are using BCC.
Also run make -v from the command line and make sure you are using
GNU make and not Borland make or Microsoft make.
You need to tell your compiler how to find the DirectX include files
and libraries: put the DirectX SDK /include and /lib directories in the
compiler/linker path. Alternatively, if you don't want to modify any
configuration stuff, you can simply copy the files from the DirectX SDK
/include and /lib directories to the corresponding ones for your
compiler.
You need to upgrade to a more recent version of the DirectX SDK, at
least version 5, which you can get from the Microsoft Developer site.
If an anti-virus software (Norton or McAfee for example) is running in
the background on your computer, try to disable it temporarily.
You need to write END_OF_MAIN() just after your main() function.
Allegro uses this, along with some preprocessor magic, to turn a normal
main() function into a Windows-style WinMain() entry point.
Make sure you're building a Windows GUI Application, and not a Windows
Console Application. This is a setting when you create the project in
MSVC, Dev-C++ or Borland C++ Builder. Alternatively, this is specified
on the command line by the -subsystem:windows option for the MSVC
linker, by the -Wl,--subsystem,windows option for the MinGW compiler
or the -tW option for the Borland C++ compiler. Either that, or define
the preprocessor symbol ALLEGRO_USE_CONSOLE prior to including Allegro
headers if you really need the console for your program.
Disable direct updating for the DirectX windowed driver by using the
dedicated configuration variable. See the 'Configuration routines'
section in the docs and the allegro.cfg template file for more
detailed information.
You need to write END_OF_MAIN() just after your main() function.
Allegro uses this, along with some preprocessor magic, to get a copy
of your argv[] parameters (it needs those for various internal things).
You are probably on a Darwin/MacOS X system. If so, make sure the symbol
USE_CONSOLE is not defined in your program: it is a deprecated symbol
that must be replaced by ALLEGRO_USE_CONSOLE. Also note that the magic
main can't be disabled on such a system: you simply can't define the
symbol ALLEGRO_NO_MAGIC_MAIN in a program linked against Allegro.
You need to redirect stderr into a file, so you can view the messages later.
The method for doing this depends on your shell: if you are using a
Bourne-style shell like bash, try make 2> logfile.txt
You need to teach the dynamic linker where to find the Allegro shared library.
See docs/build/unix.txt, near the end of the 'Installing Allegro' section.
There are two possible reasons:
1) DGA2 may support different resolutions/color depths than X,
run the gfxinfo program to know what combinations you can use,
2) You may have a buggy DGA2 implementation, see next question.
You are probably using a XFree86 server with a very buggy DGA2
implementation, such as 4.0.3 (shipped with Red Hat 7.1 for example).
Upgrading to 4.1.0 will probably solve your problem. You can obtain
it by selecting the directory suited to your platform and following
the instructions you find in the Install file.
The grabber needs to be linked with the code in datedit.c. But you
shouldn't have to worry about doing this by hand: just run make and
that will build everything for you.
You need to update your version of GNU binutils. See readme.txt to
find out what the minimum required version you need is.
Not unless <foobar> is mentioned in readme.txt as one of the
supported platforms. You could port it, but that is usually a lot of
work. If <foobar> is a 16 bit DOS compiler like Borland C, you
might as well just forget the idea :-)
WIP stands for "work in progress", and refers to any changes that are
more recent than the last official release. WIP versions of the
library can be obtained as patches from the Allegro website, and are usually
quite stable, although obviously not so well tested as a final release
version.
Do you have a copy of patch.exe? If not, go and get it from the same
place that you downloaded the rest of djgpp: this tool is a standard
part of the compiler distribution. Similarly, you can get the Mingw
compiled version from.
That file is distributed separately in the WIP versions, in the alldata.zip archive.
This is just the way that the video hardware works: there can only be
one palette in use at any given moment. Either convert your images so
they all use the same palette, or switch to a truecolor graphics mode.
See the Allegro.cc homepage for some utilities, for example FixPal and
Smacker.
The VGA hardware only uses 6 bits for each color component, which
means the red, green, and blue values in the palette structure range
from 0 to 63, not all the way up to 255. That gives you a potential
2^18 = 262144 different colors, or 64 shades of grey. If you need more
than this you could try using VESA function 0x4F08 to select an 8 bit
wide DAC, but Allegro doesn't support this directly and I don't know
how reliable it will be across different hardware.
When you are in a 256 color mode, the VGA card displays color #0
around the border of the display area (in truecolor modes it displays
black). Your funny color will go away if you change the palette so
that color #0 is black.
With great difficulty :-) There is no such easy trick as just altering
the palette, so you will have to repeatedly redraw the image in a
lighter or darker form. You could draw black translucent rectangles
over the screen to darken it down, or use draw_lit_sprite() to tint a
bitmap while copying it to the screen, but be warned that these
operations are expensive and will require a fast PC!
Also, have a look around for add-on packages (notably FBlend v0.5)
that attempt to make this operation as fast as possible.
fade_in() and fade_out() only work in 8-bit paletted modes.
See the previous question for details.
In your favourite paint program, get hold of the RGB sliders and drag
the red and blue ones up as far as they go (usually to 255, but this
will depend on what units your software uses), and the green one right
down to zero. The result is a special shade of Magic Pink, or as some
people prefer to call it, magenta.
Remember that the vertex positions are stored in fixed point format,
so you must use the itofix() macro or shift your coordinates 16 bits
to the left.
Remember that the angle of rotation is stored in fixed point format,
so you must use the itofix() macro or shift your coordinates 16 bits
to the left. For example, rotate_sprite(bmp, spr, x, y, itofix(32))
will rotate the graphic by 45 degrees.
You are probably trying to initialise the dialog structure with a
pointer to your bitmap, right? That won't work because the dialog is
created at compile time, but the bitmap is only loaded at runtime, so
the compiler doesn't yet know where it will be located. You need to
fill in the dialog structure with a NULL pointer, and then copy the
real bitmap pointer into the dp field as part of your program init
code, after you've loaded the bitmap into memory.
It depends on exactly what you are doing. If your images are totally
opaque, there is no advantage to using an RLE sprite, and it will
probably be faster to use a regular bitmap with the blit() function.
If your graphics contain masked areas, an RLE sprite may be both smaller
and faster than the draw_sprite() function, depending on your CPU
and your bitmaps.
Compiled sprites are in general quite a bit faster than both the
others for masked images, and slightly faster for opaque graphics, but
this is far more variable. They are at their best with small sprites,
on older machines and in mode-X, and may actually be slower than
blit() when using SVGA modes on a pentium (the large size of a
compiled sprite is very bad for the cache performance).
You need to make sure the game logic gets updated at a regular rate,
but skip the screen refresh every now and then if the computer is too
slow to keep up. This can be done by installing a timer handler that
will increment a global variable at your game logic speed, eg:
volatile int speed_counter = 0;

void increment_speed_counter()
{
   speed_counter++;
}
END_OF_FUNCTION(increment_speed_counter)

void play_the_game()
{
   LOCK_VARIABLE(speed_counter);
   LOCK_FUNCTION(increment_speed_counter);
   install_int_ex(increment_speed_counter, BPS_TO_TIMER(60));

   while (!game_over) {
      while (speed_counter > 0) {
         update_game_logic();
         speed_counter--;
      }
      update_display();
   }
}
Add a call to save_bitmap() somewhere in your code. See the
save_bitmap() documentation for a discussion of one common pitfall
when doing this, and some example code.
Call srand(time(NULL)) at the beginning of your program, and then use
rand()%limit to obtain a pseudo-random number between 0 and limit-1.
There is no need. The linker will only include the parts of the
library that you actually use, so if you don't call any of, say, the
texture mapping or FLIC playing functions, they will be left out of
your executable. This doesn't work perfectly because a lot of the
Allegro code uses tables of function pointers that cause some
unnecessary routines to be linked in, so the majority of the graphics
functions will be included in every executable, but I have tried to
keep this to a minimum. See allegro.txt for information about more
precise ways to remove some of the graphics and sound drivers.
No. This sort of hardware support would be most useful as part of a
proper 3D API, which Allegro is not, and will never be. If you want
to do some work on this, the MESA library (a free implementation
of OpenGL) is IMHO the place to start.
However, if you are interested in using OpenGL for graphics and Allegro
for everything else, you can try the various add-on libraries, such as
AllegroGL.
Several very good ones already exist, for instance the JGMOD or DUMB
packages. See the audio library extensions section on the Allegro.cc
website. You are not allowed to suggest that one of these libraries be
merged into Allegro, because this topic has already been done to death
on the mailing list and we are tired of it.
There are several networking packages currently in development or
floating around on the net, though, and in our opinion this sort of
code is more useful as an external library than it would be as part of
Allegro.
Unisys has a patent on the LZW compression algorithm that is used by
the GIF format. We want everything in Allegro to be freely usable
without any restrictions whatsoever, which means we can't include any
code that is subject to licensing or the payment of royalties.
Perhaps. Try to isolate the smallest fragment of code that is able to
reproduce the problem, and we'll have a look at it. If you can send us
a 10 line program, we will fix it. 100 lines, and we can probably fix
it. 1000 lines, and we don't have a chance :-)
Sure. See the giftware terms in readme.txt. We don't mind what you do
with it, and there are no problems with commercial usage.
Whenever it is done! A little encouragement is always welcome, but we
don't have any completion deadlines and we're not going to make one up
for you :-) As soon as it is finished, it will be released.
The grabber can import directly from GRX or BIOS format .fnt files, or
you can draw them onto a .pcx image using any paint program. There are
utilities available that will convert Windows TrueType fonts into this
.pcx format.
See the Allegro homepage for some links. You can use Gravis patches
(.pat format), or SoundFont 2.0 (.sf2) files, but the latter must be
converted into a patches.dat file with the pat2dat utility.
You need to download the makertf conversion utility and the Windows
Help compiler.
Make a temporary directory, copy the allegro.txi file from the
allegro/docs dir, and run the commands makertf --no-warn allegro.txi
-o allegro.rtf -J allegro.hpj followed by hcp allegro.hpj. The
second command will give a lot of warnings, but they can safely be
ignored.
The allegro.rtf file can be read directly into Microsoft Word and
printed from there, but you should right-click and update the table of
contents and index fields to fill them with the correct data first.
Alternatively you can install the TeX package and use the tex and
dvips programs to convert allegro.txi into Postscript format.
Check the Allegro.cc homepage.
If you have anything to add to this, please post the URL!
It simplifies the maintenance of your program, in case the value of PI
ever needs to be changed. Also it will make your program more portable
to other compilers that use different values of PI.
A number of graphics cards have buggy or incomplete VESA
implementations, and often the vsync() function is not implemented. For
examples on flicker-free drawing, look at the code for the demo game,
which uses a variety of methods to draw itself.
If the code works without optimisations, then it could be the
compiler's fault. You can try beating the compiler into submission,
for example:
while (!key[KEY_ENTER])
   rest(0);
For this case, however, it would be better to use readkey() instead.
Or consider upgrading or downgrading your compiler.
You are probably declaring the use of a namespace before including
Allegro headers. For example:
#include <iostream>
using namespace std;
#include <allegro.h>
Move the `using' declaration after the `include' directives referring
to Allegro headers:
#include <iostream>
#include <allegro.h>
using namespace std;
Tuple<T1, T2, T3, T4, T5, T6, T7, TRest>.IComparable.CompareTo Method
Compares the current Tuple<T1, T2, T3, T4, T5, T6, T7, TRest> object to a specified object and returns an integer that indicates whether the current object is before, after, or in the same position as the specified object in the sort order.
Namespace: System. This member is an explicit interface member
implementation and can be used only when the Tuple<T1, T2, T3, T4, T5,
T6, T7, TRest> instance is cast to an IComparable interface.
This method provides the IComparable.CompareTo implementation for the Tuple<T1, T2, T3, T4, T5, T6, T7, TRest> class. Although the method can be called directly, it is most commonly called by the default overloads of collection-sorting methods, such as Array.Sort(Array) and SortedList.Add, to order the members of a collection.
This method uses the default object comparer to compare each component.
The following example creates an array of octuples whose components are integers that contain a range of prime numbers. The example displays the elements of the array in unsorted order, sorts the array, and then displays the array in sorted order. The output shows that the array has been sorted by Item1, or the tuple's first component. Note that the example does not directly call the IComparable.CompareTo(Object) method. This method is called implicitly by the Sort(Array) method for each element in the array.
using System;

public class Example
{
   public static void Main()
   {
      // Create an array of 8-tuple objects containing prime numbers.
      Tuple<Int32, Int32, Int32, Int32, Int32, Int32, Int32, Tuple<Int32>>[] primes =
         { new Tuple<Int32, Int32, Int32, Int32, Int32, Int32, Int32, Tuple<Int32>>
               (2, 3, 5, 7, 11, 13, 17, new Tuple<Int32>(19)),
           new Tuple<Int32, Int32, Int32, Int32, Int32, Int32, Int32, Tuple<Int32>>
               (23, 29, 31, 37, 41, 43, 47, new Tuple<Int32>(55)),
           new Tuple<Int32, Int32, Int32, Int32, Int32, Int32, Int32, Tuple<Int32>>
               (3, 2, 5, 7, 11, 13, 17, new Tuple<Int32>(19)) };

      // Display the 8-tuples in unsorted order.
      foreach (var prime in primes)
         Console.WriteLine(prime.ToString());
      Console.WriteLine();

      // Sort the array and display its 8-tuples.
      Array.Sort(primes);
      foreach (var prime in primes)
         Console.WriteLine(prime.ToString());
   }
}
// The example displays the following output:
//    (2, 3, 5, 7, 11, 13, 17, 19)
//    (23, 29, 31, 37, 41, 43, 47, 55)
//    (3, 2, 5, 7, 11, 13, 17, 19)
//
//    (2, 3, 5, 7, 11, 13, 17, 19)
//    (3, 2, 5, 7, 11, 13, 17, 19)
//    (23, 29, 31, 37, 41, 43, 47, 55)
DB2 9 XML support overview
DB2 9 introduced the native XML data type. It stores XML in a parsed hierarchical (native)
format and allows a user to query the data using XQuery and SQL/XML languages. DB2 XQuery
expressions use XML documents stored in the DB2 database as the source of XML for
querying. The functions xmlcolumn and sqlquery are used to concatenate
the XML values stored in the database and provide XML sequences to the
XQuery parser.
Apart from the XQuery language, DB2 9 provides SQL/XML functions to
work with both XML and relational data at the same time in a single
query. The SQL/XML functions xmlquery, xmltable and xmlexists help
embed XQuery in SQL statements.
DB2 9 also supports schema validation. New commands and stored procedures were introduced for
registering the schemas to the database to make them function as database objects.
These registered schemas can be used to validate the XML values prior to or after an
insert operation using the
xmlvalidate function. These
schemas can also be annotated for decomposing the XML data into relational tables.
Publishing functions like
xmlelement,
xmlattributes and the like can be used to transform relational values
into an XML document. The utilities (import, export, etc.) were also
updated for XML data support. For more details on XML support in Version 9 refer to
the Resources section.
The existing DB2 9 functionality is quite powerful in handling and working with XML data. DB2 V9.5 enhances some of the existing features and introduces additional functionality to make XML handling more powerful and efficient. Below is a list of the functionality addressed in this article:
- Support for XML in Non-Unicode databases
- Sub-document update
- Base-table storage/compression
- Compatible XML schema evolution
- Validation triggers
- Validation check constraints
- XML replication
- XML federation
- XML load
- sqlquery() parameters
- User-friendly publishing functions
- Default parameter passing for SQL/XML functions
- XSLT function
- XML decomposition enhancements
- XML index enhancements
- Index advisor and optimizer enhancements
- DB2 Data Web services
Most of the code samples in the following sections are based on the DB2 V9.5 sample
database. You can create the sample database by running the command
db2sampl
from DB2 V9.5 command line processor. It can also be
created.
Support for XML in non-Unicode database
DB2 9 allows users to create the database with XML data only in UTF-8 codepage. This means that, even if the values in the XML document are in ASCII format, it needs to be stored in a UTF-8 database. DB2 V9.5 removes this restriction and allows a user to create the database with an XML column in any code set. Because this restriction is removed, even if the database is not created in UTF-8, a user can alter a table to add an XML column or create a new table with XML columns.
The following code creates a sample database and a sample table with an XML column:
Listing 1. Database with default codepage
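The listing itself did not survive extraction; the following is a hedged sketch of the kind of commands it likely contained (database, table and column names are illustrative, not taken from the article):

```sql
-- Create a database in a non-Unicode codepage (allowed from V9.5 on)
-- and a table holding both relational data and an XML column.
CREATE DATABASE mydb USING CODESET ISO8859-1 TERRITORY US
CONNECT TO mydb
CREATE TABLE customer_info (
    cid  BIGINT NOT NULL PRIMARY KEY,
    info XML
)
```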
DB2 V9.5 allows a user to update a portion of an XML document stored in the database. It
introduces the XQuery
transform expression which uses four
updating expressions --
insert, delete, replace and
rename to modify the XML document fragments. A transform expression is a part of the XQuery language and hence can be used in an XQuery expression. With DB2 V9.5 an updating expression can be used only within a transform expression. A transform expression has the following clauses:
COPY: The copy clause of the transform expression binds the source XML value to a variable. The updating expression works on this copy further in the query.
MODIFY: A modify clause modifies the copied XML as per the updating expressions. Multiple updating expressions can be used in the
MODIFY clause.
RETURN: The return clause returns the modified values.
The four updating expressions are explained below:
- The insert expression is used to insert a new XML node into an existing XML document. You can specify the position of the insertion within the XML document.
- The replace expression is used to update a particular value of a particular node.
- The delete expression is used to delete a particular node from the XML document.
- The rename expression is used to rename a node.
Because the
transform expression is a part of the XQuery language, it can be used within an SQL
statement using the
xmlquery function and can also be used in
an update statement to update an XML value.
The code in Listing 2 updates the info column of the customer table in the sample
database. It updates the XML document to match the
CID
attribute with the value of a cid relation column.
Listing 2. Transform expression with UPDATE to table
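The listing's code is missing here; below is a hedged reconstruction of such an UPDATE, assuming the V9.5 sample database's customer table (a relational cid column and an info XML column whose customerinfo element carries a Cid attribute). The variable names and the cid value are illustrative:

```sql
-- Copy the stored document, replace its Cid attribute with the value
-- of the relational cid column, and store the modified copy back.
UPDATE customer
   SET info = XMLQUERY(
         'copy $new := $doc
          modify do replace value of $new/customerinfo/@Cid with $c
          return $new'
         PASSING info AS "doc", cid AS "c")
 WHERE cid = 1002
```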
If an XML validation check constraint exists on the XML column of the table, the user may need to validate the new XML value before the update either manually or by creating a trigger.
The following code example removes an item from the
purchaseorder table and gives the modified document as a result of the query.
Listing 3. Transform expression
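The original listing is lost; a hedged sketch of a standalone XQuery transform that deletes one item and returns the modified document without changing the stored copy (the poid and partid values are illustrative):

```sql
XQUERY
  copy $po := db2-fn:sqlquery("SELECT porder FROM purchaseorder WHERE poid = 5000")
  modify do delete $po/PurchaseOrder/item[partid = "100-100-01"]
  return $po
```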
The sample xupdate.db2 gives various examples of the transform expression. You can find this sample under the sqllib/samples/xml/xquery/clp directory.
Base-table row storage/compression
In DB2 9, XML data is stored in a different storage location than the relational data, called the XML data area (XDA). DB2 9 stores all XML documents in this location, which means that accessing XML values along with relational data needs more I/O. If the XML documents are small enough that the page still has room for them after storing the relational values, storing the XML in the same page provides a good performance benefit. Some of the benefits are:
- Compression: As the XML data is stored along with the relational data, it can be compressed using the compression technology introduced in DB2 9. As the relative size of an XML value is bigger than that of relational data, a good amount of compression can be achieved.
- Query performance: Inlining of XML data makes the base table larger than usual, as the XML data is now stored at the same location as the relational data. This benefits queries if the XML data is accessed as often as the other relational values in the table.
DB2 V9.5 introduces base table row storage of XML data. This means that XML data can be
stored along with relational data on the same physical page if the total size of
relational and XML data per row doesn't exceed the size of a page. Base
table row storage of the XML data is possible only when the total size
of a record doesn't exceed the page size. In case it does, XML data is
stored in the XML storage
location (XDA) as usual. The maximum page size allowed in DB2 is 32 KB, so the maximum
inline length of an XML value has a limit of 32 KB. If the size of the internal tree
representation of the document is less than the specified inline length, they are ready
to be inlined. The code in Listing 4 creates a table with base table row storage of XML
data:
Listing 4. Base table storage of XML data
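The listing is missing; a hedged sketch of the DDL it would contain (table and column names and the inline length are illustrative):

```sql
-- INLINE LENGTH asks DB2 to keep documents whose internal
-- representation fits in 3000 bytes in the base table row itself;
-- larger documents fall back to the XDA as before.
CREATE TABLE patient (
    id     INTEGER NOT NULL,
    record XML INLINE LENGTH 3000
)
```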
Use the
INLINE option to specify that XML data is stored along with the relational data. It is advantageous for the queries that fetch XML data as all the data will be found in the same place. On the other hand, for the queries that access non-XML data, this may result in more I/O to find the relational data.
The ideal situation for base table row storage of XML data is when the table has only one column of XML type and the maximum size of the XML document doesn't exceed the page size.
Compatible XML schema evolution
To increase the flexibility and provide better schema evolution, DB2 V9.5 introduces the
update feature for an XML schema. A previously registered schema can be updated with the
new schema if they are compatible with each other. Two schemas are compatible if the XML
documents validated with the older schema are still valid with the new schema.
For example, an addition of a new optional element in the new schema is compatible with the old one as the XML documents validated with the old one remain valid because of the optional nature of the new element. New XML documents can have this optional element and can be validated using the new schema. There is no need to do anything once the schema is updated as the old documents are still valid. Schema update fails if the schemas are not compatible. The annotation and the identifiers for the old schema are retained too.
To update a schema, the XSR_UPDATE stored procedure is introduced. The stored procedure checks for compatibility and updates the schema only if the new schema is compatible. To update a schema, a user needs to register both schemas independently and then call the XSR_UPDATE stored procedure. A user can choose to retain the new schema or drop it once the old one is updated.
Let's take an example of the info column in the customer table. The info column has the addr element with the following definition. (You can find the complete schema under the sqllib/samples/db2sampl directory.)
Listing 5. Old XML schema definition
Suppose that in the future a user wants to add a
HouseNo element, which is an optional element.
To update the already registered schema so that the identifiers remain the same, you
need to first register the new schema with an additional element. The definition of the new addr element is:
Listing 6. New XML schema definition
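The schema fragment itself is missing; a hedged sketch of what the new addr definition could look like (element names other than HouseNo follow the sample customer schema; minOccurs="0" is what makes the addition compatible with the old schema):

```xml
<xs:element name="addr">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="street" type="xs:string"/>
      <xs:element name="city" type="xs:string"/>
      <xs:element name="prov-state" type="xs:string"/>
      <xs:element name="pcode-zip" type="xs:string"/>
      <!-- new in the updated schema; optional, so old documents stay valid -->
      <xs:element name="HouseNo" type="xs:string" minOccurs="0"/>
    </xs:sequence>
    <xs:attribute name="country" type="xs:string"/>
  </xs:complexType>
</xs:element>
```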
After registration, the following stored procedure can be used to update the existing schema with the new one:
Listing 7. Update schema using XSR_UPDATE
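The call itself is missing; a hedged sketch of the stored procedure invocation (the relational and XML schema names are illustrative):

```sql
-- Update the registered schema STORE.CUSTOMER with the newly registered
-- STORE.NEWCUSTOMER; the final 0 keeps the new schema registered
-- after the update.
CALL SYSPROC.XSR_UPDATE('STORE', 'CUSTOMER', 'STORE', 'NEWCUSTOMER', 0)
```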
The last argument value, 0, indicates that the new schema should not be dropped after the update. If the value is set to any non-zero value, the new schema is dropped after the update operation.
The sample code xsupdate.db2 demonstrates the compatible XML schema evolution. The sample can be found in the sqllib/samples/xml/clp directory.
Validation trigger support
To increase the application flexibility and provide the user automatic validation of
an incoming XML document, DB2 V9.5 extends the XML support in before triggers.
Before triggers are triggers created using the BEFORE option and
executed before insert/update/delete operations. XML values can be
referenced through the new transition variable in before triggers. The
action part of the trigger can apply the
xmlvalidate function on the new
values. The
WHEN clause of the trigger can be used to
check if the new value is validated or not validated according to any of the
specified schemas. This can be done by using the IS VALIDATED or
IS NOT VALIDATED ACCORDING TO XMLSCHEMA clause in the
WHEN condition. Based on the output from the
WHEN condition, you may want to validate the XML
value or set a new value. At present, only the
xmlvalidate function is allowed on the new transition variable of XML
type. When created, this trigger is automatically activated and executed for every insert into the table, and hence can be used to validate XML values that were not validated in the insert statement.
The following code is a DDL statement to create the customer table and a trigger
defined on the table. The trigger is activated whenever there is an insert operation
on the table. The trigger validates an XML document before inserting if the document
is not validated in the insert statement using the
xmlvalidate function. The example assumes that the table customer does not exist and the customer schema is already registered in the database.
Listing 8. A trigger defined on a table
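The listing's DDL is missing; below is a hedged reconstruction of the table and trigger described above (table, trigger and XML schema names are illustrative):

```sql
CREATE TABLE customer (
    cid  BIGINT NOT NULL PRIMARY KEY,
    info XML
)

-- Validate incoming documents automatically unless the INSERT
-- statement already validated them.
CREATE TRIGGER customer_insert
    NO CASCADE BEFORE INSERT ON customer
    REFERENCING NEW AS n
    FOR EACH ROW
    WHEN (n.info IS NOT VALIDATED ACCORDING TO XMLSCHEMA ID customer)
    SET n.info = XMLVALIDATE(n.info ACCORDING TO XMLSCHEMA ID customer)
```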
The sample xmltrig.db2 gives different scenarios and the operations which can be performed inside the trigger for assigning new values and validation of XML values. The sample can be found in the sqllib/samples/xml/clp directory.
XML validation check constraints
A check constraint is a type of constraint that can be put on a table column at the time the table is created. Whenever the constraint is satisfied, DB2 allows the insert operation; otherwise the insert fails.
DB2 V9.5 supports check constraints for XML values. A user can enforce
validation on an XML column using check constraints. Similar to the before trigger,
the
IS VALIDATED ACCORDING TO XMLSCHEMA clause can be
used in the check constraints to enforce the validation. The only difference here
is that this constraint only checks for the validation condition and does not
actually do the validation. A user either needs to explicitly validate the XML
value using
xmlvalidate in the insert statement or use a
before trigger to do the automatic validation. Insert succeeds only when the XML value is valid according to the schemas specified in the check constraint.
The before trigger and the check constraint on a table's XML column together ensure that the XML values are valid as per the specified schema. While a before trigger does the automatic validation whenever an insert operation is issued, a check constraint forces the user to explicitly use the xmlvalidate function. Both can be used together to enforce the integrity of the XML values.
The code in Listing 9 alters the table customer created in Listing 8 to put a check constraint on the table:
Listing 9. Check constraint
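The listing is missing; a hedged sketch of such a constraint, using the hypothetical table and schema names from the trigger discussion above:

```sql
-- Reject any row whose info document is not already validated
-- according to the registered customer schema.
ALTER TABLE customer
    ADD CONSTRAINT info_validated CHECK
        (info IS VALIDATED ACCORDING TO XMLSCHEMA ID customer)
```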
The check constraint created above always checks that the document is validated according
to the customer schema. In case the trigger does not exist, the user needs to explicitly
validate the value using the
xmlvalidate function.
The sample code xmlcheckconstraint.db2 demonstrates how users can create a view on the different tables with same structure with check constraints and implement partitioning of tables by schema.
DB2 V9.5 supports XML data replication to another database that supports XML data. Replication can be done using WebSphere® Replication Server version 9.5 or WebSphere Data Event Publisher version 9.5. The WebSphere Replication Server can be used to replicate to federated targets that support XML data types, or the XML can be mapped to CLOB/BLOB columns.
Replication for XML data is done within the transaction message just like any other relational column, so the maximum transaction message length puts a restriction on the size of XML being replicated. In case the size of the data is large, a placeholder document can be inserted in place of the original document. An exception can also be inserted into the exceptions table.
While doing the replication, it's impossible to replicate the XML schema registration. In addition, the XML data cannot be validated during replication.
The WebSphere Federation Server Version 9.1 supports pureXML and hence can be used to integrate the local and remote XML store data. XML data from various databases can be viewed as local and DB2 XQuery and SQL/XML can be applied to query the data. A view can be created on the remote federated database to view the data as a serialized string and can again be parsed to XML value at the WebSphere Federation Server. Now DB2 can query this data using SQL/XML and XQuery languages by using the nicknames created for the view.
XML data coming from the different federated sources can also be validated using the DB2 xmlvalidate function, in the same manner as local XML values. For more details on how to use WebSphere Federation Server for pureXML, please see the Resources section.
DB2 9 primarily supports two ways of populating a table with XML values: an insert statement to insert XML values into the table, and the import utility to import bulk data into the table. DB2 V9.5 extends this support to the load utility. Most of the options that import supports for XML data are also supported by load. You can specify the path for XML data using the XML FROM clause, and XML data can be validated during load using the XMLVALIDATE USING clause. The available options are XDS, SCHEMA and SCHEMALOCATION HINTS. When specifying the XDS option, the DEFAULT, IGNORE and MAP clauses can be used; the meaning of these options is the same as in import. The codepage of the data can be specified using the file type modifiers XMLCHAR and XMLGRAPHIC. The XML data specifier (XDS) is used in the data file to specify the XML value. The behavior of load restart remains the same: it rebuilds all the indexes by scanning all the XML documents.
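Pulling these clauses together, a load invocation might be sketched as follows; the file name, XML path, table name and schema identifiers (S1 through S4) are all invented for illustration:

```sql
-- Sketch: load delimited data, read XML documents from /dbpath/xml,
-- treat the XML as character data, and validate per the XDS rules:
-- default to schema S1, skip validation for S2, and treat S3 as S4.
LOAD FROM myfile.del OF DEL
  XML FROM /dbpath/xml
  MODIFIED BY XMLCHAR
  XMLVALIDATE USING XDS DEFAULT S1 IGNORE (S2) MAP ((S3, S4))
  INSERT INTO mytable;
```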
The sample code xmlload.db2 demonstrates the load options available in DB2 V9.5 for XML data. You can find the sample in the sqllib/samples/xml/clp directory.
DB2 V9.5 provides the functionality of processing XML documents with XSL transformations within the database itself. An XML document stored in the database can be transformed to HTML by applying an XSLT stylesheet. To do this, DB2 V9.5 introduces the xsltransform function, which also supports stylesheets that use parameters. The xsltransform function can apply an XSLT stylesheet, stored in a database table column as an XML document, to an XML document. This gives the user the flexibility to retrieve a transformed XML document from the database and present it directly on the web.
Now, suppose you have this XML document:
Listing 10. XML document
along with this corresponding XSLT stylesheet:
Listing 11. XSLT stylesheet
These documents either need to be stored in a table or can be passed as parameters. When passing the values as parameters, make sure that the documents are well-formed XML documents. The data type of each parameter can be XML, VARCHAR, CLOB or BLOB. Assuming both the document and the stylesheet are stored in a table, the following statement can be used to transform the XML document:
Note: The example assumes that the table in which the documents are stored is named xslt, the XML document column is named xmldoc, and the XSL document column is named xsldoc.
Listing 12. XSLTransform expression
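Under the naming assumptions in the note above, the statement might look like this sketch:

```sql
-- Apply the stylesheet in xsldoc to the document in xmldoc,
-- returning the result as a CLOB (here, the transformed HTML).
SELECT XSLTRANSFORM(xmldoc USING xsldoc AS CLOB(1M))
  FROM xslt;
```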
The query gives you an HTML document as output, which can be viewed in a browser. Listing 13 shows this HTML output:
Listing 13. HTML output from XSLTransform expression
Users can also store the XML documents and the XSLT stylesheets in separate tables and apply a single XSLT stylesheet to multiple XML values by joining the tables.
Publishing functions are used to transform relational data into an XML value. DB2 9 introduced SQL/XML support, which has been enhanced and simplified in DB2 V9.5. Some of the DB2 9 SQL/XML functions, like xmlelement, require providing the names of all the XML elements, attributes and other nodes, while the values of these elements are either derived from the relational columns of a table or provided explicitly. Sometimes users need XML values to be generated but don't want to bother about the element names. DB2 V9.5 extends the existing publishing functions with two new functions, xmlrow and xmlgroup, which derive both the names and the values of the XML elements from the columns of the table. While the xmlrow output is a sequence of row values represented as XML, the xmlgroup function groups all the values under a root node.
Consider a sample employee table that holds the address details of the employees, containing the following record.
The following query applies the xmlrow and xmlgroup functions to this row.
Listing 14. New publishing functions
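Since the sample table is not reproduced here, the column names below (id, name, city) are assumed; queries of this shape would derive element names directly from the columns:

```sql
-- One XML fragment per row, element names taken from the columns.
SELECT XMLROW(id, name, city) FROM employee;

-- All rows grouped under a single root element.
SELECT XMLGROUP(id, name, city) FROM employee;
```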
In DB2 9, to get the same result, xmlelement needs to be applied to each of the column values, in addition to providing each element name explicitly.
The sample code xmlintegrate.db2 provides more complex examples of these functions, comparing them with the DB2 9.1 versions of the same queries. You can find this sample under the sqllib/samples/xml/clp directory.
Parameter passing to sqlquery function
In DB2 9, the sqlquery function is used to embed an SQL statement within an XQuery expression. This function takes a string value as input, which must be a valid SQL fullselect. In DB2 9, there is no way of passing a parameter to this function from the XQuery statement.

DB2 V9.5 enhances this function by introducing a new parameter function, which takes an integer value as input. Now the sqlquery function can take multiple arguments, where the first argument is a string value representing a fullselect, followed by the values of the parameters. The first string argument to the sqlquery function can contain the parameter function, which is replaced by the arguments passed after the first mandatory string argument. The integer value passed to the parameter function gives the position of the argument, in the invocation of the sqlquery function, that replaces this function call. For example, parameter(1) tells the parser to replace it with the first argument after the string argument. The type of the argument should match the type of value the fullselect expects; a cast function can be used to cast the value to an appropriate type.
Let us take an example using the customer table in the sample database, which can be created by running the db2sampl command.
The customer table has the relational column cid, whose key value represents the customer id. The info XML column has an attribute Cid which also represents the customer id. If the data is consistent, then the attribute Cid value should be the same as the cid column value for a specific row. The following query finds the rows that are consistent: the value of the Cid attribute is passed to the sqlquery function to compare it with the relational cid value.
Listing 15. Passing parameter to sqlquery function
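A query of this shape can be sketched as the following XQuery (this is an illustration of the parameter mechanism described above, not the original listing):

```sql
-- Pass the Cid attribute as parameter(1) into the embedded fullselect
-- and keep only the customers whose relational cid matches it.
XQUERY
  for $info in db2-fn:xmlcolumn("CUSTOMER.INFO")/customerinfo
  where db2-fn:sqlquery(
          "select cid from customer where cid = parameter(1)",
          xs:decimal($info/@Cid)) = xs:decimal($info/@Cid)
  return data($info/@Cid);
```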
The query returns all the customer ids for which data is consistent.
The sample code xqueryparam.db2 provides good examples of passing one parameter and multiple parameters to the sqlquery function. The sample can be found in the sqllib/samples/xml/clp directory.
Default passing behaviour of the existing functions XMLQuery, XMLTable and XMLExists
In DB2 9, the functions xmlquery, xmltable and xmlexists are used to embed an XQuery statement within an SQL statement. Parameters are passed to these functions from the SQL statement using their PASSING clause. In DB2 9, if there are multiple occurrences of these functions within the same SQL statement, a separate PASSING clause is needed for each occurrence. This sometimes makes the query look complex and large. DB2 V9.5 extends these functions with a default passing mechanism: a column name can now be used as the variable name in the XQuery used in these functions, and DB2 passes that column by default when no explicit PASSING clause is used. This makes the query smaller and easier to understand. The following code gives an example based on the sample database tables. The query fetches the first item in the purchaseorder for the customer named Robert Shoemaker.
Listing 16. Default passing behaviour of SQL/XML functions
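A query of this shape, based on the sample database's purchaseorder and customer tables, might be sketched as follows (note the upper-case variable names matching the columns):

```sql
-- PORDER and INFO are passed implicitly because the XQuery variable
-- names match the (upper-case) column names in scope.
SELECT XMLQUERY('$PORDER/PurchaseOrder/item[1]')
  FROM purchaseorder, customer
 WHERE custid = cid
   AND XMLEXISTS('$INFO/customerinfo[name = "Robert Shoemaker"]');
```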
For the xmlquery function in the SELECT clause, the porder column of the purchaseorder table is passed by default. In the same way, for the xmlexists function, the info and custid columns of the customer table are passed by default. Make sure to write these variable names in capital letters: XQuery is a case-sensitive language, and relational column names are always stored in capital letters.
XML validation constraints
DB2 V9.5 enhances the IS VALIDATED clause used in the SELECT statement to include ACCORDING TO XML SCHEMA ID. Users can now provide multiple schemas and select only those XML values which were validated according to these schemas. DB2 V9.5 also provides the flexibility of using any XML expression, instead of only a column, as the operand. The example below selects only those documents from the customer table that were validated using the customer schema.
Listing 17. XML validation constraints in SELECT statement
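Assuming the schema was registered under the identifier CUSTOMER, such a query can be sketched as:

```sql
-- Return only documents validated against the CUSTOMER schema.
SELECT info
  FROM customer
 WHERE info IS VALIDATED ACCORDING TO XML SCHEMA ID customer;
```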
Annotated XML schema decomposition
DB2 9 supports annotation of XML schemas so that the data can be decomposed into relational tables, with a restriction on recursive schemas. DB2 V9.5 lifts this restriction: users can now annotate and decompose the data even when the schema is recursive.
DB2 V9.5 extends the decomposition to provide the order of insertion. This is important when data is decomposed into multiple tables that have a foreign key relationship; in that case, the primary table should be populated first to maintain the referential constraint. The order of insertion can be specified using the following annotation:
Listing 18. Schema annotation to provide the order of insertion
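Such an annotation is a configuration fragment inside the schema's appinfo; the rowset (table) names DEPT and EMPLOYEE below are assumed for illustration:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:db2-xdb="http://www.ibm.com/xmlns/prod/db2/xdb1">
  <xs:annotation>
    <xs:appinfo>
      <!-- Assumed rowset names: populate DEPT before the EMPLOYEE
           rows that reference it via a foreign key. -->
      <db2-xdb:rowSetOperationOrder>
        <db2-xdb:order>
          <db2-xdb:rowSet>DEPT</db2-xdb:rowSet>
          <db2-xdb:rowSet>EMPLOYEE</db2-xdb:rowSet>
        </db2-xdb:order>
      </db2-xdb:rowSetOperationOrder>
    </xs:appinfo>
  </xs:annotation>
  <!-- element declarations elided -->
</xs:schema>
```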
The sample codes recxmldecomp.db2 and xmldecomposition.db2 give good examples of annotating and decomposing a recursive schema, and of specifying the order of insertion into the tables, respectively. The samples can be found in the sqllib/samples/xml/clp directory.
DB2 9 introduced XML indexes, which can be created on a particular node of the XML documents stored in the database. The data type for the index can be VARCHAR, DOUBLE, DATE or TIMESTAMP. If the data type of the index doesn't match the element type in an XML document, DB2 inserts the XML value, but no index entry is created for this particular XML value.
DB2 V9.5 introduces an additional clause for XML indexes, REJECT INVALID VALUES. If the index is created with this clause and the data type of the index doesn't match the data type of the element in an XML document being inserted, the insert fails. Likewise, if the index is created after inserting such a value and the data types don't match, the index creation fails. Ignoring such mismatches remains the default behaviour in DB2 V9.5 and can also be specified explicitly using the IGNORE INVALID VALUES clause.
The following example creates an index on the Cid attribute of the customer table with the REJECT INVALID VALUES option.
Listing 19. XML index
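Such an index definition can be sketched as follows (the index name and the SQL type DOUBLE are assumptions):

```sql
-- With REJECT INVALID VALUES, an insert fails when
-- /customerinfo/@Cid cannot be cast to the index's SQL type.
CREATE INDEX cust_cid_idx ON customer (info)
  GENERATE KEY USING XMLPATTERN '/customerinfo/@Cid'
  AS SQL DOUBLE REJECT INVALID VALUES;
```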
Index advisor and optimizer enhancements
An index advisor can be used to get suggestions on indexing XML and relational data together. Users can get a big performance gain by indexing both XML and relational data. The DB2 9.5 optimizer uses both types of indexes to optimize queries and select the best plan for query execution.
In DB2 V9.5, XML data can also be exposed as Web services for data manipulation (DML) operations using Data Web Services. Data Web Services expose DML operations like insert, update and select, as well as stored procedures, as Web services. These Web services can be consumed using SOAP over HTTP and the POST and GET methods, e.g., through Web browsers or custom clients. Data Web Services are supported by Eclipse-based tools integrated into the existing database tooling.
The control center is also updated to work with XML data.
DB2 9 introduced XML as a new data type and provided the infrastructure for handling XML values. It provided basic functionality such as querying XML documents, registering schemas and validating XML documents, and interaction between SQL and XQuery using SQL/XML. DB2 V9.5 enhances the current functionality and provides even more functions to handle XML data efficiently.
Thanks to Matthias Nicola, Susan Malaya, Cindy Sirocco and Henrik Looser for their valuable technical review comments on this article. Thanks to Mel for ID review.
Learn
- What's new in DB2 Viper: Discover what's new in DB2 Viper
- Query DB2 XML data with XQuery: Use XQuery to query your XML data
- Get off to a fast start with DB2 Viper: Master the new XML features in DB2 Viper
- pureXML in DB2 9: Which way to query your XML data?: See how to query your XML data
- XML in a non-Unicode database: Find tips on how XML can be stored in non-Unicode databases
- 15 best practices for pureXML performance in DB2 9: View tips and tricks to manage your XML data with the best performance
- Industry Standard Demos and downloads: Get a quick start to work with DB2 and industry standards XML schema
- Update XML in DB2 9.5: Learn about XML update features in DB2 V9.5
- Use WebSphere Federation Server Version 9.1 to integrate XML data: Learn about WebSphere Federation Server Version 9.1 to integrate XML data
Manoj works as an application developer for the DB2 Samples Development team in the IBM India Software Labs. His expertise is in the area of DB2 pureXML and Java application development. He is an IBM certified DB2 9 application developer, database administrator, and solution developer for XML and related technologies. He has authored an IBM Redbooks publication for DB2 Express-C application development. Prior to this role, he was a member of the functional verification team to test the utilities and deadlock monitoring feature for DB2.
Make search become the internal function of Internet
Wang Liang 1, Guo Yi-Ping 2, Fang Ming 3
1, 3 (Department of Control Science and Control Engineer, Huazhong University of Science and Technology, WuHan, P.R.China)
2 (Library of Huazhong University of Science and Technology, WuHan, P.R.China)

Abstract: Domain Resource Integrated System (DRIS) is introduced in this paper. DRIS is a distributed information retrieval system which will solve problems like poor coverage and long update intervals in current web search systems. The most distinctive characteristic of DRIS is that it is a public, open system that acts as an internal component of the Internet, not the product of a company. The implementation of DRIS is also presented.

Keywords: search engine, domain, information retrieval, distributed system, Web-based service, information network

1 Introduction

As customers of the Internet, we usually use a search engine to find information, but there are several obvious inconveniences we always encounter [1]:

- There are too many unrelated records in the results. We may have to turn several pages to find the relevant record.
- Some results are too old. The update interval of the page database is almost one month, so you should not trust the lottery numbers a search engine tells you.
- Every user obtains the same results for the same query words, which cannot express our individual interests.
- Much useful information is missing. No search engine covers all the pages on the Internet, let alone the information in BBS, FTP or IEEE's databases.

All in all, we cannot always obtain precise and comprehensive results from current search engines. Seen from the perspective of the whole Internet, the current search engine model is playing a dangerous role. It is absurd that hundreds of search engines access each site: their spiders extract all of the pages from the web sites over and over again, which greatly increases the traffic load of the Internet.
Since the Internet is the computer of the next generation, the search engine can be treated as its operating system. But when one search engine becomes the only dominating search system, we may have to bear the bothersome advertisements added to results that are already not so satisfactory.

We can see the origin of the problem in the development of the Internet: DNS, Yahoo, Google. It is the explosion of the Internet that promotes the improvement of the "Web information management system". Now the Internet is still expanding, and the limitations of current search engines have already become very obvious. The other reason for the difficulty of information retrieval is the freedom principle of the Internet: you can release any kind of information without complying with any rule, but a web spider may never find such web pages if you do not register them in DNS or a search engine. There are many other kinds of information on the Internet, but until now management has only existed in the "physical Internet", not in the "information Internet". Obviously, no current search engine belonging to a company can build a database that completely mirrors the exponentially increasing pages on the Internet, and they may also encounter many bottleneck problems when the size of their databases reaches some critical value. So improving some internal part of the Internet to meet the new demands of its customers may be an alternative method to solve the current problems of information retrieval. This is precisely our basic idea.

As mentioned above, it is the tremendous size of the Internet and the freedom principle of web information's distribution that cause most problems in information retrieval, so the solution is evident. First, we should convert the Internet into a "smaller" one. Second, a management system should be built to administrate the distribution of information, and this management system had better not put any additional restrictions on web composers. We propose exactly such an information management system, DRIS (Domain Resources Integrated System), which truly makes information retrieval an internal function of the Internet and can act as the information management system of the Internet. A testbed of DRIS has also been built in CERNET (China Education and Research Network). CERNET is composed of all the networks of Chinese universities and corresponds to the domain "edu.cn".

2 DRIS

The main idea of DRIS is very simple. In DRIS, the domain (as defined in DNS), not the web server, is the basic cell of the "information Internet". All the web pages in one domain are indexed and rearranged on a central server, and all search applications are created on these servers. The theory of DRIS is detailed as follows.

2.1 Why domain?

The first question we should answer is why the domain is used as the basic component for information retrieval. The reasons are as follows:

- On the Internet, all web servers have already been strictly arranged by domain. So, using DNS, we can easily know how many web servers are in one domain, and indexing their web pages becomes easier.
- Normally, each domain corresponds to an organization. For instance, "hust.edu.cn" is our university's domain name. All the servers under this domain and a DNS server are managed by us. Since the DRIS server will extract and index all the pages in its domain, the organization that corresponds to this domain has authority over its own DRIS server and can efficiently control the distribution of information. So we answer a crucial question: who is in charge of the DRIS server.
If DRIS is brought to realization, DNS can easily be updated to DRIS.

2.2 The architecture of DRIS

Since the definition of domain is used, the architecture of DRIS is the same as that of DNS, as shown in figure 1.

[Figure 1: The architecture of DRIS — a tree rooted at "root"; first layer: uk (England), ru (Russia), cn (China), fr (France); second layer: gov.cn, edu.cn (CERNET), com.cn; third layer: hust.edu.cn, pku.edu.cn, tsinghua.edu.cn]

A node in the third layer corresponds to a third-level domain like a university, and a node in the second layer may be the sub-network of one country, like the domain "edu.cn". A node in the top layer is normally a country, except for the USA. DRIS is a hierarchical distributed information retrieval system. Every node of DRIS is an integrated search engine, but with a different search scope. We can see that many sites don't comply with the discipline of DNS and use a top domain name such as "net" or "com" without adding the domain name of their country; this condition is discussed in the implementation part of DRIS.

2.3 Search system in each layer

We introduce each layer's search system from the third layer to the top layer.

Third layer: extract and index the pages

A node in this layer acts as a normal search engine. The only difference is that the search scope is limited to a third-level domain, like a university. This search system comprises three parts: a page-fetching mechanism, an indexer, and a search interface. The three parts are introduced respectively.

Extract the pages

The page-fetching mechanism downloads all the pages in a third-level domain. For example, "hust.edu.cn" is the domain name of our university, so a server under this domain, like the computer science department's server "cs.hust.edu.cn", can easily be found by referring to the DNS server. Then the spiders can download all the sites included in this domain. The work of a spider is organized by single web site: when a spider visits a web server, it downloads all the content of this site and stops working when it encounters a URL that links to another site. The content behind such a URL to another site is also treated as valuable matter of this site and is downloaded as well, because such links express the interests of this site; this is important when ranking the pages using Hyperlink Analysis technology [2] in the following parts.

Index the pages

There is still no "standard index technology" for search engines. Normally, the key issue in indexing is the appropriate selection of metadata. In the common sense, a web page is represented by its keywords with their ranking scores. We also use this method to index the web pages.
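As a toy illustration of keyword-based indexing, the sketch below scores a word by its frequency in the page body and adds a boost when the word also appears in a prominent position such as the title; the boost value of 3.0 is invented, not taken from the paper:

```python
def keyword_scores(body_words, title_words, title_boost=3.0):
    """Toy keyword scoring: frequency in the body, plus a boost for
    words that also occur in the title (one way to weight 'position').
    The boost weight is an invented illustration."""
    scores = {}
    for word in body_words:
        scores[word] = scores.get(word, 0.0) + 1.0
    for word in title_words:
        if word in scores:
            scores[word] += title_boost
    return scores

# A page whose title repeats the body word "search"
scores = keyword_scores(
    body_words=["search", "engine", "search", "domain"],
    title_words=["search"])
```

With this toy weighting, "search" (frequency 2, in the title) scores higher than "engine" or "domain" (frequency 1, body only).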
By this means, the rank score of a keyword is determined by the position and frequency with which the word appears in the whole text. Other fields such as the title, encoding and abstract are also used to describe a web page.

Search interface

Providing the user interface and processing the search results are the main functions of the search interface. The ranking method is the key issue for the search interface. There are two methods to rank the pages. The first is the score of keywords [3], calculated from the frequency and the position of each word in the page: when a query word is submitted, all the pages that contain that word are retrieved and then ranked by the score of the corresponding keyword. The other method, Hyperlink Analysis, is applied by some current search engines, like Google [4]; by this means, the rank score of a page is determined by the number of links that point to it. In this layer, we use the first method to rank the results, because here the pages are limited to a small area like a school, whereas Hyperlink Analysis is more useful at the large scale of the Internet. So the score of keywords is enough.

Second layer: harvest the metadata

This layer provides search across all the servers under a second-level domain. The search engines of this layer have only two parts, the web page database and the search interface, and no spider of their own. Their data is downloaded directly from the databases of the search engines in the corresponding third layer. For example, the search engine corresponding to the domain "edu.cn" obtains its data from the page databases of the different schools in China. A search engine in this layer only harvests the metadata from the page databases in the third layer, not the pages themselves. With this method, the update interval can be shorter than when extracting web pages directly from thousands of web servers.

A notable problem that should be considered is the overlap of web pages. In the third layer, when a spider extracts pages from a site, it also gets some pages which don't belong to that site, so when harvesting the metadata, some pages may appear many times. Given the download method of the third layer, this number is the number of other sites that point to the page, excluding the page's own site. According to the theory of Hyperlink Analysis, this number is treated as the ranking score of the page. As mentioned above, Hyperlink Analysis is applied to rank pages in this layer, which is different from the ranking method of the third layer, because the scope of this level's domain, like all the universities in China, is large enough to make Hyperlink Analysis efficient.

Top layer: a distributed search system

The two former layers' search engines both have a centralized structure, but the search engine in this layer is a distributed information retrieval system. It has merely one part, the search interface: no spider, and no page database. There are three issues when designing a distributed search system [5]: (1) the underlying transport protocol for sending the messages (e.g., TCP/IP); (2) a search protocol that specifies the syntax and semantics of the messages transmitted between clients and servers; (3) a mechanism that can combine the results from different servers. These problems are detailed as follows.

1. Communication protocol. In DRIS, SOAP is applied as the basic protocol for communication, because SOAP has many advantages over HTTP in security, openness, etc.
2. Search protocol. The search protocol of DRIS is based on Webservice. Webservice uses SOAP as its fundamental protocol and provides an efficient distributed structure. We refer to SDLIP [6] and Google's search Webservice [7] to define the format of queries and results.
By this means, all the search engines in the second layer should provide the standard search Webservice according to this protocol, and a search engine in this layer just needs to index all the Webservices in the lower layer.

3. Combining the results. The key problem for the combination of results is the ranking of pages coming from different servers. In the second layer, the ranking score of a page is represented by its overlap number across databases. In this layer, we use the same theory to calculate the rank of a document: the scores of the same page from different databases are simply added up, and a final ranking list of documents is produced. Some other factors, like the speed of the different services, will also be considered when ranking the results in further work.

The working theory of this search engine is just like that of a meta search engine, which has no page database of its own but an index of the search interfaces of other engines. But the sub search engines of DRIS are arranged strictly and comply with the same protocol, so the performance of this meta search system is much better than any current meta search engine. The search engine in this layer provides search within a country, which covers most search requests on the Internet. If you want to search several countries in parallel, you can use the search Webservices in the top layer to build a special search engine. The problem of multiple language processing should be considered in this application.

The OO (Object Oriented) module of DRIS

All the nodes in DRIS provide two kinds of search interface: a user interface based on the web, and an application program interface (API) based on Webservice. The web interface is the same as that of a normal search engine, but there are some special rules in the search Webservice of DRIS for the benefit of API users. The search functions and their interfaces are the same in the different layers' search Webservices, and all the Webservices in DRIS are arranged according to Object Oriented theory.
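The score-adding rule for combining results from several sub-engines (item 3 above) can be sketched in a few lines; the URLs and overlap counts below are invented, and real services, ties and service speed are outside the scope of this sketch:

```python
from collections import defaultdict

def merge_results(result_lists):
    """Merge ranked lists returned by several sub-engines. Following the
    scheme described above, the score of a page is the sum of its scores
    (here: overlap counts) across all the databases that returned it."""
    combined = defaultdict(int)
    for results in result_lists:
        for url, score in results:
            combined[url] += score
    # Highest combined score first
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical sub-engines returning (url, score) pairs
engine_a = [("http://cs.hust.edu.cn/notes.html", 3),
            ("http://lib.hust.edu.cn/a.html", 1)]
engine_b = [("http://cs.hust.edu.cn/notes.html", 2),
            ("http://ee.hust.edu.cn/b.html", 2)]
merged = merge_results([engine_a, engine_b])
```

The page that appears in both result lists accumulates the highest combined score and rises to the top of the merged list.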
As an Object Oriented software system, we use a class tree to describe the whole system. All the nodes in DRIS are arranged under the namespace "DRIS" and treated as child classes of DRIS. Every search Webservice should provide its service through the URL "DRIS.<domain name>", and the main class name of this Webservice will be "DRIS.<reversed domain name>". For example, the domain name of our university is "hust.edu.cn", so our DRIS server provides its search service through the link "DRIS.hust.edu.cn", and its main class name is "DRIS.cn.edu.hust". The class tree of DRIS is shown in figure 2.

[Figure 2: The class tree of DRIS — under the DRIS namespace, first-layer classes DRIS.uk, DRIS.ru, DRIS.cn, DRIS.fr...; second-layer classes DRIS.cn.org, DRIS.cn.edu, DRIS.cn.com...; third-layer classes DRIS.cn.edu.hust, DRIS.cn.edu.pku...]

This naming system gives many conveniences when using the search services in DRIS. First, if you want to use the search service of a node, knowing its domain name is enough. Second, a higher node indexes all the search Webservices under its domain to maintain the inheritance relation between layers, so you can cite just one Webservice of a node and use all the search services under its domain. For example, you can cite the service from "DRIS.edu.cn" in your program, and when you want to use the search service of our university, "hust", you can directly use the functions of the child class "DRIS.cn.edu.hust" without citing it separately.

3 The advantage of DRIS

Normally there are three principles by which to judge a search engine:

1. The coverage of the search engine. The more pages are indexed, the better a search engine is. In fact, DRIS can embody almost all the pages on the Internet.
2. The update speed of the search engine. All the work of downloading the pages is carried out by many distributed spiders in the third layer of DRIS, so the content of the search engine's databases can be updated every day.
3. The quality of the search service. Intelligent personal search services will be the general trend for providing a better search service. The data sources of current intelligent search systems come from two places: page databases built by the systems themselves, or the search services of current search engines, like Google's API. These two methods both have obvious disadvantages.
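The reversed-name convention described above ("hust.edu.cn" becoming the class "DRIS.cn.edu.hust", served at "DRIS.hust.edu.cn") can be sketched in a few lines; the paper does not prescribe an implementation language, so Python is used here purely for illustration:

```python
def dris_class_name(domain):
    """Derive the DRIS class name from a domain name by reversing the
    domain labels under the DRIS namespace, per the paper's convention:
    'hust.edu.cn' -> 'DRIS.cn.edu.hust'."""
    return "DRIS." + ".".join(reversed(domain.split(".")))

def dris_service_url(domain):
    """A node publishes its search Webservice under 'DRIS.<domain>'."""
    return "DRIS." + domain
```

For example, `dris_class_name("edu.cn")` yields the second-layer class name "DRIS.cn.edu", whose children include "DRIS.cn.edu.hust".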
On the contrary, DRIS provides standard search services at three levels of scope on the Internet and is free. From our point of view, personal intelligent search engines will be the most promising services based on DRIS.

All in all, DRIS first makes the size of the Internet "smaller". The DRIS servers are the basic components for information retrieval, and all the pages on the Internet are rearranged into a uniform format by the DRIS servers. So, as the designer of a search engine, your spider need not visit billions of diverse web pages, only thousands of normative page databases; as web composers, people can still use HTML or other formats to publish their information as they wish; and as the customer of the Internet, you can truly get the results you really want, more comprehensive and precise than before.

4 The implementation of DRIS

Although the theory of DRIS can make information retrieval more efficient, when putting it into practice the most important things to consider are why organizations would be willing to build a DRIS server, and what they can gain from this system. Just indexing their resources for the advantage of others may not be a sufficient reason to build such a system. DRIS should first solve some urgent problems of the organizations' own, and then benefit the others. We fully considered this condition when building the testbed of DRIS in CERNET. The implementation procedure of DRIS is introduced as follows.

4.1 Union search systems in a single university

First we build a third-layer DRIS system in the university, which is the underlying structure of DRIS. Fortunately, we have found some urgent needs that justify building such a system:

- A university needs a web search engine that indexes all the pages in the school. Many universities haven't built their own search engines, though there may be hundreds of web servers in a school. If you want to find some notes, you may have to remember many URLs of sites. Many sites use Google to execute the search within one site, but we often need to search all the sites in the school. If a standard, excellent and free search engine were offered, I think many universities would adopt it.
- A university needs a union search system for all its resources. In fact, there are many other resources in a school besides web pages, such as FTP and the many digital resources in the library and each department, but there is no mature system to manage all this information. Even just in the library, hundreds of databases have been introduced in recent years, and we always have to search different search interfaces one by one to get a comprehensive result. Everyone hopes to "search in one page, and then search all the school".

Firstly, the web search engine for a university is introduced. The standard DRIS only indexes the servers under the corresponding domain, but in fact many servers are not strictly named under the university's domain, and some sites don't even have a domain name. So we should first determine how many servers there are in the school. Most servers have been registered in the network center, and we just need to get a list from there, but we still scan all the IP addresses of our school to ensure that all the servers are indexed.
So in practice, the "domain" in DRIS is usually a subnetwork of the Internet and does not strictly correspond to a "domain" in DNS, but the naming method of the Web service is unchanged.

Next, a union search engine that can integrate many kinds of resources is proposed. Information retrieval over heterogeneous resources is still an unsolved problem; some digital library projects, such as InfoBus, address exactly this issue. The key to searching heterogeneous resources is an efficient middleware, and SDLIP and Z39.50 are protocols designed to build such middleware. After studying these protocols, we propose our experimental protocol [8] based on SOAP/Web services/UDDI. This protocol can be implemented by web search engines, professional databases, FTP servers, etc. Based on it, we have built a union search system that can search heterogeneous resources across our school. In this system, every data source should provide the uniform search Web service; if a system cannot provide such an interface, our system automatically wraps its native search interface into the standard one. The user can search many kinds of information, such as web pages, files on FTP servers, and digital books, through a single search interface.

All the search services in our university are also arranged in an object-oriented module. The class tree of our university's node in DRIS is shown in Fig. 3. All the nodes provide the standard search Web service. The top class "DRIS.cn.edu.hust" provides a union search
Web service over all the resources of our school. The class "DRIS.cn.edu.hust.webpage" provides the web page search service; "DRIS.cn.edu.hust.ftp" and the other classes provide the corresponding services. The child classes of "DRIS.cn.edu.hust.database" provide search over the professional databases of our university, such as the digital resources purchased by the library.

4.2 Search system in CERNET

As a pure Web search engine, our system can be distributed freely to other universities, so the web search system in the second layer of DRIS can be built easily. The search system in CERNET will integrate all the web pages of Chinese universities. But what is most attractive in building the second layer of DRIS is not the web page search engine but the characteristic resources of the different schools. Until now there has been no mature interoperation protocol for exchanging information between different organizations; copyright protection and security considerations also complicate the exchange. A protocol that deals with these problems should be proposed in the future, and data fusion and resource selection deserve special consideration in it.

4.3 Search systems in China

Once all the search systems of the second layer are finished, as in CERNET, the search engine of a country can be created easily. In its final version, DRIS will integrate not only web pages but also other kinds of resources. More issues at this layer will be considered in further experiments.

5 Further work

Our main future work concerns the standard protocol of DRIS; more experiments are needed to improve such protocols. Some hardware resources can also be integrated using the basic idea of DRIS. Grid computing technologies such as Globus are currently applied only in small areas; a large-scale application across the whole Internet may be more useful.
DRIS will integrate all the resources on the Internet and turn the Internet into a supercomputer. If these assumptions are realized, you will be able to obtain all the information of this world, and powerful computing ability, from the Internet, even from your PDA.

REFERENCES
[1] Risvik, Knut Magne, "Search engines and Web dynamics", Computer Networks, Vol. 39.
[2] Henzinger, M.R., "Hyperlink analysis for the Web", IEEE Internet Computing, Vol. 5, pp. 45-50, 2001.
[3] Garratt, Andrea, "A survey of alternative designs for a search engine storage structure", Information and Software Technology, Vol. 43, 2001.
[4] Sergey Brin, "The anatomy of a large-scale hypertextual web search engine", Computer Networks and ISDN Systems, 1998(30).
[5] Liang Sun, "Implementation of large-scale distributed information retrieval system", Proceedings of Info-tech and Info-net, Vol. 3, pp. 7-17, 2001.
[6] SDLIP.
[7] The web service of Google.
[8] Information retrieval protocol for digital resource.
Web search engine based on DNS 1 Wang Liang 1, Guo Yi-Ping 2, Fang Ming 3 1, 3 (Department of Control Science and Control Engineering, Huazhong University of Science and Technology, WuHan, 430074 P.R.Ch...
Why do we want to see a set of data points as a curve? Maybe it is because a human eye is the best instrument for qualitative analysis. We can easily view ups and downs, trends, locate an overshoot here and a minimum there, etc.
Quantities will come later, when we want to know how big the overshoot is, or at what X the minimum occurs. So we need to have a clutter-free picture (without numbers, text labels, etc.) first, and call the data values on the screen later, and hide them to clear the picture again.
After seeing and trying many chart controls, I decided to develop my own. My intent was to get the minimum of clutter on the screen with the maximum of features at my fingertips. Here is the result.
I got some input from online tutorials for printing from dialog-based apps. I want to thank Igor Tandetnik for the help with template predicates for data points and with a _VARIADIC_MAX parameter (later for porting to VS2012).
I used some code from Microsoft VS2010 Help examples in the function SaveContainerImage.
Changes in version 2.0 are (mostly) results of my discussions with the user/reader Alberl (see posts to this article.)
The screen snapshots above show this chart control in a dialog-based demo application. The first snapshot shows the chart curves, the second one shows the control with two invoked child windows: the data label and the names label. The second snapshot also features a vertical data line and the chart data points nearest to it. We will call these points the selected points. The data line is going through the user selected X-coordinate, X0. The data label shows the chart names, names of the X- and Y- axes, and the X and Y values of the corresponding selected points. Also shown are the X-axis labels. Normally, the children, the line and data points, and the X labels are not visible. They could be shown or hidden upon the user's requests. The names on the snapshots are the user's (mine) choice, just to show what could be done with this control.
This control is implemented as an MFC static class library that renders ordered series of 2-D data points as cardinal spline curves. (From Microsoft's web page: "... The spline is specified by an array of points and a tension parameter. A cardinal spline passes smoothly through each point in the array...".)
We will call the curve a chart, and the control a chart container.
You can insert up to 25 charts in the container; the limit is defined by the number of popup menu items reserved to hide/show individual charts. The chart visuals (the tension, i.e., line smoothness, the line color, the dash style, and the width) can be set individually for each chart. The chart data can be appended or truncated at any time. To avoid clutter on the screen, you can temporarily hide some of the charts. Since version 1.1, you can also set the X-axis name, the Y-axis names, and the Y-precisions for each chart. You can also supply your own formatting functions for the X- and Y-values.
You can zoom and pan along the charts horizontally and vertically, change the vertical scale of the individual chart, and view the series of chart data points as a table in a separate window. If you select some data points in the table, you will see the exact positions of these points on the chart's curve.
You can print a selected chart, all visible charts, or the chart's data table.
Chart attributes and data series can be saved as an XML file. Later the file can be loaded into a chart container. You can also export the chart's data series as STL vectors.
Since version 1.2, you can save the chart container as an image in any picture format (BMP, JPEG, PNG, etc.) your Windows OS supports.
You can control and manipulate the charts with a mouse and keyboard, with the container's popup menu, or programmatically.
All drawing is done in GDI+ and is double buffered to avoid flicker.
There is heavy use of STL containers: vectors, maps, and multimaps. STL algorithms and predicates are used extensively across the code; all the predicates are specialized for use with 2-D data points.
The code is written in MS VS2010 VC++ 10 and tested under Windows 7 Pro.
The visual layout of the demo app is tuned to the screen resolution 96 DPI logical pixels, isotropic. The mouse is supposed to have three buttons and a wheel, but the option for a two-button mouse is provided.
I chose the type double for the internal representation of the chart data because it offers the best combination of range and precision. Besides, scientific data are usually double-precision floating-point numbers.
The data series is a vector of points with coordinates double X, double Y. This vector is a data member of the chart class. The data points define the data space. The container's client rectangle comprises the client, or the screen space. Transforms from one space to another are calculated automatically.
The chart container is, yes, a container, std::map. The map keys are chart IDs, the values are pointers to the charts. You are not supposed to deal with this map, and with charts, directly.
The Doxygen-generated project documentation is provided as a zip file "ChartCtrlLibDoxigen.zip". Unzip it, open "Index.htm" in the folder "html" and your default browser will show you the main page. To use the documentation links to the source files, you have to keep the folder structure like the one saved in the source zip file.
Readers' feedback and my own experience in using ChartCtrl prompted me to add some new features to the control. The reader Haekkinen asked to remove the compiler options /GL and /LTCG to make it possible to link the library with projects using other compilers. Jeff Archer asked to add the least squares curve fitting to the library. I myself felt a need for some additional representation features. As a result, version 1.1 includes the following additions and changes:
These additions forced further changes in many container functions to accommodate the new functionality. To demonstrate these new features, I added the new tab "Change Chart", and have added some new controls in the old tabs in the tab control of the demo application.
Again, the readers' feedback and my own experience in using ChartCtrl prompted me to add some new features to the control.
The affected members and parameters include:
- CChartContainer::SaveContainerImage
- CChartContainer::EqualizeVertRanges(double spaceMult, bool bRedraw) and its spaceMult parameter (e.g., spaceMult = 0.9 relates a chart's Ymax to 0.9*Ymax, so charts with Ymax = 10.0 and Ymax = 1e-5 can share a frame)
- CChartContainer::IsUserEnabled() and CChartContainer::EnableUser(bool bEnable, bool bClearState)
- HRESULT CChartContainer::SaveChartData(bool bAll), its bAll flag, and the pntNmb > 2 condition
- AddChart, AppendChartData, and ReplaceChartData, in particular CChartContainer::AppendChartData(int chartIdx, std::vector<double>& vTmSeries, double startX, double stepX, bool bUpdate)
- CChartContainer::SetChartVisibility
- CChartContainer::GetChart, which accepts chartIdx = -1
- CODE_REFRESH
As I have mentioned above, the major changes and additions in version 2.0 are the results of requests and suggestions of the user/reader Alberl. We have discussed them at length in posts to this article. Some features I agreed to are added to the control:
The changes are massive, so it is version 2.0.
Here are four ways to manipulate and control the chart container and charts:
The features of the container and charts you can use and manipulate are:
The chart's vertical scale can be changed with the mouse wheel, the keyboard arrow keys, or programmatically. It is also possible to programmatically equalize the set of local vertical scales. If the user changes the scale with the mouse or keyboard, the container sends WM_NOTIFY to its parent.
To use the chart control in your application, you should start with some preliminary steps:
To enable GDI+, declare a ULONG_PTR m_nGdiplusToken data member in your application class:
class CMyApp : public CWinAppEx
{
.........................................
private:
ULONG_PTR m_nGdiplusToken;
..........................................
}
Initialize GDI+ in CMyApp::InitInstance:
BOOL CMyApp::InitInstance ()
{
..........................................
Gdiplus::GdiplusStartupInput gdiplusStartupInput;
Gdiplus::GdiplusStartup(&m_nGdiplusToken,
&gdiplusStartupInput, NULL);
..........................................
}
and shut GDI+ down in ExitInstance() of CMyApp:
int CMyAppApp::ExitInstance()
{
...............................................
Gdiplus::GdiplusShutdown(m_nGdiplusToken);
...............................................
}
There are two library files included to this article: the debug version ChartCtrlLibD.lib and the release version ChartCtrlLib.lib.
To add the library to your project enter the full path to the library in "Linker\Input\Additional Dependencies" in the project properties dialog box.
Yet another way to add the reference to the lib to the project is to use VS macros. Copy the library ChartCtrlLibD.lib into your Solution (Project) Directory\Debug, and ChartCtrlLib.lib into Solution (Project) Directory\Release. In the appropriate configuration (e.g., Debug or Release) of the Project Properties, select Configuration Properties/VC++ Directories. Enter $(SolutionDir)$(Configuration) into Library Directories. If there are entries after this directory, separate them with a semicolon. In "Linker\Input\Additional Dependencies" enter the name of the library, ChartCtrlLibD.lib or ChartCtrlLib.lib; again, add a semicolon if needed. The appropriate lib version will then be selected automatically when you switch between configurations.
You have to include two files, ChartDef.h and ChartContainer.h, in your project or add the path to these files in the "VC++ Directories\Include" directory in the project property pages.
If you are going to debug your project and the library together, include the path to the library source files in the "Source Directories" of this page.
Be aware that the paths entered in the VC++ Directories page can be inherited by your later projects. If you do not want that to happen, enter the include path into "C++\General\Additional Include Directories" instead of the "VC++ Directories" page, and enter the path to the static library file ChartCtrlLib.lib into "Linker\General\Additional Library Directories".
Of course, you can include in your project all library headers and source files instead of the compiled library file.
If you are using this library together with other libraries, e.g., Boost or the Windows SDK, you may get linker and compiler errors about multiple definitions. If the multiple definitions are in the Windows SDK and/or MFC files, try to ignore them: add the linker option /FORCE:MULTIPLE to the command line. It turns the errors into warnings. For other ways to mitigate these errors, search the MS forums.
If your project is being developed under MS Visual Studio 2012 (VC++ 11), you have to use libraries built with that compiler. I have added them in the ChartCtrlLibVS2012.zip archive.
In addition, in VS 2012 Microsoft implemented some templates, including tuples, using faux variadics, and set the default maximum number of variadic template parameters to five instead of the 10 used in VS 2010. As a result, to use ChartCtrlLib.lib in a VS 2012 C++ project, you have to set the _VARIADIC_MAX parameter to 10 manually. Igor Tandetnik wrote to me that the most convenient way to do it is through the Project Properties: go to Project Properties/C++/Preprocessor/Preprocessor Definitions and add _VARIADIC_MAX=10 to the list of definitions.
If your application is dialog-based, in the resource editor select the picture control in the toolbox and drag it to the place of the chart container in the dialog box. Adjust the control's size and position. In the control properties window, enter a control ID (something like IDC_STCHARTCONTAINER). Make sure that the NOTIFY control property is set to TRUE.
Add the data member to your CDialog class definition, like:
CChartContainer m_chartContainer;
Add the function DDX_Control to the dialog function DoDataExchange(CDataExchange* pDX) to subclass the control, like:
DDX_Control(pDX, IDC_STCHARTCONTAINER, m_chartContainer);
On the other hand, if your application is a document-view application, declare the CChartContainer data member in a view class, like:
and call
m_chartContainer.CreateChartCtrlWnd(DWORD dwExStyle, DWORD dwStyle,
const CRect& wndRect, CWnd* pParent, UINT nID);
to create a container window. The dwStyle parameter will be combined with the WS_CHILD and WS_VISIBLE styles inside the function.
The chart container attributes are the container name, the X-axis name, the formatting function, the X range, the precision, and the colors of the container's elements. Actually, you can live happily with the default values: the X-axis name is "X", the Y-axis name is "Y", and the X- and Y-precision is three. The default formatting functions just convert numbers to strings with the given precision. The container name is used only for printing charts. The X range can be set automatically to show all charts in their full extent. If you have special needs, you can set the X range with the function:
CChartContainer::UpdateExtX(double minExtX, double maxExtX, bool bRedraw = false)
To set the container name, you can supply the name in the constructor like:
CChartContainer myContainer(string_t(_T("Demo"))
or call the function:
SetContainerName(string_t name);
Here, string_t is a typedef for std::basic_string<TCHAR>.
To set the X-axis name, call the function CChartContainer::SetAxisXName(string_t nameX, bool bRedraw = false).
To set the X-value formatting function, you have to include the code for this function in your application, and register it with the call to the container function CChartContainer::SetLabXValStrFn(val_label_str_fn pLabValStrFn, bool bRedraw = false). pLabValStrFn is the pointer to your formatting function. More info about the formatting functions I will provide later.
The value of precision is the number of significant digits to show when numbers are converted to strings. The container precision is the precision of the X values. The default value of three means that three significant digits are shown in any number: 1233.4567 is presented as 1230, and 0.1234567e-45 (i.e., 1.234567e-46) as 1.23e-046.
Use SetContainerPrecision(int precision, bool bRedraw = false) to set precision. The parameter bRedraw is a flag to request redrawing of the chart container and its children (data label and chart names legend). The precision influences only the representation of data. It does not change the precision of the chart data series.
The default constructor sets the default colors (white background, black axes and border, gray dot grid, light yellow background for data and name labels, etc., as is shown on the demo snapshot above). If you want to name the container at the beginning, pass the name to the container's constructor.
The member functions of the chart container to set colors are in the file ChartContainer.h.
Be careful with colors because they are interconnected: e.g., changing the background might force you to change the colors of all other container elements and charts. Try to use the demo to see the results of your changes.
All chart container attributes could be changed at any time, not only at initialization.
Each chart keeps its data series as a vector of data points. The data point is an instantiation of the class template template <typename T> class PointT for type double.
There are typedefs:
typedef PointT<double> PointD;
typedef std::vector<PointD> V_CHARTDATAD;
If you decide to use V_CHARTDATAD to prepare your data, remember that your application must see the definition of PointT to instantiate it for double. The definition of PointT and the typedefs PointD and V_CHARTDATAD are in the file ChartDef.h.
You do not have to use PointD to prepare your data at all. The chart container also accepts data series in the form of std::vector<std::pair<double, double> >, std::vector<double> (a time series), and a pair of vectors std::vector<double>, one for the X-coordinates and one for the Y-coordinates. The vector of PointD and the vector of pairs are automatically sorted by X; a time series does not need to be sorted because the X-coordinates are assigned automatically. By default, the time series origin is set to 0.0 and the time step to 1.0. It means that the first data point of the time series will be PointD(0.0, Y0), the second PointD(1.0, Y1), etc. If you need different values (for example, to append to a time series chart), pass your time origin and/or time step to the appropriate function (AddChart, AppendChartData, or ReplaceChartData).
You are responsible for the last two vectors: they must have the same size, the X vector must be sorted, and the Y values must be arranged in the order corresponding to the sorted Xs.
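The time-series expansion described above can be sketched as follows. PointD is re-declared locally so the snippet is self-contained, and the helper name is hypothetical:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-ins for the library's PointD / V_CHARTDATAD (ChartDef.h).
struct PointD { double X, Y; };
typedef std::vector<PointD> V_CHARTDATAD;

// Expand a time series into data points: the X-coordinates are generated
// from the origin and step, as described for AddChart above.
V_CHARTDATAD FromTimeSeries(const std::vector<double>& vY,
                            double origin = 0.0, double step = 1.0)
{
    V_CHARTDATAD vData;
    vData.reserve(vY.size());
    for (std::size_t i = 0; i < vY.size(); ++i)
    {
        PointD pnt = { origin + step * static_cast<double>(i), vY[i] };
        vData.push_back(pnt);
    }
    return vData;
}
```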
The container displays any data series with at least one data point (obviously, you cannot draw a chart with zero data points). In practice you need more: the Gdiplus::DrawCurve routine draws an ugly curve if it does not receive enough data points, so interpolate your data yourself if necessary. Usually 30-40 points per 350 pixels are enough. You can also play with the tension to beautify your curve. For a chart without data, AddChart sets all the chart attributes it can from the function parameters, but does not update the container's X- and Y-extensions.
If all charts in the container have only one data point each, with the same X coordinate, the container automatically sets the X extension equal to 1% of the X value. If all charts have only one data point each, with the same Y coordinate, the Y extension is set to 5% of the Y value. The container adjusts its X and Y extensions according to the acquired chart data. You can also set the extensions beforehand with the functions UpdateExtX and UpdateExtY.
Different charts might have different X- and Y-extents and different numbers of data points, but usually you want to work with related sets of data series in one container, which usually means the same number of data points and the same X-extension.
Now it is time to add your charts to the container.
Use the member function of CChartContainer:
int AddChart(bool bVisible,
bool bShowPnts,
string_t label,
string_t labelY,
int precisionY,
Gdiplus::DashStyle dashStyle,
float penWidth,
float tension,
Gdiplus::Color colChart,
V_CHARTDATAD& vData,
bool bRedraw = false);
Most of the function's parameters are self-explanatory.
If the minimal X-distance between two neighboring data points is big enough, the data points are encircled by small circles. Sometimes it is not desirable to clutter the picture with these circles; set bShowPnts = false to hide them. If the chart has only one data point, it is always rendered as a circle.
By design, each chart must have a unique name and ID. AddChart calculates the IDs automatically. The IDs are unique within each session, which lasts from the entry of the first chart into an empty container until the last chart is deleted from the container.
If the name you entered for the chart is not unique for this session, AddChart will add a suffix to the name. The suffix is a chart's ID. E.g., if you enter the name "Sine Wave" for the chart with ID = 8, and the container already has a chart with this name, AddChart will add a suffix to it: "Sine Wave_8". If you supplied empty strings for the names, AddChart will generate names with the same suffixes like "Chart_0", "Chart_8", etc.
The length of the chart names and the names of the X- and Y-values is limited to 18 characters. If a string supplied as a parameter to AddChart (or to any other function dealing with the names) is longer than 18 characters, it is truncated: the end of the truncated string consists of the delimiter "^" and the last character of the original string. For example, the string "Very, very, strange and long, long, long string" is truncated to "Very, very, stran^g".
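The truncation rule can be reproduced with a short sketch. The real routine is the library's NormalizeString, whose exact implementation is not shown here, so treat this as an approximation:

```cpp
#include <string>

// Approximation of the name-truncation rule: keep the first maxLen - 1
// characters, then append the '^' delimiter and the last character of
// the original string.
std::string TruncateName(const std::string& name,
                         std::string::size_type maxLen = 18)
{
    if (name.length() <= maxLen)
        return name;
    return name.substr(0, maxLen - 1) + "^" + name[name.length() - 1];
}
```

With maxLen = 18, this reproduces the example above.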
The choice of the tension depends on the type of the curve and the number of data points. Obviously, random data look better with a linear curve (tension = 0), but ten points of a sine wave might look good with tension = 0.6 and ugly with tension = 1.0.
In addition, there are three overloads of AddChart that accept a vector of pairs of doubles, std::vector<std::pair<double, double> >&, a time series std::vector<double>&, and two vectors std::vector<double>&, one for X and one for Y. Use any of them. To add a time series to the container, you have to supply the start value of the X coordinate and the step by which to increase the X value for the next points.
The chart color, name, name of the Y-values, dash style, pen width, visibility, and "Show/Hide points" attributes you have supplied can be changed programmatically at any time.
The newly added chart has the default Y-formatting function, string_t __stdcall GetLabelValStr(double val, int precision, bool bAddEqSign). It formats the value string as a number with the given precision.
The function AddChart returns the ID of the new chart on success, or -1 on failure.
Remember that charts are allocated on the heap; the container stores only pointers to them. All container functions take care to delete the charts when appropriate, but it is your responsibility to do so if you manipulate charts outside the container.
You are allowed to supply an empty data vector to AddChart. If there are no data points, the container sets all chart attributes supplied as AddChart parameters, but does not modify the container's X- and Y-extensions and does not display the chart. The chart is still listed in the container's popup menu as a chart without data.
It is not always convenient to show chart data points as naked numbers. Suppose you have a chart displaying the average monthly temperature versus months. It is natural to display the X-coordinates as month names and the Y-coordinates as degrees Fahrenheit ("°F"). This is a case for user-supplied formatting functions.
The formatting function is defined in ChartDef.h as:
typedef string_t (__stdcall *val_label_str_fn)(double val, int precision, bool bAddEqSign);
The function takes a number's value and precision and returns a string. When the parameter bAddEqSign is set to true, the prefix "=" is added to the string. The function is a callback invoked when the container prepares to display values. The application sets the formatting functions by calling the container's "Set" functions:
void CChartContainer::SetLabXValStrFn(val_label_str_fn pLabValStrFn, bool bRedraw = false);
bool CChartContainer::SetLabYValStrFn(int chartIdx,
val_label_str_fn m_pLabYValStrFn, bool bRedraw = false);
If the container is not able to find the chart with the given chartIdx, it will do nothing and return false.
The default formatting functions are the same for the X and Y values:
string_t __stdcall GetLabelValStr(double val, int precision, bool bAddEqSign)
{
sstream_t stream_t;
stream_t << std::setprecision(precision) << val;
return bAddEqSign ? string_t(_T("= ")) + stream_t.str() : stream_t.str();
}
If you want to display month names along the X-axis, you should write something like:
string_t __stdcall GetLabelValStrMonths(double val, int , bool)
{
if (in_range(-0.5, 0.5, val))
return string_t(_T("January"));
else if (in_range(0.5, 1.5, val))
return string_t(_T("February"));
//etc.
}
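The in_range helper used above comes from the library and is not shown in this excerpt; here is a plausible self-contained version. The inclusive/exclusive bounds convention is an assumption, and std::string replaces string_t:

```cpp
#include <string>

// Assumed semantics: true when lo <= val < hi, so each month owns a
// half-open interval centered on its integer X-coordinate.
inline bool in_range(double lo, double hi, double val)
{
    return lo <= val && val < hi;
}

std::string GetLabelValStrMonths(double val, int /*precision*/, bool /*bAddEqSign*/)
{
    if (in_range(-0.5, 0.5, val)) return "January";
    if (in_range(0.5, 1.5, val))  return "February";
    if (in_range(1.5, 2.5, val))  return "March";
    return "";  // Remaining months omitted for brevity
}
```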
For Y values it might be:
string_t __stdcall GetLabelValStrGradF(double val, int precision, bool bAddEqSign)
{
sstream_t stream_t;
stream_t << std::setprecision(precision) << val << _T(" °F");
return bAddEqSign ? string_t(_T("= ")) + stream_t.str() : stream_t.str();
}
You have to register these functions with
myContainer.SetLabXValStrFn(GetLabelValStrMonths);
myContainer.SetLabYValStrFn(myChartIdx, GetLabelValStrGradF);
So by now you have entered all your charts into the container and have the screen shown in the first demo snapshot above (without the child windows). Let us play with it.
Some operations on the container can be performed only programmatically, from outside of it: adding, appending, truncating, and deleting charts, and accessing the chart and container attributes such as names, precision, etc. For other operations, the container has a built-in popup menu and handlers for mouse clicks, the mouse wheel, and the keyboard arrow keys. These operations can also be performed with the interface member functions of the container.
Here is a synopsis of the built-in operations:
You can save any group of charts in an XML file. Any chart or group of charts from the file can be loaded back into this container or into any other container. The charts can be added to the container or replace the container's old charts. To save only one chart, select it first. To save a group of charts, hide all the charts you are not going to save (use the popup menu to hide them). If one of the visible charts is selected, deselect it. After that, go to the popup menu and select "Save Chart(s)" from the "Save/Print Charts" submenu. The MFC "Save As" dialog will be displayed. Enter a file name or select the file to overwrite, and click the dialog's "Save" button. The file format is proprietary, but you can load the file into MS Excel. (The dialog's default directory for saving the file is set to $(SolutionDir)Charts. If you provide this directory in your project, it will be opened at the start of the dialog.) If you save charts programmatically, you can save all charts, visible and not, by passing the appropriate value of the bool parameter bAll to the function SaveChartData(string_t pathName, bool bAll).
Here is a summary of the built-in controls:
MBUTTONDOWN
LBUTTONDOWN
SHIFT + LBUTTONDBLCLK: Sets the first boundary of the vertical extent of the zoom in Y. A red horizontal line marking this boundary is displayed.
RBUTTONDOWN
First, let us discuss some design solutions.
There are no virtues in placing unrelated data curves in the same window. We are supposed to analyze related sets of data. We can expect the X-ranges of related data sets to overlap, albeit not completely. So I designed the chart container with a common X-scale and X-axis extent for all its charts. The initial X-extent should be at least the union of the X-ranges of all charts. The user can select any part of the X-extent for perusal by zooming and/or panning the container.
Because there is only one X-axis for all charts, there can be only one common X-axis name, one precision value, and one formatting function.
The charts can display very different values along Y-axis. For example, we can have one chart for altitude (Y) vs. distance (X), and the second chart for the temperature (Y) vs. the same distances (X). Therefore, each chart can have its own Y-axis name, Y precision, and the Y formatting function.
Obviously, for better presentation, the Y scales might be different, even very different for the different data sets. Still, the container is implemented with a common Y-scale and Y-extent for all its charts. Again, the initial Y-extent should be the union of Y-extents of all charts. I have provided means for the user to change the onscreen Y-scale for any selected chart to get the best picture he/she wants.
Because there might be more than one data point for one pixel, I provided the means to zoom and pan along X-axis. You can see every data point after an appropriate zooming and panning.
In my opinion, this eliminates the need for zooming/panning along the Y-axis: zoom enough along the X-axis, and read the Y-values point by point all the way along. But the reader Alberl was very insistent in asking for zoom along both axes, so I yielded. Alberl suggested using a rectangle as the border of a zoomed area: you click and drag the mouse to delineate this area. I found it extremely inconvenient for the user: once you have started, you get zoomed when you release the mouse, even if the rectangle is in the wrong place and has the wrong size. I ended up with two separate processes for X- and Y-zooming/panning. You do them separately, and you can undo them separately.
What about axes? They are placed automatically. The user should not care about them.
How is the user supposed to get information about the X and Y values of the points in the data series? The data point coordinates and chart names are displayed in the container's child windows. The user selects the X-coordinate, and the container should show the values for all data points closest to the point of request.
Many users/readers are asking for new features, like X- and Y-axes with named ticks, default name labels, etc. I understand them, but I want to say: guys, if you decided to use ChartCtls, you should accept the chart model this control is based on. This is an interactive control which assumes the user is sitting before a PC display. The control renders minimal info by default, but provides access to a lot of data. If the user needs some details, he/she can easily get them, but he/she must request them by some actions. If you do not want to click a mouse button or press a key, that is OK: you can programmatically control the charts from your main application. For example, there are no, and will be no, axis ticks with labels, but if you cannot live without them, you have access to the container's X- and Y-extensions. Take them, draw ticks and names in some transparent window, and overlay it over the container. But remember, this is an interactive control, and the user is present. Otherwise, why draw charts?
More about design and implementation will follow.
The chart control code consists of seven headers and six source files (plus stdafx.h and stdafx.cpp). I think it is too much to include all of them in every C++ project, so I made it a static library. Only two header files, ChartDef.h and ChartContainer.h, must be included in any project using this chart control. Of course, a reference to the library file ChartCtrlLib.lib should be included too.
In the further discussion, we will use the aliases for the STL containers (from the file ChartDef.h):
typedef std::basic_string<TCHAR> string_t;
typedef std::basic_stringstream<TCHAR> sstream_t;
typedef std::pair<double, double> PAIR_DBLS;
// Typedefs for data vectors and other STL containers
typedef std::vector<PointD> V_CHARTDATAD;
typedef std::vector<Gdiplus::PointF> V_CHARTDATAF;
typedef std::vector<string_t> V_VALSTRINGS;
typedef std::multimap<int, PointD> MAP_SELPNTSD;
// Used to count multiple Y values for the same X:
// the first member is the iterator to the first occurrence, the second
// is the count of the data points with the same X-coordinate
typedef std::pair<V_CHARTDATAD::iterator, int> PAIR_ITNEAREST;
// Output of some algorithms returning two iterators
typedef std::pair<V_CHARTDATAD::iterator, V_CHARTDATAD::iterator> PAIR_ITS;
typedef std::vector<string_t> V_CHARTNAMES;
// Typedefs for the History container
typedef std::pair <double, double> PAIR_POS;
typedef std::vector<PAIR_POS> V_HIST;
// Typedefs for container chart map
class CChart;
typedef std::map<int, CChart*> MAP_CHARTS;
// For load from file
typedef std::map<string_t, Gdiplus::Color> MAP_CHARTCOLS;
There are nine classes in the library: the class template PointT, the classes CChart, CDataWnd, CPageCtrl, CDataView, CChartDataView, CChartXMLSerializer, the struct MatrixD, and the class CChartContainer. The library exports only two of them, PointT and CChartContainer.
The basic representation of a data point in a data series is the instantiation of a class template PointT for doubles (see ChartDef.h):
template <typename T>
class PointT
{
public:
PointT(T x = 0, T y = 0) : X(x), Y(y) {}
PointT(const PointT &pntT) {X = pntT.X; Y = pntT.Y;}
PointT(const Gdiplus::PointF& pntF) {X = static_cast<T>(pntF.X);
Y = static_cast<T>(pntF.Y);}
...................................................
// Conversion function
operator Gdiplus::PointF()
{return Gdiplus::PointF(float(X), float(Y));}
public:
T X;
T Y;
};
typedef PointT<double> PointD;
The class definition also includes overloaded operators =, +, -, *, /, and ==.
Because GDI+ functions accept only REAL (float) floating-point numbers, there is a constructor that accepts Gdiplus::PointF as a parameter, and a cast operator that casts PointT to Gdiplus::PointF. You have to make the definition of this class visible at every point of your application where you call any container member function that returns or accepts a parameter of type PointD; that is, you have to include ChartDef.h.
It is a rather dumb class. Mainly, it is used as storage for the chart's data series and attributes.
First, it stores the vector of data points, V_CHARTDATAD, in CChart::m_vDataPnts. The vector must be sorted by X-coordinates in ascending order, and it must have at least three data points to guarantee at least one data point inside the container's window.
The chart attributes include the min and max values of the X- and Y-coordinates of the data points: m_fMinValX, m_fMaxValX, m_fMinValY, and m_fMaxValY. The container uses them to calculate its horizontal and vertical scales. The member m_fLocScaleY stores the multiplier used to magnify or shrink the chart's curve along the Y-axis.
Other attributes are visuals: chart's color, dash style, tension, and pen width.
The chart has a unique Idx and name. It also has its own precision for the Y-values, its own Y-axis name, and its own formatting function for the Y-values. The uniqueness of the chart Idx and name lasts for the lifetime of the session. The container generates the unique Idx and checks the uniqueness of the supplied chart name when it adds the chart. If the name is already assigned to another chart, the container appends the chartIdx as a suffix to the supplied name. For example, if the name "SineWave" is already assigned and the chart Idx = 8, the chart name becomes "SineWave_8". If no name is supplied, the container generates a name such as "Chart_8". Names with more than 18 characters are truncated.
The most important member function of CChart is DrawChart (...). We will discuss it later.
The classes CDataWnd, CPageCtrl, CDataView, CChartDataView, CChartXMLSerializer, and struct MatrixD encapsulate functionality needed for some user actions (see later).
It is, well, a container: std::map<int, CChart*>. This class is also the gateway to all functionality of the chart control. It is the only big class exported by the chart control library. You are never supposed to address the other classes directly: use the CChartContainer public member functions instead.
I think that the best way to describe the inner workings and interactions of the chart control classes, and the points of interest, is to consider the tasks the chart control should perform.
The task is performed by the function:
int CChartContainer::AddChart(bool bVisible, bool bShowPnts, string_t label,
string_t labelY, int precisionY,
DashStyle dashStyle, float penWidth, float tension,
Color colChart, V_CHARTDATAD& vData, bool bRedraw)
{
int chartIdx = GetMaxChartIdx() + 1;
bool bAddIdx = false;
if (!label.empty())
{
label = NormalizeString(label, STR_MAXLEN, STR_NORMSIGN);
CChart* twinPtr = FindChartByName(label);
if (twinPtr != NULL)
bAddIdx = true;
}
else
{
label = string_t(_T("Chart"));
bAddIdx = true;
}
if (bAddIdx)
{
_TCHAR buffer_t[64];
_itot_s(chartIdx, buffer_t, 10); // Chart idx to string
string_t idxStr(buffer_t);
label += string_t(_T("_")) + string_t(buffer_t);
}
CChart* chartPtr = new CChart;
chartPtr->SetChartAttr(bVisible, bShowPnts, chartIdx, label, labelY,
precisionY, dashStyle, penWidth, tension, colChart);
size_t dataSize = vData.size();
// Now transfer data and set min max values
if (dataSize > 0)
{
chartPtr->m_vDataPnts.assign(vData.begin(), vData.end());
chartPtr->m_vDataPnts.shrink_to_fit();
// It is cheaper to sort right away than to look for max/min x and sort later if needed
if (dataSize > 1)
std::sort(chartPtr->m_vDataPnts.begin(),
chartPtr->m_vDataPnts.end(), less_pnt<double, false>());
double minValX = chartPtr->m_vDataPnts.front().X;
double maxValX = chartPtr->m_vDataPnts.back().X;
// Find min and max Y; works even for one-point vector
PAIR_ITS pair_minmaxY =
minmax_element(chartPtr->m_vDataPnts.begin(), chartPtr->m_vDataPnts.end(),
less_pnt<double, true>());
double minValY = pair_minmaxY.first->Y;
double maxValY = pair_minmaxY.second->Y;
// Save in the CChart
chartPtr->SetMinValX(minValX);
chartPtr->SetMaxValX(maxValX);
chartPtr->SetMinValY(minValY);
chartPtr->SetMaxValY(maxValY);
}
// Just in case: idx is unique for this container
if (m_mapCharts.insert(MAP_CHARTS::value_type(
chartPtr->GetChartIdx(), chartPtr)).second == false)
{
delete chartPtr;
return -1;
}
// Now update the container's min maxes, saving the history of X
if (dataSize > 0)
{
// Will automatically take care of previous one-point charts
UpdateExtX(chartPtr->GetMinValX(), chartPtr->GetMaxValX());
UpdateExtY(chartPtr->GetMinValY(), chartPtr->GetMaxValY());
if (IsWindow(m_hWnd) && m_bTracking && IsLabWndExist(false))
PrepareDataLegend(m_dataLegPntD, m_epsX, m_pDataWnd->m_mapLabs, m_mapSelPntsD, true);
if (bRedraw && IsWindow(m_hWnd)&&IsWindowVisible())
UpdateContainerWnds(-1, true);
}
return chartIdx;
}
The pseudo code for this function is:
1. Get a unique chart ID for this session and normalize the chart name, appending the "_<ID>" suffix if the name is empty or already taken.
2. Create a new CChart on the heap and set its attributes with SetChartAttr(...).
3. If vData is not empty, copy it into the chart, sort it by X, and find and store the min/max X and Y values.
4. Insert the chart into m_mapCharts; if the insertion fails, delete the chart and return -1.
5. If there is data, update the container's X- and Y-extensions, refresh the data legend if tracking is on, and, if bRedraw == true, redraw the container windows.
6. Return the new chart ID.
Obviously, if the chart has no data points, steps to process them are skipped. Nothing in rendering other charts will be changed.
The added chart has the default formatting function. If you need to replace it, call CChartContainer::SetLabYValStrFn.
Several points in this pseudo code need additional remarks.
The map MAP_CHARTS is a std::map. By definition, the map is sorted by key values. So to generate a unique chart ID for the current session, you just get the key of the last element of m_mapCharts by calling m_mapCharts.rbegin()->first and increase it by one. The call to AddChart is the only way to add a chart to the container, so it always generates a unique ID for the given container.
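The ID-generation scheme can be sketched in a few lines. The helper name is hypothetical; the library performs this inside GetMaxChartIdx/AddChart:

```cpp
#include <map>

struct CChart;  // Opaque here; the container only stores CChart* values.

// std::map keeps its elements sorted by key, so the largest existing ID
// is the key of the last element, reachable through rbegin().
int NextChartIdx(const std::map<int, CChart*>& mapCharts)
{
    if (mapCharts.empty())
        return 0;                          // First chart of the session
    return mapCharts.rbegin()->first + 1;  // One past the largest ID
}
```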
Some algorithms used in the chart control operate only on ordered sequences, so the chart data series must be ordered by the X-coordinates of the data points. It is much cheaper to call std::sort outright than to iterate over all elements of the data series to find out whether it is sorted, and then sort it if it is not. This is the reason we call std::sort on the data vector outright.
To get the chart's X- and Y-extents, we need to find minimum and maximum values of the X and Y coordinates.
In the data vector sorted by X, the min X is the X-coordinate of the first element, and the max X is that of the last. It is a no-brainer: just get the chart's m_vDataPnts.front().X and m_vDataPnts.back().X.
The min/max values of the Y-coordinates can be located anywhere in the vector. So I used the algorithm:
minmax_element(chartPtr->m_vDataPnts.begin(),
chartPtr->m_vDataPnts.end(), less_pnt<double, true>())
There is no operator '<' for the data point class, so I had to write my own predicate, less_pnt. Other algorithms used in this library need similar predicates, so let us look at less_pnt.
We need a predicate capable of working with either the X or the Y values of chart data points, according to our choice. This seems to be very simple:
template <typename T, bool bY>
struct less_pnt
{
bool operator () (const T& Left, const T& Right)
{
if (bY) return Left.Y < Right.Y;
return Left.X < Right.X;
}
};
Still, what can be passed as the typename? If you pass some class, the template can be instantiated for any class with public members X and Y. It does not matter what types X and Y are: they only have to have the operator <. What happens if this operator is not defined? The compiler will complain only at the moment the template is instantiated for this weird class. That might happen much later in development, or even during a software upgrade.
So it is better to use a POD type as the template parameter and PointT<T> as the argument to operator () (thanks to Igor Tandetnik).
Second, because the value of the non-type parameter must be known at compilation time, only one branch of the 'if' statement is executed at run time for each particular instantiation of this template. It is unknown whether the compiler will optimize out the other branch, so we might get an unnecessary comparison at run time. It is better to optimize it by hand using a partial specialization of the template. We will have:
template <typename T, bool bY>
struct less_pnt
{
bool operator () (const PointT<T>& Left, const PointT<T>& Right)
{
return Left.Y < Right.Y;
}
};
// Partial specialization for X
template <typename T> struct less_pnt<T, false>
{
bool operator () (const PointT<T>& Left, const PointT<T>& Right)
{
return Left.X < Right.X;
}
};
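To see the predicate in action outside the library, here is a self-contained usage sketch (a simplified aggregate PointT is used, and less_pnt is re-declared as above):

```cpp
#include <algorithm>
#include <vector>

template <typename T> struct PointT { T X, Y; };
typedef PointT<double> PointD;

// Primary template compares by Y; the partial specialization for
// bY == false compares by X, exactly as shown above.
template <typename T, bool bY>
struct less_pnt
{
    bool operator()(const PointT<T>& l, const PointT<T>& r) const
    { return l.Y < r.Y; }
};

template <typename T>
struct less_pnt<T, false>
{
    bool operator()(const PointT<T>& l, const PointT<T>& r) const
    { return l.X < r.X; }
};
```

This is how AddChart uses the predicate: std::sort with less_pnt<double, false> orders the series by X, and std::minmax_element with less_pnt<double, true> locates the extreme Y values.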
The data points in the chart's data vector are in the chart's data space. It is natural to think about this space as a rectangle in the Cartesian (rectangular) coordinate system. The left and right boundaries of this rectangle are the minimal and maximal X-coordinates of the data points; the top and bottom are maximal and minimal Y-coordinates. Because normally in scientific data the Y-axis goes up, the top value is greater than the bottom value.
The data space of the container is the union of the data spaces of all charts in the container; it is also a rectangle. Usually we delegate the calculation of the container data space to the container, but if needed, we can set the boundaries of the container data space from outside, using the functions:
bool CChartContainer::UpdateExtX(double minExtX, double maxExtX, bool bRedraw = false)
void CChartContainer::UpdateExtY(double minExtY, double maxExtY, bool bRedraw = false)
(The parameter bRedraw = true forces the immediate update of the container window.)
These functions calculate the minimum and maximum X and Y extensions. For example, if the greatest maxX value among the charts is 10 but you pass 100 as the parameter maxExtX, the max X extension is set to 100.
There is another pair of functions:
PAIR_DBLS SetExtX(bool bRedraw = false);
PAIR_DBLS SetExtY(bool bRedraw = false);
These functions calculate the unions of the X- and Y-extensions of all charts in the container, and set the results as the container dimensions in the data space.
If there is a history of X and/or Y zooming/panning, the functions substitute the new extension limits at the front of the history vectors; you will see the new X limits only after you undo all zooming/panning along the X-axis, and the new Y extent after undoing the Y-history.
The origin and axes of the coordinate system might be at any position relative to the data space rectangle: inside, to the left, below the bottom, etc.
We might use the entire container data space or only part of it (think about zooming and panning).
At any given moment, the boundaries of the used container data space are stored in CChartContainer data members:
double m_minExtY; // Min coordinate of the Y axis
double m_maxExtY; // Max coordinate of the Y axis
double m_startX; // Leftmost X coordinate
double m_endX; // Rightmost X coordinate (not included)
We will refer to the pair m_minExtY, m_maxExtY as a container's Y-extent, and to m_startX, m_endX as a container's X-extent.
When we are drawing charts in the container window, we are drawing in the container's client space. The origin of the coordinate system here is at the left upper corner of the client rectangle, and the Y-axis goes down (top is less than bottom). Therefore, before the drawing starts, we have to map the container data space into the client space.
The mapping into the client space consists of translations and scaling. These transforms are described by a transform matrix. In two-dimensional graphics, it is the 3x3 matrix
a11 a12 0
a21 a22 0
a31 a32 1
Here a11 = scale X, a22 = scale Y, a31 = offset X, a32 = offset Y. For the chart control, the transforms are translations and scaling only, so a12 and a21 will always equal zero.
In GDI+, we have the Matrix class. Unfortunately, the elements of the transform matrix have the type float.
In theory, the container's data space might be completely or partially out of the range of the type float. Passing out-of-range data to the transform or drawing functions of GDI+ could result in errors. So we have to transform the data space into the client space working with double values first, and cast the results to float next. These casts will always be in the range of float because the values of the client coordinates (logical pixels) are limited by the range of the type int. Of course, we lose precision, but this does not matter: the float coordinates are eventually rounded to pixels. What about the reverse transform, from the client space to the data space? We have to be cautious and always remember the lost precision. For example, if we are looking for the data point that corresponds to a given pixel, we have to look for the data point closest to the pixel transformed into the data space, not equal to it.
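The closest-point lookup mentioned above can be sketched like this. It is a hedged illustration, not the library's actual routine; it assumes a non-empty vector sorted by X:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct PointD { double X, Y; };

// Comparator for std::lower_bound: point vs. plain X value.
static bool LessPntX(const PointD& pnt, double x) { return pnt.X < x; }

// After the pixel is mapped back to data space, its X is inexact, so we
// bisect for the nearest neighbor instead of testing for equality.
// Precondition: vData is non-empty and sorted by X.
PointD FindNearestByX(const std::vector<PointD>& vData, double x)
{
    std::vector<PointD>::const_iterator it =
        std::lower_bound(vData.begin(), vData.end(), x, LessPntX);
    if (it == vData.begin()) return *it;           // Left of all points
    if (it == vData.end())   return vData.back();  // Right of all points
    std::vector<PointD>::const_iterator prev = it - 1;
    return (std::fabs(prev->X - x) <= std::fabs(it->X - x)) ? *prev : *it;
}
```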
There is no point in making this matrix a generic template class: to transform from the screen to the data space, we have to include the inversion operation in the class. Inversion demands division, so only floating-point numbers will do. We already have Gdiplus::Matrix for floats, so here is the transform matrix class for doubles (ChartDef.h):
// This MatrixD is only for translation and scaling of double numbers
typedef class MatrixD
{
public:
double m_scX;
double m_scY;
double m_offsX;
double m_offsY;
public:
// The constructor yields an identity matrix by default
MatrixD(double scX = 1.0, double scY = 1.0,
double offsX = 0.0, double offsY = 0.0): m_scX(scX),
m_scY(scY), m_offsX(offsX), m_offsY(offsY) {}
// Transforms
void Translate(double offsX, double offsY)
{m_offsX += offsX*m_scX; m_offsY += offsY*m_scY;}
void Scale(double scX, double scY)
{m_scX *= scX; m_scY *= scY;}
// Operations on matrixD; if the matrix is not invertible,
//returns false; uses explicit formulae for inversion
bool Invert(void);
MatrixD* Clone(void)
{
MatrixD* pMatrix = new MatrixD;
pMatrix->m_scX = m_scX;
pMatrix->m_scY = m_scY;
pMatrix->m_offsX = m_offsX;
pMatrix->m_offsY = m_offsY;
return pMatrix;
}
// Transforms PointD to PointF and PointF to PointD
Gdiplus::PointF TransformToPntF(double locScY, const PointD& pntD);
PointD TransformToPntD(double locScY, const Gdiplus::PointF& pntF);
private:
MatrixD(const MatrixD& src);
MatrixD operator =(const MatrixD& src);
} MATRIX_D;
Gdiplus::Matrix allows only a clone operation, so to behave similarly, the copy constructor and the assignment operator in MATRIX_D are made private.
The default constructor creates an identity matrix I. To calculate the transform matrix, the chart control must multiply the identity matrix by the scale matrix S and/or the translation matrix T:
That is done by applying the Scale and Translate functions to the instance of MATRIX_D. Matrix multiplications are not commutative. We use the MatrixOrderPrepend order in the chart control, in which the multiplier is placed to the left of the original matrix. It means that if we want to scale first and translate the scaled results next, we have to multiply in this order:
The equivalent of this is:
myMatrix.Translate(...); //First
myMatrix.Scale (...); // Second
The result is the vector (scX*x + offsX, scY*y + offsY) for a data point (x, y).
To invert a matrix, I used explicit formulae for the matrix with only translation and scale members:
a31 = -a31/a11;
a32 = -a32/a22;
a33 = 1.0;
a11 = 1.0/a11;
a22 = 1.0/a22;
a12 = a13 = a21 = a23 = 0.
To get the drawing right, we have to scale the container data space to the client space and translate the results to the origin in the client space.
First, in what function should we calculate the transform matrix? We must have a correct transform matrix every time we are going to draw the charts. The best place for this is CChartContainer::OnPaint(): we always call this function, directly or indirectly, after changes to the container are made.
We begin with the container's X- and Y-extensions that are either calculated by the container or set by the application. For the translation in the client space, we need to know the axes origin.
The functions:
PAIR_XAXPOS CChartContainer::GetXAxisPos(RectF rChartF,
double minY, double maxY)
PAIR_YAXPOS CChartContainer::GetYAxisPos(RectF rChartF,
double startX, double endX)
calculate the Y-position of the X-axis and the X-position of the Y-axis.
The idea is very simple: if the minimum is negative and the maximum is positive, place the axis between them; otherwise, attach it to the appropriate border of the client rectangle. For example, for the X-position of the Y-axis:
if ((startX < 0)&&(endX > 0))
{
double offsYX = rChartF.Width*fabs(startX)/(endX - startX);
horzOffs = rChartF.GetLeft() + float(offsYX);
// Somewhere between minX, maxX
}
else if (startX >= 0)
horzOffs = rChartF.GetLeft();
else if (endX <= 0)
horzOffs = rChartF.GetRight();
We declare offsYX as a double because the X- and Y-extensions are doubles, but the calculations are in the client space.
The axes origin in the client space is Gdiplus::PointF(offsYX, offsXY), where offsYX is the X-position of the Y-axis and offsXY is the Y-position of the X-axis, calculated similarly.
For the scaling, the function CChartContainer::UpdateScales(drawRF, m_startX, m_endX, m_minExtY, m_maxExtY) does the job. It just calculates:
scX = drawRectWidth/rangeX;
scY = drawRectHeight/rangeY;
Before the calculation, we deflate the client rectangle width and height by 10% to make the picture look better.
With the offsets and scales, we are ready to calculate the transform matrix:
MatrixD matrixD;
matrixD.Translate(pntOrigF.X, pntOrigF.Y);
matrixD.Scale(m_scX, -m_scY);
The minus sign in the third line reverses the direction of the Y-axis.
We are not done yet. Remember, when the origin is out of the client rectangle, we just place the axis along the left or right or top or bottom boundaries of the rectangle. In this instance, one more translation is due; this is in the data space:
matrixD.Translate(translateX, translateY);
It is a translation to the borders of the data space: translateX is -startX (left position of the Y-axis) or -endX, and translateY is -minExtY (top position of the X-axis) or -maxExtY. The order of the transforms is: first, if needed, translate the origin in the data space; second, scale; third, move the result to the origin in the client space.
Sometimes we need the transform matrix before OnPaint() is called, e.g., to track the data label. The function MatrixD* CChartContainer::GetTransformMatrixD(double startX, double endX, double minY, double maxY) calculates the matrix for given X- and Y-extents outside of OnPaint().
To see the transform code, you have to go to this function or to CChartContainer::OnPaint().
Now we have the transform matrix and can draw the visible charts. This task is performed by the function:
bool CChart::DrawChartCurve(V_CHARTDATAD& vDataPntsD, double startX, double endX,
MatrixD* pMatrixD, GraphicsPath* grPathPtr, Graphics* grPtr, float dpiRatio)
{
if (vDataPntsD.size()== 0)
// Just for safe programming; the function is never called on count zero
return false;
V_CHARTDATAF vDataPntsF;
// Convert the pntsD to the screen pntsF
if (!ConvertChartData(vDataPntsD, vDataPntsF, pMatrixD, startX, endX))
return false;
V_CHARTDATAF::iterator itF = vDataPntsF.begin();
V_CHARTDATAF::pointer ptrDataPntsF = vDataPntsF.data();
size_t vSize = vDataPntsF.size();
// Add the curve to grPath
Pen pen(m_colChart, m_fPenWidth*dpiRatio);
pen.SetDashStyle(m_dashStyle);
if (!m_bShowPnts&&(vSize == 2)) // Are outside or at boundaries of clipping area
{ // Make special semi-transparent dash pen
Color col(SetAlpha(m_colChart, ALPHA_NOPNT));
pen.SetColor(col);
}
if (m_dashStyle != DashStyleCustom)
{
if (vSize > 1)
{
grPtr->DrawCurve(&pen, ptrDataPntsF, (INT)vSize, m_fTension);
if (m_bSelected && (dpiRatio == 1.0f)) // Mark the chart as selected on screen only
{
Pen selPen(Color(SetAlpha(m_colChart, ALPHA_SELECT)),
(m_fPenWidth + PEN_SELWIDTH)*dpiRatio);
grPtr->DrawCurve(&selPen, ptrDataPntsF, (INT)vSize, m_fTension);
}
}
// Now add the points
if (m_bShowPnts || (vSize == 1))
{
itF = adjacent_find(vDataPntsF.begin(), vDataPntsF.end() ,
lesser_adjacent_interval<PointF,
false>(PointF(dpiRatio*CHART_PNTSTRSH, 0.0f)));
if (itF == vDataPntsF.end()) // All intervals are greater than CHART_PNTSTRSH
{
itF = vDataPntsF.begin(); // Base
for (; itF != vDataPntsF.end(); ++itF)
{
RectF rPntF = RectFFromCenterF(*itF, dpiRatio*CHART_DTPNTSZ,
dpiRatio*CHART_DTPNTSZ);
grPathPtr->AddEllipse(rPntF);
}
}
}
}
else
{ // DashStyleCustom: draw disconnected data points as small crosses
PointF pntF;
PointF pntFX(dpiRatio*CHART_DTPNTSZ/2, 0.0f);
PointF pntFY(0.0f, dpiRatio*CHART_DTPNTSZ/2);
for (size_t i = 0; i < vSize; ++i)
{ // Each line of each cross is a separate figure
pntF = ptrDataPntsF[i];
grPathPtr->StartFigure();
grPathPtr->AddLine(pntF - pntFX, pntF + pntFX);
grPathPtr->StartFigure();
grPathPtr->AddLine(pntF - pntFY, pntF + pntFY);
}
if (vSize == 1)
{
grPathPtr->StartFigure();
grPathPtr->AddEllipse(RectFFromCenterF(pntF, 2.0f*pntFX.X, 2.0f*pntFY.Y));
}
}
if (grPathPtr->GetPointCount() > 0) // Has points to draw
{
pen.SetWidth(1.0f*dpiRatio);
pen.SetDashStyle(DashStyleSolid);
grPtr->DrawPath(&pen, grPathPtr);
if (((m_dashStyle == DashStyleCustom)||(vSize == 1))&& m_bSelected && (dpiRatio == 1.0f))
{
pen.SetColor(Color(SetAlpha(m_colChart, ALPHA_SELECT)));
pen.SetWidth(m_fPenWidth + PEN_SELWIDTH);
grPtr->DrawPath(&pen, grPathPtr);
}
grPathPtr->Reset();
}
return true;
}
The parameters startX, endX might cover the entire container's X-extension or only part of it (after zooming in or panning). The parameter dpiRatio is for adjusting pen widths for printing. We will discuss it later.
The function DrawChartCurve performs the following operations: it finds the data points at the borders of the visible X range, converts them to client coordinates with pMatrixD, and draws the resulting curve (or, for DashStyleCustom, a set of disconnected points).
First, let us consider the search for the border data points.
Assume we have the X-extension of the data space -10.0, 10.0, and the space between the data points' X-coordinates equals 2.0. If startX = -7.0 and endX = +7.0, the leftmost point inside the drawing range has X = -6.0, and the rightmost point inside the range has X = 6.0.
If we call Gdiplus::DrawCurve on the inner points only, the curve will run from -6.0 to +6.0, not from -7.0 to +7.0. We will perceive it as a curve beginning at X = -6.0 and ending at X = 6.0. However, the curve definitely exists outside of these points. To have a beautiful curve covering the whole X-extent, we have to extend the data point set to the nearest left and right data points outside or at the limits of the X range.
The algorithm and predicate to do this job are in Util.h. The algorithm is used by the function:
PAIR_ITS CChart::GetStartEndDataIterators(V_CHARTDATAD& vDataPnts, double startX, double endX)
{
PAIR_ITS pair_its = find_border_pnts(vDataPnts.begin(),
vDataPnts.end(), not_inside_range<double, false>(startX, endX));
return pair_its;
}
The predicate is:
template <typename T, bool bY> struct not_inside_range
We use only partial specialization for the X-coordinate:
template <typename T >
struct not_inside_range<T, false>
{
T _lhs;
T _rhs;
bool _bFnd;
not_inside_range(T lhs, T rhs) : _lhs(lhs), _rhs(rhs), _bFnd(false) {}
inline std::pair<bool, bool> operator () (const PointT<T>& pntT)
{
bool bLeft = false;
bool bRight = false;
if (pntT.X < _lhs)
bLeft = true;
else if (!_bFnd && (pntT.X == _lhs))
{
bLeft = true;
_bFnd = true;
}
else if (pntT.X >= _rhs)
bRight = true;
std::pair<bool, bool> pair_res(bLeft, bRight);
return pair_res;
}
};
The predicate takes the point's coordinate (here it is pntT.X) and tests it against the left boundary of the interval first. If the coordinate is to the left of or at the left boundary, the predicate returns std::pair(true, false). If the coordinate is to the right of or at the right boundary, the predicate returns std::pair(false, true). If the coordinate is inside the interval, the return value is pair(false, false).
For a multi-valued function, the predicate selects the first point from the group of points with the same X-coordinates and ignores all other points in the group.
The algorithm iterates over the range of iterators _First, _Last on the ordered sequence. The algorithm refreshes the first iterator in the internal pair of iterators each time the predicate returns pair_res.first = true. On the first return of pair_res.second = true, the algorithm saves the current iterator in the second member of the pair. Because the sequence is ordered, there is no need to continue the test. The algorithm returns this pair of iterators.
Next, we have to map the points between the iterators returned by GetStartEndDataIterators from the data space to the client space. The algorithm std::transform with the custom functor transform_and_cast_to_pntF (ChartDef.h) does the job:
// Predicate to use with the STL algorithm transform
typedef struct transform_and_cast_to_pntF
{
double _locScY;
MatrixD* _pMatrixD;
transform_and_cast_to_pntF(double locScY, MatrixD* pMatrixD) :
_locScY(locScY), _pMatrixD(pMatrixD) {}
inline Gdiplus::PointF operator() (const PointD& pntD)
{
double X = _pMatrixD->m_scX*pntD.X + _pMatrixD->m_offsX;
double Y = _locScY == 1.0 ? pntD.Y : _locScY*pntD.Y;
Y = _pMatrixD->m_scY*Y + _pMatrixD->m_offsY;
return Gdiplus::PointF(float(X), float(Y));
}
} TRANSFORM_TO_PNTF;
The predicate just applies the transform matrix _pMatrixD to each point in the algorithm range and casts the result to Gdiplus::PointF. Before applying the matrix, the predicate multiplies the Y-coordinate of the point by a value stored in the predicate member _locScY. This allows us to modify the Y-scale of the given chart, changing the vertical size of the chart curve without modifying the chart's data space. The predicate gets _locScY and _pMatrixD at construction time.
All this preparation work, the search for border points, the transform, and the cast, is done in the function CChart::ConvertChartData. The function returns the resulting vector of PointF in the function's out parameter std::vector<Gdiplus::PointF> vDataPntsF.
To draw the chart, we just call Gdiplus::DrawCurve for all dash styles except DashStyleCustom.
Gdiplus::DrawCurve accepts only a pointer to an array of Gdiplus::PointF. To get the pointer, we call the std::vector::data() function:
V_CHARTDATAF::pointer ptrDataPntsF = vDataPntsF.data()
The number of data points per pixel changes when the user changes the container's X-extent. For example, in the demo app, the width of the client rectangle is 476 pixels. If the chart has 1000 data points, that equals 2.1 data points per pixel. Suppose we zoomed in on the container, and now only 10 data points are visible in the client rectangle. Now we have 47.6 pixels between adjacent data points. To distinguish the actual data points from the spline interpolation pixels, we have to mark the actual data points somehow. We do it by drawing circles around the data points.
How to decide when to draw these circles? We have chosen to draw circles if the minimal distance between adjacent data points is greater than or equal to six pixels. This decision is based entirely on aesthetic considerations: the chart curve with circles looks not too cluttered at this distance.
Therefore, DrawChartCurve applies the algorithm std::adjacent_find with the predicate adjacent_interval_pntF to vDataPntsF (Util.h). If the minimal distance between adjacent data points is greater than or equal to 6.0 pixels, DrawChartCurve adds circles around the data points to the graphics path, and draws the path. We use the minimum as the criterion to avoid ambiguity: otherwise, there is no way to know whether some curve segment is empty or overpopulated with data points.
Sometimes it is not desirable to draw these circles. The member CChart::m_bShowPnts suppresses drawing of circles if it is set to false.
And what about DashStyleCustom? We reserved it to draw a chart as a set of disconnected data points. Each data point is represented as a small cross. There is no way to make Gdiplus::DrawCurve draw disconnected crosses with any DashStyle, so we use DashStyleCustom as a flag to switch to another drawing routine. To draw the set of points, we add each cross to a GraphicsPath instance, grPath. To keep the crosses' lines disconnected from each other, we have to prefix the insertion of each line of each cross with GraphicsPath::StartFigure(). After all crosses are added, the whole path is drawn with a single call:
grPtr->DrawPath(&pen, grPath);
To draw without flicker, we use double-buffered drawing: we draw into memory first, and transfer the result onto the display's surface after that. In MFC GDI, it is all about CreateCompatibleDC, selecting a bitmap into this DC, and finally transferring the bitmap bits from the in-memory DC into the screen DC.
The GDI+ analog of this task is:
CPaintDC dc(this); // Device context for painting
Graphics gr(dc.m_hDC); // Graphics object from paintDC
// Although we use only floating-point numbers with Gdiplus
// functions, we start with integers because no Bitmap constructor
// accepts REAL (float) numbers
Rect rGdi;
gr.GetVisibleClipBounds(&rGdi); // Same as GetClientRect
Bitmap clBmp(rGdi.Width, rGdi.Height); // Mem bitmap
Graphics* grPtr = Graphics::FromImage(&clBmp); // In-memory
RectF rGdiF = GdiRectToRectF(rGdi); // From Util.h, to allow
// float numbers
// Draw with grPtr
................................................
gr.DrawImage(&clBmp, rGdi); // Transfer into the screen
delete grPtr; // Free the memory
In addition, when we need to update the container's window, we call the function:
void CChartContainer::RefreshWnd()
This function calculates the region that excludes the areas under the container's children, and calls RedrawWindow() on this region. We use regions because even double-buffered drawing sometimes blinks.
If the children of CChartContainer are visible, and/or the data table of one of the visible charts is displayed in the data view window, RefreshWnd() will not update these windows. The user should call:
void CChartContainer::UpdateContainerWnds(int chartIdx, bool bMatrix, DATAVIEW_FLAGS dataChange)
{
if (m_bTracking && IsLabWndExist(false))
{
UpdateDataLegend(bMatrix);
}
else
RefreshWnd();
if (IsLabWndVisible(true))
ShowNamesLegend();
UpdateDataView(chartIdx, dataChange);
}
This function updates the data label window if it exists, redraws the chart names label (again, if it is visible), and updates the data view. Because the need to update the data label might arise when the X-extension of the container was changed by zooming or panning, we might need to recalculate the transformation matrix (bMatrix == true). The flag dataChange tells the data view that the chart's data vector has changed and needs special treatment. The function has default parameters: chartIdx = -1, bMatrix = false, dataChange = F_NODATACHANGE. We use the defaults if there is no data view window and no change of the X-extension (e.g., we hide or show a chart). If the data view is present, and the chart name, formatting function, or other chart attributes (but not the data vector) are changed, we have to pass only the chart index to the function.
We see the charts in the container window, and we want to know the exact values of the charts' data points at a selected value X0 on the X-axis. It must be easy for the user: just select X0, and the container will show the X and Y values of the charts' data points closest to this X0. X0 should be selected by a left mouse click or programmatically. We will call X0 a request point. When the user is done with it, he can order the container to delete this information or to go to the next X. Otherwise, the container should track X0 when the container client space is changing.
Now let us go to the details. The process consists of three tasks: selecting the data points closest to the request point, tracking them when the client space changes, and rendering them on the screen.
Let us start with the first task.
The MFC Framework supplies the coordinates of the mouse click in the client space. The charts' data points are in the data space. We cannot work in the client space because the conversion from the doubles of the data space to the floats of the client space could result in a loss of precision. E.g., float(1.0e-234) = 0.0f. So we have to work in the data space. To map the mouse point into the data space, we use the function (ChartDef.h):
PointD MatrixD::TransformToPntD(double locScY, const Gdiplus::PointF& pntF)
{
ENSURE(m_scX*m_scY != 0.0); // The matrix must be invertible
MatrixD* matrixDI = Clone(); // Do not change the original
matrixDI->Invert();
// Map back into data space
double X = pntF.X*matrixDI->m_scX + matrixDI->m_offsX;
double Y = pntF.Y*matrixDI->m_scY + matrixDI->m_offsY;
Y /= locScY; // Correct for the local scaleY
delete matrixDI;
return PointD(X, Y);
}
Many interesting things are going on here.
Obviously, to map back from the screen to the data space, we have to invert the transform matrix that was used to transform the data space into the client space. First, the code checks whether the matrix is invertible. Second, the Invert function calculates the data members of the inverted matrix. Because we are using the direct matrix in every drawing procedure, we have to operate on a clone of it.
Invert
As you remember, before the direct transform, the Y-coordinates of the chart's data points are multiplied by the chart's local Y-scale to change the Y-size of the chart's curve on the screen. So after the inversion, we must divide the Y-coordinates by locScY.
The coordinates of the given request point in the data space will not change. If it was X = 4.5, it will be X = 4.5 until the next click or request. In the client space, the position of the given request point could change due to zooming, panning, or the container window changing its size. So let us keep the coordinates of the request point in the data space in the data member CChartContainer::m_dataLegPntD. We will use this data member to track the request point on the screen.
We will collect all chart data points closest to X0 into the std::multimap CChartContainer::m_mapSelPntsD of type MAP_SELPNTSD. We have to use a multimap because some charts might have several data points with the same X (e.g., the rectangle wave in the demo app).
What does "closest" mean? Obviously, at the moment of selection, "closest" means "visually close" to the place of the mouse click. In this chart control, "visually close" is defined as a data point located in a six-pixel interval centered on X0. Again, the image of this interval in the data space is a constant for the lifetime of the point of request. We save it in the data member CChartContainer::m_epsX. The value of m_epsX in the client space is always six pixels at the moment of the click, but afterwards it follows the size of the X-extent of the container window. E.g., the six-pixel interval will be twelve pixels wide if the container X-extent is made half of the previous extent. For a new click in this new X-extent, the new interval is six pixels again.
The six-pixel interval might cover a multitude of the chart's data points or cover no data points at all. (In the demo app, it might be 12 or more data points per pixel at the X-extent -10.0...10.0.) Therefore, the "closest" chart's data point should lie within m_epsX and have the minimal distance from X0. In addition, if several data points satisfy this condition (multi-valued data series), we have to select all of them.
The function GetNearestPointD selects the points:
PAIR_ITNEAREST CChart::GetNearestPointD(const PointD& origPntD, double dist, PointD& selPntD)
{
V_CHARTDATAD::iterator it = m_vDataPnts.begin(),
itE = m_vDataPnts.end();
int nmbMultPntsD = 0;
double leftX = origPntD.X - dist/2.0;
double rightX = origPntD.X + dist/2.0;
// Find the first point in vicinity of the origPntsD.X, if it exists
it = find_if(it, itE,
coord_in_range<double, false>(leftX, rightX));
if (it != itE) // Then find the closest to origPntD.X
{
it = find_nearest(it, itE,
nearest_to<double, false>(origPntD, dist));
if (it != itE)
// Always true; will return
// found_if result at least
{
// Now get the number of multi-valued points (the same X's, different Y's)
selPntD = *it;
nmbMultPntsD = count_if(m_vDataPnts.begin(), it,
count_in_range<double, false>(selPntD.X, selPntD.X));
return make_pair(it, nmbMultPntsD + 1);
}
}
return make_pair(itE, 0);
}
In the first element of PAIR_ITNEAREST, the function returns the iterator to the first of the chart's data points nearest to origPntD.X. The number of data points with the same X-coordinate is returned in the second element of the pair.
The function uses three STL algorithms.
First, it applies the algorithm std::find_if to the entire data vector to search for the first data point with X-coordinate in the interval origPntD.X ± dist/2.0. The parameter dist defines the interval in the data space where the "closest" points might be. Usually, it is the same six-pixel "visual close" interval transformed into the data space.
Next is the search for the data point with the minimal distance from X0. The search starts from the iterator returned by std::find_if and is conducted in the same interval origPntD.X ± dist/2.0. This algorithm is custom-made, with a custom predicate:
//Find closest to X or Y coord of some point of origin; apply to sorted
// sequences only
template<class _InIt, class _Pr>
inline _InIt find_nearest(_InIt _First, _InIt _Last, _Pr _Pred)
{
_DEBUG_RANGE(_First, _Last);
_DEBUG_POINTER(_Pred);
_InIt _NearestIt = _First; // Find first _InIt satisfying _Pred
for (; _First != _Last; ++_First)
{
if (_Pred(*_First))
break;
_NearestIt = _First; // Continue
}
return (_NearestIt);
}
The predicate _Pred compares the distance fabs(dataPntD.X - origPntD.X) with the previous value stored in the predicate's data member. The data vector is sorted by X, so the distance will always decrease while we are moving towards origPntD.X, might decrease on the move to the first point after origPntD.X, and will always increase after that. On the first iteration that yields an increased distance, the predicate returns true. The algorithm immediately breaks the loop and returns the iterator to the previous point, which is the closest to origPntD.X. Again, we use the template definition for the search on Y-coordinates, and a partial specialization for the search on X.
Finally, the algorithm std::count_if counts the number of data points with the same X-coordinate as the closest point. Keep in mind that only the first search operates on the entire data vector. The second search and the counting are performed in the tiny interval dist around the point of request.
All selected data points, visible or not, are stored in the multimap CChartContainer::m_mapSelPntsD. For the lifetime of the given point of request, the multimap does not change unless the container adds or removes charts, or changes the charts' data vectors. Now that the "closest" points are selected, we can discuss how to render them.
To render the selected points, we have to show the request point, the visible selected points, and their X and Y values together with the names of the charts they belong to and the names of the charts' Y-values. It is convenient to show the request point as a vertical data line (should I say the line of request?) that goes through origPntD.X. We show the visible selected data points as circles around the corresponding data points. This results in a window of a very irregular shape: a line from the top to the bottom of the container client area, a set of circles around the selected data points, and a rectangle with the text strings.
An obvious way to do it is to use a layered window. Unfortunately, a layered window cannot be a child of another window, for example, the chart container's window. (It can be owned.) It means that all the work that the operating system and the MFC Framework do when the owner is moved, resized, or is under other windows, you have to do by yourself.
Using a child window with the style WS_EX_TRANSPARENT that covers the whole area as shown in the picture above brings problems with handling the mouse events that occur over the child but should be transferred to the container for handling, and other complications.
So I decided to do a little hack: I left the drawing of the data line and the selected points to the container, and delegated the drawing of the strings with the data point info to the container's child. It frees me from basic housekeeping (the child moves, hides, and closes together with its parent). As a price, I accepted the task of remembering to draw both in the container client area and in the child client area at the same time, and to refresh both windows when appropriate.
The class CDataWnd is responsible for drawing the data point's info. The container keeps a pointer to it, CDataWnd* m_pDataWnd, as its data member. The container allocates the memory on the heap for this pointer, and passes to m_pDataWnd the info strings related to the visible selected data points. After receiving this info, m_pDataWnd is ready to draw the data label.
What does this data point info consist of?
To make identification of the data points easy, we display the info string in the color of its chart. The short line before the chart's name has the same color, dash style, and pen width as the chart curve. It means that the container has to pass to m_pDataWnd not only the strings with the values of the data point coordinates, but also the chart, X-value, and Y-value names, and visual data like color, dash style, and pen width.
This info is passed as a tuple (ChartDef.h):
typedef std::tuple<string_t, string_t,string_t, string_t,
string_t, Gdiplus::Color, Gdiplus::DashStyle, float> TUPLE_LABEL;
Why do we pass five strings instead of one? It is because, for a pretty picture, we want to display the data point info in columns: chart names, X-values name (the same for all charts), formatted X-values, Y-values names for each chart, and corresponding formatted Y-values.
I have a confession to make. At first, I regarded tuples as an unnecessary gimmick that the ISO C++ committee invented to make life harder for us, Joe Programmers. I thought that having structures was enough. But later, I began to appreciate how easy and uniform access to tuple elements might be. I am using enums and the function std::get<>(...) to do this. Compare the verbose access to the members of a structure to this:
enum TUPLE_LIDX {IDX_LNAME, IDX_LNAMEX, IDX_LX, IDX_LNAMEY, IDX_LY, IDX_LCOLOR, IDX_LDASH, IDX_LPEN};
TUPLE_LABEL tuple_label;
get<IDX_LNAME>(tuple_label) = string_t(_T("SineWave_0"));
Gdiplus::Color chartCol = get<IDX_LCOLOR>(tuple_label);
To get the tuples, the container calls, for each visible point from m_mapSelPntsD, the function:
// Formats string and prepares chart visuals for the screen
TUPLE_LABEL CChart::GetSelValString(const PointD selPntD, string_t nameX,
int precision, val_label_str_fn pLabValXStrFnPtr)
{
TUPLE_LABEL tuple_label;
get<IDX_LNAME>(tuple_label) = m_label;
get<IDX_LNAMEX>(tuple_label) = nameX;
bool bAddEqSign = nameX.empty() ? false : true;
get<IDX_LX>(tuple_label) = pLabValXStrFnPtr(selPntD.X, precision, bAddEqSign);
get<IDX_LNAMEY>(tuple_label) = m_labelY;
bAddEqSign = m_labelY.empty() ? false : true;
get<IDX_LY>(tuple_label) = m_pLabYValStrFn(selPntD.Y, m_precisionY, bAddEqSign);
int alpha = max(m_colChart.GetAlpha(), 128); // TODO: Use definition instead of number
Color labCol = SetAlpha(m_colChart, alpha);
get<IDX_LCOLOR>(tuple_label) = labCol;
get<IDX_LDASH>(tuple_label) = m_dashStyle;
get<IDX_LPEN>(tuple_label) = m_fPenWidth;
return tuple_label;
}
Let us talk about precision. It is the container's precision, set by the user or by an external application. The function just passes it to the container's formatting function through the pointer pLabValXStrFnPtr:
get<IDX_LX>(tuple_label) = pLabValXStrFnPtr(selPntD.X, precision, bAddEqSign);
The container packs the tuples into a std::multimap<int, TUPLE_LABEL>, and passes the multimap to its member CDataWnd* m_pDataWnd. The multimap keys are the chart IDs. We use a multimap because the chart data vector might have multiple data points with the same X and different (or the same) Y coordinates (think about a rectangle wave).
The chart container fills this multimap with tuples of data point info for the visible points of the charts only. So the multimap of tuples, CDataWnd::m_mapLabs, might have fewer elements than the multimap of the selected data points, or have no entries at all.
After receiving the multimap, m_pDataWnd can start to render itself.
The drawing itself is straightforward. First, we need to attach a window to m_pDataWnd, if it was not done before. We call CDataWnd::CreateLegend(CWnd* pParent, CPoint origPnt, bool bData) to do that. It is a wrapper around the MFC function CreateEx.
Obviously, the parent is the chart container. The flag bool bData specifies the type of the child: whether it is the data or the names label.
We do not know the selected points and their X and Y values beforehand, nor is the set of visible charts and points fixed over time. This means the size of the label window is also unknown beforehand. To calculate the label window rectangle, we need the paintDC (more precisely, the Graphics object). Therefore, CreateEx is called with zero x, y, width, and height. After creation, we can get the Graphics object from the window's DC, calculate the rectangle and the window position, and move the window to this position. But first, we have to calculate the text rectangle that envelops all strings to be displayed.
We do this by iterating over m_mapLabs of the m_pDataWnd. We search separately for the longest chart name, the longest X value, the longest Y name, and the longest Y value strings using the function Gdiplus::MeasureString. Unfortunately, fixed-width fonts do not look very nice on the screen, so I had to use a variable-width font. This means MeasureString must be applied to every string, not just to the string with the greatest length. It does not matter for the data and names labels, because there are only 10-20 charts in the container; but when we have to calculate the layout for the data view with thousands of data points, there might be a visible delay in displaying the view. The delay is still acceptable for 1,000-5,000 data points; for bigger vectors, we display the message box "Calculating...". Again, this problem exists only for big data vectors in the data view window.
The width of the text rectangle is the sum of the maximal widths of the bounding rectangles, returned by MeasureString. The height is the height of the bounding rectangle times the size of CDataWnd::m_mapLabs. The total width should include additional spacing.
Finally, we have to decide how to place the label relative to the request point. As a rule, we place the label to the right of the request point if this point is in the left half of the container window, and to the left if it is in the right half. If there is not enough space to the left of the request point, we place the left border of the label close to the left border of the client rectangle. The same is true for the right borders.
The position of the request point and the interval it is centered on are constant in the data space for the lifetime of the given request. The multimap of the selected points is also constant. So we do not need to search for the closest points again.
What changes are the position of the request point and the value of the interval in the client space. For example, assume the X-extension of the container is -10.0...10.0, and the X-coordinate of the point of request is 0.0. Then in the client space, this coordinate is mapped to X = 0.5*clientRect.Width. Let us zoom in the container to the extent -4.0...1.0. Now 0.0 is mapped to 0.8*clientRect.Width.
So we need to map the boundaries of the container's client rectangle into the data space, and pass to m_pDataWnd the selected points that fit into this transformed rectangle and are visible. Actually, only the Y boundaries of the client rectangle need to be mapped into the data space; the X boundaries are always equal to the container's X-extent. We also have to account for the local scaleY of every visible chart. To update the data window, we use the function:
size_t CChartContainer::UpdateDataLegend(MAP_SELPNTSD& mapSelPntsD, MAP_LABSTR& mapLabStr)
{
mapLabStr.clear();
if (!mapSelPntsD.empty() && in_range(m_startX, m_endX, m_dataLegPntD.X))
{
CRect clRect;
GetClientRect(&clRect);
CPoint pntLimYL(0, clRect.bottom);
CPoint pntLimYR(0, clRect.top);
PointD pntLimYLD, pntLimYRD;
MousePntToPntD(pntLimYL, pntLimYLD, m_pMatrixD);
MousePntToPntD(pntLimYR, pntLimYRD, m_pMatrixD);
MAP_SELPNTSD::iterator itSel = mapSelPntsD.begin();
MAP_SELPNTSD::iterator itSelE = mapSelPntsD.end();
while(itSel != itSelE)
{
int chartIdx = itSel->first;
CChart* chartPtr = GetChart(chartIdx);
if (chartPtr != NULL)
{
if (chartPtr->IsChartVisible())
{
PointD selPntD = itSel->second;
if (in_range(m_startX, m_endX, selPntD.X) &&
in_range(pntLimYLD.Y, pntLimYRD.Y, selPntD.Y*chartPtr->GetLocScaleY()))
{
TUPLE_LABEL tuple_res = chartPtr->GetSelValString(
selPntD, m_labelX, m_precision, m_pLabValStrFnPtr);
mapLabStr.insert(MAP_LABSTR::value_type(chartIdx, tuple_res));
}
}
++itSel;
}
else
itSel = mapSelPntsD.erase(itSel);
}
}
CPoint origPnt(-1, -1); // Not used on empty mapLabStr
if (!mapLabStr.empty())
{
PointF origPntF = m_pMatrixD->TransformToPntF(1.0, m_dataLegPntD);
origPnt = CPointFromPntF(origPntF);
}
// Recalc dataWnd window rects and show or hide the data wnd
m_pDataWnd->UpdateDataLegend(mapLabStr, this, origPnt);
return mapLabStr.size();
}
This function iterates over mapSelPntsD. The map element's key is the chart ID; the value is the selected data point. If the chart is visible and the selected data point is in the client rectangle, the function calls GetSelValString for this chart and adds the result to mapLabStr. Note that the selected point must be in the client rectangle, not in the epsX interval. The interval was used before, in the search for neighboring points.
(If mapSelPntsD is about to be changed, it is cheaper to rebuild mapSelPntsD from scratch using CChartContainer::PrepareDataLegend(PointD origPntD, double epsX, MAP_LABSTR& mapLabels, MAP_SELPNTSD& mapSelPntsD, bool bChangeMatrix) with m_dataLegPntD and m_epsX.)
To show chart names, we use the same technique and the same CDataWnd class. The container's data member is a pointer to the instance of this class, m_pLegWnd. The name string consists of a short line to show color, dash style, and pen width of the chart, and a chart name. The chart names window is a child of the container, and is always located in the upper right corner of the container's window.
Zooming and panning along the X-axis are themselves mundane jobs. You just set the container's new X-extension m_startX, m_endX and ask the container to update its image on the screen. What is more complicated is keeping history records. We need the history records to undo zooming/panning. We keep separate history records for the X- and Y-axes. Here we discuss the X-history.
We store the history records as pairs of the old m_startX, m_endX in the vector m_vHistX, a CChartContainer data member. We just push_back() the old pair of m_startX, m_endX before we set the new m_startX, m_endX. To undo the action, we use the saved values to restore m_startX, m_endX.
Things get more interesting when we change the full X-extent of the container. It might happen when we add charts, append, or truncate the charts' data vectors, delete charts, or simply change the X-extent.
To understand the problem, consider the situation when you want to analyze some part of a chart's curve. You have zoomed in the container and are looking at the curve when, all of a sudden, the application decides to append a chunk of data points to some chart. If the container updated its X-extent immediately, the picture you were so busy analyzing would go down the drain. If it did not update, you would lose the new extent.
The full X-extent of the container is always saved in the first element of the history vector. Therefore, the solution to this problem is to update the first element of the vector and not change the current values of m_startX, m_endX.
The function CChartContainer::UpdateExtX does exactly that:
void CChartContainer::UpdateExtX(double minExtX, double maxExtX, bool bRedraw)
{
if (maxExtX < minExtX) // Possible if from app
return;
double initStartX = GetInitialStartX(); // Old initial m_startX, m_endX
double initEndX = GetInitialEndX();
double startX, endX;
if (initStartX > initEndX) // The container is empty
{
startX = minExtX;
endX = maxExtX;
}
else
{
startX = min(minExtX, initStartX);
endX = max(maxExtX, initEndX);
}
if (startX == endX)
{
endX += fabs(startX*0.01);
}
if (m_vHistX.size() > 0) // Was zoomed or panned
m_vHistX.front() = make_pair(startX, endX);
else // Has no history
{
m_startX = startX;
m_endX = endX;
}
if (bRedraw)
{
if (m_bTracking && IsLabWndExist(true))
UpdateDataLegend(false);
else
RefreshWnd();
}
}
Pay attention to this piece of the code:
if (startX == endX)
{
endX += fabs(startX*0.01);
}
If we have only charts with one data point each, and these data points have the same X-coordinate, then startX == endX. To draw the container, we need some non-zero X-extension, so we artificially set endX 1% apart from startX.
The application should decide how and when to notify the user about the X-extent changes if these changes are hidden by zoom or pan modes.
It turned out that designing and coding vertical zooming and panning is much more complicated than horizontal zooming/panning.
To begin with, we perceive the horizontal and vertical dimensions of a picture differently. Think about a picture of a family reunion: we would forgive a little cropping of the picture from the left or the right, but we implicitly request and expect some clear space above heads of our relatives.
I took this into consideration: initially, chart curves fill only 0.8 of the client rectangle height. The position of this drawing space in the client rectangle depends on the position of the X-axis. It is simple: you just calculate the Y-scale as 0.8*clientRect.Height/Yextent.
Now enters the vertical zoom. You delineate the zoom borders, and you want the picture to fill the entire vertical space, the entire client rectangle height. So now you have to calculate the Y-scale as clientRect.Height/Yextent.
Meanwhile, vertical panning must only shift the drawing space it got from the previous operation.
So vertical zooming/panning uses the function:
void CChartContainer::UpdateExtY(double minExtY, double maxExtY, bool bRedraw)
{
if (maxExtY < minExtY) // Possible if from app
return;
double initMinY = GetInitialMinExtY(); // Old initial m_minExtY, m_maxExtY
double initMaxY = GetInitialMaxExtY();
double startY, endY;
if (initMinY > initMaxY) // The container is empty
{
startY = minExtY;
endY = maxExtY;
}
else
{
startY = min(minExtY, initMinY);
endY = max(maxExtY, initMaxY);
}
if (startY == endY)
{
double delta = fabs(startY*0.01);
startY -= delta*4.0;
endY += delta;
}
if (m_vHistY.size() > 0) // Was zoomed or panned
m_vHistY.front() = make_pair(startY, endY);
else // Has no history
{
m_minExtY = startY;
m_maxExtY = endY;
}
if (bRedraw)
{
if (m_bTracking && IsLabWndExist(false))
UpdateDataLegend(true);
else
RefreshWnd();
}
}
but it does the trick with the vertical scale in the function:
PAIR_DBLS CChartContainer::UpdateScales(const RectF drawRectF,
double startX, double endX, double minY, double maxY)
{
if (m_mapCharts.empty())
return make_pair(1.0, 1.0);
RectF dRF = drawRectF;
if ((m_chModeY == MODE_FULLY) || (m_chModeY == MODE_MOVEDY) || (m_chModeY == MODE_MOVEY))
dRF.Inflate(0.0f, -0.1f*drawRectF.Height); // Reserve 20% to beautify full picture
double scX = UpdateScaleX(dRF.Width, startX, endX);
double scY = UpdateScaleY(dRF.Height, minY, maxY);
return make_pair(scX, scY);
}
Again, pay attention to correction for startY == endY.
We are not done yet with drawing spaces: we have to tackle problems with undoing vertical zooming/panning. What is the problem? Suppose we have restored the previous m_minExtY, m_maxExtY from the history vector m_vHistY. Which vertical drawing space do we have to use to calculate the scaleY? If we are undoing the chain of actions MoveY1 - ZoomY1 - MoveY2 - ZoomY2 - MoveY3, then obviously, back to ZoomY1 we have to work with the full client rectangle height, and return to 0.8H upon undoing ZoomY1. Fortunately, it is easy to tell moves from zooms: if we pan, the changes of startY and endY are equal.
So, there is the function:
void CChartContainer::UndoHistStepY(bool bRedraw)
{
if (m_vHistY.empty())
return;
PAIR_POS zh = m_vHistY.back();
m_minExtY = zh.first;
m_maxExtY = zh.second;
if (m_vHistY.size() > 1) // Must check whether it is moves only
{
auto itZ = adjacent_find(m_vHistY.rbegin(), m_vHistY.rend(),
[](const PAIR_POS& lhs, const PAIR_POS& rhs) ->bool
{return (fabs(1.0 - fabs((rhs.first - lhs.first)/(rhs.second - lhs.second))) >
4.0*DBL_EPSILON);});
if (itZ == m_vHistY.rend())
m_chModeY = MODE_MOVEDY;
else
m_chModeY = MODE_ZOOMEDY;
}
else
m_chModeY = MODE_FULLY;
m_vHistY.pop_back();
if (bRedraw && IsWindow(m_hWnd) && IsWindowVisible())
{
if (m_bTracking && (m_pDataWnd != NULL))
UpdateDataLegend(true);
else
RefreshWnd();
}
}
There we use std::adjacent_find with a lambda expression. The expression returns true if the changes in minY and maxY are not equal. The algorithm starts from the end of the history vector and returns when it finds a zoomY. If there are no zooms saved, we have to work with the 0.8H drawing space.
See the measure of equality: it is the difference between 1.0 and the ratio of the difference between two adjacent minY values to the difference between two adjacent maxY values. The criterion is 4*DBL_EPSILON, where DBL_EPSILON is the smallest value such that 1.0 + DBL_EPSILON != 1.0. I cannot use the differences alone because of the quirks of floating-point arithmetic.
Finally, we have to decide what to do with zooming/panning of empty space. Obviously, it makes no sense to zoom a space without any visible data points, but what about panning? If you are panning along the X-axis, there is a chance you are in some valley and will see data points currently hidden from view. But for Y-panning, if you do not see any data points in the container's window, you will not see them if you continue in the same direction. So zooming/panning along the Y-axis is blocked if the new container extension does not contain any visible data point.
The chart data view displays the data vector of the selected chart as a table. You call the data view for the selected chart from the container's popup menu or programmatically.
It might take many rows to display the entire table, so I chose a page structure that displays one page at a time, rather than scrolling. To save screen real estate, I squeeze into one page as many rows and columns as possible.
To navigate between pages and print the data, we need buttons. It would be nice to have buttons with bitmaps, but you cannot embed resource files, external icons, and bitmaps in MFC static libraries. So the data view builds the bitmap buttons at run-time (for the same reason, the container's popup menu is also built at request time, upon a mouse right-click).
All functionality of the data view is implemented in the class CChartDataView. The class is derived from CWnd.
In response to the request to display the data view, the container calls:
bool CChartContainer::ShowDataView(CChart* chartPtr, bool bClearMap, bool bRefresh)
{
if (m_pChartDataView == NULL)
m_pChartDataView = new CChartDataView;
if (m_pChartDataView != NULL)
{
if (!IsWindow(m_pChartDataView->m_hWnd))
{
CRect parentWndRect;
GetParent()->GetWindowRect(&parentWndRect); // App main dlg window
CRect workRect;
SystemParametersInfo(SPI_GETWORKAREA, NULL, &workRect, 0);
int leftX = parentWndRect.right + DV_SPACE;
int rightX = leftX + DV_RECTW;
int topY = parentWndRect.top - DV_SPACE;
int bottomY = topY + DV_RECTH;
CRect dataViewRect(leftX, topY, rightX, bottomY);
CRect interRect;
interRect.IntersectRect(&dataViewRect, workRect);
if (interRect != dataViewRect)
{
dataViewRect.right = workRect.right - DV_SPACE;
dataViewRect.left = max(dataViewRect.right - DV_RECTW, workRect.left + DV_SPACE);
dataViewRect.top = workRect.top + DV_SPACE;
dataViewRect.bottom = min(dataViewRect.top + DV_RECTH, workRect.bottom - DV_SPACE);
}
BOOL bRes = m_pChartDataView->CreateEx(0,
AfxRegisterWndClass(CS_HREDRAW|CS_VREDRAW|CS_SAVEBITS),
_T("Chart Data View"),
WS_POPUPWINDOW|WS_CAPTION|WS_MINIMIZEBOX|WS_VISIBLE,
dataViewRect.left, dataViewRect.top,
dataViewRect.Width(), dataViewRect.Height(),
NULL,
NULL,
NULL);
if (!bRes)
{
delete m_pChartDataView;
m_pChartDataView = NULL;
return false;
}
}
else if (m_pChartDataView->IsIconic())
m_pChartDataView->ShowWindow(SW_RESTORE);
int chartIdx = chartPtr->GetChartIdx();
m_pChartDataView->ShowWaitMessage(chartIdx, chartPtr->m_vDataPnts.size());
m_pChartDataView->InitParams(chartPtr, bClearMap, this);
if (m_dataViewChartIdx != chartIdx)
{
m_dataViewChartIdx = chartIdx;
bClearMap = true;
}
if (bClearMap)
{
m_mapDataViewPntsD.clear();
if (bRefresh)
RefreshWnd();
}
}
return true;
}
The interesting points there are calculation of the view's location, creation of the controls in the data view, and communication between the data view and the container.
I wanted to set the size of the data view window close to letter format, 8.5"x11", to get WYSIWYG printing. However, this format is too big for most monitors. I chose dimensions DV_RECTW = 710 and DV_RECTH = 874 pixels. At 96 pixels per inch, it equals 7.4"x9.1".
With the size defined, I try to place the data view rectangle 50 pixels to the right of and above the application's main window, which is the parent of the container. Next, I use SystemParametersInfo with SPI_GETWORKAREA to get the working area of the display. (VS Help: "The work area is the portion of the screen not obscured by the system taskbar or by the application desktop toolbars".) If the intersection of the working area and the newly minted data view rectangle is smaller than this rectangle, I move the rectangle to the left and adjust its vertical position. So, if there is enough space, the data view window does not overlap the app's main window.
The data view window is created as a popup window to allow some leeway in positioning it on the screen.
After view creation, the container calls the function to pass the chart's data to the view.
If the window is already created and was at some moment minimized, we have a problem: a minimized window has an empty window rectangle. It would crash the function CalcLayout in InitParams. So there is this line in ShowDataView:
else if (m_pChartDataView->IsIconic())
m_pChartDataView->ShowWindow(SW_RESTORE);
The function CChartDataView::InitParams initializes the data view:
void CChartDataView::InitParams(const CChart* chartPtr, bool bClearMap, const CChartContainer* pHost)
{
m_chartIdx = chartPtr->GetChartIdx();
m_precision = pHost->GetContainerPrecisionX();
m_precisionY = chartPtr->GetPrecisionY();
m_label = chartPtr->GetChartName();
string_t tmpStr = pHost->GetAxisXName();
m_labelX = tmpStr.empty() ? string_t(_T("X")) : tmpStr;
tmpStr = chartPtr->GetAxisYName();
m_labelY = tmpStr.empty() ? string_t(_T("Y")) : tmpStr;
m_pXLabelStrFn = pHost->GetLabXValStrFnPtr();
m_pYLabelStrFn = chartPtr->GetLabYValStrFnPtr();
m_vDataPnts = chartPtr->m_vDataPnts;
m_vStrX.resize(m_vDataPnts.size());
transform(m_vDataPnts.begin(), m_vDataPnts.end(), m_vStrX.begin(),
nmb_to_string<double, false>(m_precision, m_pXLabelStrFn));
m_vStrY.resize(m_vDataPnts.size());
transform(m_vDataPnts.begin(), m_vDataPnts.end(), m_vStrY.begin(),
nmb_to_string<double, true>(m_precisionY, m_pYLabelStrFn));
m_currPageID = 0;
SetOwner((CWnd*)pHost);
m_vRows.clear();
if (bClearMap)
m_mapSelCells.clear();
else
UpdateDataIdx();
CalcLayout();
m_header = GetTableHeader(); // Set the header string
CreateChildren();
bool bEnableLeft = m_currPageID == 0 ? false : true;
bool bEnableRight = (m_currPageID + 1 < m_nPages) ? true : false;
}
The container is set as an owner of the data view, so the view will automatically hide, set visible, and close with the container.
To accelerate the drawing, we provide two auxiliary vectors of strings: m_vStrX for the X values and m_vStrY for the Y values of the chart data vector. We fill them with the algorithm std::transform and the custom-made functor template <typename T, bool bY> struct nmb_to_string (see Util.h).
The navigation buttons are instances of the class CPageCtrl : public CButton. The buttons are created as children of m_pDataView. The drawing of the buttons' bitmaps is embedded in CPageCtrl::OnPaint (see DataView.cpp for details).
Now let us turn to the communication between the data view and the container. We need to inform the container when we select/deselect a cell in the table. The container then shows/hides, on the chart's curve in the container window, the data point selected in the data view. In addition, the data view needs to be updated if the container name, the chart's data vector, the X- and Y-axes names, the precision, or the formatting functions change in the container.
The data view has a copy of the data vector of the chart it displays, CDataView::m_vDataPnts.
It also keeps the data points for all selected cells in the map CDataView::m_mapSelCells. The key of the map element is the cell's ID. The data view updates the map when the selection changes.
The container has the copy of this map in CChartContainer::m_mapDataViewPntsD. After change of the selection in the data view, the data view calls the container's function:
CChartContainer* pContainer = static_cast<CChartContainer*>(GetOwner());
pContainer->UpdateDataViewPnts(m_chartIdx, dataID, dataPntD, bAdd);
The container uses m_mapDataViewPntsD to draw circles around the data points selected in the data view. This enables you to see exactly where a particular point sits on the chart's curve.
Obviously, changes in chart attributes such as the name of the Y-values, the Y-precision, and the Y-formatting function might change the data view layout. The same is true for the name of the X-values and the X-formatting function, and for changes of the chart's data vector (e.g., appended or truncated). Changes of the chart and container names influence the page headers only. We found it more convenient to recalculate only the affected parts of the layout. To do this, we use the function CChartContainer::UpdateDataView. This function calls CDataView::UpdateParams:
bool CChartDataView::UpdateParams(const CChart* chartPtr, int flagsData)
{
bool bRes = false;
int flags = 0;
size_t dataOffset = 0;
int chartIdx = chartPtr->GetChartIdx();
if (chartIdx == m_chartIdx)
{
CChartContainer* pHost = dynamic_cast<CChartContainer*>(GetOwner());
ENSURE(pHost != NULL);
if (!chartPtr->HasData())
{
pHost->DestroyChartDataView();
return true;
}
m_label = chartPtr->GetChartName(); // Page header must be changed with new date stamp
int precisionX = pHost->GetContainerPrecisionX();
if (m_precision != precisionX) // PrecisionX: entire X column is changing
{
m_precision = precisionX;
flags |= F_VALX;
}
int precisionY = chartPtr->GetPrecisionY();
if (m_precisionY != precisionY) // PrecisionY: entire Y column is changing
{
m_precisionY = precisionY;
flags |= F_VALY;
}
string_t tmpStr = pHost->GetAxisXName();
string_t labelX = tmpStr.empty() ? string_t(_T("X")) : tmpStr;
if (m_labelX != labelX) // X-axis name: column header and width might change
{
m_labelX = labelX;
flags |= F_NAMEX;
}
tmpStr = chartPtr->GetAxisYName();
string_t labelY = tmpStr.empty() ? string_t(_T("Y")) : tmpStr;
if (m_labelY != labelY) // Y-axis name: column header and width might change
{
m_labelY = labelY;
flags |= F_NAMEY;
}
val_label_str_fn pXLabelStrFn = pHost->GetLabXValStrFnPtr();
if (m_pXLabelStrFn != pXLabelStrFn) // Entire X-column should be changed
{
m_pXLabelStrFn = pXLabelStrFn;
flags |= F_VALX;
}
val_label_str_fn pYLabelStrFn = chartPtr->GetLabYValStrFnPtr();
if (m_pYLabelStrFn != pYLabelStrFn) // Entire Y-column should be changed
{
m_pYLabelStrFn = pYLabelStrFn;
flags |= F_VALY;
}
if (flagsData != F_NODATACHANGE)
{
size_t endOffs = 0;
switch (flagsData)
{
case F_APPEND:
endOffs = OnChartAppended(chartPtr->m_vDataPnts);
if (!(flags & (F_VALX|F_VALY|F_DSIZE)))
{
dataOffset = endOffs;
}
flags |= (F_VALX|F_VALY|F_DSIZE);
break;
case F_TRUNCATE:
endOffs = OnChartTruncated(chartPtr->m_vDataPnts);
if (!(flags & (F_VALX|F_VALY|F_DSIZE)))
{
dataOffset = 0;
}
flags |= (F_VALX|F_VALY|F_DSIZE);
break;
case F_REPLACE:
case F_REPLACE|F_HASCELLSMAP:
dataOffset = OnChartDataReplaced(
chartPtr->m_vDataPnts, flags&F_HASCELLSMAP ? true : false);
flags |= (F_VALX|F_VALY|F_DSIZE);
break;
}
}
else
{
if (flags & F_VALX)
{
transform(m_vDataPnts.begin() + dataOffset, m_vDataPnts.end(),
m_vStrX.begin() + dataOffset, nmb_to_string<double,
false>(m_precision, m_pXLabelStrFn));
}
if (flags & F_VALY)
{
transform(m_vDataPnts.begin() + dataOffset, m_vDataPnts.end(),
m_vStrY.begin() + dataOffset, nmb_to_string<double,
true>(m_precisionY, m_pYLabelStrFn));
}
}
if (IsIconic())
ShowWindow(SW_RESTORE);
// Set the header string
m_header = GetTableHeader();
m_vRows.clear();
if ((flags != 0) && (dataOffset != m_vDataPnts.size()))
CalcLayout(flags, dataOffset);
if (flagsData != F_NODATACHANGE)
{
if (flagsData & F_TRUNCATE)
{
if (m_nPages <= m_currPageID)
m_currPageID = 0;
}
else if ((flagsData & F_APPEND) == 0)
m_currPageID = 0;
}
else
m_currPageID = 0;
bool bEnableLeft = m_currPageID == 0 ? false : true;
bool bEnableRight = (m_currPageID + 1 < m_nPages) ? true : false;
bRes = true;
}
return bRes;
}
This function looks for changed chart attributes and sets the appropriate flags. The flags control the tasks to be performed by the data view to reflect the changes. The flagsData parameter informs the function about changes in the chart data vector. This information is used to calculate the page the data view will show after the update. It might be the old page, if the old page still keeps some data points, or the first page, if the old page was truncated. I refer you to ChartDataView.cpp for further details.
You can print the container window from the container's popup menu or programmatically. You can also print the chart data tables from the data view window.
Let us begin with the container.
First, let me say that what we are going to print is not WYSIWYG. If the user decides to print only one chart, we print only the selected chart; otherwise, we print all visible charts. Second, on the screen, to get at details, we can always move charts, zoom in, hide the data and name labels, etc. The printout is forever. So, to avoid obscuring the chart curves, we do not show the data and names windows. Instead, we print the chart info below the container window. To make measurements and calculations with the printout possible, we include the Y-scale value in the chart info, and always print the X-axis labels. Third, the body of the printing is implemented as the static function CChartContainer::PrintCharts. We did so to allow printing from a worker thread.
In the article KB133275, Microsoft explains how to print from a class other than an MFC CView. In addition, there is a tutorial on GDI/GDI+ printing on the Internet. The tutorial is overcomplicated, and Microsoft does not mention GDI+.
Still, I followed the framework of the Microsoft sample code.
The code for printing is in the function CChartContainer::PrintCharts(CChartContainer* pContainer, float dpiRatioX, HDC printDC) (ChartContainer.cpp).
The application must prepare parameters and pass them to the function. The code should look like:
.................................................
int scrDpiX = GetScreenDpi();
SendNotification(CODE_PRINTING);
CChartContainer* pContainer = CloneChartContainer(string_t(_T("")), true);
SendNotification(CODE_PRINTED);
PrintCharts(pContainer, scrDpiX, printDlg.GetPrinterDC());
delete pContainer;
First, we clone the container. The clone inherits the name and the state of its ancestor. No window is attached to the clone: we do not need one. Before and after cloning, we send notifications to the container's parent. It can use them in a multithreading environment or ignore them altogether.
We use the clone because the size of the printing area (page) is different from the size of the ancestor's client rectangle. Our drawing functions use the transform matrix of the container, so we have to recalculate the container's transform matrix for printing. We also are going to change the state of the clone to allow printing the X-axis labels.
Second, we calculate the ancestor's screen resolution in dots per inch, calling:
int CChartContainer::GetScreenDpi(void)
{
CPaintDC containerDC(this);
int scrDpiX = containerDC.GetDeviceCaps(LOGPIXELSX);
int scrDpiY = containerDC.GetDeviceCaps(LOGPIXELSY);
ENSURE(scrDpiX == scrDpiY);
return scrDpiX;
}
I will explain why we need it for printing in a moment.
Third, we need the printer DC.
If we started with the MFC dialog CPrintDialog, after the printer is selected, and the OK button is clicked, the handle to the printer DC is:
HDC printDC = printDlg.GetPrinterDC();
Now we can call PrintCharts.
We get a pointer to a CDC and attach the printer DC to it, following KB133275:
CDC* pDC = new CDC;
pDC->Attach(printDC);
We create a Gdiplus::Graphics object and set the document units:
Graphics* grPtr = new Graphics(printDC);
grPtr->SetPageUnit(UnitDocument);
In this mode, one page unit equals 1/300 of an inch.
After the page units are set, all GDI+ functions interpret any value passed to them as UnitDocument units. For example, if we set a pen's width to two, it is 2/300 of an inch on the screen and on the paper. So we have to correct the values of all literals used in printing.
We use the scrDpiX parameter to get dpiRatioX = 300.0f/scrDpiX. For drawing on the screen, this ratio is 1.0; so wherever we need to adjust a pen width for printing, we write pen.SetWidth(width*dpiRatio).
And the last preparation job: get the client rectangle:
RectF rGdiF;
grPtr->GetVisibleClipBounds(&rGdiF);
// The same as the clip rect
Finally, begin printing:
pDC->StartDoc(pContainer->m_name.c_str());
// MFC functions
pDC->StartPage();
I mentioned earlier that the chart info strings are printed below the container window. The chart info consists of the chart name, the vertical scale for this chart in data space (Y units per screen inch), the X-axis name, the X value string, the Y-axis names, and the Y value strings of the data points displayed in the data label of the ancestor. If there are no selected points, or the selected point is out of view (as a result of zooming/panning), then instead of the X and Y names and value strings we print the chart's minimal and maximal Y values. The short line before the info string has the same color, dash style, and pen width as the chart. It helps to identify the charts easily.
If there are too many charts in the container, the chart info lines might continue to the next page. Every page has a header: a name of the container, and a time when the printing started.
In the drawing functions for the printing headers and chart info, we use the point unit to set the font size. Because we have set the page unit to UnitDocument, the font size is the same no matter what the printer resolution is.
After we are done with printing, we should clean up:
// End printing
pDC->EndPage();
pDC->EndDoc();
delete grPtr;
pDC->Detach();
delete pDC;
Note: Sometimes you might get circles around data points that are not visible on the screen, because the printed page is larger than the container's client window. If you do not need them, hide them from the container popup menu, or call the function CChartContainer::ShowChartPoints(int chartIdx, bool bShow, bool bRedraw) with chartIdx = -1 and bShow = false before printing. Restore the ShowChartPoints state after printing.
The printing of the data view is similar.
You can save a chart's data vector. You can also save the selected chart, all visible charts, or all charts together with their visual attributes and data series into an XML file.
To get the chart data vector, you call one of the overloads of the function:
CChartContainer::ExportChartData(string_t chartName, V_CHARTDATAD& vDataPnts);
The overloads substitute std::vector<std::pair<double, double> > or a pair of vectors std::vector<double>& vX, std::vector<double>& vY for the vector of data points V_CHARTDATAD& vDataPnts. Because the chart ID is an internal parameter of the chart control, we select the container's chart by name.
To save the data into an XML file, we use the function HRESULT CChartContainer::SaveChartData(string_t pathName, bool bAll) .
Actually, all functionality related to the conversion to and from XML files is placed in the class CChartsXMLSerializer.
The static function CChartsXMLSerializer::ChartDataToXML(pathName.c_str(), pContainer, chartName, bAll) converts chart attributes and data vectors to XML. SaveChartData provides parameters for this function.
First of all, it processes the pathName. If the name is an empty string, the user is presented with the MFC CFileDialog to set the path and the XML file name.
Second, SaveChartData clones the container to make multithreading possible; before and after cloning, the function sends a notification to the container's parent. We clone the container because conversion to XML could take a long time for big data vectors, so the XML converter works with the clone, not with the container itself.
Finally, the function looks at the container charts to pass the chart names to the serializer. The parameter bAll tells which charts are to be saved. If bAll = true, the chart names must be the names of the visible charts; if a chart is not visible, nothing will be saved. If bAll = false, the chart names might be the names of any charts, visible or not. If chartName is an empty string, all visible charts or all of the container's charts will be saved, depending on the parameter bAll; if the name of an existing chart is passed, only this chart will be saved. SaveChartData also looks for a selected chart: if there is one, its name is passed to the converter function and only it will be saved. In this case bAll makes no difference, because only a visible chart can be selected.
You can call SaveChartData from the container's popup menu or directly from your application. The popup menu automatically calls the function with an empty pathName and bAll = false. Your application can use any combination of these parameters. An empty pathName means that the user will be presented with an instance of the MFC CFileDialog to choose the path and file name. Remember, bAll = true means that all charts in the container will be saved if there is no selected chart; bAll = false means only the visible chart(s) will be saved.
The user or the programmer can control which charts are saved indirectly, by selecting the chart to save or by making visible all the charts to be saved.
I used MSXML6 to do the job. The structure of the XML file is shown above. Note that the XML schema changed in version 1.1: an XML file saved by version 1.0 cannot be loaded into version 1.1 of the container.
To load charts from an XML file, you have to call the function:
HRESULT CChartContainer::LoadCharts(LPCTSTR fileName, const MAP_CHARTCOLS& mapContent)
{
HRESULT hr = CChartsXMLSerializer::XMLToCharts(fileName, this, mapContent);
if (hr == S_OK)
{
if (IsLabWndExist(false))
PrepareDataLegend(m_dataLegPntD, m_epsX,
m_pDataWnd->m_mapLabs,
m_mapSelPntsD, NULL);
UpdateContainerWnds();
}
return hr;
}
It calls:
HRESULT CChartsXMLSerializer::XMLToCharts(LPCTSTR fileName,
CChartContainer* pContainer, const MAP_CHARTCOLS& mapContent)
XMLToCharts reads the XML file using MSXML6, and adds the chart(s) to the container.
We have a couple of problems here. First, the XML file might contain several charts, but we might not want to load all of them. Second, the names and visuals of the charts being loaded might clash with the names and visuals of the charts already in the container. All we know from the outset is the name of the XML file.
To get more information about charts in the file, use the functions:
HRESULT CChartContainer::GetChartNamesFromXMLFile(LPCTSTR fileName, MAP_CHARTCOLS& mapContent)
HRESULT CChartContainer::GetChartNamesFromXMLFile(LPCTSTR fileName, MAP_NAMES& mapNames)
The last function was introduced in the version 1.1.
The functions are wrappers around the functions from CChartsXMLSerializer with the same names and signatures. The wrapping spares you from including an additional header, ChartXMLSerializer.h, in your project.
The first function, GetChartNamesFromXMLFile, retrieves the chart names and colors from the file and stores them in a MAP_CHARTCOLS. The map keys are the chart names; the values are the chart colors. Given the map, you can erase the unwanted charts from it and change the colors of the charts you decided to load into the container. After the map is adjusted, you pass it to LoadCharts. Of course, you can fill the map manually if you know the chart names and colors, and you can also change the chart colors after loading the charts. A chart name can be automatically changed inside LoadCharts if the container already has a chart with the same name; the name in the XML file will not change.
The second function retrieves names: the chart names, the names of the X- and Y-axes, and samples of the formatted X and Y value strings. It gives you the opportunity to write appropriate formatting functions, include them in your application, and register them with your chart container.
If you are loading charts into an already populated container, and the container is in tracking mode, you have to update the data and name labels. It turns out that it is computationally cheaper to prepare the new data legend from scratch, than to change the existing map of the selected data points.
The function UpdateContainerWnds resets the container window and labels to their current state.
If you just want to replace the container's charts with the charts from an XML file, use the function HRESULT CChartContainer::ReplaceContainerCharts(LPCTSTR fileName). No problems with colors, names, etc.
The general scheme of things is very simple: produce a bitmap with the charts drawn into it, and save the bitmap in any picture format your OS supports using Gdiplus::Save(const WCHAR* filename, const CLSID* clsidEncoder, const EncoderParameters* encoderParams). As I have mentioned above, all drawing in the container's OnPaint() is done into a memory bitmap first to avoid flickering, so this part of the task can be done using the code from OnPaint(). Nevertheless, there is a problem: the name and data labels are displayed as the container's children. The child windows have their own OnPaint() and will not be shown in the parent's bitmap. The solution is to reuse the code that draws the children's bitmaps: draw the children into the main bitmap, carefully positioning the children's layout rectangles on it. For details, look into CChartContainer::DrawContainerToBmp(Graphics* rGdi, Bitmap& bmp) in ChartContainer.cpp.
I think the enumeration of the supported picture formats is also interesting. Here is the code (before if (pathName.empty())):
Status CChartContainer::SaveContainerImage(string_t pathName)
{
if (!HasChartWithData(-1,true)) // Repeat to provide for standalone use
return GenericError;
Status status = Aborted;
UINT num; // number of image encoders
UINT size; // size, in bytes, of the image encoder array
// How many encoders are there? How big (in bytes) is the array of all ImageCodecInfo objects?
GetImageEncodersSize(&num, &size);
// Create a buffer large enough to hold the array of ImageCodecInfo objects
// that will be returned by GetImageEncoders.
ImageCodecInfo* pImageCodecInfo = (ImageCodecInfo*)(malloc(size));
// GetImageEncoders creates an array of ImageCodecInfo objects
// and copies that array into a previously allocated buffer.
GetImageEncoders(num, size, pImageCodecInfo);
// Get filter string
sstream_t stream_t;
string_t str_t, tmp_t;
string_t szFilter;
CLSID clsID;
typedef std::map<string_t, CLSID> MAP_CLSID;
typedef MAP_CLSID::value_type TYPE_VALCLSID;
typedef MAP_CLSID::iterator IT_CLSID;
MAP_CLSID mapCLSID;
for(UINT j = 0; j < num; ++j)
{
stream_t << pImageCodecInfo[j].MimeType <<_T("\n");
getline(stream_t, str_t);
size_t delPos = str_t.find(TCHAR('/'), 0);
str_t.erase(0, delPos + 1);
clsID = pImageCodecInfo[j].Clsid;
mapCLSID.insert(TYPE_VALCLSID(str_t, clsID));
tmp_t = str_t;
std::transform(tmp_t.begin(), tmp_t.end(), tmp_t.begin(),
[](const TCHAR&tch) ->TCHAR {return (TCHAR)toupper(tch);});
szFilter += tmp_t + string_t(_T(" File|*.")) + str_t + string_t(_T("|"));
}
szFilter += string_t(_T("|"));
free(pImageCodecInfo);
if (pathName.empty()) // Let the user choose
{
TCHAR szWorkDirPath[255];
GetModuleFileName(NULL, szWorkDirPath, 255);
PathRemoveFileSpec(szWorkDirPath);
string_t dirStr(szWorkDirPath);
size_t lastSlash = dirStr.find_last_of(_T("\\")) + 1;
dirStr.erase(lastSlash, dirStr.size() - lastSlash);
dirStr += string_t(_T("Images"));
szFilter += string_t(_T("|"));
CFileDialog fileDlg(FALSE, _T("BMP File"), _T("*.bmp"),
OFN_HIDEREADONLY|OFN_OVERWRITEPROMPT|OFN_NOCHANGEDIR|OFN_EXPLORER,
szFilter.c_str(), this);
fileDlg.m_ofn.lpstrInitialDir = dirStr.c_str();
fileDlg.m_ofn.lpstrTitle = _T("Save As Image");
string_t strTitle(_T("Save "));
if (fileDlg.DoModal() == IDOK)
{
pathName = string_t(fileDlg.GetPathName());
}
else
return Ok;
}
if (pathName.empty())
return InvalidParameter;
size_t pos = pathName.find(_T("."));
if (pos == string_t::npos)
return GenericError;
string_t szExt = pathName.substr(pos);
pos = szFilter.find(szExt);
if (pos == string_t::npos)
return UnknownImageFormat;
szExt.erase(0, 1);
IT_CLSID it = mapCLSID.find(szExt);
if (it != mapCLSID.end())
{
SendNotification(CODE_SAVEIMAGE);
CRect clR;
GetClientRect(&clR);
Rect rGdi = CRectToGdiRect(clR);
Bitmap bmp(rGdi.Width, rGdi.Height);
DrawContainerToBmp(rGdi, bmp);
clsID = it->second;
status = bmp.Save(pathName.c_str(), &clsID);
SendNotification(CODE_SAVEDIMAGE);
}
else status = UnknownImageFormat;
return status;
}
Like saving charts to an XML file, the function receives the path to the file where the image will be saved. The path might be an empty string; if that is the case, a CFileDialog is displayed to ask the user to select the path and the file name. I had problems using the _tupr functions, so I used the transform algorithm with a lambda expression to convert the picture-format string to upper case. Of course, only what is visible in the container's window is saved.
When the size of the container's parent window changes, the container's window might not receive the WM_SIZE message; this happens, e.g., in a dialog-based application.
You have to call CChartContainer::OnChangedSize(int cx, int cy) from the appropriate handler of the parent or the owner of the container to change the container window size (see the "Clone Container" in the demo).
There is a little trick in OnChangedSize. When the parent window is changing its size, it is not a continuous process: here and there, the user unintentionally interrupts the smooth movement of the mouse. If the data and/or name labels are displayed, they will flicker. So upon every call, OnChangedSize hides the visible labels, redraws the container, and restarts a timer. The timer's delay is 50 ms: small enough not to be an annoyance, but big enough to keep the timer running between WM_SIZE messages. Finally, 50 ms after the sizing ends, the timer procedure redraws the labels on the screen.
In my practice, I sometimes used the chart container (a clone of it) without a parent, as a popup window with a resizable border. In that case, the clone receives and must process WM_SIZE messages, so the chart container now has its own OnSize handler. But remember, this handler is not called if the chart container is a child of a CDialog parent.
All API functions are in the ChartContainer.h file. I am going to mention only the most important of them.
First, there are chart interface functions: AddChart, AppendChartData, ReplaceChartData, TruncateChart, and RemoveChart.
I have discussed AddChart, AppendChartData, and TruncateChart in the chapter "Add charts to the chart container" above.
The function:
bool ReplaceChartData(int chartIdx, V_CHARTDATAD& vData, bool bClip = false,
                     bool bUpdate = false, bool bVerbose = false, bool bRedraw = false);
replaces the old chart data with the new data vector. The parameter bUpdate defines whether the container's X- and Y-extensions should be recalculated. If bClip == true, only data points inside the chart's old X-range are copied from vData. If bVerbose == true, a warning about the loss of old data points is displayed. The function returns true on success.
There are three overloads for the three other types of data: the time series, the vector of std::pair<double, double>, and for two vectors of X and Y values. For the time series overload you should provide the start X coordinate and X step.
The chart data vector can be replaced with an empty vector, which makes it possible to start anew with the same chart visuals.
The function:
bool RemoveChart(int chartIdx, bool bCorrectMinMax, bool bRedraw)
does just that: removes the chart from the container. If bCorrectMinMax == true, the container calculates and sets new m_startX and m_endX boundaries of the X-axis. The function returns true on success.
Access to chart data members is permitted only via container member functions. Mostly, you have to pass to these functions the chart's ID. The charts are known outside the container by their names. To get the chart ID, you should use the container member function:
int GetChartIdx(string_t chartName)
or store and remember the value returned by AddChart.
Here string_t is an alias of std::basic_string<TCHAR>.
The chart ID cannot be a negative value. The ID value -1 has a special meaning: if it is returned by a "Get" function, it means failure (e.g., chart does not exist). When it is passed to a "Set" function, it means "All charts in this container" or "All visible charts".
The functions that change the appearance of the container often have a parameter bool bRedraw. If it is set to true, the container will be redrawn.
Unfortunately, there are too many member functions in the container to discuss all of them. Please look at ChartContainer.h and ChartContainer.cpp.
As I mentioned before, the container sends notifications to its parent when the user changes the container's X-extension, a chart's "Show/Hide data points" flag, or/and its visibility from the popup menu. This makes it possible for the parent to react to these actions. Notification is a standard MFC/Win32 mechanism: the container sends a WM_NOTIFY message to the parent:
LRESULT CChartContainer::SendNotification(UINT code, int chartIdx)
{
NMCHART nmchart;
nmchart.hdr.hwndFrom = m_hWnd;
nmchart.hdr.idFrom = GetDlgCtrlID();
nmchart.hdr.code = code;
nmchart.chartIdx = chartIdx;
switch (code)
{
case CODE_VISIBILITY: nmchart.bState = IsChartVisible(chartIdx); break;
case CODE_SHOWPNTS: nmchart.bState = AreChartPntsAllowed(chartIdx).first; break;
case CODE_EXTX:
case CODE_EXTY:
nmchart.minX = GetStartX();
nmchart.maxX = GetEndX();
nmchart.minY = GetMinY();
nmchart.maxY = GetMaxY();
break;
case CODE_REFRESH: nmchart.minX = GetInitialStartX();
nmchart.maxX = GetInitialEndX();
nmchart.minY = GetInitialMinExtY();
nmchart.maxY = GetInitialMaxExtY();
break;
case CODE_SAVEIMAGE:
case CODE_SAVEDIMAGE:
case CODE_SAVECHARTS:
case CODE_SAVEDCHARTS:
case CODE_PRINTING:
case CODE_PRINTED:
case CODE_SCY: break;
case CODE_TRACKING: nmchart.bState = m_bTracking;
break;
default: return 0;
}
CWnd* parentPtr = (CWnd*)GetParent();
if (parentPtr != NULL)
return parentPtr->SendMessage(WM_NOTIFY, WPARAM(nmchart.hdr.hwndFrom), LPARAM(&nmchart));
return 0;
}
The notification codes and extension of the NMHDR structure are defined in ChartDef.h:
typedef struct tagNMCHART
{
NMHDR hdr;
int chartIdx;
bool bState;
double minX;
double maxX;
double minY;
double maxY;
} NMCHART, *PNMCHART;
// Codes: Toggle Visibility
#define CODE_VISIBILITY 1U
// Show points
#define CODE_SHOWPNTS 2U
// Ext X was changed
#define CODE_EXTX 3U
#define CODE_EXTY 4U
#define CODE_REFRESH 5U
// Save (to use in multithreading)
#define CODE_SAVEIMAGE 6U
#define CODE_SAVEDIMAGE 7U
#define CODE_SAVECHARTS 8U
#define CODE_SAVEDCHARTS 9U
// Printing
#define CODE_PRINTING 10U
#define CODE_PRINTED 11U
// Scale change
#define CODE_SCY 12U
// For enabling tracking from popup menu
#define CODE_TRACKING 14U
As usual, the parent should implement the MFC handler for notification. For example:
BEGIN_MESSAGE_MAP(CChartCtrlDemoDlg, CDialogEx)
.................................
ON_NOTIFY(CODE_VISIBILITY, IDC_STCHARTCONTAINER, OnChartVisibilityChanged)
...................................
END_MESSAGE_MAP()
afx_msg void OnChartVisibilityChanged(NMHDR*, LRESULT*);
In an application (.exe), all version information is in the version resource in the .rc file. As we already know, resource files cannot be included in the static library file. So I used stand-ins: following MS recommendations, I placed the following definitions at the beginning of ChartDef.h:
// Version
//
#define FILEVER 2,0,1,1
#define PRODUCTVER 2,0,1,1
#define STRFILEVER _T("2.0.1.1")
#define STRPRODUCTVER _T("2.0.1.1")
#define STRCOMPNAME _T("geoyar")
#define STRFILEDESCRIPTION _T("ChartCtrlLib")
#define STRFILENAME _T("ChartCtrlLib.lib")
#define STRPRODNAME _T("ChartCtrlLib")
In the same file, I have defined access functions:
// Retrieve version info
inline string_t GetLibFileVersion(void){return string_t(STRFILEVER);}
inline string_t GetLibProductVersion(void) {return string_t(STRPRODUCTVER);}
inline string_t GetLibCompName(void) {return string_t(STRCOMPNAME);}
inline string_t GetLibFileDescr(void) {return string_t(STRFILEDESCRIPTION);}
inline string_t GetLibFileName(void) {return string_t(STRFILENAME );}
inline string_t GetLibProdName(void) {return string_t(STRPRODNAME);}
So at first glance at ChartDef.h, you will get information about the version of ChartCtrlLib.lib. Your application can use the version access functions to do the same.
It is a dialog-based application. The chart container is a control in the main application dialog.
All controls to manipulate the container (add/change/remove/append/truncate/delete/load from an XML file) are in a tab control to the right of the container window. The tabs of the control are shown below.
The application has data generators to generate the sinusoid, sin(x)/x, exponent, rectangle wave (multi-valued function), and random data series.
Tab 1 is the "Add Chart" tab. It is the default tab. You will see it first when you start the demo.
The "Add Chart" tab has edit boxes to enter the chart name and/or Y value name, controls to set visual attributes, chart's X-extent, and a number of data points in the data series. There is a slide to set Y-precision. The Y-multiplier slider sets the order of magnitude of the Y-coordinates of the data series. E.g., if the slider is set to -2, all Y-coordinates will be multiplied by 10-2. If you do not enter the chart name, the application would generate the names for you.
Tab 0 is the "Container Properties" tab.
The control group "Set Colors" is enabled only if the container is empty. The "Precision" and "Set Range X" sliders and "X-Axis Name" edit box are enabled only if the container has at least one chart in it. Of course, in real life, you could change the colors of the container elements at any time. However, in the demo, I decided to allow the change of the colors only on an empty container to make it easy to set the right chart colors later when you already know the container colors. The X-axis name edit control allows changing the X-axis name. The name is the same for all charts in the container. The default name is "X".
When you are on this tab, user input to the container is blocked: you cannot zoom/pan, invoke the popup menu, etc. I did this to make it possible to undo the changes you applied when you clicked the "Apply" button. You can undo one step or go back to the state you had when you opened the tab.
The user input is unblocked when you quit this tab.
Tab 2 is for changing the attributes of charts already in the container. You select the chart from the list box and, using the tab's controls, set the names of the chart and its Y-values, the Y-precision, and the visuals: color, dash style, pen width, and tension. You can undo all the changes you made by selecting the chart in the list box and clicking the "Undo" button while you are staying on this tab; switching to any other tab will clear the change history. The chart name must be unique for the given session, and the chart and Y-names must have less than 28 characters. If you violate these rules, the container's "Set" function will truncate the entered names or/and add suffixes to them.
Tab 3 is the "Append Chart" tab. The list box control is shown for info only. For this demo, you cannot select one or several charts to append; all charts in the container will be appended. There is the checkbox "Animate". If it is checked, the "Append" command imitates an oscilloscope (sort of). You can discard the changes to the container with "Undo Append".
Tab 4 is the "Truncate Chart" tab. Select the chart, the start and end X coordinates, and truncate the chart. If the checkbox "Recalc Scales" is checked, the container will be forced to recalculate its X- and Y-scales to shrink to the new maximal ranges of its X and Y extensions. The checkbox "Keep Range X" can be activated only after the first successful truncation. If it is checked, it will lock the new X-extension to truncate all other charts to the same X-extension. You can select the chart and restore it to the initial state using the button "Undo".
Tab 5 is the "Remove Chart" tab. Just select and remove (delete) the chart. Again, "Recalculate Scales" updates the container's X- and Y-extents.
Tab 6 is the "Clone/Load" tab.
The "Clone" button copies the container in the new popup window with the resizable border. The owner of that window is the source container in the main application dialog.
The controls in the "Load from XML file" group are doing just that: they help to select and load chart(s) from the XML file. When you select the file, the chart names from the file are displayed in the multi-selection list box. Select the charts. Use the list box below and the color button to change the chart colors, if it is needed, and click "Apply Load".
In the tabs, I use the SliderGdiCtrl controls as sliders. To position the slider's thumb exactly where you want, left click on the slider to set the focus to it, and use arrow keys to move the thumb.
If the data view window is visible or minimized when the user changes chart attributes, appends, truncates, or removes a chart, the data view will be updated automatically.
I suggest the following scenario to play:
The demo source code can be used as a reference design:
Task                               Reference header          Reference source file
Change colors of cont. elements    DlgGenProp.h              DlgGenProp.cpp
Set precision and X-extent         DlgGenProp.h              DlgGenProp.cpp
Add charts                         DlgAddChart.h             DlgAddChart.cpp
Change chart attributes            DlgChangeChart.h          DlgChangeChart.cpp
Append charts                      DlgAppendChart.h          DlgAppendChart.cpp
Truncate chart                     DlgTruncate.h             DlgTruncate.cpp
Remove chart                       DlgRemoveChart.h          DlgRemoveChart.cpp
Load chart from XML file           DlgMisc.h                 DlgMisc.cpp
Clone container                    DlgMisc.h, DlgCharts.h    DlgMisc.cpp, DlgCharts.cpp
The file ChartCtrlLib.zip includes all source files for the ChartCtrlLib static library. It includes:
Byte arrays, typed values, binary reader, and fwrite
I was trying to read a binary file created by a native app using the C# BinaryReader class, but kept getting weird numbers. When I checked the hex in Visual Studio, I saw that the bytes were backwards from what I expected, indicating endianness issues. This threw me for a loop, since I was writing the file from C++ on the same machine on which I was reading it in C#. Also, I wasn't sending any data over the network, so I was a little confused. Endianness is usually an issue across machine architectures or over the network.
The issue is that I ran into an endianness problem when writing values byte by byte versus using the actual data type of an object. Let me demonstrate the issue.
What happens if I write 65297 (0xFF11) using C++?
#include "stdafx.h" #include "fstream" int _tmain(int argc, _TCHAR* argv[]) { char buffer[] = { 0xFF, 0x11 }; auto _stream = fopen("test2.out","wb"); fwrite(buffer, 1, sizeof(buffer), _stream); fclose(_stream); }
And read it in using the following C# code
public void ReadBinary()
{
    using (var reader = new BinaryReader(new FileStream(@"test2.out", FileMode.Open)))
    {
        // read two bytes and print them out in hex
        foreach (var b in reader.ReadBytes(2))
        {
            Console.Write("{0:X}", b);
        }
        Console.WriteLine();

        // go back to the beginning
        reader.BaseStream.Seek(0, SeekOrigin.Begin);

        // read a two byte short and print it out in hex
        var val = reader.ReadUInt16();
        Console.WriteLine("{0:X}", val);
    }
}
What would you expect I get? You might think we get the same thing both times, a 16 bit unsigned integer (2 bytes) and reading two bytes from the file should be the same right?
Actually, I got
FF11 <-- reading in two bytes
11FF <-- reading in a two byte short
What gives?
Turns out that since I’m on a little endian system (intel x86), when you read data as a typed structure it will always read little endian. The binary reader class in C# reads little endian, and fwrite in C++ will write little endian, as long as you aren’t writing a value byte by byte.
When you write a value byte by byte, it doesn't go through the endianness conversion a typed write does. This means that you should use consistent write semantics: if you are going to write values byte by byte, always write them byte by byte; if you are going to use typed data, always write with typed data. If you mix the write paradigms, you can get into weird situations where some numbers are "big endian" (by being written byte by byte) while other values are little endian (by using typed data).
Here’s a good quote from the ibm blog on writing endianness independent code summarizing the effect:
Endianness does matter when you use a type cast that depends on a certain endian being in use.
If you do happen to need to write byte by byte, and you want to read values in directly as casted types in C#, you can make use of Jon Skeet’s MiscUtil which contains a big endian and little endian binary reader/writer class. By using the big endian reader you can now read files where you wrote them from C++ byte by byte.
Here is a fixed version
using (var reader = new EndianBinaryReader(new BigEndianBitConverter(),
                                           new FileStream(@"test2.out", FileMode.Open)))
{
    foreach (var b in reader.ReadBytes(2))
    {
        Console.Write("{0:X}", b);
    }
    Console.WriteLine();

    reader.BaseStream.Seek(0, SeekOrigin.Begin);
    var val = reader.ReadUInt16();
    Console.WriteLine("{0:X}", val);
}
Which spits out
FF11
FF11
> From: João Távora <address@hidden>
> Date: Mon, 28 Jan 2019 21:38:29 +0000
> Cc: emacs-devel <address@hidden>
>
> But really, is there a list of reserved symbols somewhere?

Not that I'm aware of. But I added to the manual the description of
the role of the 'face' property of the face symbol.

> And if this is reserved for presumably "internal" purposes, couldn't
> that symbol be renamed to 'internal--face-id' or something like
> that? Or are there too many references?

It's too late for that, I think. Instead, packages should IMO try to
keep the global namespace clean in the property domain as well, thus
defining properties whose names have the prefix of the package name.

> > > Wouldn't another (perhaps uglier, but easier) fix amount to renaming the
> > > face 'flymake-error-face'?
> > Not sure how that would help in this matter.
>
> As far as I can understand, this is because flymake-error, the flymake error
> type, is conflated with flymake-error, the face.

No, the problem is that each face has its numeric face ID stored as the
value of the face symbol's 'face' property. So, no matter what the face
symbol is, its 'face' property should not be touched.
This chapter contains a series of operations designed to be helpful in setting up the HTTP security for the servlet engine. The topics we cover are the following:
Before you define and program the security settings in the OSE, you must be proficient at setting up and manipulating the JNDI structure. Verify your knowledge against the following checklist:
If you are unsure of any of the above topics, review the diagrams and examples describing how the OSE works with the JNDI namespace and how Web browsers interact with the OSE. A link follows each topic to a specific discussion. A complete example list is located in Appendix B, "Examples".
After you learn the basic security set up in this chapter, you can safely change the configuration of your Web service security from the default set up.
In HTTP security, access to a protected resource is composed of two parts: authentication of valid credentials and authorization of the user. Authentication is the validation of submitted credentials establishing that a user is known and validated by the system. Authorization is the determination of whether an authorized user is allowed to perform the requested action.
There are four stages to declare when establishing security measures:
root of the servlet context.
Use these steps to ensure the correct base information is in use defining HTTP security for your Web resources. Without any one of these steps, security will either be non-existent or will deny access to protected resources.
A principal is a generic term for either a user or a group. The realm is an object in the service which contains the declared principals. Figure 8-1 shows the position of the realm object at the top level of the Web service, at the same level as the config object for the service.
Groups contain other principals (users or other groups). Individual members of a group inherit the permission of the group object.
Users are single objects. Unlike a group, there are no subsets of other principals belonging to a user.
Each realm defines a separate set of principals. The realm and its implementation are core to all of HTTP Security. There can be multiple realms within a service. The realm is the source of:
Because the realm is the source of all principals, it also plays a key role in:
By default, there are three implementations of realms, named in HTTP Security:
These names are just shortcuts. When instantiating the realm, use the appropriate name when declaring the realm class in the JNDI namespace.
The DBUSER realm is a principal definition derived from the users and roles defined within the database. There are several implications.
For example, if a user is created with

create user "joe" identified by welcome;

then the username would be joe, in lowercase.
You set the security levels with the realm command, using the different flags and options. You can see the complete definition of all valid realm uses in the Oracle Java Tools Reference. To see a list of the different choices, execute the realm command from the session shell.
$ realm
usage: realm
realm list -d <webServiceRoot>
realm echo [0|1]
realm secure -s <servletContextPath>
realm map -s <servletContextPath> [-add|remove <path>] [-scheme <auth>:<realm>]
realm publish -d <webServiceRoot> [-add|remove <realmName>] [-type RDBMS | DBUSER | JNDI]
realm user -d <webServiceRoot> -realm <realmName> [-add|remove <userName> [-p <user password>]]
realm group -d <webServiceRoot> -realm <realmName> [-add|remove <groupName> [-p <group password>]]
realm parent -d <webServiceRoot> -realm <realmName> [-group <groupName> [-add|remove <principalName>]] [-query <principalName>]
realm perm -d <webServiceRoot> -realm <realmName> -s <servletContextPath> -name <principalName> [-path <path> (+|-) <permList>]
Underscoring its central role, realm is the start of all security commands in the shell. The following sections depict example commands creating and managing realms from the shell.
To create a RDBMS realm, type:
realm publish -w /myService -add testRealm -type RDBMS
For JNDI and DBUSER, use those titles as the type argument.
To remove a realm, type:
realm publish -w /myService -remove testRealm
Realm declarations reside in the JNDI namespace. Deploying customized realms, once written, requires only slight customization of the namespace entry.
To publish a custom realm, type:
realm publish -w /myService -add testRealm -classname foo.bar.MyRealm
To create a user, type:
realm user -w /myService -realm testRealm1 -add user1 -p upswd1
To create a group, type:
realm group -w /myService -realm testRealm1 -add group1 -p gpswd1
In either of the above commands, if the password is left blank, the principal name is used instead.
To delete a user, type:
realm user -w /myService -realm testRealm1 -remove user1
To delete a group, type:
realm group -w /myService -realm testRealm1 -remove group1
To list users of a realm, type:
realm user -w /myService -realm testRealm1
To list groups of a realm, type:
realm group -w /myService -realm testRealm1
To add a principal to a group, type:
realm parent -w /myService -realm testRealm -group group1 -add user1
To remove a principal from a group, type:
realm parent -w /myService -realm testRealm -group group1 -remove user1
To list principals within a group, type:
realm parent -w /myService -realm testRealm -group group1
To query which groups a principal is a member of, type:
realm parent -w /myService -realm testRealm -q user1
(Not all realms support this query option.)
If a service has any realms declared, they are located in a realms sub-context of the service. If it is a JNDI realm, there are additional sub-contexts within the realms context that contain its principal declarations.
If /realms is removed, all realm definitions are removed along with it. However, any external resources (such as table entries) would remain. Using the session shell realm tool is much safer for efficient realm management.
Removing subcontexts of realms can affect any JNDI type realms. The RDBMS realm is defined to use the following tables:
OSE HTTP security resource protection is local to the servlet context. To declare a resource protected, two pieces of information must be supplied, embodied in a protection scheme. A scheme is of the form:
<authType>:<realmName>
There are two valid authentication types:
Although Digest is far more secure than Basic, not all browsers support it. From looking at typical installations, IE5 supports it; Netscape 4.7 does not.
You can also declare resources not to be protected. This is useful when the servlet context root is to be protected. However, when the root is protected, the error pages, being part of the tree, are also protected. Delivering an error page is part of the authentication process. If the error page is protected, cycles develop, and the desired behavior is not observed.
Instead of letting the error page default as part of the tree, explicitly declare the error pages as not being protected. Use a protection scheme of <NONE>. For example:
realm map -s /myService/contexts/myContext -a /system/* -scheme <NONE>
realm map -s /myService/myService/contexts/myContext -a /* -scheme basic:testRealm1
The protected path is local to the servlet context. Internally, that path is normalized, enabling stable, predictable patterns for matching. This may cause the internal representation to differ from the original path used to create the protection scheme. HTTP Security will use the normalized form when matching.
When declarations are made, as shown in the previous example, the paths are matched to realms as in the following examples:
/doc/index.html -> testRealm1
/doc/foo -> testRealm3
/doc -> testRealm2
/doc/ -> testRealm2
/doc/index -> testRealm3
To remove the protection of a path, type:
realm map -s /myService/contexts/myContext -r /doc/index.html
To list all protected paths within a servlet context, type:
realm map -s /myService/contexts/myContext
To explicitly declare a path not to be protected, type:
realm map -s /myService/contexts/myContext -a /system/* -scheme <NONE>
The JNDI entry for protection mappings is located in a subdirectory, policy, of the servlet context. Within that sub-context is an entry, httpMapping, which creates the object responsible for handling the security servlet protection mapping. By default, this object is used as an index into the JAVA$HTTP$REALM$MAPPING$ table. The HTTP realm mapping table contains all the mapped paths. Simple JNDI entry manipulation can introduce a customized version of HttpProtectionMapping.
Permissions are the most involved of all HTTP Security declarations because they tie domain-scoped entities with servlet context-scoped entities and reside in the servlet context themselves.
A permission declaration consists of several pieces:
Given all the pieces that are being tied into one permission declaration, it is easy to see why these are the most complicated declarations.
Of those pieces, only the HTTP actions have not been talked about yet. HTTP security permissions concern only valid HTTP request methods: GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS.
To declare a granted permission on /foo/index.html for user1 for GET and POST, type:
realm perm -w /myService -realm testRealm1 -s /myService/contexts/myContext -n user1 -u /foo/index.html + get,post
To declare a denied permission on /foo/* for user1 for PUT and DELETE, type:
realm perm -w /myService -realm testRealm1 -s /myService/contexts/myContext -n user1 -u /foo/* - put,delete
To clear granted permissions on /foo/index.html for user1, type:
realm perm -w /myService -realm testRealm1 -s /myService/contexts/myContext -n user1 -u /foo/index.html +
To list all permissions for a user, type:
realm perm -w /myService -realm testRealm1 -s /myService/contexts/myContext -n user1
More Details
In the policy subcontext of a servlet context, there will be an entry, config. This is the entry used to create the object responsible for all permission declaration checks. Again, the object is used as a key into the permissions table, JAVA$HTTP$REALM$POLICY$.
Any customization is allowed as long as the PrivilegedServlet interface is implemented. The main responsibility of this servlet is to either:
or
After authentication and authorization have taken place, the servlet must set specific authenticated principal values on the request itself. This is the user information that can be retrieved from the request by any executing servlet.
To create a security servlet, type:
realm secure -s /myService/contexts/myContext
There are several layers of suspected problems to eliminate when debugging HTTP Security. This minimal checklist can help you start your troubleshooting quest.
http://docs.oracle.com/cd/A87860_01/doc/java.817/a83720/securadm.htm#20175
Hello! Google has turned me up at this forum so many times I think it would be stupid to ignore the hints & not get registered! I'm having some problems with objects. Here is my code:
#include <iostream>
#include <string>
using namespace std;

class planet {
public:
    int moons;
    int mass;
} mercury, venus, earth, mars, jupiter, saturn, uranus, neptune;

int main()
{
    string input;
    cout << "enter a planet:";
    getline(cin, input);
    cout << input << " has " << input.moons << " moons\n";
    cout << input << " has a mass of " << input.mass;
    return 0;
}
/* i would like to know the quickest way to achieve this effect - the closest
   my knowledge allows involves nesting 8 if statements for each planet name */
I've been pointed a couple of times in the direction of maps, but I've no idea how I'd have to implement them in this scenario. Any help would be much appreciated! I've run into this before & it's been annoying me for the past 5 hours. I really can't understand maps.
https://www.daniweb.com/programming/software-development/threads/195683/object-trouble
There is a charter somewhere that states all tutorial series must start with the ubiquitous Hello World tutorial, and who am I to break the charter? So that is exactly what we are going to do here, a simple Hello World. This is as much about getting up and running with PS Studio as it is about C# coding, as there are a couple of small gotchas. By the end of this tutorial you should be able to create, configure and run an application. In future tutorials, I will assume you have these abilities.
If you haven’t already, download and install PlayStation®Suite SDK from here. The install is pretty straight forward, take the defaults, next next next, done. If you want some idea of what you just installed check out this post. Now fire up PssStudio from your start menu. Once loaded, select File->New-Solution, like this:
In the “New Solution” dialog, on the left hand side expand C# then select PlayStation Suite. Select PlayStation Suite Application on the right, then fill in whatever name you want ( I am using HelloWorld ). This should automatically fill in Solution Name, it is your choice if you want to create a subdirectory or not, in this case I will. Fill out the dialog like this:
Click OK, and your solution will be created. Now we need to add an app.cfg file to your application, or it will fail to run. Note, this is not that same as a .NET application configuration file. In the Solution Explorer, right click your project name ( HelloWorld in my case ), then select Add->New File… like such:
Choose Misc on the left, then Empty Text File, name it app.cfg and click New.
The newly created file will open in the text editor, fill it in with the following:
memory:
    resource_heap_size : 16384
    managed_heap_size : 16384
input:
    gamepad : false
    touch : false
    motion : false
This file tells the runtime what kind of machine your application is targeting. If you forget this step, the simulator pss.exe will just max out a CPU core, never responding. Now in the Solution panel, locate and double click AppMain.cs, this is where our application code will reside.
The actual process of doing Hello World on Vita is actually incredibly involved, as you need to do almost everything yourself. In this example however, I am going to take advantage of GameEngine2D, which is an included 2D Game engine that makes many of the drudge worthy tasks much easier. That means we need to add a reference to GameEngine2D. Adding a reference is simply saying “I am going to use the code included in this library” In order to add a reference to GameEngine2D, right click on References and choose Edit References… like such:
Then a dialog will pop up, locate Sce.Pss.HighLevel.GameEngine2D and check the box to its left. Then click OK.
Now we can add a using entry to tell our code to use the newly referenced library. This is where we run into a bug in MonoDevelop ( PS Studio ). Look at the following auto-completion screenshot:
Hmmm… where is HighLevel? We added the reference, it should be there. Well, on adding a new reference it seems the intellisense/auto-completion isn’t updated properly. I live by auto-complete so this is a big deal to me. Simply close and re-open your solution and presto:
Thankfully, you don't really need to add references all that often, so while it's inconvenient to have to exit and restart, it's not the end of the world. PS Studio loads quickly enough to barely notice.
Alright, finally time for the code!
using System;
using System.Collections.Generic;
using Sce.Pss.Core;
using Sce.Pss.Core.Environment;
using Sce.Pss.Core.Graphics;
using Sce.Pss.Core.Input;
using Sce.Pss.HighLevel.GameEngine2D;
using Sce.Pss.HighLevel.GameEngine2D.Base;
using Sce.Pss.Core.Imaging;

namespace HelloWorld
{
    public class AppMain
    {
        public static void Main (string[] args)
        {
            Director.Initialize();

            Scene scene = new Scene();
            scene.Camera.SetViewFromViewport();

            var width = Director.Instance.GL.Context.GetViewport().Width;
            var height = Director.Instance.GL.Context.GetViewport().Height;

            Image img = new Image(ImageMode.Rgba,
                                  new ImageSize(width, height),
                                  new ImageColor(255, 0, 0, 0));
            img.DrawText("Hello World",
                         new ImageColor(255, 0, 0, 255),
                         new Font(FontAlias.System, 170, FontStyle.Regular),
                         new ImagePosition(0, 150));

            Texture2D texture = new Texture2D(width, height, false, PixelFormat.Rgba);
            texture.SetPixels(0, img.ToBuffer());
            img.Dispose();

            TextureInfo ti = new TextureInfo();
            ti.Texture = texture;

            SpriteUV sprite = new SpriteUV();
            sprite.TextureInfo = ti;
            sprite.Quad.S = ti.TextureSizef;
            sprite.CenterSprite();
            sprite.Position = scene.Camera.CalcBounds().Center;

            scene.AddChild(sprite);
            Director.Instance.RunWithScene(scene);
        }
    }
}
PHEW! Pretty long for a Hello World eh? Truth of the matter is, had I not used GameEngine2D it would have easily been 4 or 5 times longer! We will cover what is going on behind the scenes ( the stuff GameEngine2D is handling ) in a later post. For now, just assume this is the way it works, at least while you are getting started. Now lets take a quick walk through the code and look at what’s happening here.
First we add our additional includes, Sce.Pss.HighLevel.GameEngine2D and Sce.Pss.HighLevel.GameEngine2D.Base. Our app consists of a single method, Main, as the event loop is actually managed by the Director.
Speaking of which, that is what the first line does, initializes the Director singleton. A singleton is a globally available object that is allocated on its first use. Director is the heart of your game, even though most of the complexity is hidden away. Remember, you have full source code for GameEngine2D available if you want to peek behind the curtain.
Next up we create a Scene. Again, Scene is an abstraction provided by GameEngine2D which can be thought of as a collection of visible “stuff”, the bits and pieces that compose what we want to display to the user and the Camera used to display them. We then call SetViewFromViewport() which creates a camera sized to the window. Generally without GameEngine2D, you would have to do this yourself, setting up an orthographic project.
The next two lines get the width and height, as reported by the Director's OpenGL context. We then create an Image the same dimensions as our viewport. The Image class is used to hold binary image data, such as from a PNG or JPG file, but in this case we are creating a new blank image. Once we create our Image, we call its DrawText method to draw our "Hello World" message. ImageColor represents the color we want to draw the text in ( red in this case ), Font represents the font to draw the text in ( there aren't really many options here ), while ImagePosition represents the location within the image to draw the text at.
Now that we have created an image, we need to create a texture out of it. Textures are most often thought of as the images mapped around 3D objects and in a way, that is still what we are doing here. GPUs have no concept of image files or pixels, they deal only with textures. That is why we copy our image data into the texture, using the SetPixels method of Texture2D and the ToBuffer() method of Image, to turn the image's pixel data into an array Texture2D can use. At this point we are done with our Image, so we Dispose() it.
We next create a TextureInfo object and assign our newly created Texture to it. TextureInfo caches UV location information about the texture, as well as taking ownership of the texture itself. More importantly, it’s a TextureInfo object that Sprite expects, so that is what we create. Speaking of sprite, that is what we create next in the form of a SpriteUV.
Since modern GPUs don’t really deal with pixels anymore, everything is pretty much made out of textured polygons and SpriteUV is no exception. It is essentially a rectangular polygon that faces the camera with our image textured on it, all sprites are. Next up we set the sprite’s Quad ( the rectangle the texture is plastered to ) to be equal to the size of our texture, which in this case is the same size as our view. We now position our sprite smack in the middle of the view.
Now that we created our Image data, copied that image data into a Texture2D that was then assigned to a TextureInfo object, which in turn was assigned to a SpriteUV, we are now ready to add that fully textured sprite to our scene, which we do by calling AddChild(). Yeah, it sounds very convoluted, but you will find it natural very soon and generally you just load your textures from file anyways, greatly simplifying this process.
Anyways, now that we have our scene populated with our texture, which is the same size as the screen, we go ahead and tell the Director to do its thing, via RunWithScene(). You can think of this as the game loop, although the internals of what it's doing are hidden from you.
Don’t worry, it’s really not as complicated as it seems. Now lets take a look at the fruits of our labour. To run it in PS Studio, select the Run menu and either choose Start Without Debugging or Start Debugging, depending if you want to be able to debug or not, like such:
And presto, all our hard work resulted in…
May not be much to look at, but you successfully created a game you can run on a Vita!
Speaking of which, if you need details of running on an actual device instead of in the simulator, read this post.
The complete project files for this tutorial are available for download here.
Continue on to Hello World Part 2: Hello Harder.
https://gamefromscratch.com/playstation-mobile-sdk-tutorial-1-hello-world/
Damon Payne
The PFX team has announced their TechEd sessions. I'm there.
In addition to concurrency, the 64bit realm is another obvious place where the PC/Server world is going. It’s faster than 32bit, we can address more memory, and we can have more accurate hardware floating point. Ideally, it should be easy for everyone except low-level framework authors, device driver authors, and the like to migrate their applications to 64bit. I mean this for Native applications too. If you had to support both “interface X” and “interface Y” in your application, you’d create a level of indirection to do so, perhaps a Strategy pattern.
If you are a .NET developer, all of your programs should magically become real 64bit applications when running on a 64bit .net framework on a 64bit machine. Handy!
I have not written a non-trivial C program in quite some time, so I preface this statement with the disclaimer that it may be total crap: If you are a native developer you should also be able to effortlessly port your app to work with 64bit systems. If you are NOT hard-coding any address sizes, struct sizes and the like, a simple recompile should work. Using sizeof(), size_t, etc. the compiler becomes your level of indirection in a way.
Today I sat down to start testing TestDriven → xUnit → NCover; I pretty much need TestDriven and NCover to do serious work. xUnit.Net ships with an easy installer that tells TestDriven it exists. It reports success. It doesn't work. I happen to have the xUnit code on hand so I run it through the debugger. TestDriven looks at a registry key (HKEY_LOCAL_MACHINE\SOFTWARE\MutantDesign\TestDriven.NET\TestRunners) for a list of assemblies containing a class that implements its test runner interface and their locations. The following code returns "true":
public static bool IsRunnerInstalled
{
    get
    {
        using (RegistryKey runnerKey = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\MutantDesign\TestDriven.NET\TestRunners\xunit", true))
            return (runnerKey != null);
    }
}
The problem is, inspection using Regedit shows that the key is clearly not there. What’s going on? Just on a hunch, I thought I’d google for some 64bit issues. Since there’s \Program Files\ and \Program Files(x86)\ on my 64bit machine, I wondered if there was a second 64bit/32bit registry somewhere. Almost…
Microsoft, in their wisdom, decided that the registry needs a layer of indirection for 32bit applications. What purpose this serves I cannot fathom, but you can read all about it here:
So, something you THINK is in one place may really be below a “Wow6432Node” registry key. As I’ve already stated, though, .NET programs should magically be 64bit on a 64bit framework. There is a way to break this though, and sure enough, the xunit.installer assembly is marked with a Platform Target of x86 in its build properties. Since the default is the magical “Any CPU” I assume there was a good reason to change this. My missing registry key, sure enough, is being created in HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\MutantDesign\TestDriven.NET\TestRunners. TestDriven is looking in the Correct location and not finding what it needs. I copied the Wow6432 data up into the 64bit area of the registry, disabled and re-loaded TestDriven in the VS2008 add-in manager, and now I’m in business. Xunit does not appear in the “Test With” menu but Run Tests and Test with debugger now run my XUnit tests.
Test with → Coverage using NCover works too. I can now ditch NUnit. Looking at the xUnit → TestDriven code, I'm sure I can get concurrent xUnit tests running from Visual Studio using TestDriven whenever I'm ready.
Native Windows developers may know what issue this registry redirection practice solves, but it’s lost on me. Maybe it’s a thunking limitation, I don’t know. At any rate, it’s something to be aware of when combining 32 and 64 bit code.
So, five years ago when my daughter was brand new, I was a runner. In the time since then I've gotten into the worst shape of my life and keep meaning to do something about that. By looking at me, you wouldn't necessarily think that I've got a gut or anything, but it's there; I also have various pants that don't fit. The necessary motivation came recently when everyone at CarSpot decided to have an Al's Run team. Al's Run 2003 was pretty much the last time I intentionally did exercise of any kind. I have gone jogging a total of One Times so far this year and two miles was unbelievably painful. Combined with a newborn who's not sleeping very much I am not in a good place to start training, but today I bought some actual running shoes so now I feel committed. When I quit I was able to do an 8k with an average mile time of around 7:45; that was a long time ago and I didn't like wine and steaks as much as I do now. I'm going to pick a 5k in July as an intermediate goal.
I've long been a fan of Casey Chestnut, who calls his blog Brains-n-Brawn. Now that I'm running again maybe I can claim "Smarts and Swiftness" ?
Hanselminutes show #112 brings together people from xUnit.Net, NUnit, and MBUnit to discuss unit testing frameworks. The whole show is worth listening to, but they especially mentioned running tests in parallel, which of course I've done some work on:
The other thing they mention is the other potential intersection of Unit Testing and Concurrency: testing for thread deadlocks, etc. I have been working for a few weeks on an article and some semantics (no working code this time due to the scope) that deal with exactly this problem space. I should have it published this week. In the rest of this article we'll look at the modifications needed to use Payneallel.ForEach with the unit tests.
We start our modifications in the XUnit GUI, which is refreshingly straightforward. The first thing to do is make it easy to choose concurrent execution. The XUnit GUI now looks like this, following the sequential execution of the control group:
The “Run concurrently” checkbox is my addition. When you click the Run button:
void OnClick_Run(object sender, EventArgs e)
{
    _totalCount = 0;
    _testCount = GetTestCount();
    ResetUI(_testCount);
    buttonGo.Enabled = false;

    if (_concurrentChk.Checked)
    {
        ThreadStart ts = new ThreadStart(RunAsync);
        Thread t = new Thread(ts);
        t.Name = "xUnitAsyncThread";
        t.Start();
        textResults.AppendText("Running Async...\r\n");
    }
    else
    {
        wrapper.RunAssembly(TestCallback);
    }
}
Our xUnit ExecutorWrapper is “wrapper”. In order to keep from screwing around with the GUI thread, we run XUnit on a new thread, which will in turn create many other threads using Payneallel. By default, Payneallel will block the calling thread until all operations are done, however we cannot both block the GUI thread AND allow it to update itself as test results are available. The RunAsync method is simple:
void RunAsync()
{
    wrapper.BeginRunAssembly(TestCallback);
}
My next modification is to the ExecutorWrapper class. I tried to make my changes to XUnit additive only, adding functionality by adding methods rather than modifying things that already work for sequential execution.
public void BeginRunAssembly(Action<XmlNode> callback)
{
    XmlNodeCallbackWrapper wrapper = new XmlNodeCallbackWrapper(callback);
    CreateObject("XUnit.Sdk.Executor+RunAssemblyParallel", executor, wrapper);
}
I see no reason not to keep running the test in a separate AppDomain. We have added another inner class to Executor, the RunAssemblyParallel class.
Through experimentation I found that this would be the appropriate place to introduce parallel execution, at the Class level as I said previously. This class is almost a copy of the RunAssembly class included with XUnit:
public class RunAssemblyParallel : MarshalByRefObject
{
    /// <summary/>
    public RunAssemblyParallel(Executor executor, object _handler)
    {
        DoParallel(executor, _handler);
    }

    protected void DoParallel(Executor executor, object _handler)
    {
        ICallbackEventHandler handler = _handler as ICallbackEventHandler;
        AssemblyResult results = new AssemblyResult(new Uri(executor.assembly.CodeBase).LocalPath);

        Action<Type> doOne = delegate(Type type)
        {
            ITestClassCommand testClassCommand = TestClassCommandFactory.Make(type);
            if (testClassCommand != null)
            {
                ClassResult classResult = TestClassCommandRunner.Execute(testClassCommand,
                    null,
                    result => OnTestResult(result, handler));
                results.Add(classResult);
            }
        };

        Type[] exportedTypes = executor.assembly.GetExportedTypes();
        int count = exportedTypes.Length;

        // Parallel test execution
        Stopwatch sw = new Stopwatch();
        sw.Start();
        Payneallel.ForEach<Type>(exportedTypes, doOne, true);
        sw.Stop();
        Console.WriteLine("Time elapsed: " + sw.Elapsed);
        results.ExecutionTime = sw.Elapsed.TotalSeconds;
        OnTestResult(results, handler);
    }
}
Like the TPL, Payneallel likes an Action<T> to execute. In the vanilla XUnit version of this code, there is no StopWatch and there is a regular foreach() block instead of Payneallel.ForEach. The stopwatch is important because I can no longer trust XUnit to time the execution! For a long time I ran and re-ran my tests and the Parallel code was always slower than the sequential version. Then I had a “pwop” moment and found the following line of code:
ExecutionTime += child.ExecutionTime;
Whoops! We can’t just add the execution time of the children (from TimedCommand) when some of the commands are running at the same time.
With the Timing issue solved, I was successfully executing unit tests concurrently and saving a lot of time doing so. Here is the same set of unit tests ran using my new Concurrent xUnit hack.
I’ll take 27 seconds over 51 seconds any day, and I have not done any optimization work yet, nor constructed a test case where the tests are nearly 4x faster on a four processor machine, but I expect to be able to get there. As I mentioned before, the Class is the unit of concurrency with this experiment, so the amount of time saved will depend heavily on how the test cases are structured. A more ideal method would be to first get a list of all of the individual methods marked with [Fact] and use the parallel semantics on that list instead.
I have a side project that is woefully under unit tested, code that I inherited. I write unit tests for the code I touch as I refactor it. The unit tests will involve a lot of database access, calculations, and Presenter mocking. I can't disclose what this codebase is just yet, but I am in the process of testing TestDriven → xUnit → NCover. I depend heavily on NCover and I really can't imagine manually trying to determine what I've got test coverage on anymore. If this test is successful, I will eventually be able to report on how this concurrent unit testing works on 100,000 lines of code 99% covered by thousands of unit tests. This should be a sufficient test case to prove this idea is sound.
As the years go by and we still don’t have 5ghz machines, designing frameworks with concurrency in mind will become increasingly important.
Looks like I'm going to TechEd!
All my concurrent dreams are coming true, concurrently!
First I had some early success getting concurrent unit testing working, details are forthcoming. I did just have a new son so I'm behind on publishing the latest research. Next, Scott Hanselman answers my call for concurrent builds here:
There is an old Channel 9 video where the MSBuild team talks about concurrent builds. You will note that they decided on a Tree Based Scheme to manage the dependencies between build targets. I guess I'm not so Crazy After All, am I? My problem with the MSBuild concurrency is that it wasn't working in Visual Studio and I felt it should be enabled inside Visual Studio by default, since developers spend a lot of time building solutions using CTRL+SHIFT+B. Scott has answered this concern here:
It's a hack as he points out, but we're getting there. Ideally this would be made available in a Service Pack to VS2008, but I suspect we'll be waiting until the next version unfortunately.
Damon Payne is a Microsoft MVP specializing in Smart Client solution architecture.
http://www.damonpayne.com/2008/05/default.aspx
1. total miles driven per day
2. cost per gallon of gasoline
3. average miles per gallon
4. parking fees per day
5. tolls per day
Using a C++ application, create a program that calculates the daily driving cost, so that you can estimate how much money could be saved by carpooling, which also has other advantages such as reducing carbon emissions and reducing traffic.
Here is what I was able to do.
// application to calculate car pool savings
#include <iostream>
using namespace std;

int main()
{
    double miles, gas, average, parking, tolls, total;
    cout << "Enter total miles driven per day: ";
    cin >> miles;
    cout << "Enter cost per gallon of gasoline: ";
    cin >> gas;
    cout << "Enter average miles per gallon: ";
    cin >> average;
    cout << "Enter parking fees per day: ";
    cin >> parking;
    cout << "Enter tolls per day: ";
    cin >> tolls;

    total = (miles / average) * gas + parking + tolls;
    cout << "Total daily driving cost: " << total << endl;
    return 0;
}
This post has been edited by modi123_1: 11 February 2014 - 12:17 PM
Reason for edit:: please use code tags
http://www.dreamincode.net/forums/topic/339796-car-pool-saving-calculator-using-c-application/
derelict_extras-bass ~master
A dynamic binding to the BASS library.
To use this package, run the following command in your project's root directory:
Manual usage
Put the following dependency into your project's dependencies section:
DerelictBASS
Warning: this is an unofficial Derelict binding.
A dynamic binding to BASS for the D Programming Language.
Please see the pages Building and Linking Derelict and Using Derelict for information on how to build DerelictBASS and load the BASS library at run time. In the meantime, here's some sample code.
import derelict.bass.bass;

void main() {
    // Load the BASS library.
    DerelictBASS.load();

    // Now BASS functions can be called.
    ...
}
Thanks
Based on prior work from h3r3tic and !!M.
- Registered by ponce
- ~master released 5 years ago
- p0nce/DerelictBASS
- github.com/p0nce/DerelictBASS
- Boost
- Authors:
- Dependencies:
- derelict-util
- Versions:
- Show all 6 versions
- Download Stats:
0 downloads today
0 downloads this week
0 downloads this month
243 downloads total
- Score:
- 1.3
- Short URL:
- derelict_extras-bass.dub.pm
https://code.dlang.org/packages/derelict_extras-bass/~master
The GroupedList Control Container
This post is part two of a short series on extending the Winforms Listview control. If you missed the previous post, you can review it HERE. Also, the Source Code for this project can be found in my GitHub repo.
In our previous post, we examined the first component of what I am calling the “GroupedList Control” – essentially, a list of contained and extended Listview controls which act as independent groups. Individual ListGroups (which is how I refer to them) may contain independent column headers, and are expandable/collapsible, much like what I believe is called a “slider” control.
A brief note – I am posting somewhat abbreviated code here. I have omitted many common overloads and other features we might discuss in a future post. For now, the code posted here contains only the very core functionality under discussion. The Source, however, contains all my work so far on this control.
Also note – the GroupedListControl arose out of my need for a quick-and-dirty combination of the functionality of the Winforms Listview and a Treeview. A group of columnar lists which could be independently expanded or collapsed.
A Quick Look at a Very Plain Demo:
In the last post, we had assembled our basic ListGroup component, which is essentially an extension of the Winforms Listview control, modified to handle some events related to column and item addition and removal. Where we left off, it was time to assemble our container, the GroupedListControl.
I figured the quickest way to accomplish what I needed (remember – under the gun, here) would be to extend the FlowLayoutPanel such that I could use this ready-made container to manage a collection of ListGroup controls, stack them vertically, and such. There were a few issues with this approach that we will discuss in a bit. First, let’s look at the basic code required to bring the control to life:
The GroupedList Control – Basic Code:
public class GroupListControl : FlowLayoutPanel
{
    public GroupListControl()
    {
        // Default configuration. Adapt to suit your needs:
        this.FlowDirection = System.Windows.Forms.FlowDirection.TopDown;
        this.AutoScroll = true;
        this.WrapContents = false;

        // Add a local handler for the ControlAdded Event.
        this.ControlAdded += new ControlEventHandler(GroupListControl_ControlAdded);
    }

    void GroupListControl_ControlAdded(object sender, ControlEventArgs e)
    {
        ListGroup lg = (ListGroup)e.Control;
        lg.Width = this.Width;
        lg.GroupCollapsed += new ListGroup.GroupExpansionHandler(lg_GroupCollapsed);
        lg.GroupExpanded += new ListGroup.GroupExpansionHandler(lg_GroupExpanded);
    }

    public bool SingleItemOnlyExpansion { get; set; }

    void lg_GroupExpanded(object sender, EventArgs e)
    {
        // Grab a reference to the ListGroup which sent the message:
        ListGroup expanded = (ListGroup)sender;

        // If single-item-only expansion, collapse all ListGroups except
        // the one currently expanding:
        if (this.SingleItemOnlyExpansion)
        {
            this.SuspendLayout();
            foreach (ListGroup lg in this.Controls)
            {
                if (!lg.Equals(expanded)) lg.Collapse();
            }
            this.ResumeLayout(true);
        }
    }

    void lg_GroupCollapsed(object sender, EventArgs e)
    {
        // No need.
    }

    public void ExpandAll()
    {
        foreach (ListGroup lg in this.Controls)
        {
            lg.Expand();
        }
    }

    public void CollapseAll()
    {
        foreach (ListGroup lg in this.Controls)
        {
            lg.Collapse();
        }
    }
}
Of particular note here is the GroupListControl_ControlAdded Event Handler. Sadly, when one adds controls to the FlowLayoutPanel Controls collection, they are just that. The Controls property of the FlowLayout panel represents a ControlCollection object, which accepts a parameter of type (wait for it . . . ) Control.
I wanted MY GroupedListControl to contain a collection of ListGroup objects. However, I have not yet figured out a way to do this while retaining the functionality of the FlowLayoutPanel. As far as I can tell, we can't narrow the type requirement of the native ControlCollection. One option I considered was to add a new method to the class, named AddListGroup, which could accept a parameter of type ListGroup and pass THAT to the Controls.Add(Control) method. However, that seems a bit mindless, as the Controls.Add() method would remain publicly exposed, thus creating opportunity for confusion.
For now, I decided that those using this control will have to realize that passing anything other than a ListGroup object as the parameter will likely be disappointed in the performance of the control! It is less than elegant, but I didn’t have time to figure out a more elegant solution, and for the moment it works. I would love to hear suggestions for improvement.
The next thing to notice about the GroupListControl_ControlAdded method is that for each ListGroup we add, we subscribe to the GroupExpanded and GroupCollapsed events sourced by each individual ListGroup. This is mainly because there are use cases in which we might want to limit group expansion to a single group at a time, such that expanding one group collapses any other expanded group. This is accomplished by providing the boolean SingleItemOnlyExpansion property. The lg_GroupExpanded handler checks the state of this property, and if true, collapses any expanded groups which are not equal to (as in, referencing the same object instance as) the current group (the "sender" in the method's signature).
The last thing to note is the manner in which we set the width of each ListGroup in the GroupListControl_ControlAdded method. I tried setting the Dock property instead, and ran into difficulties with that.
Given the code above, you would think all that was pretty simple, no? Yeah. Right. A problem arose in the form of ugly scrollbars. The code above will run, and do everything represented. However, for the GroupedListControl to look like anything other than ass, we need to do something about the horizontal scrollbar which appears at the bottom of the GroupedListControl , due to the width of each ListGroup being essentially the same as the container control. This, I must say was initially giving me pains. The FlowLayoutPanel does not, apparently, afford us the ability to control the appearance of the horizontal and vertical scrollbars individually.
Some research on the interwebs yielded, after no small amount of digging, the following solution. Sadly, it involves Windows messages and API calls, neither of which I am particularly well-versed in. More sadly, I seem to have misplaced the link to where I found the solution. If YOU know where the concept below came from, please forward me a link, so I can link back, and attribute properly.
Add the following code to the end of the GroupedListControl class:

protected override void WndProc(ref Message m)
{
    // Suppress the horizontal scrollbar (wBar = 0 is SB_HORZ):
    ShowScrollBar(this.Handle, 0, false);
    base.WndProc(ref m);
}

[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool ShowScrollBar(IntPtr hWnd, int wBar, bool bShow);
The above code essentially listens to windows messages, and when it "hears" one related to showing scrollbars in the FlowLayoutPanel base class, performs the appropriate action; in this case, some sort of WinAPI magic related to NOT showing the horizontal scrollbar.
Note that we WANT the vertical scrollbar to show up anytime the height of the collected ListGroups exceeds the height of the GroupedList client area. But I decided I would prefer to have the horizontal scrolling option available within each individual ListGroup where needed, without the extra screen clutter of another horizontal scrollbar at the bottom of the container control.
I will try to follow up with a post about adding Right-Click detection for the ListGroup column Headers in a day or two. This enables us to deploy a different ContextMenuStrip when the user right-clicks on a columnheader vs. the standard context menu for the ListView Control.
Thanks for reading so far . . .
http://johnatten.com/2012/05/11/extending-c-listview-with-collapsible-groups-(part-ii)/
Download a file from a URL and store in a specific directory using Java
In this tutorial, you learn how to download a file from a URL using the Java IO package.
You use the BufferedInputStream class to read the contents of a file, and the BufferedOutputStream and FileOutputStream classes to write the content onto a file on your computer. You use the URL and HttpURLConnection classes to open the file on a specified address, establish a connection, and get the input stream for the BufferedInputStream to read.
Downloading the file from a URL from the internet using Java:
Steps:
- Create a URL object using the syntax shown below and pass the link as the parameter.
URL url = new URL(urlLink);
- Create an HTTP URL connection.
HttpURLConnection http = (HttpURLConnection)url.openConnection();
- Get the file size of the file that you want to download.
double filesize = (double)http.getContentLengthLong();
- Get the input stream from the HTTP connection, and create an object from the BufferedInputStream class to read the same.
BufferedInputStream input = new BufferedInputStream(http.getInputStream());
- Save the downloaded file onto your computer in the specified location by using the FileOutputStream Class.
FileOutputStream output = new FileOutputStream(fileLoc);
- Create an object of BufferedOutputStream to write the data onto the output file. Pass the object of the FileOutputStream as the first parameter, and pass the length of the output stream buffer, 1024 bytes as the second parameter.
BufferedOutputStream bufferOut = new BufferedOutputStream(output, 1024);
- Using a while loop, read the content from the input stream and write it into the output file.
- The required file is saved in the specified directory on your computer after the download is completed.
Working of the while loop:
- Read the data from the BufferedInputStream object ‘input’ into the buffer variable using the read() method, by setting the offset to 0 and the number of bytes to read as 1024.
input.read(buffer, 0, 1024);
- Write the data stored in the buffer variable onto the output file using the write() method of the BufferedOutputStream class. Set the offset to 0 and the number of bytes to write as readbyte, which is the number of bytes read in the previous step.
bufferOut.write(buffer,0,readbyte);
- Calculate the percentage of the file downloaded/written in each iteration and print it.
Formula: percentOfDownload = (TotalDownload * 100) / filesize
(where TotalDownload is the total number of bytes written onto the output file after each iteration).
- Repeat the preceding steps until the while loop terminates.
Note: The while loop terminates after reaching the end of the input stream, when the read() method returns a value less than zero.
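As a quick standalone check of the percentage formula above (the byte counts here are made up purely for illustration):

```java
public class PercentCheck {
    public static void main(String[] args) {
        double filesize = 2048.0;      // total bytes reported by the server (hypothetical)
        double totalDownload = 512.0;  // bytes written so far (hypothetical)

        // Same formula the download loop uses:
        double percentOfDownload = (totalDownload * 100) / filesize;
        System.out.println(String.format("Downloaded %.2f%%", percentOfDownload));
        // prints: Downloaded 25.00%
    }
}
```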
Java Code:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadFileFromURL {

    public void download(String urlLink, File fileLoc) {
        try {
            byte[] buffer = new byte[1024];
            double TotalDownload = 0.00;
            int readbyte = 0; // Stores the number of bytes read in each iteration.
            double percentOfDownload = 0.00;

            URL url = new URL(urlLink);
            HttpURLConnection http = (HttpURLConnection) url.openConnection();
            double filesize = (double) http.getContentLengthLong();

            BufferedInputStream input = new BufferedInputStream(http.getInputStream());
            FileOutputStream output = new FileOutputStream(fileLoc);
            BufferedOutputStream bufferOut = new BufferedOutputStream(output, 1024);

            while ((readbyte = input.read(buffer, 0, 1024)) >= 0) {
                // Writing the content onto the file.
                bufferOut.write(buffer, 0, readbyte);
                // TotalDownload is the total bytes written onto the file.
                TotalDownload += readbyte;
                // Calculating the percentage of download.
                percentOfDownload = (TotalDownload * 100) / filesize;
                // Formatting the percentage up to 2 decimal points.
                String percent = String.format("%.2f", percentOfDownload);
                System.out.println("Downloaded " + percent + "%");
            }
            System.out.println("Your download is now complete.");
            bufferOut.close();
            input.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        // Please provide the correct URL of what you want to download, and the correct
        // directory with a name and extension to save the downloaded file in.
        String link = "";
        File fileLoc = new File("D:\\MyDownloads\\HarryPotter.pdf");
        DownloadFileFromURL d = new DownloadFileFromURL();
        d.download(link, fileLoc);
    }
}
Note: The extension of the downloaded file saved on the computer should be the same as the type of file you are downloading. For example, in the program above, we are downloading a PDF file and hence we save the file with the .pdf extension on the computer.
Output: Download PDF file, JPG file from URL using Java
- Downloading a PDF file from the URL provided:
Provide the required URL, and the directory and file name with its extension to save the file on your computer. Considering the details provided in the above program, the PDF file to be downloaded is shown below:
The output of the program will be:
Downloaded 1.07% Downloaded 5.27% Downloaded 12.19% Downloaded 25.94% Downloaded 48.66% Downloaded 76.65% Downloaded 88.55% Downloaded 94.79% Downloaded 98.89% Downloaded 99.96% Downloaded 100.00% Your download is now complete.
The downloaded PDF file will be stored in the specified directory, with the specified name.
- Downloading an image from the URL provided:
Provide the URL of the image address in the above program, and give the directory, file name, and file extension to save the file on your computer.
Suppose you want to download the following image:
Provide the download link of the image, and the directory with the file’s name and extension:
//Providing the required link and location: String link = ""; File fileLoc = new File("D:\\MyDownloads\\RogerFederer.jpg");
The output of the program will be:
Downloaded 1.54% Downloaded 18.53% Downloaded 32.42% Downloaded 54.04% Downloaded 64.85% Downloaded 75.66% Downloaded 94.18% Downloaded 98.82% Downloaded 100.00% Your download is now complete.
The downloaded image will be stored in the specified directory, with the specified name.
This is how you download the file from a URL. Hope this tutorial helped!
Also read: How to Parse JSON from URL in Java
https://www.codespeedy.com/download-a-file-from-a-url-using-java/
Basetypes, Collections, Diagnostics, IO, RegEx...
This time I’m going to focus on one class in this blog post.
System.Collections.Generic.List<T> contains some special methods that
exist on this type. Those methods take a System.Predicate which is essentially
a delegate that allows us to filter based on a certain criteria. This means we
can selectively carry out operations only on those member that meet the
criteria we choose. Let’s see an example so that we can better understand this.
Say we have a List<String> names; object representing a bunch of names. If
we want to remove from the list all the names that begin with “a” we may have
to write something like this:
for (int i = names.Count - 1; i >= 0; i--) {
if (names[i].StartsWith("a", StringComparison.OrdinalIgnoreCase)) {
names.RemoveAt(i);
}
}
This works well but we can do a little better using
predicates. First, we have to define the static method to be the predicate to
validate that a name starts with an “a”. This would be:
static bool StartsWithA(string name) {
return name.StartsWith("a", StringComparison.OrdinalIgnoreCase);
}
Which is essentially the same as the statement we had
above. Separating it into a static method has a few advantages, notably the
ability to reuse the same predicate in various places and being able to
separate the logic for other uses. Now once we have this predicate removing all
names that start with an “a” is as simple as calling names.RemoveAll(StartsWithA);
You can still write the whole thing in one line if you so
choose by using an anonymous delegate this way:
names.RemoveAll(delegate(string name) {
return (name.StartsWith("a", StringComparison.OrdinalIgnoreCase));
});
Now, let’s take a different example and explore a different
method. Say we have a list of integers and we want to find all the primes in
the list. Given that we have List<Int32> numbers; we can use the
numbers.FindAll(IsPrime); to get back the filtered list of primes. This assumes
we have a predicate in this form:
static bool IsPrime(Int32 number) {
    // TODO: Implement this predicate...
}
Some other useful predicates include:
Finding elements
Find — is used to find the first element that fulfils
the condition. If none are found it returns the default value for the given
type T (the value you get by calling the default constructor).
FindIndex — returns the first index of an item
matching the condition. There are also two additional overloads that are used
to start from a certain index or count multiple occurrences.
FindLast — returns the last element in the list
matching the condition. Again returning the default value for T if none were
found.
FindLastIndex — returns the last index of an item
matching the condition. Again, it has 3 overloads.
Evaluating conditions
TrueForAll — returns true if the predicate is met for
all elements in the list. You could easily use many of the other methods
discussed above to obtain this information but this is a little more elegant.
Exists — this could have been called
TrueForAtLeastOne as it basically determines if there exists an element in the
list that meets the condition determined by the predicate.
Now, let’s take a look at how to combine our knowledge of
predicates with a new C# 3.0 language feature (shipped with .NET 3.5) called extension
methods. We could extend IList<T> to have a couple of new methods
that take delegates. The first one we called ReplaceIf() which will replace
every element that meets the predicate condition with a certain element. This
is what it looks like:
public static void ReplaceIf<T>(this IList<T> list, Predicate<T> predicate, T replacement) {
for (int i = 0; i < list.Count; i++) {
if (predicate(list[i])) {
list[i] = replacement;
        }
    }
}
Since this is an extension method, the first argument has
the “this” keyword followed by the type we want to extend. The second argument
is the predicate that we’ll test and third is the replacement element we’ll use
to replace all items in the list. Note that this is a generic method since
we’re extending IList<T> for every possible type T so we need to have T
available as a generic type in the method body. Now with the new LINQ types we
don’t really need to use Predicate anymore. We have uber-generic functions and
method delegates called Func and Action that we can use for any purpose. I
would recommend using them. So now it would then look like this:
public static void ReplaceIf<T>(this IList<T> list, Func<T, bool> predicate, T replacement) {
In the second example we'll use a converter to convert the elements that meet the predicate condition, instead of simply replacing them with a constant.
public static void ConvertIf<T>(this IList<T> list, Predicate<T> predicate, Converter<T, T> convert) {
    for (int i = 0; i < list.Count; i++) {
        T item = list[i];
        if (predicate(item)) {
            list[i] = convert(item);
        }
    }
}
This is very similar to the previous example, only instead of using a fixed replacement value we're using a Converter to convert the value to something else.
Again, with LINQ we would use Func instead of Converter and
Predicate, and this would be instead:
public static void ConvertIf<T>(this IList<T> list, Func<T, bool> predicate, Func<T, T> convert) {
Now, if we have a List<Double> ld and we
want to invert all the numbers in the list we can write it this way:
ld.ConvertIf(delegate(double d) { return (d != 0); }, delegate(double d) { return 1 / d; });
The code can be expressed even more succinctly using lambda
expressions, another new C# 3.0 language feature included as part of .NET
3.5:
ld.ConvertIf(d => d != 0, d => 1 / d);
For more information about Collections please read the
recent MSDN article I wrote about Collections
Best Practices.
It's very odd that Array and List<T> define a ForEach method, but the LINQ extensions on IEnumerable<T> do not have a ForEach... any ideas why?
Uh, that first sample:
"for (int i = 0; i < names.Count; i++) {
if (names[i].StartsWith("a", StringComparison.OrdinalIgnoreCase)) {
names.RemoveAt(i);
}
"
will not "work well" - it will skip every second consecutive element that starts with "a"!
Michael, I too was wondering where ForEach went... Suspicions fall on some kind of crazy concurrency support innovation in .NET 4....
You can write your own ForEach extension.
public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
{
foreach (T item in source) action(item);
}

// e.g.
names.ForEach(n => Console.WriteLine(n));
My ever-growing and sinking suspicion is that all the new BCL and C# features were made almost entirely to enable LINQ and not as general features. Since LINQ doesn't need to iterate, why add it? (Just end up with some confused developer who wonders why he can't use ForEach in his LINQ-to-SQL query.)
Limitation in the List<T>.Find implementation...
http://blogs.msdn.com/bclteam/archive/2007/08/24/bcl-refresher-list-t-predicates-inbar-gazit.aspx
Regular expression syntax is fairly similar across many environments. However, the way you use regular expressions varies greatly. For example, once you've crafted your regular expression, how do you use it to find a match or replace text? It's easy to find detailed API documentation, once you know what API to look up. Figuring out where to start is often the hardest part.
This article assumes you're familiar with regular expressions and want to work with regular expressions in C++ using the Technical Report 1 (TR1) proposed extensions to the C++ Standard Library. It's a quick start guide, briefly answering some of the first questions you're likely to ask. For more details, see Getting started with C++ TR1 regular expressions or dive into the documentation that comes with your implementation.
Q: Where can I get an implementation of TR1 regular expressions?

A: Support for TR1 extensions in Visual Studio 2008 is added as a feature pack. Other implementations include Boost and Dinkumware. The GNU compiler gcc added support for TR1 regular expressions in version 4.3.0.
Q: What regular expression grammars are supported?

A: It depends on your implementation. Visual Studio 2008 supports these options: basic, extended, ECMAScript, awk, grep, and egrep.
Q: What header do I include?

A: <regex>
Q: What namespace are the classes in?

A: std::tr1. This is the namespace for the regex class and functions such as regex_search. Flags are contained in the nested namespace std::tr1::regex_constants.
Q: How do I test whether a string contains a match?

A: Construct a regex object and pass it to regex_search. For example:
std::string str = "Hello world";
std::tr1::regex rx("ello");
assert( regex_search(str.begin(), str.end(), rx) );
The function regex_search returns true because str contains the pattern ello. Note that regex_match would return false in the example above, because it tests whether the entire string matches the regular expression. regex_search behaves more like most people expect when testing for a match.
Q: How do I see the text that matched?

A: Use a form of regex_search that takes a match_results object as a parameter. For example, the following code searches for <h> tags and prints the level and tag contents.
std::tr1::cmatch res;
str = "<h2>Egg prices</h2>";
std::tr1::regex rx("<h(.)>([^<]+)");
std::tr1::regex_search(str.c_str(), res, rx);
std::cout << res[1] << ". " << res[2] << "\n";
This code would print 2. Egg prices. The example uses cmatch, a typedef provided by the library for match_results<const char*>.
Q: How do I replace text?

A: Use regex_replace.
Q: How do I do a global search-and-replace?

A: The function regex_replace does global replacements by default.
Q: How do I replace only the first match?

A: Use the format_first_only flag with regex_replace. The fully qualified name for the flag is std::tr1::regex_constants::format_first_only, and it would be the fourth argument to regex_replace.
Q: How do I do a case-insensitive match?

A: Use the icase flag as a parameter to the regex constructor. The fully qualified name of the flag is std::tr1::regex_constants::icase.
http://www.codeproject.com/KB/string/TR1Regex.aspx
Static Initializers and Lazy Instantiation
Many of my favorite articles deal with the author addressing reader inquiries. A previous article of mine, entitled "Global Variables in Java with the Singleton Pattern", generated quite a bit of e-mail. Most of it centered on the question of initialization: Why use Lazy Instantiation when static initialization is more efficient? Well, there are pros and cons to both, and there is no firm answer.
Listing 1
static MyObject object1 = new MyObject();
In Listing 1, the static keyword is used to indicate that this particular variable, namely object1, is a class variable rather than an instance variable. This means that there will be only one copy of the variable for the class rather than one for each instance of the class. The code segment in Listing 1 not only declares the variable, it also contains a static initializer that gives the object a value. This static initialization occurs automatically when the class containing the variable is first accessed.
Here, first accessed is defined to be the first time one of the following events occurs:
- An instance of the class is created via a constructor.
- A static method that is defined in the class (not inherited) is called.
- A static variable that is declared in the class (not inherited) is assigned or otherwise accessed. This does not include the assignment performed by the static initializer itself.
The process of Lazy Instantiation, on the other hand, doesn't automatically allocate variables. It enables the creation of objects to be put off until they are actually needed. Instead of being created when the program first starts, the creation of variables managed via Lazy Instantiation is deferred until they are actually required.
Which is better? Lets start off with some examples.
Here is an example of static initialization:
Listing 2
public class MyClass {
    static MyObject object1 = new MyObject();

    void doProcess() {
        object1.process();
    }
}

In Listing 2, object1 is created and its value is initialized when the class is first accessed. After some period of time, object1 will be used via the doProcess method.
Here is a similar code segment making use of Lazy Instantiation.
Listing 3
public class MyClass {
    MyObject object1;

    void doProcess() {
        if (object1 == null)
            object1 = new MyObject();
        object1.process();
    }
}
The code in Listing 3 performs the same function as in Listing 2, except that the variable is being managed via Lazy Instantiation. When MyClass is instantiated, nothing is done with object1 other than implicitly setting its reference to null. The doProcess method makes use of object1 and checks to see if it exists before using it. If it does not exist, an instance is created and then used.
Now back to the big question: Which is better?
Different Purposes
When it comes to speed, the static version wins. Static access is quicker because there is no management code to execute. Classes implementing Lazy Instantiation contain management code that needs to be executed whenever access to a variable is required. In Listing 3, overhead is incurred every time object1 is accessed, because it first needs to be checked for a null value.
For ease of programming, static wins again. Only a single line of code is required to implement a variable with a static initializer. As mentioned above, Lazy Instantiation requires that an object be checked for a null value every time it is accessed. Also, multithreaded applications would require use of the double-checked locking pattern, which results in additional code. (See previous article on Singletons for more information on this.)
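For reference, here is a minimal sketch of the double-checked locking pattern mentioned above, using a hypothetical Settings class. The volatile keyword is required for this to be safe under the modern Java memory model (Java 5 and later):

```java
public class Settings {
    // volatile is essential: it prevents another thread from observing a
    // partially constructed instance under double-checked locking.
    private static volatile Settings instance;

    private Settings() { }

    public static Settings getInstance() {
        Settings result = instance;
        if (result == null) {                  // first check (no lock)
            synchronized (Settings.class) {
                result = instance;
                if (result == null) {          // second check (with lock)
                    instance = result = new Settings();
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Every call returns the same lazily created instance.
        System.out.println(Settings.getInstance() == Settings.getInstance());
        // prints: true
    }
}
```

The first unsynchronized check is what avoids paying the lock cost on every access once the instance exists; the second check inside the lock handles the race where two threads pass the first check at once.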
Okay, so there are several reasons for using a static initializer, as my readers have mentioned. However, there are several good reasons to use Lazy Instantiation instead.
The first reason for Lazy Instantiation concerns the initialization of the managed object's value. For a static initializer, the initial value is determined at compile time. When the developer puts to code the line that creates the object, he or she also determines what the initial value is going to be. While this is fine for many uses, there are times when it just isn't good enough. Lazy Instantiation, however, initializes the value of an object when the object is created. Since the object is being created at runtime, there is much more data available which can be used to calculate an appropriate initial value. One example of this would be the placing of a user name and id in a session variable.
Secondly, items that are static cannot be used to implement an interface. Interfaces in Java are powerful programming constructs that allow classes to inherit method definitions. Static objects suffer a loss of a great deal of programming power by not being able to implement interfaces.
The most important reason of all for using Lazy Instantiation, however, is that of keeping resources under control. Static objects are always created, whether they are needed or not. If the object is never used, then space is wasted holding the object and time is wasted creating it. Objects managed by Lazy Instantiation will only be created if they are needed and do not suffer these problems. This is especially important if the resources used are valuable, such as a database connection. Why tie up a database connection if it is never going to be used?
Also, since static objects are created during the initialization of a Java program, a lot of processing needs to be done before the program can actually begin. Software built with Lazy Instantiation defers the creation of objects and thus starts up a lot faster. This makes the Java program appear more agile and responsive, a definite bonus if you are trying to sell the software to a client.
Summary
The pros and cons of static initializers versus Lazy Instantiation must be weighed to determine which is best for a given use. Both are powerful, and both are examples of good programming, but they serve different purposes.
About the Author
With over thirteen years' experience in application development, Wiebe de Jong is a Web developer for IMRglobal Ltd. in Vancouver, B.C., Canada. He develops Internet and intranet applications for clients using Java, UML, and XML. He also teaches.
http://www.developer.com/tech/article.php/626421/Static-Initializers-and-Lazy-Instantiation.htm
In this tutorial, you will learn about C programming storage class: Auto, Static, Register and Extern.
C programming storage class
In C programming, properties of variables include
variable_name,
variable_type,
variable_size and
variable_value.
We already know that a variable is an identifier. An identifier has other properties such as storage class, storage duration, scope and linkage.
In C, there are four types of storage class:
- Auto
- Static
- Register
- Extern
The storage class of a variable in C determines the following:
- Lifetime of the variable, i.e. the time period during which the variable exists in computer memory.
- Scope of the variable, i.e. the availability of the variable's value.
Auto Storage Class/Local Variables
Local variables are declared within function body and have automatic storage duration.
For representing automatic storage duration keyword
auto is used.
Memory for the local variables is created when the function is invoked or active, and destroyed or de-allocated when a block is exited or control moves out of the function.
By default, local variables have automatic storage duration.
#include <stdio.h>

int main()
{
    int x; // local variable
    ....
    return 0;
}

int func_name()
{
    int y; // local variable
    ....
}
In the above program, both x and y are local variables, but x is available only to the main function whereas y is available only to the func_name function.
Register Storage Class
Register variables are stored in CPU registers and are available within the declared function. The lifetime of a register variable lasts only while control is within its block. The keyword register is used to define the register storage class. Register variables can be accessed faster because they are stored in CPU registers (note that register is only a hint; the compiler is free to ignore it).
#include <stdio.h>

int main()
{
    register int x; // register variable
    for (x = 1; x <= 5; x++)
    {
        printf("\n%d", x);
    }
}
Static Storage Class
Static variables are defined with the keyword static. For static variables, memory is allocated only once and the storage duration lasts until the program terminates.
How do static and auto variables work?
#include <stdio.h>

void increment(void); // prototype needed since increment is defined after main

int main()
{
    increment();
    increment();
    increment();
    return 0;
}

void increment()
{
    auto int x = 1;
    printf("%d\n", x);
    x = x + 1;
}
#include <stdio.h>

void increment(void); // prototype needed since increment is defined after main

int main()
{
    increment();
    increment();
    increment();
    return 0;
}

void increment()
{
    static int x = 1;
    printf("%d\n", x);
    x = x + 1;
}
Explanation
In the above programs, the increment() function is called three times from main. The only difference is the storage class of the variable x. The first program prints 1 three times, because x is re-created and re-initialized on every call; the second prints 1, 2 and 3, because the static x retains its incremented value between calls. Like auto variables, static variables are local to the function in which they are declared, but the storage duration of static variables lasts until the end of the program, as mentioned earlier.
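The same lifetime distinction exists in other languages. As a rough Java parallel (the class and method names below are illustrative, not from this tutorial), a static field persists across calls like C's static local, while an ordinary local variable behaves like auto:

```java
class IncrementDemo {
    // Persists across calls, like C's `static int x = 1;`.
    static int staticX = 1;

    static int incrementStatic() {
        int current = staticX;
        staticX = staticX + 1; // survives until the program ends
        return current;
    }

    static int incrementAuto() {
        // Re-created and re-initialized on every call, like C's `auto`.
        int x = 1;
        int current = x;
        x = x + 1; // lost when the method returns
        return current;
    }
}
```

Calling each method three times yields 1, 1, 1 for the auto-style version and 1, 2, 3 for the static-style version, mirroring the two C programs above.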
External Storage Class
External variables are declared outside the body of any function. If they are declared in the global declaration section, they can be used by all the functions in the program.
#include <stdio.h>

int x = 5; // global variable

int main()
{
    ......
    ......
    return 0;
}
Note the difference between the following:

extern int x;
int x = 5;

In the above example, the first statement is a declaration of the variable and the second statement is its definition.
Note: Automatic storage helps preserve memory because automatic variables are created when the function is entered and destroyed when the function exits.
http://www.trytoprogram.com/c-programming/c-programming-storage-class/
C# Tutorials (5/23/2017)
Inheritance: demonstrates how class and interface inheritance provide code reuse in C#.
String Interpolation: demonstrates many of the uses for the $ string interpolation in C#.
Using Attributes: how to create and use attributes in C#.

Inheritance in C# and .NET (5/23/2017)
Introduction
This tutorial introduces you to inheritance in C#. Inheritance is a feature of object-oriented programming languages that allows you to define a base class that provides specific functionality (data and behavior) and to define derived classes that either inherit or override that functionality.

Prerequisites
This tutorial assumes that you've installed .NET Core. For installation instructions, see the .NET Core installation guide. You also need a code editor. This tutorial uses Visual Studio Code, although you can use any code editor of your choice.
using System;

public class A
{
    private int value = 10;

    public class B : A
    {
        public int GetValue()
        {
            return this.value;
        }
    }
}

public class C : A
{
    // public int GetValue()
    // {
    //     return this.value;
    // }
}

public class B : A
{ }
Derived classes can also override inherited members by providing an alternate implementation. In order to be able to override a member, the member in the base class must be marked with the virtual keyword. By default, base class members are not marked as virtual and cannot be overridden. Attempting to override a non-virtual member, as the following example does, generates compiler error CS0506: "cannot override inherited member because it is not marked virtual, abstract, or override."
public class A
{
    public void Method1()
    {
        // Do something.
    }
}

public class B : A
{
    public override void Method1() // Generates CS0506.
    {
        // Do something else.
    }
}
In some cases, a derived class must override the base class implementation. Base class members marked with the abstract keyword require that derived classes override them. Attempting to compile the following example generates compiler error CS0534, "does not implement inherited abstract member", because class B provides no implementation for A.Method1.
Implicit inheritance
Besides any types that they may inherit from through single inheritance, all types in the .NET type system implicitly inherit from Object or a type derived from it. This ensures that common functionality is available to any type. To see what implicit inheritance means, let's define a new class, SimpleClass, that is simply an empty class definition:

We can then use reflection (which lets us inspect a type's metadata to get information about that type) to get a list of the members that belong to the SimpleClass type. Although we haven't defined any members in our SimpleClass class, output from the example indicates that it actually has nine members. One of these is a parameterless (or default) constructor that is automatically supplied for the SimpleClass type by the C# compiler. The other eight are members of Object, the type from which all classes and interfaces in the .NET type system ultimately implicitly inherit.

using System;
using System.Reflection;
        }
    }
}
// The example displays the following output:
//    Type SimpleClass has 9 members:
//       ToString (Method): Public, Declared by System.Object
//       Equals (Method): Public, Declared by System.Object
//       Equals (Method): Public Static, Declared by System.Object
//       ReferenceEquals (Method): Public Static, Declared by System.Object
//       GetHashCode (Method): Public, Declared by System.Object
//       GetType (Method): Public, Declared by System.Object
//       Finalize (Method): Internal, Declared by System.Object
//       MemberwiseClone (Method): Internal, Declared by System.Object
//       .ctor (Constructor): Public, Declared by SimpleClass
Implicit inheritance from the Object class makes these methods available to the SimpleClass class:
- The public ToString method, which converts a SimpleClass object to its string representation, the fully qualified type name. In this case, the ToString method returns the string "SimpleClass".
- Three methods that test for equality of two objects: the public instance Equals(Object) method, the public static Equals(Object, Object) method, and the public static ReferenceEquals(Object, Object) method. By default, these methods test for reference equality; that is, to be equal, two object variables must refer to the same object.
- The public GetHashCode method, which computes a value that allows an instance of the type to be used in hashed collections.
- The public GetType method, which returns a Type object that represents the SimpleClass type.
- The protected Finalize() method, which is designed to release unmanaged resources before an object's memory is reclaimed by the garbage collector.
- The protected MemberwiseClone() method, which creates a shallow clone of the current object.
Because of implicit inheritance, we can call any inherited member from a SimpleClass object just as if it was actually a member defined in the SimpleClass class. For instance, the following example calls the SimpleClass.ToString method, which SimpleClass inherits from Object.
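Java's type system has the same shape, which may make the idea familiar: every class implicitly extends java.lang.Object and so inherits toString, equals, hashCode, getClass, and friends. The empty class below is an illustrative sketch, not part of the tutorial:

```java
// An empty class still carries the members inherited from Object.
class SimpleClass {}

class ImplicitInheritanceDemo {
    public static void main(String[] args) {
        SimpleClass s = new SimpleClass();
        // Default toString: class name + "@" + hexadecimal hash code.
        System.out.println(s.toString());
        // getClass() reports the runtime type, much like C#'s GetType().
        System.out.println(s.getClass().getName());
    }
}
```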
The following table lists the categories of types that you can create in C# and the types from which they implicitly inherit. Each base type makes a different set of members available through inheritance to implicitly derived types.
class Object
if (model == null)
    throw new ArgumentNullException("The model cannot be null.");
else if (String.IsNullOrWhiteSpace(model))
    throw new ArgumentException("The model cannot be an empty string or have space characters only.");
Model = model;
In this case, we should not rely on inheritance to represent specific car makes and models. For example, we do not need to define a Packard type to represent automobiles manufactured by the Packard Motor Car Company. Instead, we can represent them by creating an Automobile object with the appropriate values passed to its class constructor, as the following example does.

An is-a relationship based on inheritance is best applied to a base class and to derived classes that add additional members to the base class or that require additional functionality not present in the base class.
The following example shows the source code for the Publication class, as well as a PublicationType enumeration that is returned by the Publication.PublicationType property. In addition to the members that it inherits from Object, the Publication class defines the following unique members and member overrides:
if (title == null)
    throw new ArgumentNullException("The title cannot be null.");
else if (String.IsNullOrWhiteSpace(title))
    throw new ArgumentException("The title cannot consist only of whitespace.");
Title = title;
Type = type; }
A constructor. Because the Publication class is abstract, it cannot be instantiated directly from code like the following:
However, its instance constructor can be called directly from derived class constructors, as the source code for the Book class shows.
Two publication-related properties:
- Title is a read-only String property whose value is supplied by calling the Publication constructor, which stores the value in a private field named pubTitle.
- Pages is a read-write Int32 property that indicates how many total pages the publication has. The value is stored in a private field named totalPages. It must be a positive number or an ArgumentOutOfRangeException is thrown.
Publisher-related members:
- Two read-only properties, Publisher and Type, return the value of the private pubName and pubType fields. The values are originally supplied by the call to the Publication class constructor.
- Private copyrName and copyrDate fields. The values can be retrieved from the CopyrightName and CopyrightDate properties.
The following figure illustrates the relationship between our base Publication class and its implicitly inherited Object class.
public Book(string title, string isbn, string author, string publisher)
    : base(title, publisher, PublicationType.Book)
{
    // isbn argument must be a 10- or 13-character numeric string without "-" characters.
    // We could also determine whether the ISBN is valid by comparing its checksum digit
    // with a computed checksum.
    //
    if (!String.IsNullOrEmpty(isbn))
    {
        // Determine if ISBN length is correct.
        if (!(isbn.Length == 10 | isbn.Length == 13))
            throw new ArgumentException("The ISBN must be a 10- or 13-character numeric string.");
        ulong nISBN = 0;
        if (!UInt64.TryParse(isbn, out nISBN))
            throw new ArgumentException("The ISBN can consist of numeric characters only.");
    }
    ISBN = isbn;
Author = author; }
if (currency.Length != 3)
    throw new ArgumentException("The ISO currency symbol is a 3-character string.");
Currency = currency;
return oldValue; }
public override string ToString() => $"{(String.IsNullOrEmpty(Author) ? "" : Author + ", ")}{Title}";

In addition to the members that it inherits from Publication, the Book class defines the following unique members and member overrides:
- Two constructors. The two Book constructors share three common parameters. Two, title and publisher, correspond to parameters of the Publication constructor. The third is author, which is stored to a private authorName field. One constructor includes an isbn parameter, which is stored to the private id field. The first constructor uses the this keyword to call the other constructor. This is a common pattern in defining constructors; constructors with fewer parameters provide default values when calling the constructor with the greatest number of parameters. The second constructor uses the base keyword to pass the title and publisher name to the base class constructor. If you don't make an explicit call to a base class constructor in your source code, the C# compiler automatically supplies a call to the base class' default or parameterless constructor.
- A read-only ISBN property, which returns the Book object's International Standard Book Number, a unique 10- or 13-digit number. The ISBN is supplied as an argument to one of the Book constructors and is stored in the private id field.
- A read-only Author property. The author name is supplied as an argument to both Book constructors and is stored in the private authorName field.
- Two read-only price-related properties, Price and Currency. Their values are provided as arguments in a SetPrice method call. The price is stored in a private field, bookPrice. The Currency property is the three-character ISO currency symbol (for example, USD for the U.S. dollar) and is stored in the private ISOCurrencySymbol field. ISO currency symbols can be retrieved from the ISOCurrencySymbol property.
- A SetPrice method, which sets the values of the bookPrice and ISOCurrencySymbol fields. These are the values returned by the Price and Currency properties.
- Overrides to the ToString method (inherited from Publication) and the Equals(Object) and GetHashCode() methods (inherited from System.Object). Unless it is overridden, the Equals(Object) method tests for reference equality. That is, two object variables are considered to be equal if they refer to the same object. In the case of the Book class, on the other hand, two Book objects should be equal if they have the same ISBN. When you override the Equals(Object) method, you must also override the GetHashCode() method, which returns a value that the runtime uses to store items in hashed collections for efficient retrieval. The hash code should return a value that's consistent with the test for equality. Since we've overridden Equals(Object) to return true if the ISBN properties of two Book objects are equal, we return the hash code computed by calling the GetHashCode() method of the string returned by the ISBN property.
The following figure illustrates the relationship between the Book class and Publication, its base class. We can now instantiate a Book object, invoke both its unique and inherited members, and pass it as an argument to a method that expects a parameter of type Publication or of type Book, as the following example shows.

using System;
using static System.Console;
var book2 = new Book("The Tempest", "Classic Works Press", "Shakespeare, William"); Write($"{book.Title} and {book2.Title} are the same publication: " + $"{((Publication) book).Equals(book2)}"); }
We can then derive some classes from Shape that represent specific shapes. The following example defines three classes, Triangle, Rectangle, and Circle. Each uses a formula unique for that particular shape to compute the area and perimeter. Some of the derived classes also define properties, such as Rectangle.Diagonal and Circle.Diameter, that are unique to the shape that they represent.

using System;
The following example uses objects derived from Shape. It instantiates an array of objects derived from Shape and calls the static methods of the Shape class, which return Shape property values. Note that the runtime retrieves values from the overridden properties of the derived types. The example also casts each Shape object in the array to its derived type and, if the cast succeeds, retrieves properties of that particular subclass of Shape.

using System;
See also: Classes and objects; Inheritance (C# Programming Guide)

Console Application (5/23/2017)
Introduction
This tutorial teaches you a number of features in .NET Core and the C# language. You'll learn:
- The basics of the .NET Core Command Line Interface (CLI)
- The structure of a C# Console Application
- Console I/O
- The basics of File I/O APIs in .NET Core
- The basics of the Task Asynchronous Programming Model in .NET Core
You'll build an application that reads a text file, and echoes the contents of that text file to the console. The output to the console will be paced to match reading it aloud. You can speed up or slow down the pace by pressing the < or > keys. There are a lot of features in this tutorial. Let's build them one by one.
Prerequisites
You'll need to set up your machine to run .NET Core. You can find the installation instructions on the .NET Core page. You can run this application on Windows, Linux, macOS or in a Docker container. You'll need to install your favorite code editor.

This statement tells the compiler that any types from the System namespace are in scope. Like other object-oriented languages you may have used, C# uses namespaces to organize types. This hello world program is no different. You can see that the program is enclosed in the ConsoleApplication namespace. That's not a very descriptive name, so change it to TeleprompterConsole:
namespace TeleprompterConsole
This method uses types from two new namespaces. For this to compile you'll need to add the following two lines to the top of the file:
using System.Collections.Generic; using System.IO;
The IEnumerable<T> interface is defined in the System.Collections.Generic namespace. The File class is defined in the System.IO namespace.

This method is a special type of C# method called an enumerator method. Enumerator methods return sequences that are evaluated lazily. That means each item in the sequence is generated as it is requested by the code consuming the sequence. Enumerator methods are methods that contain one or more yield return statements. The object returned by the ReadFrom method contains the code to generate each item in the sequence. In this example, that involves reading the next line of text from the source file, and returning that string. Each time the calling code requests the next item from the sequence, the code reads the next line of text from the file and returns it. When the file has been completely read, the sequence indicates that there are no more items.

There are two other C# syntax elements that may be new to you. The using statement in this method manages resource cleanup. The variable that is initialized in the using statement (reader, in this example) must implement the IDisposable interface. The IDisposable interface defines a single method, Dispose, that should be called when the resource should be released. The compiler generates that call when execution reaches the closing brace of the using statement. The compiler-generated code ensures that the resource is released even if an exception is thrown from the code in the block defined by the using statement.

The reader variable is defined using the var keyword. var defines an implicitly typed local variable. That means the type of the variable is determined by the compile time type of the object assigned to the variable. Here, that is the return value from System.IO.File.OpenText, which is a StreamReader object. Now, let's fill in the code to read the file in the Main method:
Run the program (using dotnet run) and you can see every line printed out to the console.
Next, you need to modify how you consume the lines of the file, and add a delay after writing each word. Replace the Console.WriteLine(line) statement in the Main method with the following block:
Console.Write(line);
if (!string.IsNullOrWhiteSpace(line))
{
    var pause = Task.Delay(200);
    // Synchronously waiting on a task is an
    // anti-pattern. This will get fixed in later
    // steps.
    pause.Wait();
}
The Task class is in the System.Threading.Tasks namespace, so you need to add that using statement at the top of the file:
using System.Threading.Tasks;
Run the sample, and check the output. Now, each single word is printed, followed by a 200 ms delay. However, the displayed output shows some issues because the source text file has several lines that have more than 80 characters without a line break. That can be hard to read while it's scrolling by. That's easy to fix. You'll just keep track of the length of each line, and generate a new line whenever the line length reaches a certain threshold. Declare a local variable after the declaration of words that holds the line length:

var lineLength = 0;

Then, add the following code after the yield return word + " "; statement (before the closing brace):

lineLength += word.Length + 1;
if (lineLength > 70)
{
    yield return Environment.NewLine;
    lineLength = 0;
}
Run the sample, and you'll be able to read aloud at its pre-configured pace.

Async Tasks
In this final step, you'll add the code to write the output asynchronously in one task, while also running another task to read input from the user if they want to speed up or slow down the text display. This has a few steps in it, and by the end, you'll have all the updates that you need. The first step is to create an asynchronous Task returning method that represents the code you've created so far to read and display the file. Add this method to your Program class (it's taken from the body of your Main method):

You'll notice two changes. First, in the body of the method, instead of calling Wait() to synchronously wait for a task to finish, this version uses the await keyword. In order to do that, you need to add the async modifier to the method signature. This method returns a Task. Notice that there are no return statements that return a Task object. Instead, that Task object is created by code the compiler generates when you use the await operator. You can imagine that this method returns when it reaches an await. The returned Task indicates that the work has not completed. The method resumes when the awaited task completes. When it has executed to completion, the returned Task indicates that it is complete. Calling code can monitor that returned Task to determine when it has completed. You can call this new method in your Main method:
ShowTeleprompter().Wait();
Here, in Main, the code does synchronously wait. You should use the await operator instead of synchronously waiting whenever possible. But, in a console application's Main method, you cannot use the await operator. That would result in the application exiting before all tasks have completed. Next, you need to write the second asynchronous method to read from the Console and watch for the < and > keys. Here's the method you add for that task:

private static async Task GetInput()
{
    var delay = 200;
    Action work = () =>
    {
        do
        {
            var key = Console.ReadKey(true);
            if (key.KeyChar == '>')
            {
                delay -= 10;
            }
            else if (key.KeyChar == '<')
            {
                delay += 10;
            }
        } while (true);
    };
    await Task.Run(work);
}

This creates a lambda expression to represent an Action delegate that reads a key from the Console and modifies a local variable representing the delay when the user presses the < or > keys. This method uses ReadKey() to block and wait for the user to press a key. To finish this feature, you need to create a new async Task returning method that starts both of these tasks (GetInput and ShowTeleprompter), and also manages the shared data between these two tasks.

It's time to create a class that can handle the shared data between these two tasks. This class contains two public properties: the delay, and a flag to indicate that the file has been completely read:
namespace TeleprompterConsole
{
    internal class TelePrompterConfig
    {
        private object lockHandle = new object();
        public int DelayInMilliseconds { get; private set; } = 200;
Put that class in a new file, and enclose that class in the TeleprompterConsole namespace as shown above. You'll also need to add a using static statement so that you can reference the Min and Max methods without the enclosing class or namespace names. A using static statement imports the methods from one class. This is in contrast with the using statements used up to this point that have imported all classes from a namespace.

The other language feature that's new is the lock statement. This statement ensures that only a single thread can be in that code at any given time. If one thread is in the locked section, other threads must wait for the first thread to exit that section. The lock statement uses an object that guards the lock section. This class follows a standard idiom to lock a private object in the class. Next, you need to update the ShowTeleprompter and GetInput methods to use the new config object. Write one final Task returning async method to start both tasks and exit when the first task finishes:
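C#'s lock statement corresponds closely to Java's synchronized block. As a hedged cross-language sketch of the same guarded-delay idiom (the class name and the clamp bounds of 20 and 1000 ms are assumptions here, not values from the tutorial):

```java
class TelePrompterConfigSketch {
    // Private guard object, mirroring the C# lockHandle idiom:
    // callers cannot accidentally synchronize on the same monitor.
    private final Object lockHandle = new Object();
    private int delayInMilliseconds = 200;

    void updateDelay(int increment) {
        synchronized (lockHandle) {
            // Only one thread at a time runs this section; others block
            // until the first thread leaves the synchronized block.
            int next = delayInMilliseconds + increment;
            delayInMilliseconds = Math.min(Math.max(next, 20), 1000);
        }
    }

    int getDelay() {
        synchronized (lockHandle) {
            return delayInMilliseconds;
        }
    }
}
```

Locking on a private object, rather than on this, prevents outside code from interfering with the class's internal locking discipline, which is the same reasoning behind the C# idiom.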
The one new method here is the WhenAny(Task[]) call. That creates a Task that finishes as soon as any of the tasks in its argument list completes. Next, you need to update both the ShowTeleprompter and GetInput methods to use the config object for the delay:

This new version of ShowTeleprompter calls a new method in the TeleprompterConfig class. Now, you need to update Main to call RunTeleprompter instead of ShowTeleprompter:
RunTeleprompter().Wait();
To finish, you'll need to add the SetDone method, and the Done property to the TelePrompterConfig class:

public bool Done => done;
Conclusion
This tutorial showed you a number of the features around the C# language and the .NET Core libraries related to working in Console applications. You can build on this knowledge to explore more about the language, and the classes introduced here. You've seen the basics of File and Console I/O, blocking and non-blocking use of the Task-based Asynchronous programming model, a tour of the C# language and how C# programs are organized, and the .NET Core Command Line Interface and tools.

REST client (5/23/2017)

Introduction
This tutorial teaches you a number of features in .NET Core and the C# language. You'll learn:
- The basics of the .NET Core Command Line Interface (CLI)
- An overview of C# language features
- Managing dependencies with NuGet
- HTTP communications
- Processing JSON information
- Managing configuration with attributes
You'll build an application that issues HTTP requests to a REST service on GitHub. You'll read information in JSON format, and convert that JSON packet into C# objects. Finally, you'll see how to work with C# objects. There are a lot of features in this tutorial. Let's build them one by one. If you prefer to follow along with the final sample for this topic, you can download it. For download instructions, see Samples and Tutorials.

Prerequisites
You'll need to set up your machine to run .NET Core. You can find the installation instructions on the .NET Core page. You can run this application on Windows, Linux, macOS or in a Docker container. You'll need to install your favorite code editor. The descriptions below use Visual Studio Code, which is an open source, cross-platform editor. However, you can use whatever tools you are comfortable with.
<Project Sdk="Microsoft.NET.Sdk">
<ItemGroup> <PackageReference Include="System.Runtime.Serialization.Json" Version="4.3.0" /> </ItemGroup>
Most code editors will provide completion for different versions of these libraries. You'll usually want to use the latest version of any package that you add. However, it is important to make sure that the versions of all packages match, and that they also match the version of the .NET Core Application framework. After you've made these changes, you should run dotnet restore again so that the package is installed on your system.
You'll need to add a using statement at the top of your Main method so that the C# compiler recognizes the Task type:

If you build your project at this point, you'll get a warning generated for this method, because it does not contain any await operators and will run synchronously. Ignore that for now; you'll add await operators as you fill in the method. The program shouldn't exit before that task finishes. Therefore, you must use the Wait method to block and wait for the task to finish:

public static void Main(string[] args)
{
    ProcessRepositories().Wait();
}

Now, you have a program that does nothing, but does it asynchronously. Let's go back to the ProcessRepositories method and fill in a first version of it. First, you create a new HttpClient. This object handles the request and the responses. The next few lines set up the HttpClient for this request. First, it is configured to accept the GitHub JSON responses. This format is simply JSON. The next line adds a User Agent header to all requests from this object. These two headers are checked by the GitHub server code, and are necessary to retrieve information from GitHub. After you've configured the HttpClient, you make a web request and retrieve the response. In this first version, you use the GetStringAsync(String) convenience method. This convenience method starts a task that makes the web request, and then when the request returns, it reads the response stream and extracts the content from the stream. The body of the response is returned as a String. The string is available when the task completes. The final two lines of this method await that task, and then print the response to the console. Build the app, and run it. The build warning is gone now, because the ProcessRepositories now does contain an await operator. You'll see a long display of JSON formatted text.
namespace WebAPIClient
{
    public class repo
    {
        public string name;
    }
}
Put the above code in a new file called 'repo.cs'. This version of the class represents the simplest path to process JSON data. The class name and the member name match the names used in the JSON packet, instead of following C# conventions. You'll fix that by providing some configuration attributes later. This class demonstrates another important feature of JSON serialization and deserialization: not all the fields in the JSON packet are part of this class. The JSON serializer will ignore information that is not included in the class type being used. This feature makes it easier to create types that work with only a subset of the fields in the JSON packet.

Now that you've created the type, let's deserialize it. You'll need to create a DataContractJsonSerializer object. This object must know the CLR type expected for the JSON packet it retrieves. The packet from GitHub contains a sequence of repositories, so a List<repo> is the correct type. Add the following line to your ProcessRepositories method:
You're using two new namespaces, so you'll need to add those as well:
using System.Collections.Generic; using System.Runtime.Serialization.Json;
Next, you'll use the serializer to convert JSON into C# objects. Replace the call to GetStringAsync(String) in yourProcessRepositories method with the following two lines:
Notice that you're now using GetStreamAsync(String) instead of GetStringAsync(String). The serializer uses a stream instead of a string as its source. Let's explain a couple of features of the C# language that are being used in the second line above. The argument to ReadObject(Stream) is an await expression. Await expressions can appear almost anywhere in your code, even though up to now, you've only seen them as part of an assignment statement. Secondly, the as operator converts from the compile-time type of object to List<repo>. The declaration of ReadObject(Stream) declares that it returns an object of type System.Object. ReadObject(Stream) will return the type you specified when you constructed it (List<repo> in this tutorial). If the conversion does not succeed, the as operator evaluates to null, instead of throwing an exception. You're almost done with this section. Now that you've converted the JSON to C# objects, let's display the name of each repository. Replace the lines that read:
Compile and run the application. It will print out the names of the repositories that are part of the .NET Foundation.
Controlling Serialization
Before adding more features, you'll need to add a package reference for the serialization attributes to your project, and then update the classes:
After you save the file, run dotnet restore to retrieve this package. Next, open the repo.cs file. Let's change the name to use Pascal Case, and fully spell out the name Repository. We still want to map JSON 'repo' nodes to this type, so you'll need to add the DataContract attribute to the class declaration. You'll set the Name property of the attribute to the name of the JSON nodes that map to this type:
[DataContract(Name="repo")] public class Repository
The DataContractAttribute is a member of the System.Runtime.Serialization namespace, so you'll need to add theappropriate using statement at the top of the file:
using System.Runtime.Serialization;
You changed the name of the repo class to Repository, so you'll need to make the same name change in Program.cs (some editors may support a rename refactoring that will make this change automatically):
// ...
Next, let's make the same change with the name field by using the DataMemberAttribute class. Make the following changes, then build and run; you should see the same output as before. Before we process more properties from the web server, let's make one more change to the Repository class. The Name member is a publicly accessible field. That's not a good object-oriented practice, so let's change it to a property. For our purposes, we don't need any specific code to run when getting or setting the property, but changing to a property makes it easier to add those changes later without breaking any code that uses the Repository class. Remove the field definition, and replace it with an auto-implemented property:
The compiler generates the body of the get and set accessors, as well as a private field to store the name. It would be similar to the following code that you could type by hand:
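The two forms described above would look roughly like this (a sketch; the attribute value matches the JSON 'name' node, and you would use one form or the other, not both):

```csharp
// The auto-implemented property:
[DataMember(Name="name")]
public string Name { get; set; }

// The roughly equivalent hand-written version the compiler generates for you:
private string _name;
[DataMember(Name="name")]
public string Name
{
    get { return _name; }
    set { _name = value; }
}
```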
Let's make one more change before adding new features. The ProcessRepositories method can do the async work and return a collection of the repositories. Let's return the List<Repository> from that method, and move the code that writes the information into the Main method. Change the signature of ProcessRepositories to return a task whose result is a list of Repository objects:
Then, just return the repositories after processing the JSON response:
The compiler generates the Task<T> object for the return because you've marked this method as async. Then, let's modify the Main method so that it captures those results and writes each repository name to the console. Your Main method now looks like this:
Accessing the Result property of a Task blocks until the task has completed. Normally, you would prefer to await the completion of the task, as in the ProcessRepositories method, but that isn't allowed in the Main method.
The serializer automatically converts JSON values to the target type. The Uri type may be new to you. It represents a URI, or in this case, a URL. In the case of the Uri and int types, if the JSON packet contains data that does not convert to the target type, the serialization action will throw an exception. Once you've added these, update the Main method to display those elements:
As a final step, let's add the information for the last push operation. This information is formatted in this fashion in the JSON response:
2016-02-08T21:27:00Z
That format does not follow any of the standard .NET DateTime formats. Because of that, you'll need to write a custom conversion method. You also probably don't want the raw string exposed to users of the Repository class. Attributes can help control that as well. First, define a private property that will hold the string representation of the date time in your Repository class:
[DataMember(Name="pushed_at")] private string JsonDate { get; set; }
The DataMember attribute informs the serializer that this should be processed, even though it is not a public member. Next, you need to write a public read-only property that converts the string to a valid DateTime object, and returns that DateTime. This new property will not be read to or written from any JSON object. The property contains only a get accessor; there is no set accessor. That's how you define a read-only property in C#. (Yes, you can create write-only properties in C#, but their value is limited.) The ParseExact(String, String, IFormatProvider) method parses a string and creates a DateTime object using a provided date format, and adds additional metadata to the DateTime using a CultureInfo object. If the parse operation fails, the property accessor throws an exception. To use InvariantCulture, you will need to add the System.Globalization namespace to the using statements in repo.cs:
using System.Globalization;
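The read-only conversion property described above would look something like this sketch. The attribute used to exclude it from serialization and the exact format string are assumptions inferred from the sample timestamp and the surrounding text:

```csharp
// Not serialized to or from JSON; converts the stored string on demand.
[IgnoreDataMember]
public DateTime LastPush
{
    get
    {
        return DateTime.ParseExact(JsonDate, "yyyy-MM-ddTHH:mm:ssZ",
            CultureInfo.InvariantCulture);
    }
}
```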
Finally, add one more output statement in the console, and you're ready to build and run this app again:
Console.WriteLine(repo.LastPush);
Conclusion
This tutorial showed you how to make web requests, parse the result, and display properties of those results. You've also added new packages as dependencies in your project. You've seen some of the features of the C# language that support object-oriented techniques.

Working with LINQ (5/23/2017)
Introduction
This tutorial teaches you a number of features in .NET Core and the C# language. You'll learn:
- How to generate sequences with LINQ.
- How to write methods that can be easily used in LINQ queries.
- How to distinguish between eager and lazy evaluation.
You'll learn these techniques by building an application that demonstrates one of the basic skills of any magician: the faro shuffle. Briefly, a faro shuffle is a technique where you split a card deck exactly in half, then the shuffle interleaves one card from each half to rebuild the original deck. Magicians use this technique because every card is in a known location after each shuffle, and the order is a repeating pattern. For our purposes, it is a light-hearted look at manipulating sequences of data. The application you'll build will construct a card deck, and then perform a sequence of shuffles, writing the sequence out each time. You'll also compare the updated order to the original order. This tutorial has multiple steps. After each step, you can run the application and see the progress. You can also see the completed sample in the dotnet/docs GitHub repository. For download instructions, see Samples and Tutorials.
Prerequisites
You'll need to set up your machine to run .NET Core. You can find the installation instructions on the .NET Core page. You can run this application on Windows, Ubuntu Linux, OS X or in a Docker container. You'll need to install your favorite code editor. The descriptions below use Visual Studio Code, which is an open source, cross-platform editor. However, you can use whatever tools you are comfortable with.
The multiple from clauses produce a SelectMany, which creates a single sequence from combining each element in the first sequence with each element in the second sequence. The order is important for our purposes. The first element in the first source sequence (Suits) is combined with every element in the second sequence (Values). This produces all thirteen cards of the first suit. That process is repeated with each element in the first sequence (Suits). The end result is a deck of cards ordered by suits, followed by values. Next, you'll need to build the Suits() and Ranks() methods. Let's start with a really simple set of iterator methods that generate the sequence as an enumerable of strings:
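The listing for those iterator methods was dropped during extraction; they are likely along these lines (one yield return per suit and per rank):

```csharp
// Each method is an iterator: the compiler builds an IEnumerable<string>
// that produces one string per yield return, on demand.
static IEnumerable<string> Suits()
{
    yield return "clubs";
    yield return "diamonds";
    yield return "hearts";
    yield return "spades";
}

static IEnumerable<string> Ranks()
{
    yield return "two";
    yield return "three";
    yield return "four";
    yield return "five";
    yield return "six";
    yield return "seven";
    yield return "eight";
    yield return "nine";
    yield return "ten";
    yield return "jack";
    yield return "queen";
    yield return "king";
    yield return "ace";
}
```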
These two methods both utilize the yield return syntax to produce a sequence as they run. The compiler builds an object that implements IEnumerable<T> and generates the sequence of strings as they are requested. Go ahead and run the sample you've built at this point. It will display all 52 cards in the deck. You may find it very helpful to run this sample under a debugger to observe how the Suits() and Values() methods execute. You can clearly see that each string in each sequence is generated only as it is needed.

Manipulating the Order
Next, let's build a utility method that can perform the shuffle. The first step is to split the deck in two. The Take() and Skip() methods that are part of the LINQ APIs provide that feature for us:
The shuffle method doesn't exist in the standard library, so you'll have to write your own. This new method illustrates several techniques that you'll use with LINQ-based programs, so let's explain each part of the method in steps. The signature for the method creates an extension method:
An extension method is a special purpose static method. You can see the addition of the this modifier on the first argument to the method. That means you call the method as though it were a member method of the type of the first argument. Extension methods can be declared only inside static classes, so let's create a new static class called Extensions for this functionality. You'll add more extension methods as you continue this tutorial, and those will be placed in the same class. This method declaration also follows a standard idiom where the input and output types are IEnumerable<T>. That practice enables LINQ methods to be chained together to perform more complex queries.

using System.Collections.Generic;

namespace LinqFaroShuffle
{
    public static class Extensions
    {
        public static IEnumerable<T> InterleaveSequenceWith<T>(this IEnumerable<T> first, IEnumerable<T> second)
        {
            // implementation coming.
        }
    }
}
You will be enumerating both sequences at once, interleaving the elements, and creating one object. Writing a LINQ method that works with two sequences requires that you understand how IEnumerable works. The IEnumerable interface has one method: GetEnumerator(). The object returned by GetEnumerator() has a method to move to the next element, and a property that retrieves the current element in the sequence. You will use those two members to enumerate the collection and return the elements. This Interleave method will be an iterator method, so instead of building a collection and returning the collection, you'll use the yield return syntax shown above. Here's the implementation of that method:
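The implementation itself was dropped during extraction; a version consistent with the description (two enumerators advanced in lockstep, yielding alternately) is:

```csharp
public static IEnumerable<T> InterleaveSequenceWith<T>(
    this IEnumerable<T> first, IEnumerable<T> second)
{
    var firstIter = first.GetEnumerator();
    var secondIter = second.GetEnumerator();

    // Advance both sequences together, yielding one element from each
    // on every pass through the loop.
    while (firstIter.MoveNext() && secondIter.MoveNext())
    {
        yield return firstIter.Current;
        yield return secondIter.Current;
    }
}
```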
Now that you've written this method, go back to the Main method and shuffle the deck once:
return true; }
This shows a second LINQ idiom: terminal methods. They take a sequence as input (or in this case, two sequences), and return a single scalar value. These methods, when they are used, are always the final method of a query (hence the name). You can see this in action when you use it to determine when the deck is back in its original order. Put the shuffle code inside a loop, and stop when the sequence is back in its original order by applying the SequenceEquals() method. You can see it would always be the final method in any query, because it returns a single value instead of a sequence:
var times = 0; var shuffle = startingDeck;
do { shuffle = shuffle.Take(26).InterleaveSequenceWith(shuffle.Skip(26));
Console.WriteLine(); times++; } while (!startingDeck.SequenceEquals(shuffle));
Console.WriteLine(times);
Run the sample, and see how the deck rearranges on each shuffle, until it returns to its original configuration after 8 iterations.
Optimizations
The sample you've built so far executes an in shuffle, where the top and bottom cards stay the same on each run. Let's make one change, and run an out shuffle, where all 52 cards change position. For an out shuffle, you interleave the deck so that the first card in the bottom half becomes the first card in the deck. That means the last card in the top half becomes the bottom card. That's just a one-line change. Update the call to shuffle to change the order of the top and bottom halves of the deck:
shuffle = shuffle.Skip(26).InterleaveSequenceWith(shuffle.Take(26));
Run the program again, and you'll see that it takes 52 iterations for the deck to reorder itself. You'll also start to notice some serious performance degradation as the program continues to run. There are a number of reasons for this. Let's tackle one of the major causes: inefficient use of lazy evaluation. LINQ queries are evaluated lazily. The sequences are generated only as the elements are requested. Usually, that's a major benefit of LINQ. However, in a use such as this program, this causes exponential growth in execution time. The original deck was generated using a LINQ query. Each shuffle is generated by performing three LINQ queries on the previous deck. All these are performed lazily. That also means they are performed again each time the sequence is requested. By the time you get to the 52nd iteration, you're regenerating the original deck many, many times. Let's write a log to demonstrate this behavior. Then, you'll fix it. Here's a log method that can be appended to any query to mark that the query executed:
public static IEnumerable<T> LogQuery<T>(this IEnumerable<T> sequence, string tag)
{
    // NOTE: only the return statement of this method survived extraction;
    // the signature and logging line are reconstructed from context.
    Console.WriteLine($"Executing Query {tag}");
    return sequence;
}
Console.WriteLine(); var times = 0; var shuffle = startingDeck;
do { /* shuffle = shuffle.Take(26) .LogQuery("Top Half") .InterleaveSequenceWith(shuffle.Skip(26) .LogQuery("Bottom Half")) .LogQuery("Shuffle"); */
shuffle = shuffle.Skip(26) .LogQuery("Bottom Half") .InterleaveSequenceWith(shuffle.Take(26).LogQuery("Top Half")) .LogQuery("Shuffle");
times++; Console.WriteLine(times); } while (!startingDeck.SequenceEquals(shuffle));
Console.WriteLine(times); }
Notice that you don't log every time you access a query. You log only when you create the original query. The program still takes a long time to run, but now you can see why. If you run out of patience running the outer shuffle with logging turned on, switch back to the inner shuffle. You'll still see the lazy evaluation effects. In one run, it executes 2592 queries, including all the value and suit generation. There is an easy way to update this program to avoid all those executions. There are LINQ methods ToArray() and ToList() that cause the query to run, and store the results in an array or a list, respectively. You use these methods to cache the data results of a query rather than execute the source query again. Append the queries that generate the card decks with a call to ToArray() and run the query again:

public static void Main(string[] args)
{
    var startingDeck = (from s in Suits().LogQuery("Suit Generation")
                        from r in Ranks().LogQuery("Value Generation")
                        select new PlayingCard(s, r))
                        .LogQuery("Starting Deck")
                        .ToArray();
Run again, and the inner shuffle is down to 30 queries. Run again with the outer shuffle and you'll see similar improvements (it now executes 162 queries). Don't misinterpret this example by thinking that all queries should run eagerly. This example is designed to highlight the use cases where lazy evaluation can cause performance difficulties. That's because each new arrangement of the deck of cards is built from the previous arrangement. Using lazy evaluation means each new deck configuration is built from the original deck, even executing the code that built the startingDeck. That causes a large amount of extra work. In practice, some algorithms run much better using eager evaluation, and others run much better using lazy evaluation. (In general, lazy evaluation is a much better choice when the data source is a separate process, like a database engine. In those cases, lazy evaluation enables more complex queries to execute with only one round trip to the database process.) LINQ enables both lazy and eager evaluation. Measure, and pick the best choice.
As one final cleanup, let's make a type to represent the card, instead of relying on an anonymous type. Anonymous types are great for lightweight, local types, but in this example, the playing card is one of the main concepts. It should be a concrete type.
This type uses auto-implemented read-only properties, which are set in the constructor and then cannot be modified. It also makes use of the new string interpolation feature that makes it easier to format string output. Update the query that generates the starting deck to use the new type:
Compile and run again. The output is a little cleaner, and the code is a bit clearer and can be extended more easily.
Conclusion
This sample showed you some of the methods used in LINQ, and how to create your own methods that can be easily used with LINQ-enabled code. It also showed you the differences between lazy and eager evaluation, and the effect that decision can have on performance. You learned a bit about one magician's technique. Magicians use the faro shuffle because they can control where every card moves in the deck. In some tricks, the magician has an audience member place a card on top of the deck, and shuffles a few times, knowing where that card goes. Other illusions require the deck set a certain way. A magician will set the deck prior to performing the trick. Then she will shuffle the deck 5 times using an inner shuffle. On stage, she can show what looks like a random deck, shuffle it 3 more times, and have the deck set exactly how she wants.

Microservices hosted in Docker (5/23/2017)
Introduction
This tutorial details the tasks necessary to build and deploy an ASP.NET Core microservice in a Docker container. During the course of this tutorial, you'll learn, among other things, how to generate an ASP.NET Core application using Yeoman. You can view or download the sample app for this topic. For download instructions, see Samples and Tutorials. The steps below will work for an ASP.NET Core application.
Prerequisites
You'll need to set up your machine to run .NET Core. You can find the installation instructions on the .NET Core page. You can run this application on Windows, Ubuntu Linux, macOS or in a Docker container. You'll need to install your favorite code editor. The descriptions below use Visual Studio Code, which is an open source, cross-platform editor. However, you can use whatever tools you are comfortable with. You'll also need to install the Docker engine. See the Docker Installation page for instructions for your platform. Docker can be installed on many Linux distributions, macOS, or Windows. The page referenced above contains sections for each of the available installations. Most of the components to be installed come through a package manager. If you have node.js's package manager npm installed, you can skip this step. Otherwise, install the latest Node.js from nodejs.org, which will install the npm package manager. At this point you will need to install a number of command line tools that support ASP.NET Core development. The command line templates use Yeoman, Bower, Grunt, and Gulp. If you have them installed, that is good; otherwise, type the following into your favorite shell:

npm install -g yo bower grunt-cli gulp
The -g option indicates that it is a global install, and those tools are available system wide. (A local install scopes the package to a single project.) Once you've installed those core tools, you need to install the Yeoman asp.net template generators:

npm install -g generator-aspnet
This command prompts you to select what type of application you want to create. For this microservice, you want the simplest, most lightweight web application possible, so select 'Empty Web Application'. The template will prompt you for a name. Select 'WeatherMicroservice'. The template creates eight files for you:
- A .gitignore, customized for ASP.NET Core applications.
- A Startup.cs file. This contains the basis of the application.
- A Program.cs file. This contains the entry point of the application.
- A WeatherMicroservice.csproj file. This is the build file for the application.
- A Dockerfile. This script creates a Docker image for the application.
- A README.md. This contains links to other ASP.NET Core resources.
- A web.config file. This contains basic configuration information.
- A runtimeconfig.template.json file. This contains debugging settings used by IDEs.
Now you can run the template-generated application. That's done using a series of tools from the command line. The dotnet command runs the tools necessary for .NET development. Each verb executes a different command. The first step is to restore all the dependencies:
dotnet restore
dotnet restore uses the NuGet package manager to install all the necessary packages into the application directory. It also generates a project.json.lock file. This file contains information about each package that is referenced. After restoring all the dependencies, you build the application:
dotnet build
And once you build the application, you run it from the command line:
dotnet run
The default configuration listens on a local port. You can open a browser and navigate to that page and see a "Hello World!" message.

Anatomy of an ASP.NET Core application
Now that you've built the application, let's look at how this functionality is implemented. There are two of the generated files that are particularly interesting at this point: project.json and Startup.cs. Project.json contains information about the project. The two nodes you'll often work with are 'dependencies' and 'frameworks'. The dependencies node lists all the packages that are needed for this application. At the moment, this is a small node, needing only the packages that run the web server. The 'frameworks' node specifies the versions and configurations of the .NET framework that will run this application. The Startup.cs file configures the application. This is a simple microservice, so it doesn't need to configure any dependencies. The Configure method configures the handlers for incoming HTTP requests. The template generates a simple handler that responds to any request with the text 'Hello World!'.
Build a microservice
The service you're going to build will deliver weather reports from anywhere around the globe. In a production application, you'd call some service to retrieve weather data. For our sample, we'll generate a random weather forecast, based on the latitude and longitude supplied on the request.
All the changes you need to make are in the lambda expression defined as the argument to app.Run in your startup class. The argument of the lambda expression is the HttpContext for the request. One of its properties is the Request object. The Request object has a Query property that contains a dictionary of all the values on the query string for the request. The first addition is to find the latitude and longitude values:
The Query dictionary values are of the StringValues type. That type can contain a collection of strings. For your weather service, each value is a single string. That's why there's the call to FirstOrDefault() in the code above. Next, you need to convert the strings to doubles. The method you'll use to convert the string to a double is double.TryParse():
This method leverages C# out parameters to indicate if the input string can be converted to a double. If the string does represent a valid representation of a double, the method returns true, and the result argument contains the value. If the string does not represent a valid double, the method returns false. You can adapt that API with the use of an extension method that returns a nullable double. A nullable value type is a type that represents some value type, and can also hold a missing, or null, value. A nullable type is represented by appending the ? character to the type declaration. Extension methods are methods that are defined as static methods, but by adding the this modifier on the first parameter, can be called as though they are members of that class. Extension methods may only be defined in static classes. Here's the definition of the class containing the extension method for parse:
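The class definition itself was dropped during extraction; given the description above and the default(double?) expression discussed next, it is likely along these lines (the class and method names are assumptions):

```csharp
public static class Extensions
{
    // Adapts double.TryParse into a nullable-returning extension method:
    // a valid string yields its value; anything else yields null.
    public static double? TryParse(this string input)
    {
        double result;
        if (double.TryParse(input, out result))
            return result;
        return default(double?);
    }
}
```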
The default(double?) expression returns the default value for the double? type. That default value is the null (or missing) value. You can use this extension method to convert the query string arguments into the double type:
To easily test the parsing code, update the response to include the values of the arguments:
At this point, you can run the web application and see if your parsing code is working. Add values to the web request in a browser, and you should see the updated results.

Build a random weather forecast
Your next task is to build a random weather forecast. Let's start with a data container that holds the values you'd want for a weather forecast:

public class WeatherReport
{
    private static readonly string[] PossibleConditions = new string[]
    {
        "Sunny", "Mostly Sunny", "Partly Sunny",
        "Partly Cloudy", "Mostly Cloudy", "Rain"
    };
Next, build a constructor that randomly sets those values. This constructor uses the values for the latitude and longitude to seed the random number generator. That means the forecast for the same location is the same. If you change the arguments for the latitude and longitude, you'll get a different forecast (because you start with a different seed).
You can now generate the 5-day forecast in your response method and add it to the response packet. After you've constructed the response packet, you set the content type to application/json, and write the string. The application now runs and returns random forecasts.
FROM microsoft/dotnet:1.1-sdk-msbuild
Docker allows you to configure a machine image based on a source template. That means you don't have to supply all the machine parameters when you start; you only need to supply any changes. The changes here will be to include our application. In this first sample, we'll use the 1.1-sdk-msbuild version of the dotnet image. This is the easiest way to create a working Docker environment. This image includes the dotnet core runtime and the dotnet SDK. That makes it easier to get started and build, but does create a larger image. The next five lines set up and build your application:
WORKDIR /app
COPY WeatherMicroservice.csproj . RUN dotnet restore
COPY . .
This will copy the project file from the current directory to the Docker VM, and restore all the packages. Using the dotnet CLI means that the Docker image must include the .NET Core SDK. After that, the rest of your application gets copied, and the dotnet publish command builds and packages your application. The final line of the file runs the application:

ENTRYPOINT ["dotnet", "out/WeatherMicroservice.dll", "--server.urls", ""]
This configured port is referenced in the --server.urls argument to dotnet on the last line of the Dockerfile. The ENTRYPOINT command informs Docker what command and command line options start the service.
bin/* obj/* out/*
You build the image using the docker build command. Run the following command from the directory containing your code.
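The command itself was dropped during extraction; based on the tag described in the next paragraph, it was likely:

```shell
# Build the container image from the Dockerfile in the current directory,
# tagging it weather-microservice.
docker build -t weather-microservice .
```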
This command builds the container image based on all the information in your Dockerfile. The -t argument provides a tag, or name, for this container image. In the command line above, the tag used for the Docker container is weather-microservice. When this command completes, you have a container ready to run your new service. Run the following command to start the container and launch your service:
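The run command was also dropped; given the options explained in the next paragraph (-d, the 80-to-5000 port mapping, --name) and the container name hello-docker used later in this tutorial, it was likely:

```shell
# Start the service detached, forwarding host port 80 to container port 5000.
docker run -d -p 80:5000 --name hello-docker weather-microservice
```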
The -d option means to run the container detached from the current terminal. That means you won't see the command output in your terminal. The -p option indicates the port mapping between the service and the host. Here it says that any incoming request on port 80 should be forwarded to port 5000 on the container. Using 5000 matches the port your service is listening on from the command line arguments specified in the Dockerfile above. The --name argument names your running container. It's a convenient name you can use to work with that container:
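Judging from the --sig-proxy explanation that follows, the attach command elided here was likely:

```shell
# Attach to the running container's output without forwarding Ctrl-C.
docker attach --sig-proxy=false hello-docker
```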
The --sig-proxy=false argument means that Ctrl-C commands do not get sent to the container process, but rather stop the docker attach command. The final argument is the name given to the container in the docker run command. Once attached, you can see the diagnostic messages from the attached running container. Press Ctrl-C to stop the attach process. When you are done working with your container, you can stop it:
The container and image are still available for you to restart. If you want to remove the container from your machine, you use this command:
docker rm hello-docker
If you want to remove unused images from your machine, you use this command:
Conclusion
In this tutorial, you built an ASP.NET Core microservice, and added a few simple features. You built a Docker container image for that service, and ran that container on your machine. You attached a terminal window to the service, and saw the diagnostic messages from your service. Along the way, you saw several features of the C# language in action.
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
Every array has a number of dimensions, a shape, a data type, and strides. Strides are integer numbers describing, for each dimension, the byte step in the contiguous block of memory. The address of an item in the array is a linear combination of its indices: the coefficients are the strides.
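That linear combination can be illustrated in plain Python, with no NumPy required (the helper name flat_offset is just for this example):

```python
def flat_offset(indices, strides):
    """Byte offset of an array element: a linear combination of the
    indices, with the strides as the coefficients."""
    return sum(i * s for i, s in zip(indices, strides))

# A C-ordered 10x10 array of float64 (8-byte) items has strides (80, 8):
# advancing one row skips 80 bytes, advancing one column skips 8 bytes.
offset = flat_offset((3, 2), (80, 8))
print(offset)  # 3*80 + 2*8 = 256
```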
import numpy as np
id = lambda x: x.__array_interface__['data'][0]
x = np.zeros(10); x.strides
This vector contains float64 (8 bytes) items: one needs to go 8 bytes forward to go from one item to the next.
y = np.zeros((10, 10)); y.strides
We create a new array pointing to the same memory block as a, but with a different shape. The strides are such that this array looks like it is a vertically tiled version of a. NumPy is tricked: it thinks b is a 2D n * n array with n^2 elements, whereas the data buffer really contains only n elements.
n = 1000; a = np.arange(n)
b = np.lib.stride_tricks.as_strided(a, (n, n), (0, 4))
b
b.size, b.shape, b.nbytes
%timeit b * b.T
This first version does not involve any copy, as b and b.T are arrays pointing to the same data buffer in memory, but with different strides.
%timeit np.tile(a, (n, 1)) * np.tile(a[:, np.newaxis], (1, n))
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
This course will introduce you to the interfaces and features of Microsoft Office 2010 Word, Excel, PowerPoint, Outlook, and Access. You will learn about the features that are shared between all products in the Office suite, as well as the new features that are product specific.
So, you would go to NEW in the file menu,
select Project Workspace
Give you project a new name.
Copy all of the header files and C/C++ files into the project directory (to preserve the original files in case of problems).
Start inserting the C/C++ files into the project.
Build the executable.
Here are the things you may need to look for:
The old program was made with the intention of being compiled with GCC; this being the case, you have a standard GCC makefile. You're going to want to either print it (easier) or open and read the file itself. Most lines probably describe making '.o' files (ie: search.o); then there is a line that takes all of the '.o' files and puts them together to make the executable. The .o files are object files, basically steps in compilation. If you had multiple files in MSVC, it would compile the C code to get object files, then link them with the standard libraries to get the actual program.
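A minimal GCC makefile of the kind described might look like this (the program and file names are hypothetical):

```makefile
CC = gcc
OBJS = main.o search.o

# Link the object files together to make the executable.
myprog: $(OBJS)
	$(CC) -o myprog $(OBJS)

# Each .c file is compiled into a .o object file.
%.o: %.c
	$(CC) -c $< -o $@
```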
What you are going to need to look for in the makefile is whether the code makes a libSOMETHING.a (e.g. libmsql.a). If it does, then it is compiling a library file, and you are going to need to build a library project in MSVC, then include your compiled library in the link options.
Another thing to be aware of: GCC is mostly found on UNIX systems (yes, I know there is a port to DOS), and the code may have requirements for UNIX-specific libraries. If it does, find out what the libraries are for and whether there is an equivalent set in the MSVC world.
You are also going to need to check all of the header files the program needs. Some header files require libraries that are not automatically linked at compile time.
Final note: as a console application, you should make sure you include:

```c
#include <windows.h>
```

in the C file that contains main().
Good luck!
https://www.experts-exchange.com/questions/10001214/Building-ANSI-C-code-with-MVC-4-0.html
Tonight’s Hack
So in preparation for the upcoming Mindcamp, I wanted to glue together the code I have to upload to flickr and Apple’s folder actions to make PhotoBooth automatically upload pictures to Flickr and put them in a PhotoBooth set.
To keep the applescript as small and troublefree as possible, I took an existing applescript folder action (in this case, copy to jpeg) and hacked it up to call a python shell script where I make a few calls using the FlickrAPI module.
```applescript
-- the list of file types which will be processed
-- eg: {"PICT", "JPEG", "TIFF", "GIFf"}
property type_list : {"JPEG"}
-- since file types are optional in Mac OS X,
-- check the name extension if there is no file type
-- NOTE: do not use periods (.) with the items in the name extensions list
-- eg: {"txt", "text"}, NOT: {".txt", ".text"}
property extension_list : {"jpg", "jpeg"}

on adding folder items to this_folder after receiving these_items
	try
		repeat with this_item in these_items
			set the item_info to the info for this_item
			if (the file type of the item_info is in the type_list) or (the name extension of the item_info is in the extension_list) then
				set theCmd to "/Users/erics/bin/pb_flickr.py " & (quoted form of POSIX path of (this_item as string)) & "&"
				--display dialog (theCmd as string)
				do shell script theCmd
			end if
		end repeat
	on error error_message number error_number
		if the error_number is not -128 then
			tell application "Finder"
				activate
				display dialog error_message buttons {"Cancel"} default button 1 giving up after 120
			end tell
		end if
	end try
end adding folder items to
```
The Python code is pretty straightforward; the only real tricks are getting API keys, secrets, and tokens. I'm not sure I remember what I did to get those, since I did it a year ago.
```python
#!/usr/local/bin/python
import os
import sys
from flickrapi import FlickrAPI

set_id = '72157594368688426'  # your set id here...

class flickrprefs:
    api_key = 'your key here'
    api_secret = 'your secret here'
    auth_token = 'your token here'

if __name__ == '__main__':
    argv = sys.argv
    if len(argv) < 2:
        sys.exit(0)

    pth = argv[1]

    prefs = flickrprefs()
    try:
        fapi = FlickrAPI(prefs.api_key, prefs.api_secret)
        rsp = fapi.auth_checkToken(api_key=prefs.api_key,
                                   auth_token=prefs.auth_token)
        if not rsp:
            # token isn't valid.
            sys.exit(0)
        rsp = fapi.upload(filename=pth, description='', is_public="1",
                          api_key=prefs.api_key,
                          auth_token=prefs.auth_token,
                          tags='photobooth')
        rsp = fapi.photosets_addPhoto(auth_token=prefs.auth_token,
                                      api_key=prefs.api_key,
                                      photoset_id=set_id,
                                      photo_id=rsp.photoid[0].elementText)
    except Exception, msg:
        # print msg
        pass  # I don't really care if I'm not connected to the net.
```
Finally, it’s a simple matter of dropping the applescript in the ~/Library/Scripts/Folder Action Scripts directory and enabling it with the Applescript Folder Actions Setup app.
http://www.wiredfool.com/2006/11/09/tonights-hack/
```cpp
#include <LCD4Bit.h>

void setup() {
  LCD4Bit lcd = LCD4Bit(1);
  lcd.init();
  delay(500);
  lcd.clear();
  delay(500);
  lcd.printIn("hello world");
  delay(500);
  lcd.commandWrite(0x1c);
  delay(500);
}

void loop() {}
```
```cpp
void setup() {
  LCD4Bit lcd = LCD4Bit(2);  // It's 2-line internally
  lcd.init();
  delay(50);
  lcd.clear();
  delay(50);
  lcd.cursorTo(1, 0);        // Home on first line
  delay(50);
  lcd.printIn("hello wo");   // First part of message
  delay(50);
  lcd.cursorTo(2, 0);        // Home on second line
  delay(50);
  lcd.printIn("rld! foo");   // Second part of message
  delay(50);
}

void loop() {}
```
Did anyone ever work this out? It must be an init setting...
Most 16 by 1 LCD's work like 8 by 2 ...
The reason they act like this is so that the same chip can be used in lots of different LCD configurations.
```cpp
#include <LiquidCrystal.h>

// This is obviously my connection; since I use an Ethernet shield I can't use 10..13
LiquidCrystal lcd(7, 6, 5, 4, 3, 2);
// LiquidCrystal lcd(12, 11, 5, 4, 3, 2);  // normal connections

void setup() {
  lcd.begin(16, 2);
  lcd.setCursor(0, 0);
  lcd.print("12345678");
  lcd.setCursor(0, 1);
  lcd.print("90123456");
}

void loop() {}
```
http://forum.arduino.cc/index.php?topic=18172.msg132281
We have an integration with a third party that uses a callback function that is passed into one of their methods. The problem is that the callback loses its context and can no longer access `this`.
Within the controller:
```javascript
makeRequest() {
  let response = thirdParty.api.call(this.params, this.callback)
}

callback(response) {
  console.log(response)
  if (response.success) {
    this.success(response) // Won't work because "this" is now within the third party api
  }
}

success(response) {
  // do lots of things within the controller
}
```
Before Stimulus, the callback called `nameSpace.callback()`, but I don't know how to access the controller namespace. For example, in the contrived example above, I need to be able to pass the results into the `success` controller method.
Should this be done by triggering an event?
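For context, the standard JavaScript-level fix for a callback losing its receiver is to bind it before handing it to the third party. Here is a minimal sketch with a stand-in API object (`thirdPartyApi` and its call signature are assumptions, not the real integration):

```javascript
class Controller {
  constructor() {
    this.results = []
  }

  makeRequest(api) {
    // Bind the callback so "this" still refers to the controller
    // when the third-party code invokes it later.
    api.call({}, this.callback.bind(this))
  }

  callback(response) {
    if (response.success) {
      this.success(response)
    }
  }

  success(response) {
    this.results.push(response) // do lots of things within the controller
  }
}

// Stand-in for the third-party API: it simply invokes the callback.
const thirdPartyApi = {
  call(params, cb) {
    cb({ success: true, data: 42 })
  }
}

const controller = new Controller()
controller.makeRequest(thirdPartyApi)
console.log(controller.results.length) // 1
```

An arrow-function wrapper, `response => this.callback(response)`, achieves the same thing.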
https://discourse.stimulusjs.org/t/passing-a-callback-function/584
The CheckWidget class provides an abstract base class for all test widgets which support 'checked' and 'unchecked' states. More...
#include <QtUiTest>
This class is under development and is subject to change.
The CheckWidget class provides an abstract base class for all test widgets which support 'checked' and 'unchecked' states.
QtUiTest::CheckWidget exposes the current check state of widgets which can be checked or unchecked.
Examples of widgets suitable for this interface include QCheckBox and QAbstractButton.
Returns the current check state of this widget.
See also QCheckBox::checkState() and setCheckState().
Returns true if this widget has three possible states, i.e. the widget can be in state Qt::PartiallyChecked.
The base implementation returns false.
See also QCheckBox::isTristate().
Simulates the user input necessary to set the current check state to state, returning true on success.
The default implementation does nothing and returns false.
See also QCheckBox::setCheckState() and checkState().
This signal is emitted when the check state of this widget changes to state. state is compatible with Qt::CheckState.
https://doc.qt.io/archives/qtextended4.4/qtuitest-checkwidget.html
Configuring babel for Node.js/Express Server
GaneshMani
Babel Setup
Firstly, we need to install a few packages to set up Babel in the project.
- babel-core - the main package needed to run any Babel setup or configuration
- babel-node - the package used to transpile from an ES standard down to plain JavaScript
- babel-preset-env - this package lets us use upcoming features that Node.js doesn't understand yet; some features are new and take time to be implemented in Node.js by default
Best Practice
Keep two separate configurations:
- One for production
- One for development
Development Setup
$ npm init --yes
$ npm install --save express body-parser cors
$ npm install --save nodemon
Here, we initialize the package.json and install the basic express server with nodemon.
Next, we need to install @babel/core and @babel/node packages.
$ npm install @babel/core @babel/node --save-dev
After that, we need to create a file called .babelrc which contains all the babel configuration.
{ "presets": [ "@babel/preset-env" ] }
Now the setup is ready. We need to create a script which transpiles our code at run time.
"scripts": { "dev": "nodemon --exec babel-node index.js" }
Finally, add the following code in the index.js and run the script.
```javascript
import express from 'express';
import bodyParser from 'body-parser';

const app = express();
app.use(bodyParser.json());

app.get('/', (req, res) => {
  res.send("Hello Babel")
})

app.listen(4000, () => {
  console.log(`app is listening to port 4000`);
})
```
$ npm run dev
Finally, you will see the output Hello Babel.
Production Setup
Mainly, we cannot transpile the code at run time in production. So what we need to do is compile the code into vanilla JavaScript and deploy the compiled version to the Node.js production server.
Add the following command for building a compiled version of code
"scripts": { "build" : "babel index.js --out-file index-compiled.js", "dev": "nodemon --exec babel-node index.js" } :-)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/ganeshmani/configuring-babel-for-node-js-express-server-35hp
Test Drive
You can use Smartface on-device Android and iOS emulators to run your code instantly and seamlessly on a real device.
Install Smartface On-Device Emulator
Downloading the Smartface On-Device Emulator for iOS and Android
If you haven't installed the Smartface On-Device Emulator, get it from the internal Enterprise App Store on your phone. Please contact support to retrieve the credentials.
Activating the Smartface iOS Widget
You can activate the Smartface Widget on iOS by following the steps below.
Note: Once you have activated the widget, you won't have to activate it again, even if you reinstall the Smartface On-Device Emulator.
Locating the On-Device Emulator on iOS and Android
When you install the on-device emulator, it will appear in the application menu of your device.
Run on Device (On-Device Android and iOS)
To deploy your app to your device, there are two different routes you can take.
Run Your App on Device via Wireless Connection
To connect your on-device emulator with Smartface IDE via a wireless connection, your computer and mobile device need to be connected to the same local network.
Then to test your application press on the Connect Wireless via QR Code button under the
Run > Run on Device menu.
It will generate a unique QR Code for your workspace to be able to emulate your project.
Open Smartface on-device Android/iOS emulator from your device and scan the QR Code. It will download all the resources and codes and load your application in the Smartface on-device emulator. After initializing and downloading steps you will be able to run your application for testing purposes.
Run Your App on Device via Wired Connection
Smartface also provides a way to connect and test your project with your device through a USB Cable. Let's have a look at the steps you should take to achieve this behavior.
Android
For Android, there are two things that have to be covered on your side.
- Android's USB debugging feature needs to be enabled on your mobile device.
- adb must be installed on your computer.
To enable USB debugging on your mobile device follow the steps below:
- On the device, go to Settings > About (device).
- Tap the Build number seven times to make Settings > Developer options available.
- Then enable the USB Debugging option.
You might also want to enable the Stay awake option, to prevent your Android device from sleeping while plugged into the USB port.
To install adb on your computer follow the steps below:
First, connect your phone to your system with a data cable. You will get a prompt asking whether or not you want to allow USB debugging. Check 'Always allow from this computer' and tap 'OK.'
Windows
If Android Studio is installed on your system, adb is already installed, so the only thing you need to do is add adb to your system PATH. To achieve this, you can skip the manual installation steps below and refer to Adding adb to your System PATH at the end of this section.
Manual Installation:
- Download Platform Tools:
- Extract the contents of this ZIP file into an easily accessible folder (such as C:\platform-tools)
- Open up a new command prompt in that folder.
- In the Command Prompt window, enter the following command to launch the ADB daemon (on Windows the leading ./ is not needed):
adb devices
- Finally, for the adb command to be recognized in any folder you are in, adb needs to be added to your system PATH. The following section explains how to do this.
Adding adb to your System PATH: open the Environment Variables dialog, edit the Path variable, and add your path to platform-tools (e.g. C:\platform-tools). Click OK, then close all remaining windows by clicking OK.
Linux
If Android Studio is installed on your system, adb is already installed, so you can skip the manual installation steps below.
For Debian-based Linux users, type the following command to install ADB:
sudo apt install android-tools-adb
For Fedora/Suse-based Linux users, type one of the following commands to install ADB:
sudo dnf install android-tools
sudo yum install android-tools
For Arch-based Linux users, type the following command to install ADB:
sudo pacman -S android-tools
Then open up a new terminal window on your computer and execute the
adb start-server command.
MacOS
If Android Studio is installed on your system, adb is already installed, so you can skip the manual installation steps below.
You can use one single
homebrew command to install adb:
brew install --cask android-platform-tools
Alternatively, manual installation steps below can be followed:
- Download Platform Tools:
- Extract the ZIP file to an easily-accessible folder (like the Desktop for example)
- Open Terminal
- To browse to the folder you extracted ADB into, enter the following command:
cd /path/to/extracted/folder/
- For example, on my Mac it was this: cd /Users/MyUsername/Desktop/platform-tools/
- Once the Terminal is in the same folder your ADB tools are in, you can execute the following command to launch the ADB daemon:
./adb devices
Now, after configuring your adb installation, the last thing you need to do is connect your device to your computer with a USB cable, then click the Connect Wired via ADB (Android Only) menu item under the
Run > Run on Device menu.
This way, you can test your project on a real device without any need for internet access.
iOS
For iOS, to deploy and test your project on a mobile device via a USB cable we will be using the Internet Sharing feature of macOS; therefore a system with macOS installed is required. Though your data will be transferred through the USB cable, for iOS you will still need a common internet connection between your mobile device and your system.
Here are the steps you should take to run iPhone locally on Mac:
- Plug your iPhone into your Mac via USB
- On your Mac, go to
System preferences → Sharing
- Click and select Internet Sharing on the left-hand side and make sure that you have selected iPhone USB from the
To computers using: menu. (Make sure Internet Sharing is On.)
- Check and note your dispatcher connection port from the Smartface IDE. To learn where to find port number you can refer to document below:
- Then you need to check your system's IP address. It can be found at
System Preferences > Network on your Mac.
- To match your devices, one last thing to do is open your mobile device and enter the address in your favorite browser.
- Finally, to test your project in your on-device emulator, click
Run > Run On Device > Connect Wireless via QR Code and scan the newly generated QR code from your mobile device.
If you use iOS 14 or later, you can also use the Haptic Touch feature to fast-access the Update button. Simply hold the Smartface icon:
Smartface IDE also gives you three different ways to update the application that runs on your on-device emulator.
The first two live under the
Run > Run on Device menu; from there, Select device(s) to Apply Changes and Apply Changes to Connected Devices (for all of them at once) can be used to update your on-device emulator with the latest changes from your project.
And the third one is the Apply Changes button placed at the bottom right of your IDE. Its functionality is exactly the same as the Apply Changes to Connected Devices menu item.
For this, you can also use a keyboard shortcut. The default keybinding for applying changes is ctrlcmd+alt+v. To change the default keybindings, press
ctrlcmd+p and type >Open Keyboard Shortcuts. In the newly opened Keyboard Shortcuts page, search for Smartface: Apply Changes to Connected Devices and edit it from there.
Clearing On-Device Emulator Contents
Clear allows you to completely remove the downloaded files for Smartface on-device emulator. It cleans up the cached files.
Determine if the Current Code is Running on On-Device Emulator
In some cases, you might want parts of your code to run only on the Smartface On-Device Emulator, since logs can pile up and slow down the application. The common cases are:
- Logging for debug purposes
- Published-application-only concepts like Firebase and push notifications
- Plugins (some might need to run on the published app only)
- Internal concepts like auto-filling passwords for easier development or developer settings
Example:
```javascript
import System from "@smartface/native/system";

if (System.isEmulator) {
  console.log("Debug log for X purpose");
}
```
or within a service call, to enable logs only in the emulator:
```javascript
import System from '@smartface/native/system';
import ServiceCall from '@smartface/extension-utils/service-call';

const service = new ServiceCall({
  baseUrl: '',
  logEnabled: System.isEmulator // Only show logs if the application is running in the emulator.
});

export default service;
```
Simply use System.isEmulator property to detect such cases.
https://docs.smartface.io/7.0.0/smartface-getting-started/test-drive/
While performing a refactoring, I ended up creating a method like the example below. The datatype has been changed for simplicity's sake.
I previously had an assignment statement like this:
```csharp
MyObject myVar = new MyObject();
```
```csharp
private static new MyObject CreateSomething()
{
    return new MyObject { "Something New" };
}
```
My question is about the new modifier in the private static new method signature above: what does new do there, and how does it differ from override?
New keyword reference from MSDN:
Here is an example I found on the net from a Microsoft MVP that made good sense: Link to Original
```csharp
public class A
{
    public virtual void One();
    public void Two();
}

public class B : A
{
    public override void One();
    public new void Two();
}

B b = new B();
A a = b as A;

a.One(); // Calls implementation in B
a.Two(); // Calls implementation in A
b.One(); // Calls implementation in B
b.Two(); // Calls implementation in B
```
Override can only be used in very specific cases. From MSDN:
You cannot override a non-virtual or static method. The overridden base method must be virtual, abstract, or override.
So the 'new' keyword is needed to allow you to 'override' non-virtual and static methods.
https://codedump.io/share/a6937wTB8OEN/1/new-keyword-in-method-signature
We keep close track of the evaluations we ask students to write up at the end of each JDEtips class. I personally read every evaluation, and often discuss student comments with our JDE services team. Any instructor who doesn’t receive a passing score is coached on how to improve, or simply not utilized again. Fortunately, that’s a pretty rare occurrence.
The one answer that I pay the most attention to is the one answering the question: “What is the overall value of the class to you?” We ask for a rating between 1 (low) and 5 (high).
This week I analyzed the responses from the last 15 months of public classes. I compared the responses of virtual students vs. on-premise students.
We ask students to rate their skill level before class as beginner, intermediate, or advanced, so I also wanted to find out if the three groups of students had different views on the value of our classes.
Here are the results:
50 virtual students: 4.5
102 on-premise students: 4.5
31 beginner students: 4.6
64 intermediate students: 4.4
57 advanced students: 4.5
4.5 out of 5? That’s like scoring a 90 average. I’m quite happy with those results. And it gives us a floor to exceed going forward.
The data indicate that:
JDEtips University started in 2001 with one class—Advanced Pricing. When we started teaching JDE classes, I first tailored the classes towards advanced students only—requiring that students first take the standard course available from JD Edwards.
However, I realized early on that motivated and capable beginners could benefit from the class—due to the fact that we had to start at the beginning.
We couldn’t just jump into intermediate and advanced topics without building up a price schedule from scratch. The first six hours of the class was dedicated to establishing the foundation for more advanced topics. So, we opened up the class to all levels of students.
I am pleasantly surprised that even students who rate themselves as advanced are giving us high marks. I used to tell prospective students that our classes were aimed at clients with ½ to 5 years experience.
With this data, I think we can say that all students benefit. Beginners advance to the intermediate level. Intermediate students move to the advanced level. Advanced students grow their knowledge to the mastery level.
The virtual vs. on-premise data is also interesting. We started out in 2010 with virtual and on-premise students attending the same sessions. At times, there were communication issues with the online students being unable to hear the on-premise students.
Starting late last year, we moved to a different model. Public classes are now either 100% virtual, or 100% on-premise. This has worked much better for students and instructors.
Although I prefer the interaction afforded by on-premise classes myself, there are compelling reasons why virtual training has been well accepted. Economic reasons, such as no travel expense, are predominant. Eclipsing the lack of travel expense, I feel, is the increased flexibility of virtual classes.
Our virtual classes are typically scheduled four hours per day, instead of eight hours per day. Although that means the class might take eight to ten days instead of four or five days to complete, this allows students to keep up with their most critical job responsibilities.
Well, that’s enough math for one day, don’t you think?
I hope you’ll check out JDEtips University the next time you are looking for great JD Edwards education.
JDEtips University Public Schedule
JDEtips University Private Classes
For more information on JDEtips University, contact Sandy.Acker@ERPtips.com.
http://it.toolbox.com/blogs/jdedwards/training-evaluation-scores-an-analysis-of-jdetips-university-results-50855
EVENT(3) BSD Programmer's Manual EVENT(3)
event_init, event_dispatch, event_loop, event_loopexit, event_set, event_base_dispatch, event_base_loop, event_base_loopexit, event_base_set, event_base_free, event_add, event_del, event, - execute a function when a specific event occurs
#include <sys/time.h>
#include <event.h>

struct event_base *
event_init(void);

int
event_dispatch(void);

int
event_loop(int flags);

int
event_loopexit(struct timeval *tv);

int (*event_sigcb)(void);
volatile sig_atomic_t event_gotsig;

These variables are used to process received signals. The callback returns 1 when no events are registered any more. It can return -1 to indicate an error to the event library, causing event_dispatch() to terminate with errno set to EINTR.

The event_loop() function provides an interface for single-pass execution of pending events. The flags EVLOOP_ONCE and EVLOOP_NONBLOCK are recognized. The event_loopexit() function allows the loop to be terminated after some amount of time has passed; the parameter indicates the time after which the loop should terminate. It is the responsibility of the caller to provide these functions with pre-allocated event structures.

The event type will be a persistent EV_SIGNAL; that is, signal_set() adds EV_PERSIST. It is possible to disable support for kqueue, poll, or select by setting the environment variables EVENT_NOKQUEUE, EVENT_NOPOLL, or EVENT_NOSELECT, respectively. By setting the environment variable EVENT_SHOW_METHOD, libevent displays the kernel notification method that it uses.
Libevent has experimental support for thread-safe events. event_base_free() should be used to free memory associated with the event base when it is no longer needed. When initializing a bufferevent with bufferevent_new(), the callback has the following form: void (*cb)(struct bufferevent *bufev, short what, void *arg). The argument is specified by the fourth parameter cbarg. A bufferevent struct pointer is returned on success, NULL on error. The bufferevent should be configured before being enabled for the first time. This section does not document all the possible function calls; please check event.h for the public interfaces.
Upon successful completion event_add() and event_del() return 0. Other- wise, -1 is returned and the global variable errno is set to indicate the error.
kqueue(2), poll(2), select(2), evdns(3), timeout(9)
The event API manpage is based on the timeout(9) manpage by Artur Grabowski. Support for real-time signals is due to Taral.

August 8, 2000
http://mirbsd.mirsolutions.de/htman/sparc/man3/bufferevent_read.htm
Re: After a full day of computers...
From: David Wright (david_c_wright_at_hotmail.com)
Date: 10/16/04
Date: Sat, 16 Oct 2004 11:41:28 +0200
JPB wrote:
> I'm getting too tired to think, so please help me fix my last two issues
> - likely simple ones for the old hands here. To start with, both the
> computers in question run SuSE 9.1 Pro and nothing but.
>
> Firstly, I got my wireless cards to work; finally. Now all is well -
> except when I input the WEP passphrase in Yast and activate same in the
> router, both computers get cut off; permanently. I then have to go back
> to an ethernet connection (ifup eth0), so as to be able to access the
> router and get rid of encryption again. Then, bingo, it all works again.
> So where's the catch?
>
> All I've done is set the WEP passphrase (tried both 64 & 128-bit
> encryption with the same unsatisfactory result) in the router. Then I go to
> YAST and input the same in the "Passphrase" field in "Wireless
> Settings". I only need one key and the authentication mode is "open". --
> Please advise...
Can't really help, didn't have much luck with Wireless, it was too slow and
unreliable under Windows, so I went back to cabled before I got into
Linux... After entering the passphrase, have you tried restarting the
wireless interface (or rebooting)? It could be that somewhere in the
configuration, although the passphrase has been entered, it is not being
initialised properly - if turning WEP off again on the router restores the
connection, then it suggests that WEP is not being turned on in Linux...
> The other issue is LAN file sharing. I've had no trouble getting the
> printer to work (connected to one machine & the other accessing it
> through the CUPS server). Easy, that one.
>
> I've set up a NFS server and a client on each machine, with the
> wildcards set to the name of the other computer. Allocated directories
> to share (/home) and went with defaults in "options"
> (ro,root_squash,sync). The machines can see each other, as well as the
> shared directories, with no problem. However, the exported directories
> cannot be mounted - if I try to do that in Konqueror, I get just an
> empty directory. There is a message on boot stating that "permission was
> denied" by the server to mount the directories. -- So no doubt I've
> stuffed up the necessary permissions somewhere. Please point me towards
> the right solution - I'm sure it must be something despicably basic! :-)
You need to have the same users with the same user IDs (that's the number,
not the name) on both machines. If it is fairly static, you can do this
manually, but there are automated ways of doing that: NIS, NIS+, LDAP etc.
For a small home network, that probably isn't necessary.
It is probably better to share the directories from one machine with
another. Having reciprocal shares between two machines can cause problems
during boot-up - delays at the minimum on the first machine booted, while
it times out looking for the second...
Personally, I would designate one machine as the server, the machine that is
usually booted first, and set the shared space on that machine (this isn't
necessary, but makes life and administration a lot easier).
You need to check the users have the same usernames and ID's on both
machines. If the ID's don't match, they won't be able to access "their"
shares on the other machine.
I'm guessing you have done some or all of the following, but just to recap:
1. You need to create a mountpoint on the client machine. This is an empty
directory, onto which the share from the serving machine will be overlaid.
For example, create a directory called /netdata. The netdata directory
needs to have permissions set to allow the other users to see it. Either
chmod it to 777 access to let everybody into it, or change the owner and
group as appropriate. (man chmod, man chown for a description of changing
access and owner respectively)
2. On the server, create the NFS share you want to export. You need to set
the options here as well for read, write and root_squash as appropriate.
YaST is probably the simplest method for setting them up (YaST-Network
Services->NFS Server).
3. Once the shares have been created, they need to be mapped onto the mount
point on your client machine(s). Again, using YaST is probably the simplest
method (YaST->Network Services->NFS Client): specify the shares you want to
import and where you want to mount them (in our example /netdata). You can
specify the options for the client as well; they
can give different access options, but can, I believe, only be more
restrictive than those on the server.
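For reference, the export that YaST writes on the server ends up as a line in
/etc/exports; it looks something like this (the client hostname here is
hypothetical):

```
# /etc/exports on the serving machine
/home    client.example.com(ro,root_squash,sync)
```

If you edit that file by hand, run exportfs -ra on the server to reload it;
on the client, the share can also be mounted manually with
mount -t nfs server:/home /netdata.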
If everything has gone according to plan, you should now be able to see the
server directories mapped under /netdata on the client machine.
The options you gave in your original post should allow read-only access to
the share to all users. Root_squash means that root doesn't get root
privileges to the NFS mount, but its privileges get set to those of the
anonuid and anongid options.
I hope that helps a bit. If you have tried the above and it is still not
working, please post some more information.
NOTE: You can't/shouldn't export the same directories to the same
mountpoints on both machines. E.g. you can't export /home on both machines
and mount them as /home, that would cause errors, at the very least!
Dave
http://linux.derkeiler.com/Newsgroups/alt.os.linux.suse/2004-10/1870.html
The Extensible Markup Language (XML) standard was released in 1998. It quickly became the standard way to represent and exchange almost every kind of data, from books to genes to function calls.
XML succeeded where other past "standard" data formats failed (including XML's ancestor, SGML, the Standard Generalized Markup Language). There are three reasons for XML's success: it is text-based instead of binary, it is simple rather than complex, and it bears a superficial resemblance to HTML.
Unix realized nearly 30 years before XML that humans primarily interact with computers through text. Thus text files are the only files any system is guaranteed to be able to read and write. Because XML is text, programmers can easily make legacy systems emit XML reports.
As we'll see, a lot of complexity has arisen around XML, but the XML standard itself is very simple. There are very few things that can appear in an XML document, but from those basic building blocks you can build extremely complex systems.
XML is not HTML, but XML and HTML share a common ancestor: SGML. The superficial resemblance meant that the millions of programmers who had to learn HTML to put data on the web were able to learn (and accept) XML more easily.
Example 22-1 shows a simple XML document.
<?xml version="1.0" encoding="UTF-8"?> <books> <!-- Programming Perl 3ed --> <book id="1"> > <isbn>0-596-00027-8</isbn> </book> <!-- Perl & LWP --> <book id="2"> <title>Perl & </title> <edition>1</edition> <authors> <author> <firstname>Sean</firstname> <lastname>Burke</lastname> </author> </authors> <isbn>0-596-00178-9</isbn> </book> <book id="3"> <!-- Anonymous Perl --> <title>Anonymous Perl</title> <edition>1</edition> <authors /> <isbn>0-555-00178-0</isbn> </book> </books>
At first glance, XML looks a lot like HTML: there are elements (e.g., <book> </book>), entities (e.g., &amp; and &lt;), and comments (e.g., <!-- Perl & LWP -->). Unlike HTML, XML doesn't define a standard set of elements, and it defines only a minimal set of entities (for single quotes, double quotes, less-than, greater-than, and ampersand). The XML standard specifies only syntactic building blocks like the < and > around elements. It's up to you to create the vocabulary, that is, the element and attribute names like books, authors, etc., and how they nest.
XML's opening and closing elements are familiar from HTML:
<book> </book>
XML adds a variation for empty elements (those with no text or other elements between the opening and closing tags):
<author />
Elements may have attributes, as in:
<book id="1">
Unlike HTML, the case of XML elements, entities, and attributes matters: <Book> and <book> start two different elements. All attributes must be quoted, either with single or double quotes (id='1' versus id="1"). Unicode letters, underscores, hyphens, periods, and numbers are all acceptable in element and attribute names, but the first character of a name must be a letter or an underscore. Colons are allowed only in namespaces (see Namespaces, later in this chapter).
Whitespace is surprisingly tricky. The XML specification says anything that's not a markup character is content. So (in theory) the newlines and whitespace indents between tags in Example 22-1 are text data. Most XML parsers offer the choice of retaining whitespace or sensibly folding it (e.g., to ignore newlines and indents).
The first line of Example 22-1 is the XML declaration:
<?xml version="1.0" encoding="UTF-8" ?>
This declaration is optional; version 1.0 of XML and UTF-8-encoded text are the defaults. The encoding attribute specifies the Unicode encoding of the document. Some XML parsers can cope with arbitrary Unicode encodings, but others are limited to ASCII and UTF-8. For maximum portability, create XML data as UTF-8.
Similar to declarations are processing instructions, which are instructions for XML processors. For example:
<title><?pdf font Helvetica 18pt?>XML in Perl</title>
Processing instructions have the general structure:
<?target data ... ?>
When an XML processor encounters a processing instruction, it checks the target. Processors should ignore targets they don't recognize. This lets one XML file contain instructions for many different processors. For example, the XML source for this book might have separate instructions for programs that convert to HTML and to PDF.
XML comments have the same syntax as HTML comments:
<!-- ... -->
The comment text can't contain --, so comments don't nest.
Sometimes you want to put text in an XML document without having to worry about encoding entities. Such a literal block is called CDATA in XML, written:
<![CDATA[literal text here]]>
The ugly syntax betrays XML's origins in SGML. Everything after the initial <![CDATA[ and up to the ]]> is literal data in which XML markup characters such as < and & have no special meaning.
For example, you might put sample code that contains a lot of XML markup characters in a CDATA block:
<para>The code to do this is as follows:</para> <code><![CDATA[$x = $y << 8 & $z]]></code>
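To see that CDATA is purely a parse-time escape, here is a quick check using Python's standard xml.etree module (rather than the Perl modules this chapter covers); the parser returns the literal text with the wrapper removed:

```python
import xml.etree.ElementTree as ET

# The parser strips the CDATA wrapper and hands back the literal text,
# markup characters and all.
elem = ET.fromstring('<code><![CDATA[$x = $y << 8 & $z]]></code>')
print(elem.text)
```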
To ensure that all XML documents are parsable, there are some minimum requirements expected of an XML document. The following list is adapted from the list in Perl & XML, by Erik T. Ray and Jason McIntosh (O'Reilly):
The document must have one and only one top-level element (e.g., books in Example 22-1).
Every element with content must have both a start and an end tag.
All attributes must have values, and those values must be quoted.
Elements must not overlap.
Markup characters (<, >, and &) must be used to indicate markup only. In other words, you can't have <title>Perl & XML</title> because the & can only indicate an entity reference. CDATA sections are the only exception to this rule.
If an XML document meets these rules, it's said to be "well-formed." Any XML parser that conforms to the XML standard should be able to parse a well-formed document.
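Any conforming parser can therefore serve as a well-formedness checker. A minimal sketch, again using Python's standard xml.etree instead of the Perl modules this chapter covers:

```python
import xml.etree.ElementTree as ET

def is_well_formed(text):
    """Return True if text parses as a well-formed XML document."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed('<a><b/></a>'))                # one root, properly nested
print(is_well_formed('<a><b></a></b>'))             # overlapping elements
print(is_well_formed('<title>Perl & XML</title>'))  # bare markup character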
There are two parts to any program that processes an XML document: the XML parser, which manipulates the XML markup, and the program's logic, which identifies text, the elements, and their structure. Well-formedness ensures that the XML parser can work with the document, but it doesn't guarantee that the elements have the correct names and are nested correctly.
For example, these two XML fragments encode the same information in different ways:
<book> <title>Programming Perl</title> <edition>3</edition> <authors> <author> <firstname>Larry</firstname> <lastname>Wall</lastname> </author> <author> <firstname>Tom</firstname> <lastname>Christiansen</lastname> </author> <author> <firstname>Jon</firstname> <lastname>Orwant</lastname> </author> </authors> </book> <work> <writers>Larry Wall, Tom Christiansen, and Jon Orwant</writers> <name edition="3">Programming Perl</name> </work>
The structure is different, and if you wrote code to extract the title from one ("get the contents of the book element, then find the contents of the title element within that") it would fail completely on the other. For this reason, it is common to write a specification for the elements, attributes, entities, and the ways to use them. Such a specification lets you be confident that your program will never be confronted with XML it cannot deal with. The two formats for such specifications are DTDs and schemas.
DTDs are the older and more limited format, acquired by way of XML's SGML past. DTDs are not written in XML, so you need a custom (complex) parser to work with them. Additionally, they aren't suitable for many uses; simply saying "the book element must contain one each of the title, edition, author, and isbn elements in any order" is remarkably difficult.
For these reasons, most modern content specifications take the form of schemas. The World Wide Web Consortium (W3C), the folks responsible for XML and a host of related standards, have a standard called XML Schema. This is the most common schema language in use today, but it is complex and problematic. An emerging rival to XML Schema is the OASIS group's RelaxNG.
There are Perl modules for working with schemas. The most important thing you do with a schema, however, is validate an XML document against it. Recipe 22.5 shows how to use XML::LibXML to do this. XML::Parser does not support validation.
One especially handy property of XML is nested elements. This lets one document encapsulate another. For example, you want to send a purchase order document in a mail message. Here's how you'd do that:
<mail> <header> <from>me@example.com</from> <to>you@example.com</to> <subject>PO for my trip</subject> </header> <body> <purchaseorder> <for>Airfare</for> <bill_to>Editorial</bill_to> <amount>349.50</amount> </purchaseorder> </body> </mail>
It worked, but we can easily run into problems. For example, if the purchase order used <to> instead of <bill_to> to indicate the department to be charged, we'd have two elements named <to>. The resulting document is sketched here:
<mail> <header> <to>you@example.com</to> </header> <body> <to>Editorial</to> </body> </mail>
This document uses to for two different purposes. This is similar to the problem in programming where a global variable in one module has the same name as a global variable in another module. Programmers can't be expected to avoid variable names from other modules, because that would require them to know every module's variables.
The solution to the XML problem is similar to the programming problem's solution: namespaces. A namespace is a unique prefix for the elements and attributes in an XML vocabulary, and is used to avoid clashes with elements from other vocabularies. If you rewrote your purchase-order email example with namespaces, it might look like this:
<mail xmlns:email="http://example.com/namespaces/email"> <email:from>me@example.com</email:from> <email:to>you@example.com</email:to> <email:subject>PO for my trip</email:subject> <email:body> <purchaseorder xmlns:po="http://example.com/namespaces/po"> <po:for>Airfare</po:for> <po:to>Editorial</po:to> <po:amount>349.50</po:amount> </purchaseorder> </email:body> </mail>
An attribute like xmlns:prefix="URL" identifies the namespace for the contents of the element that the attribute is attached to. In this example, there are two namespaces: email and po. The email:to element is different from the po:to element, and processing software can avoid confusion.
Most of the XML parsers in Perl support namespaces, including XML::Parser and XML::LibXML.
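To see how a namespace-aware parser keeps the two to elements apart, here is a quick check with Python's ElementTree, which expands each prefix into its namespace URI (the URIs below are made-up placeholders — any unique URI works):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<mail xmlns:email="http://example.com/email" xmlns:po="http://example.com/po">'
    '<email:to>you@example.com</email:to>'
    '<po:to>Editorial</po:to>'
    '</mail>'
)
# ElementTree rewrites each prefixed tag to {namespace-URI}localname,
# so the two "to" elements end up with distinct tag names.
tags = [child.tag for child in doc]
print(tags)
```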
One of the favorite pastimes of XML hackers is turning XML into something else. In the old days, this was accomplished with a program that knew a specific XML vocabulary and could intelligently turn an XML file that used that vocabulary into something else, like a different type of XML, or an entirely different file format, such as HTML or PDF. This was such a common task that people began to separate the transformation engine from the specific transformation, resulting in a new specification: XML Stylesheet Language for Transformations (XSLT).
Turning XML into something else with XSLT involves writing a stylesheet. A stylesheet says "when you see this in the input XML, emit that." You can encode loops and branches, and identify elements (e.g., "when you see the book element, print only the contents of the enclosed title element").
Transformations in Perl are best accomplished through the XML::LibXSLT module, although XML::Sablotron and XML::XSLT are sometimes also used. We show how to use XML::LibXSLT in Recipe 22.7.
Of the new vocabularies and tools for XML, possibly the most useful is XPath. Think of it as regular expressions for XML structure: you specify the elements you're looking for ("the title within a book"), and the XPath processor returns a pointer to the matching elements.
An XPath expression looks like:
/books/book/title
Slashes separate tests. XPath has syntax for testing attributes, elements, and text, and for identifying parents and siblings of nodes.
The XML::LibXML module has strong support for XPath, and we show how to use it in Recipe 22.6. XPath also crops up in the XML::Twig module shown in Recipe 22.8.
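The same idea can be tried quickly outside Perl: Python's standard ElementTree module supports a small subset of XPath, enough to sketch the /books/book/title lookup from above:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<books>"
    "<book id='1'><title>Programming Perl</title></book>"
    "<book id='2'><title>Perl &amp; LWP</title></book>"
    "</books>"
)
# './book/title' selects every title child of a book child of the
# current node (here, the <books> root).
titles = [t.text for t in doc.findall('./book/title')]
print(titles)
```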
Initially, Perl had only one way to parse XML: regular expressions. This was prone to error and often failed to deal with well-formed XML (e.g., CDATA sections). The first real XML parser in Perl was XML::Parser, Larry Wall's Perl interface to James Clark's expat C library. Most other languages (notably Python and PHP) also had an expat wrapper as their first correct XML parser.
XML::Parser was a prototype: the mechanism for passing components of XML documents to Perl was experimental and intended to evolve over the years. But because XML::Parser was the only XML parser for Perl, people quickly wrote applications using it, and it became impossible for the interface to evolve. Because XML::Parser has a proprietary API, you shouldn't use it directly.
XML::Parser is an event-based parser. You register callbacks for events like "start of an element," "text," and "end of an element." As XML::Parser parses an XML file, it calls the callbacks to tell your code what it's found. Event-based parsing is quite common in the XML world, but XML::Parser has its own events and doesn't use the standard Simple API for XML (SAX) events. This is why we recommend you don't use XML::Parser directly.
The XML::SAX modules provide a SAX wrapper around XML::Parser and several other XML parsers. XML::Parser parses the document, but you write code to work with XML::SAX, and XML::SAX translates between XML::Parser events and SAX events. XML::SAX also includes a pure Perl parser, so a program for XML::SAX works on any Perl system, even those that can't compile XS modules. XML::SAX supports the full level 2 SAX API (where the backend parser supports features such as namespaces).
The other common way to parse XML is to build a tree data structure: element A is a child of element B in the tree if element B is inside element A in the XML document. There is a standard API for working with such a tree data structure: the Document Object Model (DOM). The XML::LibXML module uses the GNOME project's libxml2 library to quickly and efficiently build a DOM tree. It is fast, and it supports XPath and validation. The XML::DOM module was an attempt to build a DOM tree using XML::Parser as the backend, but most programmers prefer the speed of XML::LibXML. In Recipe 22.2 we show XML::LibXML, not XML::DOM.
So, in short: for events, use XML::SAX with XML::Parser or XML::LibXML behind it; for DOM trees, use XML::LibXML; for validation, use XML::LibXML.
While the XML specification itself is simple, the specifications for namespaces, schemas, stylesheets, and so on are not. There are many good books to help you learn and use these technologies:
For help with all of the nuances of XML, try Learning XML, by Erik T. Ray (O'Reilly), and XML in a Nutshell, Second Edition, by Elliotte Rusty Harold and W. Scott Means (O'Reilly).
For help with XML Schemas, try XML Schema, by Eric van der Vlist (O'Reilly).
For examples of stylesheets and transformations, and help with the many non-trivial aspects of XSLT, see XSLT, by Doug Tidwell (O'Reilly), and XSLT Cookbook, by Sal Mangano (O'Reilly).
For help with XPath, try XPath and XPointer, by John E. Simpson (O'Reilly).
If you're the type that relishes the pain of reading formal specifications, the W3C web site has the full text of all of their standards and draft standards.
http://etutorials.org/Programming/Perl+tutorial/Chapter+22.+XML/Introduction/
Re: Visual Studio 2005
- From: "Mads Peter Nymand" <madspeter@xxxxxxxxxx>
- Date: 7 Mar 2006 11:34:57 -0800
Thanks a lot for your answer.
Maybe the command prompt compiler is set to include the full C# 2.0
class library, the same way that the Java command-line compiler is set
to include the full Java API class library.
It raises a new question though:
In the command prompt and in the Java IDEs I have used, the whole API
class library of the SDK is included by default. Why isn't it so in VS?
Why do you have such a limited set of references in a C# project in VS?
At the top of a file, you declare which namespaces to use. That makes
sense, because you have to define which namespaces the classes you are
using belong to. In my opinion, it does not make sense that you also
have to add references at the project level when you are using
namespaces that are part of the standard C# library. That's just extra
work for no reason. Can anybody give me a reason why it is so - better
structure? faster execution? - something else?
Mads Peter
- Follow-Ups:
- Re: Visual Studio 2005
- From: Jon Skeet [C# MVP]
- References:
- Visual Studio 2005
- From: Mads Peter Nymand
- Prev by Date: Re: Identifying client that sent data to Asynchronous socket.
- Next by Date: Re: Threads very slow
- Previous by thread: Visual Studio 2005
- Next by thread: Re: Visual Studio 2005
- Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2006-03/msg01177.html
PROBLEM LINK:
Practice
Contest: Division 1
Contest: Division 2
Setter: Ritesh Gupta
Tester: Radoslav Dimitrov
Editorialist: Taranpreet Singh
DIFFICULTY
Easy-Medium
PREREQUISITES
Dynamic programming and observations.
PROBLEM
Given a circular array A of length N, i.e. A_i and A_{i+1} are considered adjacent, and A_1 and A_N are considered adjacent as well. Initially, the sequence consists of 0s and 1s only. In one operation, you must choose a position p such that A_p = 1, set A_p = 0, and increment either of the positions adjacent to p by 1.
A sequence is considered good if all its elements are different from 1. Find the minimum number of operations needed to make the sequence good, or report that it is impossible.
QUICK EXPLANATION
- The base cases are when the sequence A contains at most one 1.
- The final sequence shall consist of 0s, 2s, and 3s.
- Considering the non-cyclic version, the problem becomes to divide the set of positions of 1s into groups of size 2 and 3 such that the total cost is minimum. The cost for a group is the minimum number of operations required to get all 1s in the same group at the same position.
- For the cyclic variant, we just need to solve the non-cyclic variant 3 times, each time doing a cyclic rotation of sequence such that exactly one 1 moves from start to end of the sequence.
EXPLANATION
First off, let’s consider base cases.
- If the sequence consists of 0s only, then the sequence is already good, hence the answer is zero.
- If the sequence consists of exactly one 1, then it’s impossible to make the sequence good.
In all other cases, it is possible to make the sequence good by a sequence of operations.
Proof: Just perform operations till all 1s are merged into a single position. Since the number of 1s is greater than 1, the sequence won’t contain any occurrence of 1 at the end.
Let’s move to find the minimum number of operations.
Lemma: The final sequence shall consist of 0s and 2s and 3s.
Proof: Suppose the final sequence contains an occurrence of 4, built from 1s at positions p_1 < p_2 < p_3 < p_4 and merged at some position p. It is at least as cheap to bring the 1s at p_1 and p_2 together at one position, and the 1s at p_3 and p_4 together at another (technically splitting the 4 into two 2s). We can similarly prove that values \geq 4 cannot appear in a final sequence reached in the minimum number of operations.
Let’s ignore the cyclic condition for now.
Consider S, the sequence of positions p such that A_p = 1, in sorted order. The problem now becomes: partition S into groups of size 2 and 3 at minimum cost, such that each position is part of exactly one group. The cost of a partition is the sum of the costs of its groups, and the cost of a group is the absolute difference between its leftmost and rightmost positions.
Didn't understand, We want more!
Consider only two 1s, one at position p_1 and one at position p_2 such that p_1 < p_2. What is the minimum number of operations?
Answer
p_2 - p_1
Consider only three 1s, at positions p_1, p_2, and p_3 such that p_1 < p_2 < p_3. What is the minimum number of operations?
Answer
p_3 - p_1
The above problem can be easily solved with dynamic programming. Let DP_x denote the minimum cost of partitioning first x positions into groups. It is easy to deduce the recurrence by considering groups of size 2 and 3 ending at position x and taking the minimum.
Give us the recurrence
DP_x = min(DP_{x-2} + S_x-S_{x-1}, DP_{x-3}+S_x-S_{x-2})
The minimum number of operations is given by DP_{|S|}.
Coming back to the cyclic variant: suppose we have split the sequence of positions into groups of size 2 and 3. The only case the non-cyclic version misses is when, in the optimal partitioning, some 1s at the end of the sequence S are grouped with some 1s at the start for a lower cost.
But since the group size is at most 3, an easy way to deal with this would be to solve the non-cyclic version of this problem three times, each time rotating the sequence A till exactly one 1 moves from start to end.
This way, we are guaranteed that in one shift, the border of non-cyclic array coincides with the border of the group, in which case the answer of the non-cyclic version on this rotation is the required answer.
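Putting the pieces together, the whole approach (base cases, the non-cyclic DP, and the three rotations) can be sketched in Python as follows; min_ops is a hypothetical helper name used for illustration:

```python
def min_ops(a):
    """Minimum operations to make the circular sequence a good, or -1 if impossible."""
    n = len(a)
    pos = [i for i, x in enumerate(a) if x == 1]
    if not pos:
        return 0            # already good
    if len(pos) == 1:
        return -1           # a lone 1 can never be eliminated
    INF = float('inf')

    def non_cyclic(s):
        # DP over the sorted positions of 1s: group them in twos and threes.
        m = len(s)
        dp = [INF] * (m + 1)
        dp[0] = 0
        for i in range(2, m + 1):
            dp[i] = dp[i - 2] + s[i - 1] - s[i - 2]
            if i >= 3:
                dp[i] = min(dp[i], dp[i - 3] + s[i - 1] - s[i - 3])
        return dp[m]

    best = INF
    s = pos[:]
    for _ in range(3):
        best = min(best, non_cyclic(s))
        s = s[1:] + [s[0] + n]   # rotate: move the first 1 past the end
    return best
```

For example, min_ops([1, 0, 0, 1]) is 1, since the two 1s are cyclically adjacent and one rotation exposes that grouping to the non-cyclic DP.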
TIME COMPLEXITY
The time complexity is O(N) per test case.
SOLUTIONS
Setter's Solution
#include <bits/stdc++.h>
using namespace std;

vector<int> v, v1;
int dp[1000010];

int solve() {
    dp[0] = 0;
    dp[1] = 1e9;
    dp[2] = v[1] - v[0];
    for (int i = 2; i < v.size(); i++)
        dp[i + 1] = min(v[i] - v[i - 1] + dp[i - 1], v[i] - v[i - 2] + dp[i - 2]);
    return dp[v.size()];
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        v.clear();
        for (int i = 1; i <= n; i++) {
            int x;
            cin >> x;
            if (x == 1) v.push_back(i);
        }
        if (v.size() == 0) { cout << 0 << endl; continue; }
        if (v.size() == 1) { cout << -1 << endl; continue; }
        int ans = 1e9;
        ans = min(ans, solve());
        v1.clear();
        v1.push_back(1);
        for (int i : v) v1.push_back(n - v.back() + 1 + i);
        v1.pop_back();
        v = v1;
        ans = min(ans, solve());
        v1.clear();
        v1.push_back(1);
        for (int i : v) v1.push_back(n - v.back() + 1 + i);
        v1.pop_back();
        v = v1;
        ans = min(ans, solve());
        cout << ans << endl;
    }
}
Tester's Solution
#include <bits/stdc++.h>
using namespace std;

const int MAXN = (1 << 20);
int n;
// helper used below
template<class T> void chkmin(T &a, T b) { if (b < a) a = b; }

int a[MAXN];

void read() {
    cin >> n;
    for (int i = 0; i < n; i++) cin >> a[i];
}

const int OFFSET = 3;
int dp[MAXN][2 * OFFSET + 2];

int myabs(int x) { return x > 0 ? x : -x; }

int rec(int pos, int bal) {
    if (pos == n) return bal == 0 ? 0 : (int)1e9;
    int &memo = dp[pos][bal + OFFSET];
    if (memo != -1) return memo;
    // place 0 here
    memo = (int)1e9;
    if (-OFFSET <= bal + a[pos] && bal + a[pos] <= OFFSET) {
        chkmin(memo, myabs(bal + a[pos]) + rec(pos + 1, bal + a[pos]));
    }
    for (int place_here = 2; place_here <= 3; place_here++) {
        int need = place_here - a[pos];
        int new_bal = bal - need;
        if (new_bal >= -OFFSET && new_bal <= OFFSET) {
            chkmin(memo, myabs(new_bal) + rec(pos + 1, new_bal));
        }
    }
    return memo;
}

void solve() {
    int cnt = 0;
    for (int i = 0; i < n; i++) cnt += a[i];
    if (cnt == 1) cout << -1 << endl;
    else if (cnt == 0) cout << 0 << endl;
    else {
        int ans = (int)1e9;
        for (int cnt_rotations = 0; cnt_rotations <= 3; cnt_rotations++) {
            int _cnt = a[0], second_one_pos = 0;
            while (_cnt <= 1) _cnt += a[++second_one_pos];
            // Set second one as first element
            rotate(a, a + second_one_pos, a + n);
            for (int i = 0; i <= n; i++)
                for (int bal = 0; bal < 2 * OFFSET + 1; bal++)
                    dp[i][bal] = -1;
            chkmin(ans, rec(0, 0));
        }
        cout << ans << endl;
    }
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int T;
    cin >> T;
    while (T--) { read(); solve(); }
    return 0;
}
Editorialist's Solution
import java.util.*;
import java.io.*;
import java.text.*;

class CHEFZRON{
    //SOLUTION BEGIN
    void pre() throws Exception{}
    void solve(int TC) throws Exception{
        int N = ni();
        ArrayList<Integer> pos = new ArrayList<>();
        for(int i = 0; i < N; i++) if(ni() == 1) pos.add(i);
        if(pos.isEmpty()) pn(0);
        else if(pos.size() == 1) pn(-1);
        else{
            int ans = N;
            for(int IT = 0; IT < 3; IT++){
                //Solve for current shift
                ans = Math.min(ans, solveNonCyclic(pos));
                //Moving one 1 from start to end
                pos.add(pos.remove(0)+N);
            }
            pn(ans);
        }
    }
    int solveNonCyclic(ArrayList<Integer> pos){
        int N = pos.size(), INF = (int)1e6;
        int[] DP = new int[1+N];
        DP[1] = INF;
        for(int i = 2; i <= N; i++){
            DP[i] = pos.get(i-1)-pos.get(i-2)+DP[i-2];
            if(i >= 3) DP[i] = Math.min(DP[i], pos.get(i-1)-pos.get(i-3)+DP[i-3]);
        }
        return DP[N];
    }
    //SOLUTION END
    void hold(boolean b)throws Exception{if(!b)throw new Exception("Hold right there, Sparky!");}
    DecimalFormat df = new DecimalFormat("0.00000000000");
    static boolean multipleTC = true;
    FastReader in; PrintWriter out;
    void run() throws Exception{
        in = new FastReader();
        out = new PrintWriter(System.out);
        //Solution Credits: Taranpreet Singh
        int T = (multipleTC)?ni():1;
        pre();
        for(int t = 1; t <= T; t++) solve(t);
        out.flush();
        out.close();
    }
    int ni() throws Exception{ return Integer.parseInt(in.next()); }
    void pn(Object o){ out.println(o); }
    class FastReader{
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        StringTokenizer st;
        String next() throws Exception{
            while(st == null || !st.hasMoreTokens()) st = new StringTokenizer(br.readLine());
            return st.nextToken();
        }
    }
    public static void main(String[] args) throws Exception{
        new CHEFZRON().run();
    }
}
Suggestions are welcome, as always.
https://discuss.codechef.com/t/chefzron-editorial/67105
qtooltipgroup.3qt - Man Page
Collects tool tips into related groups
Synopsis
#include <qtooltip.h>
Inherits QObject.
Public Members
QToolTipGroup ( QObject * parent, const char * name = 0 )
~QToolTipGroup ()
bool delay () const
bool enabled () const
Public Slots
void setDelay ( bool )
void setEnabled ( bool )
Signals
void showTip ( const QString & longText )
void removeTip ()
Properties
bool delay - whether the display of the group text is delayed
bool enabled - whether tool tips in the group are enabled
Description
The QToolTipGroup class collects tool tips into related groups. See also the Help System example.
Member Function Documentation
QToolTipGroup::QToolTipGroup ( QObject * parent, const char * name = 0 )
Constructs a tool tip group called name, with parent parent.
QToolTipGroup::~QToolTipGroup ()
Destroys this tool tip group and all tool tips in it.
bool QToolTipGroup::delay () const
Returns TRUE if the display of the group text is delayed; otherwise returns FALSE. See the "delay" property for details.
bool QToolTipGroup::enabled () const
Returns TRUE if tool tips in the group are enabled; otherwise returns FALSE. See the "enabled" property for details.
void QToolTipGroup::removeTip () [signal]
This signal is emitted when a tool tip in this group is hidden. See the QToolTipGroup documentation for an example of use.
See also showTip().
Example: helpsystem/mainwindow.cpp.
void QToolTipGroup::setDelay ( bool ) [slot]
Sets whether the display of the group text is delayed. See the "delay" property for details.
void QToolTipGroup::setEnabled ( bool ) [slot]
Sets whether tool tips in the group are enabled. See the "enabled" property for details.
void QToolTipGroup::showTip ( const QString & longText ) [signal]
This signal is emitted when one of the tool tips in the group is displayed. longText is the extra text for the displayed tool tip.
See also removeTip().
Example: helpsystem/mainwindow.cpp.
Property Documentation
bool delay
This property holds whether the display of the group text is delayed.
If set to TRUE (the default), the group text is displayed at the same time as the tool tip. Otherwise, the group text is displayed immediately when the cursor enters the widget.
Set this property's value with setDelay() and get this property's value with delay().
bool enabled
This property holds whether tool tips in the group are enabled.
This property's default is TRUE.
Set this property's value with setEnabled() and get this property's value with enabled().
This page was generated automatically from the Qt source (qtooltipgroup.3qt) for Qt version 3.3.8.
Referenced By
The man page QToolTipGroup.3qt(3) is an alias of qtooltipgroup.3qt(3).
https://www.mankier.com/3/qtooltipgroup.3qt
Steven Scholz wrote:
> Hi,
>
> i pick up this issue again.
>
> I am running 2.6.19 + adeos-ipipe-2.6.19-arm-1.6-02.patch +
> xenomai-svn-2007-02-22 on an AT91RM9200 (160MHz/80MHz).
>
> When starting "latency -p 200" it runs for a while printing
>
> RTT| 00:05:37 (periodic user-mode task, 200 us period, priority 99)
> RTH|-----lat min|-----lat avg|-----lat max|-overrun|----lat best|---lat worst
> RTD| 11.200| 139.200| 236.800| 1| 10.800| 280.800
> RTD| 11.200| 146.400| 253.200| 1| 10.800| 280.800
> RTD| 11.200| 144.400| 240.400| 1| 10.800| 280.800
>
> but then hangs. The timer LED stops blinking. No "soft lockup detected"
> appears.
The only explanation I have is that the period is too small. I do not observe the same behaviour with latency -p 1000. Note that setting the period to a value comparable to the latency is not considered a normal use of Xenomai. When setting the period to 100 us on x86, the latency is less than 50 us (and most of the time a lot less than that), so the period is at least twice the latency. If you observe a latency of 300 us, you should select a period of at least 600 us to run the test in the same conditions.

> Using a BDI200 it looks like that in kernel/sched.c:schedule() he is
> returning in the lines
>
> #ifdef CONFIG_IPIPE
>         if (unlikely(!ipipe_root_domain_p))
>                 return;
> #endif /* CONFIG_IPIPE */
>
> When stepping through I only see him getting into schedule() but leaving
> it in the above lines and in include/linux/proc_fs.h:proc_net_fops_create()
> ...

Ok. Thanks for pointing this out. That is interesting, but not very informative. It would be interesting if you could get the full backtrace. It would also be interesting to set a breakpoint on the timer interrupt handler and follow what happens from timer interrupt to timer interrupt. I do not remember any case where Xenomai itself calls schedule from a real-time context, so maybe you can call panic in schedule instead of returning. I will try to trigger a tracer freeze and dump the tracer at this point in order to have a better idea of what happens.

--
Gilles Chanteperdrix

_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://www.mail-archive.com/xenomai-core@gna.org/msg04134.html
Putting a web front-end on a Jupyter notebook
Let’s say you’re a data scientist, and you’ve been asked to solve a problem. Of course, what you really want is to build an interactive tool, so your colleagues can solve the problem themselves!
In this tutorial, we take a machine-learning model in a Jupyter notebook, and turn it into a web application using the Anvil Uplink:
Want to know more?
Read about the Users Service in the Anvil reference docs.
Topics Covered
How the connection is made | 0:21 - 0:45
We use the Anvil Uplink to connect a Jupyter Notebook to Anvil. It’s a library you can
pip install on your computer or
wherever your Notebook is running.
It allows you to:

- call functions in your Jupyter Notebook from your Anvil app
- call functions in your Anvil app from your Jupyter Notebook
- store data in your Anvil app from your Jupyter Notebook
- use the Anvil server library inside your Jupyter Notebook
It works for any Python process - this happens to be a Jupyter Notebook, but it could be an ordinary Python script, a Flask app, even the Python REPL!
Running on-site | 0:45 - 1:02
Doing this 100% on-site in your own private network is supported. You get your own private version of the Anvil server (it’s three Docker containers) and the system runs as before, but all within your network.
Read this article for more information.
Plan of action | 1:02 - 1:10
We’re going to: 1. Set up a Jupyter Notebook 2. Connect it to an Anvil app 3. Build a user interface to our model
Setting up the Jupyter Notebook | 1:10 - 1:22
We start with a pre-existing Jupyter Notebook containing a classification model that distinguishes between cats and dogs. You give it an image and it scores it as ‘cat’ or ‘dog’.
Our thanks to Uysim Ty for sharing it on Kaggle.
Making the connection | 1:22 - 1:54
We enable the Anvil Uplink and connect the notebook.
First, we enable the Uplink in the Anvil IDE.
Then we
pip install the Uplink library on the machine the Jupyter Notebook is running on:
pip install anvil-uplink
By adding the following lines, we connect the Jupyter Notebook to our Anvil app:
import anvil.server anvil.server.connect('PKXOZYCZKFLQ75KUKBRTS5DO-7NSHFRCVF76FAGQL')
Now our Jupyter Notebook can do anything we can do in an Anvil Server Module - call Anvil server functions, store data in Data Tables, and define server functions to be called from other parts of the app.
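The mechanics behind this can be pictured with a tiny stand-in: decorated functions are registered by name, and a call-by-name dispatcher plays the role the Uplink plays for anvil.server.call. Everything below (callable_, call, the toy classifier) is a hypothetical sketch, not the real library:

```python
# Registry of functions exposed "to the server", keyed by name.
_registry = {}

def callable_(fn):
    """Stand-in for @anvil.server.callable: register fn by name."""
    _registry[fn.__name__] = fn
    return fn

@callable_
def classify_image(filename):
    # Placeholder for the real model: pretend anything named 'cat...' is a cat.
    return 'cat' if 'cat' in filename else 'dog'

def call(name, *args, **kwargs):
    """Stand-in for anvil.server.call(name, ...): look up and invoke by name."""
    return _registry[name](*args, **kwargs)

print(call('classify_image', 'cat01.jpg'))
```

In the real system, call crosses the network: the app asks the Anvil servers to run the named function, and the Uplink forwards that request to your notebook.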
We load the image into the Jupyter Notebook by making an
anvil.server.callable function in the Jupyter Notebook:
import anvil.media @anvil.server.callable def classify_image(file): with anvil.media.TempFile(file) as filename: img = load_img(filename)
This function can be called from the client-side code (and server-side code) of our Anvil app.

Building the User Interface | 3:14 - 3:45
We drag-and-drop components to create a User Interface. It consists of a FileLoader to upload the images, an Image to display them, and a Label to display the classification.
Making the User Interface call the Jupyter Notebook | 3:45 - 4:32
Now to write some Python code that runs in the browser so the app responds when an image is loaded in.
It works! Now to publish it | 4:32 - 5:10
We give it a quick test by uploading an image of a cat - yes, it works, that’s a cat.
It’s already published at a private URL. By clicking a button and entering our chosen domain name, we made it public.
Try it for yourself
Build this app for yourself - Sign up and follow along, or watch more tutorials in the Anvil Learning Centre.
The Jupyter Notebook is available on Kaggle - our thanks to Uysim Ty, its creator.
Next up
If you’d like to learn the basics of Anvil, start with the Hello, World! tutorial.
If you’d like to learn more about the Uplink, you can read the reference docs.
Ceiling in sorted array
In the last post, binary search algorithm, we discussed the basics of the algorithm, its implementation and worst-case complexity. Today, we will use the binary search algorithm to solve another problem: find the ceiling in a sorted array. To understand the problem, first let's understand what a ceiling is. Given an array of integers and an element X, the ceiling of X is the smallest element in the array greater than or equal to X. For example, in the array given below, the ceiling of 7 would be 9. It's good to know that the array is sorted.
What would be the simplest solution? Scan through the array and, for each index i, check if the key lies between A[i] and A[i+1] or is equal to A[i]; if so, return index i. The worst-case complexity would be O(N). But then what's the fun in solving the problem in O(N)!
Ceiling in sorted array : Thought Process
We know that the array is sorted, which guarantees that if A[i] > key, then A[j] > key for all j > i as well. This helps us discard part of the array. How? If all those A[j] are greater than key, then the smallest element greater than or equal to key must be A[i] or some element to the left of i — but certainly nothing to the right of i.
This seems like binary search: we split at mid and, based on the relation of mid with key, we discard either the left or the right part of the array and focus on the other. Whenever you feel like binary search can be used to solve a problem, it's good to prove that it will work.
Consider the set S of all candidate solutions; any value in that set can be the answer to the problem. Now, say we have a predicate which takes input from the candidate set S. This predicate validates that a candidate does not violate any condition given in the problem statement. A predicate function can return any value; for simplicity, in this article assume that it returns true or false.
Binary search can be used if and only if for all x in candidate Set S, predicate(x) implies predicate(y) for all y > x.
Why do I need this abstraction, when I can solve this problem with a slight modification of binary search? That's true, but then you are not realizing the true power of binary search. This is because many problems can't be modeled as searching for a particular value, but it's possible to define and evaluate a predicate such as "Is there an assignment which costs x or less?" when we're looking for some sort of assignment with the lowest cost.
How to find ceiling in sorted array with this method?
What will be our candidate solution set S in this problem? Since we are looking for an array index which satisfies the condition of holding the smallest element greater than or equal to key, every index is in the candidate set.
What is the predicate here? The predicate is: the element at index i is greater than or equal to key. If it is, the predicate returns true, else false.
Does the condition to apply binary search hold? If p(x) is true, then for all y > x, p(y) is also true. So we can apply binary search on the candidate set and find the least x where the predicate returns true.
Ceiling in sorted array : Implementation
package com.company;

/**
 * Created by sangar on 25.3.18.
 */
public class BinarySearchAlgorithm {
    private static boolean predicate(int[] a, int index, int key){
        if(a[index] >= key) return true;
        return false;
    }

    public static int findFirstElementEqualOrGreater(int[] a, int start, int end, int key){
        while(start < end){
            int mid = start + (end - start) / 2;
            if(predicate(a, mid, key)){
                end = mid;
            }
            else{
                start = mid + 1;
            }
        }
        if(!predicate(a, start, key)) return -1;
        return start;
    }

    public static void main(String[] args) {
        int[] input = {3,10,11,15,17,18,19,20};
        int index = findFirstElementEqualOrGreater(input, 0, input.length-1, 15);
        System.out.print(index == -1 ? "Element not found" : "Element found at : " + index);
    }
}
An important part of this implementation is getting the lower and upper bounds of the solution right. In the ceiling problem, the lowest possible value is 0, the first index of the array, and the highest is array length - 1. How do we deal with the mid index — should it be included in a subarray, and if so, the left or the right one? It depends on what you are looking for. In this example, if A[mid] >= key, that is predicate(mid) is true, A[mid] may itself be the smallest element greater than or equal to key, so we must include it in the left subarray partition.
high = mid;
However, if predicate(mid) is false, then mid is not required in the right subarray.
low = mid+1;
To test that your implementation for any binary search application works, always test your code on a two-element set where the predicate is false for the first element and true for the second.
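To make that two-element sanity check concrete, here is a Python transcription of the same predicate-based search (the function and predicate names are mine, not from the original Java), run against an array where the predicate is false for the first element and true for the second:

```python
def predicate(a, index, key):
    # true when a[index] is a valid ceiling candidate
    return a[index] >= key

def find_first_equal_or_greater(a, key):
    # binary search over indices: least index where the predicate is true
    start, end = 0, len(a) - 1
    while start < end:
        mid = start + (end - start) // 2
        if predicate(a, mid, key):
            end = mid        # mid may still be the answer, keep it
        else:
            start = mid + 1  # the answer lies strictly right of mid
    return start if predicate(a, start, key) else -1

# two-element sanity check: predicate false at index 0, true at index 1
print(find_first_equal_or_greater([3, 10], 7))   # 1: the ceiling is 10
print(find_first_equal_or_greater([3, 10], 11))  # -1: no ceiling exists
```

If either of these two calls is wrong, the mid/boundary handling has an off-by-one bug.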
The complexity of this algorithm to find the ceiling in a sorted array is O(log N).
Learnings
This problem makes us understand that binary search is not limited to array indices; it can be applied to any discrete set which satisfies the mentioned condition. Find the candidate solution set and the predicate, and make sure that the predicate satisfies: p(x) implies p(y) for all y > x.
In the next few posts, we will take problems from Topcoder and SPOJ and solve them using binary search.
Please feel free to drop us a comment or email at communications@algorithmsandme.com if you find anything wrong or missing. If you are willing to write articles and share your knowledge with other learners, please drop us a message.
Creating Searchable Task Links for a Control Panel Item
In Windows Vista, the Control Panel Home provides task links beneath Control Panel items' icons as shown here.
When a user enters text in the Search box in the upper right of the window, the search results include these task links as shown here for a search on the word "display".
This topic discusses the following:
- Task Link Best Practices
- Creating a Task XML File
- Localizing Task Links
- Keywords and Searching
- Related Topics
Task Link Best Practices
It is recommended that you provide task links for your Control Panel items as an aid to users searching for functionality. It is also possible to add keywords to the task links so that a user can find them even without knowing a task's title or terminology.
The best task links serve three purposes:
- Provide a shortcut to the functionality of the Control Panel item.
- Provide keywords so that users can search using their own language. A user may want to type "compaction" because he or she knows the technical term. A user may type "DB too big", or "database filesize". Adding suitable keywords to the task means that users can find your Control Panel item.
- Provide hints about what the Control Panel item does. When a user sees the links beneath a Control Panel item's icon, they can get more information about what the Control Panel item is used for than the name and icon alone can provide.
Task links should be end-user focused, not technology- or feature-focused. For example, "Enable database compaction" would be bad wording because it is technical jargon unfamiliar to the majority of users. "Make my database file smaller" is better because it mentions the user's actual end goal rather than the mechanism to get there. The goal is not to oversimplify, but rather to phrase the task in terms of what the user wants to accomplish.
Creating a Task XML File
Task links are defined in an XML file. This section provides the details of an example .xml file that defines three task links for a Control Panel item called Notepad. It defines titles, keywords, and the command lines for the task links. It also illustrates how to specify which task links appear under which category. A Control Panel item that is registered to appear in more than one category has the option of showing different links depending on the category. Explanations of the various elements and information provided are given as comments in the XML itself.
<?xml version="1.0" ?>
<applications xmlns="" xmlns:
    <!-- Notepad -->
    <application id="{0052D9FC-6764-4D29-A66F-2F3BD9E2BB40}">
        <!-- This GUID must match the GUID you created for your Control Panel item, and registered in namespace -->

        <!-- Solitaire -->
        <sh:task <!-- This is a generated GUID, specific to this task link -->
            <sh:name>Play solitaire</sh:name>
            <sh:keywords>solitare;game;cards;ace;diamond;heart;club;single</sh:keywords>
            <sh:command>%ProgramFiles%\Microsoft Games\Solitaire\solitaire.exe</sh:command>
        </sh:task>

        <!-- Task Manager -->
        <sh:task <!-- This is a generated GUID, specific to this task link -->
            <!-- The
            <sh:name>Open Internet Explorer</sh:name>
            <sh:keywords>IE;web;browser;net;Internet;ActiveX;plug-in;plugin</sh:keywords>
            <sh:command>iexplore.exe</sh:command>
        </sh:task>

        <!-- Category assignments -->
        <!-- Appearance and Personalization -->
        <category id="1">
            <!-- These idref attributes refer to the GUIDs of the tasks defined above.
                 A maximum of five tasks are shown per category. -->
            <sh:task >

        <!-- Programs -->
        <category id="8">
            <sh:task
                <sh:name>Click here to play</sh:name>
                <!-- This overrides the defined text. When the Notepad Control Panel item appears in the Programs category, it uses the "Click here to play" text for this Solitaire link, instead of "Play solitaire". -->
            </sh:task>
        >
    </application>
</applications>
Localizing Task Links
The text for the task links' titles and keywords can be stored in a string table in the Control Panel item's module. In that case, the format used in the XML file becomes:
<sh:task <!-- This is a generated GUID, specific to this task link -->
    <sh:name>@myTextResources.dll,-100</sh:name>
    <sh:keywords>@myTextResources.dll,-101</sh:keywords>
    <sh:command>%ProgramFiles%\Microsoft Games\Solitaire\solitaire.exe</sh:command>
</sh:task>
In this example, the text for the task's name appears in string resource ID 100 in myTextResources.dll, and the text for the keywords appears in string resource ID 101.
Keywords and Searching
The Control Panel search finds task links based on their name and also on their keywords. It matches each word in the search with the prefix of words in the name and keywords. For example, the query string "cpu" would match the task "See running processes" in the earlier example because "cpu" is in the keyword list. The query string "pro" would also find that result because the title word "processes" begins with that string. Note that the query only matches prefixes. The query string "rocess" would not match a result because that string, while part of the title word "process", does not begin that word.
When a search query contains multiple tokens, all the tokens must match the prefix of some keyword or part of the task title for a result. The query "cpu level" would not match, because "level" is not in the keyword set. The query "cpu run" would give a result, because "cpu" matches a keyword, and "run" is the prefix of the word "running" in the task's title.
Control Panel does not automatically provide spelling correction or variations such as plurals or hyphenation. Matches are also case-insensitive. To ensure a successful keyword list, it is recommended to provide variations yourself, such as for this task link that involves screen savers: "screensavers;screen-savers;screen savers;"
There is no need to add the singular "screensaver", because a query that finds "screensavers" will also find "screensaver" due to the prefix match. A user typing even part of the word, like "screensa" will still see a match on a task link that has "screensavers" as a keyword. For languages where plural forms change the word, it is necessary to put all the forms a user could reasonably be expected to type into the keywords.
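The matching rules above can be modeled in a few lines. This is an illustrative sketch of the described semantics, not the actual Control Panel implementation: every query token must be a case-insensitive prefix of some word in the title or keyword list. The sample title and keywords mirror the "See running processes" example mentioned earlier.

```python
def matches(query, title, keywords):
    # words come from the title plus the semicolon-separated keyword list
    words = title.lower().split() + [k.lower() for k in keywords.split(";") if k]
    # every query token must prefix-match at least one word
    return all(
        any(word.startswith(token) for word in words)
        for token in query.lower().split()
    )

title = "See running processes"
keywords = "cpu;memory;usage"
print(matches("cpu", title, keywords))       # True: exact keyword
print(matches("pro", title, keywords))       # True: prefix of "processes"
print(matches("rocess", title, keywords))    # False: not a prefix
print(matches("cpu run", title, keywords))   # True: keyword + title prefix
print(matches("cpu level", title, keywords)) # False: "level" matches nothing
```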
As a convention, Microsoft has omitted small words like "how do I" or "I want to" from the set of keywords. The expectation is that most users will simply type the most important words such as "mouse", "high contrast", or "video driver" to get results.
Last time we saw a geometric version of the algorithm to add points on elliptic curves. We went quite deep into the formal setting for it (projective space $\mathbb{P}^2$), and we spent a lot of time talking about the right way to define the "zero" object in our elliptic curve so that our issues with vertical lines would disappear.
With that understanding in mind we now finally turn to code, and write classes for curves and points and implement the addition algorithm. As usual, all of the code we wrote in this post is available on this blog’s Github page.
Points and Curves
Every introductory programming student has probably written the following program in some language for a class representing a point.
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
It’s the simplest possible nontrivial class: an x and y value initialized by a constructor (and in Python all member variables are public).
We want this class to represent a point on an elliptic curve, and overload the addition and negation operators so that we can do stuff like this:
p1 = Point(3,7)
p2 = Point(4,4)
p3 = p1 + p2
But as we’ve spent quite a while discussing, the addition operators depend on the features of the elliptic curve they’re on (we have to draw lines and intersect it with the curve). There are a few ways we could make this happen, but in order to make the code that uses these classes as simple as possible, we’ll have each point contain a reference to the curve they come from. So we need a curve class.
It’s pretty simple, actually, since the class is just a placeholder for the coefficients of the defining equation. We assume the equation is already in the Weierstrass normal form, but if it weren’t one could perform a whole bunch of algebra to get it in that form (and you can see how convoluted the process is in this short report or page 115 (pdf p. 21) of this book). To be safe, we’ll add a few extra checks to make sure the curve is smooth.
class EllipticCurve(object):
    def __init__(self, a, b):
        # assume we're already in the Weierstrass form
        self.a = a
        self.b = b

        self.discriminant = -16 * (4 * a*a*a + 27 * b * b)
        if not self.isSmooth():
            raise Exception("The curve %s is not smooth!" % self)

    def isSmooth(self):
        return self.discriminant != 0

    def testPoint(self, x, y):
        return y*y == x*x*x + self.a * x + self.b

    def __str__(self):
        return 'y^2 = x^3 + %Gx + %G' % (self.a, self.b)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)
And here are some examples of creating curves:
>>> EllipticCurve(a=17, b=1)
y^2 = x^3 + 17x + 1
>>> EllipticCurve(a=0, b=0)
Traceback (most recent call last):
  [...]
Exception: The curve y^2 = x^3 + 0x + 0 is not smooth!
So there we have it. Now when we construct a Point, we add the curve as the extra argument and a safety-check to make sure the point being constructed is on the given elliptic curve.
class Point(object):
    def __init__(self, curve, x, y):
        self.curve = curve  # the curve containing this point
        self.x = x
        self.y = y

        if not curve.testPoint(x,y):
            raise Exception("The point %s is not on the given curve %s" % (self, curve))
Note that this last check will serve as a coarse unit test for all of our examples. If we mess up then more likely than not the “added” point won’t be on the curve at all. More precise testing is required to be bullet-proof, of course, but we leave explicit tests to the reader as an excuse to get their hands wet with equations.
Some examples:
>>> c = EllipticCurve(a=1,b=2)
>>> Point(c, 1, 2)
(1, 2)
>>> Point(c, 1, 1)
Traceback (most recent call last):
  [...]
Exception: The point (1, 1) is not on the given curve y^2 = x^3 + 1x + 2
Before we go ahead and implement addition and the related functions, we need to decide how we want to represent the ideal point $0$. We have two options. The first is to do everything in projective coordinates and define a whole system for doing projective algebra. Considering we only have one point to worry about, this seems like overkill (but could be fun). The second option, and the one we'll choose, is to have a special subclass of Point that represents the ideal point.
class Ideal(Point):
    def __init__(self, curve):
        self.curve = curve

    def __str__(self):
        return "Ideal"
Note the inheritance is denoted by the parenthetical (Point) in the first line. Each function we define on a Point will require a 1-2 line overriding function in this subclass, so we will only need a small amount of extra bookkeeping. For example, negation is quite easy.
class Point(object):
    ...
    def __neg__(self):
        return Point(self.curve, self.x, -self.y)

class Ideal(Point):
    ...
    def __neg__(self):
        return self
Note that Python allows one to override the prefix-minus operation by defining __neg__ on a custom object. There are similar functions for addition (__add__), subtraction, and pretty much every built-in python operation. And of course addition is where things get more interesting. For the ideal point it’s trivial.
class Ideal(Point):
    ...
    def __add__(self, Q):
        return Q
Why does this make sense? Because (as we've said last time) the ideal point is the additive identity in the group structure of the curve. So by all of our analysis, $0 + Q = Q$, and the code is satisfyingly short.
For distinct points we have to follow the algorithm we used last time. Remember that the trick was to form the line $L$ passing through the two points being added, substitute that line for $y$ in the elliptic curve, and then figure out the coefficient of $x^2$ in the resulting polynomial. Then, using the two existing points, we could solve for the third root of the polynomial using Vieta's formula.

In order to do that, we need to analytically solve for the coefficient of the $x^2$ term of the equation $L^2 = x^3 + ax + b$. It's tedious, but straightforward. First, write $L = y_1 + m(x - x_1)$. The first step of expanding $L^2$ gives us

$y_1^2 + 2my_1(x - x_1) + m^2(x - x_1)^2$

And we notice that the only term containing an $x^2$ part is the last one. Expanding that gives us

$m^2x^2 - 2m^2x_1x + m^2x_1^2$

And again we can discard the parts that don't involve $x^2$. In other words, if we were to rewrite $L^2 = x^3 + ax + b$ as $0 = x^3 - m^2x^2 + \dots$, we'd expand all the terms and get something that looks like

$0 = x^3 - m^2x^2 + c_1x + c_0$

where $c_1, c_0$ are some constants that we don't need. Now using Vieta's formula and calling $x_3$ the third root we seek, we know that

$x_1 + x_2 + x_3 = m^2$

Which means that $x_3 = m^2 - x_2 - x_1$. Once we have $x_3$, we can get $y_3$ from the equation of the line $y = y_1 + m(x - x_1)$.
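As a quick numeric sanity check of these formulas (using the example curve and points that appear later in the post), we can compute the sum by hand:

```python
# Curve y^2 = x^3 - 2x + 4, with P = (3, 5) and Q = (-2, 0)
x1, y1 = 3, 5
x2, y2 = -2, 0

m = (y2 - y1) / (x2 - x1)  # slope of the secant line: (-5)/(-5) = 1
x3 = m * m - x2 - x1       # Vieta: third root of the cubic
y3 = m * (x3 - x1) + y1    # y-value of the line at x3

# The sum P + Q is the reflection of the third intersection point
print((x3, -y3))  # (0.0, -2.0)
```

This agrees with the REPL session later in the post, where P+Q prints (0.0, -2.0).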
Note that this only works if the two points we’re trying to add are different! The other two cases were if the points were the same or lying on a vertical line. These gotchas will manifest themselves as conditional branches of our add function.
class Point(object):
    ...
    def __add__(self, Q):
        if isinstance(Q, Ideal):
            return self

        x_1, y_1, x_2, y_2 = self.x, self.y, Q.x, Q.y

        if (x_1, y_1) == (x_2, y_2):
            # use the tangent method
            ...
        else:
            if x_1 == x_2:
                return Ideal(self.curve)  # vertical line

            # Using Vieta's formula for the sum of the roots
            m = (y_2 - y_1) / (x_2 - x_1)
            x_3 = m*m - x_2 - x_1
            y_3 = m*(x_3 - x_1) + y_1

            return Point(self.curve, x_3, -y_3)
First, we check if the two points are the same, in which case we use the tangent method (which we do next). Supposing the points are different, if their $x$ values are the same then the line is vertical and the third point is the ideal point. Otherwise, we use the formula we defined above. Note the subtle and crucial minus sign at the end! The point $(x_3, y_3)$ is the third point of intersection, but we still have to do the reflection to get the sum of the two points.
Now for the case when the points are actually the same. We'll call it $P = (x_1, y_1)$, and we're trying to find $P + P = 2P$. As per our algorithm, we compute the tangent line at $P$. In order to do this we need just a tiny bit of calculus. To find the slope of the tangent line we implicitly differentiate the equation $y^2 = x^3 + ax + b$ and get

$\frac{dy}{dx} = \frac{3x^2 + a}{2y}$

The only time we'd get a vertical line is when the denominator is zero (you can verify this by taking limits if you wish), and so $y = 0$ implies that $P + P = 0$ and we're done. The fact that this can ever happen for a nonzero $P$ should be surprising to any reader unfamiliar with groups! But without delving into a deep conversation about the different kinds of group structures out there, we'll have to settle for such nice surprises.

In the other case, when $y \neq 0$, we plug our $(x_1, y_1)$ values into the derivative and read off the slope $m$ as $(3x_1^2 + a) / (2y_1)$. Then using the same point-slope formula for a line, we get $L = y_1 + m(x - x_1)$, and we can use the same technique (and the same code!) from the first case to finish.
There is only one minor wrinkle we need to smooth out: can we be sure Vieta's formula works? In fact, the real problem is this: how do we know that $x_1$ is a double root of the resulting cubic? Well, this falls out again from that very abstract and powerful theorem of Bezout. There is a lot of technical algebraic geometry (and a very interesting but complicated notion of dimension) hiding behind the curtain here. But for our purposes it says that our tangent line intersects the elliptic curve with multiplicity 2, and this gives us a double root of the corresponding cubic.
And so in the addition function all we need to do is change the slope we’re using. This gives us a nice and short implementation
def __add__(self, Q):
    if isinstance(Q, Ideal):
        return self

    x_1, y_1, x_2, y_2 = self.x, self.y, Q.x, Q.y

    if (x_1, y_1) == (x_2, y_2):
        if y_1 == 0:
            return Ideal(self.curve)

        # slope of the tangent line
        m = (3 * x_1 * x_1 + self.curve.a) / (2 * y_1)
    else:
        if x_1 == x_2:
            return Ideal(self.curve)

        # slope of the secant line
        m = (y_2 - y_1) / (x_2 - x_1)

    x_3 = m*m - x_2 - x_1
    y_3 = m*(x_3 - x_1) + y_1

    return Point(self.curve, x_3, -y_3)
What’s interesting is how little the data of the curve comes into the picture. Nothing depends on $b$, and only one of the two cases depends on $a$. This is one reason the Weierstrass normal form is so useful, and it may bite us in the butt later in the few cases we don't have it (for special number fields).
Here are some examples.
>>> C = EllipticCurve(a=-2,b=4)
>>> P = Point(C, 3, 5)
>>> Q = Point(C, -2, 0)
>>> P+Q
(0.0, -2.0)
>>> Q+P
(0.0, -2.0)
>>> Q+Q
Ideal
>>> P+P
(0.25, 1.875)
>>> P+P+P
Traceback (most recent call last):
  ...
Exception: The point (-1.958677685950413, 0.6348610067618328) is not on the given curve y^2 = x^3 + -2x + 4!
>>> x = -1.958677685950413
>>> y = 0.6348610067618328
>>> y*y - x*x*x + 2*x - 4
-3.9968028886505635e-15
And so we crash headfirst into our first floating point arithmetic issue. We’ll vanquish this monster more permanently later in this series (in fact, we’ll just scrap it entirely and define our own number system!), but for now here’s a quick fix:
>>> import fractions
>>> frac = fractions.Fraction
>>> C = EllipticCurve(a = frac(-2), b = frac(4))
>>> P = Point(C, frac(3), frac(5))
>>> P+P+P
(Fraction(-237, 121), Fraction(845, 1331))
Now that we have addition and negation, the rest of the class is just window dressing. For example, we want to be able to use the subtraction symbol, and so we need to implement __sub__
def __sub__(self, Q): return self + -Q
Note that because the Ideal point is a subclass of Point, it inherits all of these special functions while it only needs to override __add__ and __neg__. Thank you, polymorphism! The last function we want is a scaling function, which efficiently adds a point to itself $n$ times.
class Point(object):
    ...
    def __mul__(self, n):
        if not isinstance(n, int):
            raise Exception("Can't scale a point by something which isn't an int!")
        else:
            if n < 0:
                return -self * -n
            if n == 0:
                return Ideal(self.curve)
            else:
                Q = self
                R = self if n & 1 == 1 else Ideal(self.curve)

                i = 2
                while i <= n:
                    Q = Q + Q

                    if n & i == i:
                        R = Q + R

                    i = i << 1

                return R

    def __rmul__(self, n):
        return self * n

class Ideal(Point):
    ...
    def __mul__(self, n):
        if not isinstance(n, int):
            raise Exception("Can't scale a point by something which isn't an int!")
        else:
            return self
The scaling function allows us to quickly compute $nP = P + P + \dots + P$ ($n$ times). Indeed, the fact that we can do this more efficiently than performing $n$ additions is what makes elliptic curve cryptography work. We'll take a deeper look at this in the next post, but for now let's just say what the algorithm is doing.

Given a number written in binary as $n = b_kb_{k-1} \dots b_1b_0$, we can write $nP$ as

$b_0 P + b_1 2P + b_2 4P + \dots + b_k 2^k P$

The advantage of this is that we can compute each of the $2^iP$ iteratively using only $k$ additions, by multiplying by 2 (adding something to itself) $i$ times. Since the number of bits in $n$ is $k = O(\log n)$, we're getting a huge improvement over $n$ additions.

The algorithm is given above in code, but it's a simple bit-shifting trick. Just have $i$ be some power of two, shifted by one at the end of every loop. Then start with $Q_1$ being $P$, and replace $Q_{2i} = Q_i + Q_i$, and in typical programming fashion we drop the indices and overwrite the variable binding at each step (Q = Q+Q). Finally, we have a variable $R$ to which $Q_i$ is added when the $i$-th bit of $n$ is a 1 (and ignored when it's 0). The rest is bookkeeping.
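To isolate the bit trick from the elliptic-curve bookkeeping, here is the same loop computing plain integer multiplication n * x with O(log n) additions (my own illustrative version, not from the post):

```python
def scale(x, n):
    # double-and-add: same structure as Point.__mul__, but over the integers,
    # where "adding a point to itself" is just x + x
    Q = x
    R = x if n & 1 == 1 else 0  # 0 plays the role of the ideal point
    i = 2
    while i <= n:
        Q = Q + Q          # Q is now i * x, for i a power of two
        if n & i == i:     # the bit of n at position i is set
            R = Q + R
        i = i << 1
    return R

print(scale(7, 20))  # 140, with about 5 additions instead of 20
print(scale(13, 1))  # 13
```

Swapping integer addition back for point addition recovers the __mul__ method above exactly.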
Note that __mul__ only allows us to write something like P * n, but the standard notation for scaling is n * P. This is what __rmul__ allows us to do.
We could add many other helper functions, such as ones to allow us to treat points as if they were lists, checking for equality of points, comparison functions to allow one to sort a list of points in lex order, or a function to transform points into more standard types like tuples and lists. We have done a few of these that you can see if you visit the code repository, but we’ll leave flushing out the class as an exercise to the reader.
Some examples:
>>> import fractions
>>> frac = fractions.Fraction
>>> C = EllipticCurve(a = frac(-2), b = frac(4))
>>> P = Point(C, frac(3), frac(5))
>>> Q = Point(C, frac(-2), frac(0))
>>> P-Q
(Fraction(0, 1), Fraction(-2, 1))
>>> P+P+P+P+P
(Fraction(2312883, 1142761), Fraction(-3507297955, 1221611509))
>>> 5*P
(Fraction(2312883, 1142761), Fraction(-3507297955, 1221611509))
>>> Q - 3*P
(Fraction(240, 1), Fraction(3718, 1))
>>> -20*P
(Fraction(872171688955240345797378940145384578112856996417727644408306502486841054959621893457430066791656001, 520783120481946829397143140761792686044102902921369189488390484560995418035368116532220330470490000), Fraction(-27483290931268103431471546265260141280423344817266158619907625209686954671299076160289194864753864983185162878307166869927581148168092234359162702751, 11884621345605454720092065232176302286055268099954516777276277410691669963302621761108166472206145876157873100626715793555129780028801183525093000000))
As one can see, the precision gets very large very quickly. One thing we’ll do to avoid such large numbers (but hopefully not sacrifice security) is to work in finite fields, the simplest version of which is to compute modulo some prime.
So now we have a concrete understanding of the algorithm for adding points on elliptic curves, and a working Python program to do this for rational numbers or floating point numbers (if we want to deal with precision issues). Next time we’ll continue this train of thought and upgrade our program (with very little work!) to work over other simple number fields. Then we’ll delve into the cryptographic issues, and talk about how one might encode messages on a curve and use algebraic operations to encode their messages.
Until then!
Do I really need to import namespaces into views?
Depending on your project size and complexity, you may not have ever needed access to any non-standard namespaces in your views - particularly if you are a fan of the var keyword. However, as soon as you start writing your own HTML helpers or using resource files, you will almost certainly find that it becomes essential.
Of course, there are workarounds. Some of those that I have seen (in companies that will remain nameless!) include using the full type name throughout the view (e.g. CompanyName.ProjectName.Resources.ErrorMessages.LoginFailure) and using an incorrect namespace that is already available globally such as System.Web.Mvc. Obviously both of these workarounds are pretty horrible but without the correct knowledge it is amazing the hacks that some developers resort to!
To avoid such ugliness and keep your code clean, read on...
Importing namespaces in MVC2 and MVC3 with the ASPX View Engine
If you are using the standard ASPX view engine, then regardless of the version of MVC, the mechanism for importing namespaces is the same. To make a namespace available to a single view, you use the following directive:
<%@ Import Namespace="CompanyName.ProjectName.Resources" %>
This is sufficient in some situations, but in the case of say a resources assembly (for localisation in a multi-language application), pretty much every view will require access to this namespace. We do not want to clutter up every view with this statement. What we need is a global namespace available to ALL views.
Because the ASPX view engine is essentially just ASP.NET WebForms, the way to add a global namespace is exactly the same as you would do with a non-MVC WebForm Application. Just open up your web.config and add the namespace under system.web/pages/namespaces:
<system.web>
  ...
  <pages>
    <namespaces>
      <add namespace="System.Web.Helpers" />
      <add namespace="System.Web.Mvc" />
      <add namespace="System.Web.Mvc.Ajax" />
      <add namespace="System.Web.Mvc.Html" />
      <add namespace="System.Web.Routing" />
      <add namespace="System.Web.WebPages"/>
      <add namespace="CompanyName.ProjectName.Resources"/>
    </namespaces>
  </pages>
</system.web>
Importing namespaces in MVC3 with the Razor View Engine
The new MVC3 Razor view engine has a completely different syntax to the earlier ASPX one and does not use the <%@ style directives that are so familiar to ASP.NET WebForm developers. The good news is that the syntax is much cleaner. To import a namespace into a single Razor view, we use:
@using CompanyName.ProjectName.Resources
Very clean I think you will agree, but as we have already mentioned, if all or most of your views need access to certain namespaces, you do not want to be forced to add this statement to every view. Just as with the built-in HTML helpers, you want every view, partial view and template to have immediate access to your custom helpers, resources etc., without any messing about with using statements. We need global namespaces.
In an ideal world, this would work the same way as with WebForms, but the Razor view engine is not part of core ASP.NET and as such, we cannot use ASP.NET specific configuration sections. Instead, the MVC team have added their own configuration section, but to find it we must look inside the second web.config that is found in the Views folder of your MVC application:
<system.web.webPages.razor>
  ...
  <pages pageBaseType="System.Web.Mvc.WebViewPage">
    <namespaces>
      ...
      <add namespace="CompanyName.ProjectName.Resources"/>
    </namespaces>
  </pages>
</system.web.webPages.razor>
If you have issues with types not being found even though you have added them to the global namespaces list, you may find that closing and reopening the view (or even the solution) should fix the issue.
What about the Global.asax?
If you worked with beta releases of MVC3, you may have come across an alternative method of registering global namespaces:
protected void Application_Start()
{
    ...
    CodeGeneratorSettings.AddGlobalImport("CompanyName.ProjectName.Resources");
}
Alas, this is no more. This code-based method of registration was removed from the final release of MVC3 in favour of the system.web.webPages.razor config section that we have discussed above.
Conclusion
Both ASPX and Razor MVC view engines allow you to import a namespace into a single view or globally, so that the namespace is available to all views, partial views and templates. Both mechanisms are useful for different scenarios depending on how many views will need access to the types in the imported namespace. For site-wide types such as resource files, global namespaces are essential.
The syntax for each view engine is slightly different, as is the method for creating global namespaces, but in either case it is very simple. Whilst there are hacks available for those who are not familiar with namespace importing, for anyone who has read this article, there is no excuse :-)
http://www.devtrends.co.uk/blog/importing-namespaces-into-razor-views-in-asp.net-mvc-3
Today we have a pretty quick and easy topic, very much related to our previous item about keeping accessibility as low as possible. Today's topic is about using accessor methods instead of providing public access to fields.
The core of this chapter comes down to resisting the urge to make the fields of a class publicly accessible. By making fields accessible we surrender the encapsulation of the class and all the benefits it brings: you can't change the representation of the data, enforce invariants, or perform other actions when a field is accessed. Many hard-core object-oriented programmers will say that all fields should have accessors and that none should be accessible outside of the class, but Effective Java disagrees in some cases. While it agrees that this should be the case for public classes, it suggests that accessors may be unnecessary for package-private and private nested classes. The main reason for this is that you can avoid the visual clutter while still keeping things safe, since the blast radius is small when changes need to be made. It's up to you and your organization whether you agree (below I will share a way to decrease the clutter).
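As a sketch of what accessors buy you (the Temperature class and its bounds check are hypothetical, not from the book):

```java
// Hypothetical example: the field stays private, so the class can
// enforce an invariant and later change its internal representation.
public class Temperature {
    private double celsius; // internal representation, free to change later

    public double getCelsius() {
        return celsius;
    }

    public void setCelsius(double value) {
        // An invariant a public field could never enforce:
        if (value < -273.15) {
            throw new IllegalArgumentException("Below absolute zero: " + value);
        }
        this.celsius = value;
    }
}
```

With a public celsius field, nothing would stop a caller from writing t.celsius = -500; with the accessor, the invariant holds for every caller.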
What about other exceptions? Exposing constant values from a class can be acceptable in some cases. There are still trade-offs: for example, you cannot change the internal representation of the value, nor can you perform auxiliary actions when the data is accessed. You can, however, still enforce invariants, since a constant's value never varies.
Finally, how can we lessen the visual clutter of accessors? As pitched numerous times in this blog series, Lombok again comes to the rescue. Lombok has the @Getter and @Setter annotations that do just what they sound like: provide getters and setters. This keeps the clutter in your code very low, and you still retain the ability to implement a method yourself later to enforce invariants, perform auxiliary actions, etc.
That's it for this chapter. It's pretty straightforward and simply allows you to keep control of your classes. With modern tooling it doesn't even create much clutter.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kylec32/effective-java-tuesday-in-public-classes-use-accessors-not-public-fields-3hm2
package: the library unit
Posted on March 1st, 2001
import java.util.*;
This brings in the entire utility library that’s part of the standard Java distribution. Since Vector is in java.util, you can now either specify the full name java.util.Vector (which you can do without the import statement), or you can simply say Vector (because of the import).
If you want to bring in a single class, you can name that class in the import statement
import java.util.Vector;
Now you can use Vector in your code without qualification. Keep in mind that the compiler enforces structure on compilation units: there can be only one public class in each compilation unit, and its name must match the name of the file (Java 1.1 also adds the jar utility for bundling compiled classes). The Java interpreter is responsible for finding, loading and interpreting these files. [23] To say that a compilation unit belongs to a particular library, you use the package keyword, and the package statement must appear as the first non-comment code in the file.
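A minimal sketch of the two options, using today's generic syntax (the FullName class name is just for illustration): with the fully qualified name, no import statement is needed at all.

```java
// No import statement: the fully qualified name tells the compiler
// exactly where to find the class.
public class FullName {
    public static void main(String[] args) {
        java.util.Vector<String> v = new java.util.Vector<>();
        v.add("hello");
        System.out.println(v.size());
    }
}
```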
Creating unique package names

Because packages collect class files into subdirectories, they solve part of the problem of clutter, but the package names themselves must still be globally unique. The convention suggested by the Java designers is to use your Internet domain name in reverse, since domain names are guaranteed to be unique (they are assigned by InterNIC, [24] who controls their assignment). My domain name is BruceEckel.com, so reversed this becomes com.bruceeckel. (By convention, the entire package name is lowercase.) I can further subdivide this by deciding that I want to create a library named util, so I'll end up with a package name:
package com.bruceeckel.util;
Now this package name can be used as an umbrella name space for the following two files:
//: Vector.java // Creating a package package com.bruceeckel.util; public class Vector { public Vector() { System.out.println( "com.bruceeckel.util.Vector"); } } ///:~
When you create your own packages, you’ll discover that the package statement must be the first non-comment code in the file. The second file looks much the same:
//: List.java // Creating a package package com.bruceeckel.util; public class List { public List() { System.out.println( "com.bruceeckel.util.List"); } } ///:~
Both of these files are placed in the subdirectory on my system:
C:\DOC\JavaT\com\bruceeckel\util
If you walk back through this path, you can see the package name com.bruceeckel.util; the beginning of the path is taken care of by the CLASSPATH environment variable. Here is a compilation unit that uses the library (see page 97 if you have trouble executing this program):
//: LibTest.java // Uses the library package c05; import com.bruceeckel.util.*; public class LibTest { public static void main(String[] args) { Vector v = new Vector(); List l = new List(); } } ///:~
When the compiler encounters the import statement, it begins searching at the directories specified by CLASSPATH, looking for subdirectory com\bruceeckel\util, then seeking the compiled files of the appropriate names (Vector.class for Vector and List.class for List). Note that both the classes and the desired methods in Vector and List must be public.

Automatic compilation
The first time you create an object of an imported class (or you access a static member of a class), the compiler will hunt for the .class file of the same name (so if you’re creating an object of class X, it looks for X.class) in the appropriate directory. If it finds only X.class , that’s what it must use. However, if it also finds an X.java in the same directory, the compiler will compare the date stamp on the two files, and if X.java is more recent than X.class, it will automatically recompile X.java to generate an up-to-date X.class.
If a class is not in a .java file of the same name as that class, this behavior will not occur for that class.

Collisions
What happens if two libraries are imported via * and they include the same names? For example, suppose a program does this:
import com.bruceeckel.util.*; import java.util.*;
Since java.util.* also contains a Vector class, this causes a potential collision. However, as long as the collision doesn't actually occur, everything is fine. If you now try to create a Vector, though, the compiler cannot tell which class you mean, so you must be explicit about it: java.util.Vector v = new java.util.Vector();
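The same kind of collision exists inside the standard library itself: java.util and java.awt both define a List. A sketch (not from the book) of resolving it with an explicit, fully qualified name:

```java
import java.util.*;
import java.awt.*; // java.awt also contains a List

public class Collide {
    public static void main(String[] args) {
        // "List names" alone would be ambiguous here, so qualify it:
        java.util.List<String> names = new ArrayList<>();
        names.add("first");
        System.out.println(names.size());
    }
}
```

ArrayList needs no qualification because only java.util defines it; the ambiguity exists only for names present in both imported packages.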
A custom tool library
With this knowledge, you can now create your own libraries of tools to reduce or eliminate duplicate code. Consider, for example, creating an alias for System.out.println( ) to reduce typing. This can be part of a package called tools:
//: P.java // The P.rint & P.rintln shorthand package com.bruceeckel.tools; public class P { public static void rint(Object obj) { System.out.print(obj); } public static void rint(String s) { System.out.print(s); } public static void rint(char[] s) { System.out.print(s); } public static void rint(char c) { System.out.print(c); } public static void rint(int i) { System.out.print(i); } public static void rint(long l) { System.out.print(l); } public static void rint(float f) { System.out.print(f); } public static void rint(double d) { System.out.print(d); } public static void rint(boolean b) { System.out.print(b); } public static void rintln() { System.out.println(); } public static void rintln(Object obj) { System.out.println(obj); } public static void rintln(String s) { System.out.println(s); } public static void rintln(char[] s) { System.out.println(s); } public static void rintln(char c) { System.out.println(c); } public static void rintln(int i) { System.out.println(i); } public static void rintln(long l) { System.out.println(l); } public static void rintln(float f) { System.out.println(f); } public static void rintln(double d) { System.out.println(d); } public static void rintln(boolean b) { System.out.println(b); } } ///:~
All the different data types can now be printed out:
//: ToolTest.java // Uses the tools library import com.bruceeckel.tools.*; public class ToolTest { public static void main(String[] args) { P.rintln("Available from now on!"); } } ///:~
So from now on, whenever you come up with a useful new utility, you can add it to the tools directory. (Or to your own personal util or tools directory.)

Classpath pitfall
The P.java file brought up an interesting pitfall. Especially with early implementations of Java, setting the classpath correctly is generally quite a headache. During the development of this book, the P.java file was introduced and seemed to work fine, but at some point it began breaking. For a long time I was certain that this was the fault of one implementation of Java or another, but finally I discovered that at one point I had introduced a program ( CodePackager.java, shown in Chapter 17) that used a different class P. Because it was used as a tool, it was sometimes placed in the classpath, and other times it wasn’t. When it was, the P in CodePackager.java was found first by Java when executing a program in which it was looking for the class in com.bruceeckel.tools, and the compiler would say that a particular method couldn’t be found. This was frustrating because you can see the method in the above class P and no further diagnostics were reported to give you a clue that it was finding a completely different class. (That wasn’t even public.)
At first this could seem like a compiler bug, but if you look at the import statement it says only “here’s where you might find P.” However, the compiler is supposed to look anywhere in its classpath, so if it finds a P there it will use it, and if it finds the “wrong” one first during a search then it will stop looking. This is slightly different from the case described on page 196 because there the offending classes were both in packages, and here there was a P that was not in a package, but could still be found during a normal classpath search.
If you’re having an experience like this, check to make sure that there’s only one class of each name anywhere in your classpath.
Using imports to change behavior

In Chapter 9, you'll learn about a more sophisticated tool for dealing with errors called exception handling, but the perr( ) method will work fine in the meantime.
When you want to use this class, you add a line in your program:
import com.bruceeckel.tools.debug.*;
To remove the assertions so you can ship the code, a second Assert class is created, but in a different package:
Now, by changing the package that you import, you switch your code from the version that uses assertions to one in which the assertions are compiled out. Here's an example:
//: TestAssert.java // Demonstrating the assertion tool package c05; import com.bruceeckel.tools.debug.*; public class TestAssert { public static void main(String[] args) { Assert.is_true((2 + 2) == 5); Assert.is_false((1 + 1) == 2); } } ///:~
Package caveat.
[23] There’s nothing in Java that forces the use of an interpreter. There exist native-code Java compilers that generate a single executable file.
http://www.codeguru.com/java/tij/tij0057.shtml
JavaScript is an open, widely supported programming language designed for creating web-centric applications. It is lightweight and interpreted, and it integrates directly with HTML, which makes it easy to implement JavaScript in web applications.
This article provides you with a comprehensive list of common JavaScript interview questions that often come up in interviews with great answers to them. It will also help you understand the fundamental concepts of JavaScript.
Beginner JavaScript Interview Questions
1. What do you understand about JavaScript?
Fig: JavaScript Logo
JavaScript is a popular web scripting language and is used for client-side and server-side development. The JavaScript code can be inserted into HTML pages that can be understood and executed by web browsers while also supporting object-oriented programming abilities.
2. What’s the difference between JavaScript and Java?
3. What are the various data types that exist in JavaScript?
These are the different types of data that JavaScript supports:
- Boolean - For true and false values
- Null - For empty or unknown values
- Undefined - For variables that are only declared and not defined or initialized
- Number - For integer and floating-point numbers
- String - For characters and alphanumeric values
- Object - For collections or complex values
- Symbols - For unique identifiers for objects
4. What are the features of JavaScript?
These are the features of JavaScript:
- Lightweight, interpreted programming language
- Cross-platform compatible
- Open-source
- Object-oriented
- Integration with other backend and frontend technologies
- Used especially for the development of network-based applications
5. What are the advantages of JavaScript over other web technologies?
These are the advantages of JavaScript:
Enhanced Interaction
JavaScript adds interaction to otherwise static web pages and makes them react to users’ inputs.
Quick Feedback
There is no need for a web page to reload when running JavaScript. For example, form input validation.
Rich User Interface
JavaScript helps in making the UI of web applications look and feel much better.
Frameworks
JavaScript has countless frameworks and libraries that are extensively used for developing web applications and games of all kinds.
6. How do you create an object in JavaScript?
Since JavaScript is essentially an object-oriented scripting language, it supports and encourages the usage of objects while developing web applications.
const student = {
name: 'John',
age: 17
}
7. How do you create an array in JavaScript?
Here is a very simple way of creating arrays in JavaScript using the array literal:
var a = [];
var b = ['a', 'b', 'c', 'd', 'e'];
8. What are some of the built-in methods in JavaScript?
9. What are the scopes of a variable in JavaScript?
The scope of a variable implies where the variable has been declared or defined in a JavaScript program. There are two scopes of a variable:
Global Scope
Global variables, having global scope are available everywhere in a JavaScript code.
Local Scope
Local variables are accessible only within a function in which they are defined.
10. What is the ‘this’ keyword in JavaScript?
The ‘this’ keyword in JavaScript refers to the currently calling object. It is commonly used in constructors to assign values to object properties.
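As a minimal sketch (the Student constructor here is hypothetical, just for illustration), this inside a constructor refers to the object being created:

```javascript
// `this` refers to the new object being constructed.
function Student(name, age) {
  this.name = name;
  this.age = age;
}

const s = new Student("John", 17);
console.log(s.name); // John
console.log(s.age);  // 17
```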
11. What are the conventions of naming a variable in JavaScript?
Following are the naming conventions for a variable in JavaScript:
- Variable names cannot be reserved keywords. For example, var, let, const, etc.
- Variable names cannot begin with a numeric value. They must only begin with a letter or an underscore character.
- Variable names are case-sensitive.
12. What is Callback in JavaScript?
In JavaScript, functions are objects and therefore, functions can take other functions as arguments and can also be returned by other functions.
Fig: Callback function
A callback is a JavaScript function that is passed to another function as an argument or a parameter. This function is to be executed whenever the function that it is passed to gets executed.
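A minimal sketch of the mechanism (greet and its message are made up for illustration): greet builds a message and then hands it to whatever callback it was given.

```javascript
// The callback is passed as an argument and invoked by greet itself.
function greet(name, callback) {
  const message = "Hello, " + name;
  callback(message);
}

greet("World", function (message) {
  console.log(message); // Hello, World
});
```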
13. How do you debug a JavaScript code?
All modern web browsers like Chrome, Firefox, etc. have an inbuilt debugger that can be accessed anytime by pressing the relevant key, usually the F12 key. There are several features available to users in the debugging tools.
We can also debug a JavaScript code inside a code editor that we use to develop a JavaScript application—for example, Visual Studio Code, Atom, Sublime Text, etc.
14. What is the difference between Function declaration and Function expression?
15. What are the ways of adding JavaScript code in an HTML file?
There are primarily two ways of embedding JavaScript code:
- We can write JavaScript code within the script tag in the same HTML file; this is suitable when we need just a few lines of scripting within a web page.
- We can import a JavaScript source file into an HTML document; this adds all scripting capabilities to a web page without cluttering the code.
Intermediate JavaScript Interview Questions
16. What do you understand about cookies?
Fig: Browser cookies
A cookie is generally a small data that is sent from a website and stored on the user’s machine by a web browser that was used to access the website. Cookies are used to remember information for later use and also to record the browsing activity on a website.
17. How would you create a cookie?
The simplest way of creating a cookie using JavaScript is as below:
document.cookie = "key1 = value1; key2 = value2; expires = date";
18. How would you read a cookie?
Reading a cookie with JavaScript is also very simple: the document.cookie string contains all cookies visible to the page. It holds a list of name=value pairs separated by semicolons, where 'name' is the name of a cookie and 'value' is its value, and we can use the split() method to break it into individual keys and values.
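A sketch of that parsing step, using a hard-coded string in place of document.cookie so that it also runs outside a browser:

```javascript
// document.cookie returns a string shaped like this one:
const cookieString = "key1=value1; key2=value2";

// Split on "; " to get each pair, then on "=" to get key and value.
const cookies = {};
for (const pair of cookieString.split("; ")) {
  const [key, value] = pair.split("=");
  cookies[key] = value;
}

console.log(cookies.key1); // value1
console.log(cookies.key2); // value2
```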
19. How would you delete a cookie?
To delete a cookie, we can just set an expiration date and time. Specifying the correct path of the cookie that we want to delete is a good practice since some browsers won’t allow the deletion of cookies unless there is a clear path that tells which cookie to delete from the user’s machine.
function delete_cookie(name) {
document.cookie = name + "=; Path=/; Expires=Thu, 01 Jan 1970 00:00:01 GMT;";
}
20. What’s the difference between let and var?
Both let and var are used for variable declarations in JavaScript. The key difference is scope: a variable declared with var is scoped to the enclosing function, while a variable declared with let is scoped to the enclosing block.
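A short sketch of the difference:

```javascript
function scopes() {
  if (true) {
    var a = 1; // function-scoped: visible in all of scopes()
    let b = 2; // block-scoped: visible only inside this if block
  }
  console.log(a);        // 1
  console.log(typeof b); // undefined (b does not exist out here)
}
scopes();
```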
21. What are Closures in JavaScript?
Closures provide a better and more concise way of writing JavaScript code. A closure is created whenever a function accesses a variable that is defined outside its own scope; the function keeps a reference to that variable even after the outer function has returned.
function hello(name) {
var message = "hello " + name;
return function hello() {
console.log(message);
};
}
//generate closure
var helloWorld = hello("World");
//use closure
helloWorld();
22. What are the arrow functions in JavaScript?
Arrow functions are a short and concise way of writing functions in JavaScript. The general syntax of an arrow function is as below:
const helloWorld = () => {
console.log("hello world!");
};
23. What are the different ways an HTML element can be accessed in a JavaScript code?
Here are the ways an HTML element can be accessed in a JavaScript code:
- getElementsByClassName(‘classname’): Gets all the HTML elements that have the specified class name.
- getElementById(‘idname’): Gets an HTML element by its ID name.
- getElementsByTagName(‘tagname’): Gets all the HTML elements that have the specified tag name.
- querySelector(): Takes CSS style selector and returns the first selected HTML element.
24. What are the ways of defining a variable in JavaScript?
There are three ways of defining a variable in JavaScript:
Var
This is used to declare a variable and the value can be changed at a later time within the JavaScript code.
Const
We can also use this to declare/define a variable but the value, as the name implies, is constant throughout the JavaScript program and cannot be modified at a later time.
Let
This declares a block-scoped variable whose value can be changed at a later time within the JavaScript code.
25. What are Imports and Exports in JavaScript?
Imports and exports help in writing modular code for our JavaScript applications. With the help of imports and exports, we can split a JavaScript code into multiple files in a project. This greatly simplifies the application source code and encourages code readability.
calc.js
export const sqrt = Math.sqrt;
export function square(x) {
return x * x;
}
export function diag(x, y) {
return sqrt(square(x) + square(y));
}
This file exports two functions that calculate the squares and diagonal of the input respectively.
main.js
import { square, diag } from "calc";
console.log(square(4)); // 16
console.log(diag(4, 3)); // 5
Therefore, here we import those functions and pass input to those functions to calculate square and diagonal.
26. What is the difference between Document and Window in JavaScript?
27. What are some of the JavaScript frameworks and their uses?
JavaScript has a collection of many frameworks that aim towards fulfilling the different aspects of the web application development process. Some of the prominent frameworks are:
- React - Frontend development of a web application
- Angular - Frontend development of a web application
- Node - Backend or server-side development of a web application
28. What is the difference between Undefined and Undeclared in JavaScript?
29. What is the difference between Undefined and Null in JavaScript?
30. What is the difference between Session storage and Local storage?
Advanced JavaScript Interview Questions
31. How do you empty an array in JavaScript?
There are a few ways in which we can empty an array in JavaScript:
- By assigning array length to 0:
var arr = [1, 2, 3, 4];
arr.length = 0;
- By assigning an empty array:
var arr = [1, 2, 3, 4];
arr = [];
- By popping the elements of the array:
var arr = [1, 2, 3, 4];
while (arr.length > 0) {
arr.pop();
}
- By using the splice array function:
var arr = [1, 2, 3, 4];
arr.splice(0, arr.length);
32. What is the difference between Event Capturing and Event Bubbling?
33. What is the Strict mode in JavaScript?
Strict mode in JavaScript introduces more stringent error-checking in a JavaScript code.
- While in Strict mode, all variables have to be declared explicitly, values cannot be assigned to a read-only property, etc.
- We can enable strict mode by adding ‘use strict’ at the beginning of a JavaScript code, or within a certain segment of code.
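A minimal sketch (strictDemo is a made-up helper): the same assignment that would silently create a global variable in sloppy mode throws under 'use strict'.

```javascript
function strictDemo() {
  "use strict";
  try {
    undeclaredVariable = 42; // no declaration: ReferenceError in strict mode
    return false;
  } catch (e) {
    return e instanceof ReferenceError;
  }
}

console.log(strictDemo()); // true
```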
34. What would be the output of the below JavaScript code?
var a = 10;
if (function abc(){})
{
a += typeof abc;
}
console.log(a);
The output of this JavaScript code will be 10undefined. The expression function abc(){} is a named function expression; a function object is truthy, so the if block executes. However, the name of a named function expression is visible only inside the function itself, so abc is not defined in the enclosing scope. As a result, typeof abc evaluates to "undefined", and appending it to the number 10 produces the string 10undefined.
35. Can you write a JavaScript code for adding new elements in a dynamic manner?
<script type="text/javascript">
function addNode() {
var newP = document.createElement("p");
var textNode = document.createTextNode(" This is a new text node");
newP.appendChild(textNode); document.getElementById("firstP").appendChild(newP);
}
</script>
36. What is the difference between Call and Apply?
37. What will be the output of the following code?
var Bar = function Foo()
{
return 11;
};
typeof Foo();
The output would be a reference error. Foo is the name of a named function expression, and that name is visible only inside the function body itself; in the enclosing scope the function can only be reached through Bar, so calling Foo() throws a ReferenceError.
38. What will be the output of the following code?
var Student = {
college: "abc",
};
var stud1 = Object.create(Student);
delete stud1.college;
console.log(stud1.college);
This is essentially a simple example of prototypal inheritance. The output will be ‘abc’: the delete operator removes only an object's own properties, and college lives on the prototype (the Student object) rather than on stud1 itself, so the delete has no effect and the property lookup falls back to the prototype chain.
39. How do you remove duplicates from a JavaScript array?
There are two ways in which we can remove duplicates from a JavaScript array:
By Using the Filter Method
The callback passed to the filter() method receives three arguments: the current element, the index of the current element, and the array itself. Keeping an element only when its index equals the index of its first occurrence removes the duplicates.
By Using the For Loop
An empty array is used for collecting the unique elements; each element is pushed into it only if it is not already present.
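A sketch of both approaches described above (the input array is arbitrary):

```javascript
const input = [1, 2, 2, 3, 1, 4];

// filter(): keep an element only at its first occurrence.
const unique1 = input.filter(function (el, index, arr) {
  return arr.indexOf(el) === index;
});

// for loop: push into a holding array only if not seen yet.
const unique2 = [];
for (const el of input) {
  if (!unique2.includes(el)) {
    unique2.push(el);
  }
}

console.log(unique1); // [ 1, 2, 3, 4 ]
console.log(unique2); // [ 1, 2, 3, 4 ]
```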
40. Can you draw a simple JavaScript DOM (Document Object Model)?
As you prepare for your upcoming job interview, we hope that these JavaScript Interview Questions have provided more insight into what types of questions you are likely to be asked.
Master the complete JavaScript fundamentals, jQuery, Ajax, and more with the Javascript Certification Training Course. Check out the course preview
Get Ahead of the Curve and Master JavaScript Today
Are you wondering how you can gain the skills necessary to take advantage of JavaScript’s immense popularity now that you are familiar with JavaScript Interview Questions? We have got your back! We offer a comprehensive JavaScript Training Course, which will help you become career-ready upon completion.
To learn more, check out our Youtube video that provides a quick introduction to JavaScript Interview Questions and helps in clearing doubts for your next JavaScript interview. If you’re an aspiring web and mobile developer, JavaScript training will broaden your skills and career horizons.
Do you have any questions for us? Please mention it in the comments section of this ‘JavaScript Interview Questions’ article and we'll have our experts answer it for you.
https://www.simplilearn.com/tutorials/javascript-tutorial/javascript-interview-questions?source=sl_frs_nav_user_clicks_on_next_tutorial
Working with Queues and Stacks
Introduction
Apart from Hashtables, queues and stacks are probably the most common Collection classes. With this article I will explain the ins and outs of queues and stacks.
Again with the Collections?
Yes. After you have seen how versatile the Collections Namespace is, you'll never again cringe with fear when you encounter it, and you will - many times!
What is a Queue?
As this MSDN article mentions: Queues are useful for storing messages in the order they were received for sequential processing. This class implements a queue as a circular array. Objects stored in a Queue are inserted at one end and removed from the other.
What is a Stack?
A Stack is the opposite of a Queue. Where a Queue is a FIFO (First In, First Out) list, a Stack represents a LIFO (Last In, First Out) list. The analogy is a stack of books: the first book you place on your desk will remain at the bottom, whereas the last book that you placed on the stack will most likely be used / discarded first.
MSDN explains a Stack as: simple last-in-first-out (LIFO) non-generic collection of objects
Sample Programs
If you still find these two classes hard to understand, I have worked out samples for you. These samples are in VB.NET and C#. You are welcome to follow the next few steps to create the project in your language of choice (VB.NET or C#).
Design
Our design is quite simple. It looks like the following picture:
Figure 1 - Our design
Coding
Add the necessary Namespace to your project.
VB.NET :
Imports System.Collections
C# :
using System.Collections;
Next, our class level variables:
VB.NET :
Private qMyQueue As New Queue 'Our Queue object Private sMyStack As New Stack 'Our Stack Object
C# :
private Queue qMyQueue = new Queue(); //Our Queue object private Stack sMyStack = new Stack(); //Our Stack Object
Store values inside the Queue object.
VB.NET :
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click 'Store values inside Queue qMyQueue.Enqueue("first") qMyQueue.Enqueue("second") qMyQueue.Enqueue("third") End Sub
C# :
private void Button1_Click(object sender, EventArgs e) { //Store values inside Queue qMyQueue.Enqueue("first"); qMyQueue.Enqueue("second"); qMyQueue.Enqueue("third"); }
Populate the Stack object:
VB.NET :
Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click 'Populate Stack Object sMyStack.Push("first") sMyStack.Push("second") sMyStack.Push("third") End Sub
C# :
private void Button2_Click(object sender, EventArgs e) { //Populate Stack Object sMyStack.Push("first"); sMyStack.Push("second"); sMyStack.Push("third"); }
This is where it gets interesting! Once we have our lists set up, there are numerous ways in which we can access the items inside the lists. With the Queue object we can use its Dequeue method. This will in effect release the first object added to the list, for example.
VB.NET :
Console.WriteLine("(Dequeue) {0}", qMyQueue.Dequeue())
C# :
Console.WriteLine("(Dequeue) {0}", qMyQueue.Dequeue());
This would remove "first" from the list; the next time it is called, "second" will be removed, and so on.
With the Stack object we would need to use the Pop method.
VB.NET :
Console.WriteLine("(Pop){0}", sMyStack.Pop())
C# :
Console.WriteLine("(Pop){0}", sMyStack.Pop());
The above code for the Stack object would remove the last item in the list, so in our example, third, then second, then first.
We could also make use of a loop to iterate through the Queue and Stack completely:
VB.NET :
'Queue While qMyQueue.Count > 0 Dim obj As Object = qMyQueue.Dequeue Console.WriteLine("from Queue: {0}", obj) End While 'Stack While sMyStack.Count > 0 Dim obj As Object = sMyStack.Pop Console.WriteLine("from Stack: {0}", obj) End While
C# :
//Queue while (qMyQueue.Count > 0) { object obj = qMyQueue.Dequeue(); Console.WriteLine("from Queue: {0}", obj); } //Stack while (sMyStack.Count > 0) { object obj = sMyStack.Pop(); Console.WriteLine("from Stack: {0}", obj); }
The IEnumerator Interface
I decided to mention this Interface here as I am trying to teach you how to use the Collection classes properly. The IEnumerator Interface provides a way for us to enumerate, or iterate, or simply loop through a Collection class. It exposes the MoveNext method, which moves to the next item in the list, and the Current property, which returns the current item. We need the IEnumerable Interface so that the collection can expose its enumerator, which supports a simple iteration over a non-generic collection.
Let us add this functionality. Add the following procedure to your code.
VB.NET :
Public Sub DisplayValues(ByVal MyQueueCollection As IEnumerable, ByVal MyStackCollection As IEnumerable) Dim MyQueueEnumerator As IEnumerator = MyQueueCollection.GetEnumerator() 'Get list(s) to iterate through Dim MyStackEnumerator As IEnumerator = MyStackCollection.GetEnumerator() While MyQueueEnumerator.MoveNext() 'Move to next item ListBox1.Items.Add(MyQueueEnumerator.Current.ToString()) End While While MyStackEnumerator.MoveNext() ListBox2.Items.Add(MyStackEnumerator.Current.ToString()) 'Display current item End While End Sub
C# :
public void DisplayValues(IEnumerable MyQueueCollection, IEnumerable MyStackCollection) { IEnumerator MyQueueEnumerator = MyQueueCollection.GetEnumerator(); //Get list(s) to iterate through IEnumerator MyStackEnumerator = MyStackCollection.GetEnumerator(); while (MyQueueEnumerator.MoveNext()) //Move to next item { ListBox1.Items.Add(MyQueueEnumerator.Current.ToString()); } while (MyStackEnumerator.MoveNext()) { ListBox2.Items.Add(MyStackEnumerator.Current.ToString()); //Display current item } }
Finally, add the call to this procedure inside Button3's Click event.
VB.NET :
Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click DisplayValues(qMyQueue, sMyStack) End Sub
C# :
private void Button3_Click(object sender, EventArgs e) { DisplayValues(qMyQueue, sMyStack); }
When the Show Values button is clicked, it would resemble Figure 2:
Figure 2 - Values of both lists
Not so complicated now is it? Hopefully not. I am attaching working samples of these exercises for you, so play around with them and experiment - that is one of the best ways to learn.
Conclusion
Thanks for reading this article. I hope you have enjoyed it and benefited from it. Do not be scared of Collection anymore, they're quite easy! Until next time, cheers!
Cringes
Posted by Hannes du Preez on 11/26/2012 03:13am
Hi Dominic. Thanks a lot for reading and your honest feedback! I am involved with many online forums as well as give programming classes to students. I also work with many professional programmers who always try to circumvent the collections namespaces - because they are scared of it. I can't tell you why they seem to be scared of it - perhaps not having the correct fundamentals of programming? Beats me. That is why I try to teach my students about things like this.
A question about this article - Posted by Dominic on 09/25/2012 05:06am
I'm curious to know who "cringe[s] with fear when [they] encounter" the Collections namespace? It's widely used and its members are basic data structures that are seen in any entry level CS classes. I'd like to understand what are the motivations to do such an article series? Did you have requests for it? Did you have visitor feedback saying your previous articles were too advanced? I'm really curious about this... Thanks
I cringe - Posted by Dam on 10/23/2012 09:04am
I didn't know the collections namespace existed; I've been programming in .NET for 10 years. This article has opened my eyes.
http://www.codeguru.com/columns/vb/working-with-queues-and-stacks.htm
An abstract class in C# is a class that cannot be instantiated directly and that may declare abstract methods - methods with no implementation. Since an abstract method has no body, you will have to implement the member function in a derived class.
What is the purpose? You might ask.
The purpose of an abstract class is to provide a common definition of a base class that multiple derived classes can share.
In a way, creating an abstract class is good for organizing code and versioning. If new members are needed in the abstract class declaration, you can always modify the relevant declaration without breaking existing code.
using System;

namespace Abstract
{
    public class Music
    {
        public virtual void Display(int i)
        {
            Console.WriteLine("Original implementation {0}", i);
        }
    }

    public abstract class Rock : Music
    {
        public abstract override void Display(int i);
    }

    public class Metal : Rock
    {
        public override void Display(int i)
        {
            Console.WriteLine("New implementation {0}", i);
        }
    }

    class Program
    {
        static void Main()
        {
            Metal metal = new Metal();
            int i = 10;
            metal.Display(i);
            Console.ReadKey();
        }
    }
}
Notice the Display() method in the abstract Rock class. As there is no implementation, the declaration of the abstract method ends with a semicolon instead of a block of code:

public abstract override void Display(int i);
An abstract method is implicitly virtual and has to be overridden in a concrete derived class. Also, you cannot create an instance of an abstract class.
https://codecrawl.com/2014/08/30/csharp-abstract-class/
Last post 05-17-2005 4:39 PM by mohan_nz. 9 replies.
I have a MasterPage which has a couple of buttons and some security methods which are common to all pages - say, on Page_Load you check with a method; let's call it ValidateUser(). Since it is used on all pages, I create it in the master page. Now I have another master page: say we have an Authentication folder containing a master page with headers related to authentication, and this one has a method CreateHeader(). This AuthMasterPage derives from the root master page. I create a content page inside the Authentication folder which has AuthMasterPage as its master page. Now when I try to access the CreateHeader() or ValidateUser() method inside this page, it cannot find them. But the AuthMasterPage can see the ValidateUser() method from the root master page. Does anyone have a solution or help in regard to this?
Also check out this post, and make sure you've not missed anything in the page using AuthMaster.
I understood what you were trying to say, but the problem is that I would have to repeat the methods in each and every page, and I have somewhere around 600+ pages. In Framework 1.1 I used an abstract base class which had these and some more methods, and the class extended System.Web.UI.Page. The web form would inherit from this base, and it was easy to work with. I am trying to achieve the same with master pages; if this works it will reduce lots of development and maintenance issues. The only issue with the previous implementation is that you will not be able to view the page in design mode, because it does not inherit directly from System.Web.UI.Page.
Anyway, thanks for the info; if you get any other result, let me know. I will keep this forum updated if there is any progress.
Cheers
Mohan, can't you define the ValidateUser method in your base class and have both the normal master and the auth master inherit from this class so that the method is available to both masters?
Hope that helps, Kashif
I believe I had tried this, but had some problem - I am not sure what it was. Let me try again and update you.
Can you provide the code you are using? Can you ensure that you define the MasterType directive on the page to enable strongly typed access to the master page's methods and properties?
Thanks!
Mohan,
I think your issue is the new compilation model in 2.0. ASPX pages and their partial code behinds are no longer visible because they get dynamically generated at compile time.
However, the 1.1 model still works. I think what you'll want to do is create a base page that uses either just code, or code and controls, but make sure to set it up as a 1.1-style code-behind class in your App_Code folder, rather than a partial class that ASP.NET creates dynamically.
Once you do that you can easily inherit from it.
You might find this discussion interesting as I fought this very issue just a few days ago:
+++ Rick ---
Base class which will have the common functions; I have just defined a click event and a GetStr method.
public class ProductBase : System.Web.UI.MasterPage
{
    public string GetStr()
    {
        return "hello";
    }

    protected virtual void Button1_Click(object sender, EventArgs e)
    {
        Response.Write("<BR>Master Btn Clicked<BR>");
    }
}
The master page in the root will have a button, and I have the click event for it defined in ProductBase.
AuthMasterPage:
<%@ Master ... %>
<asp:TextBox></asp:TextBox>
</asp:Content>
Content page in the Auth folder:

<%@ Page Language="C#" MasterPageFile="~/Authentication/AuthMasterPage.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %>
Let me know if anything is not clear or I have missed something.

Thanks,
Mohan
© 2009 Microsoft Corporation.
http://forums.asp.net/p/885302/925272.aspx
Setting Wallpapers in a Windows 8 Store App with VB
Introduction
Today we'll find out how we can change the wallpaper of a Windows 8 Store app. Hold on tight, we have a bumpy ride ahead. Let's get started.
Windows 8 Store Apps
As we know by now, and if you have followed my most recent articles (the last two months or so), Windows 8 Store apps differ quite a bit from desktop apps. While building today's app, you will see most of the differences concern timers and APIs, as well as serialization capabilities. If you are new to Windows 8 Store apps, I suggest you read this article.
What Can We Work With?
- XAML features, including XAML controls - I quote Homer Simpson: "doh!". Obviously we will need these, as this is a Windows 8 Store app.
- Background tasks - This we will use instead of Timer controls, which aren't available here.
- SystemParametersInfo API - Yes, it still does work.
- Asynchronous programming - We will use Async programming methods to accomplish our tasks asynchronously. Have a look at this article I wrote recently.
- Basic serialization - The reason why I included this point is: It is logical to assume we will need some sort of serialization here, as we will need to keep track of which paper was set and so on. I haven't gone into much detail here, honestly; perhaps in an updated version of this article (or a future article) I will.
We aren't able to work with the following features:
- Timer controls - these controls do not exist in the Windows 8 Store framework
- Registry - Windows 8 Store apps cannot access the registry. We can however use the following two APIs:
- Windows.Storage.ApplicationDataContainer
- Windows.Storage.ApplicationDataContainerSettings
Welcome to Windows 8 Store programming!
Our Project
Our project's aim is to change the wallpaper at different intervals. Because there is no timer control present, we will have to create a background task. This is not as simple as double-clicking on a control. We will have to create a separate project for our background task, register it, and make it work with our main project. This means that we will have two projects in one solution.
Another caveat is that we have limited resources at our disposal. We will only be allowed to access certain folders, which are known to the app, and read their contents asynchronously. Lastly, we do not have access to the registry, so saving info might be hard.
Let us create our first project. Give it a name of Wallpaper Genie 2013, and design it to look like Figure 1.
Figure 1 - Our Main form's design
Add another Page to your project through the Project menu, and design it to resemble Figure 2
Figure 2 - Settings page
Add the following Imports to your MainPage:
Imports Windows.Storage
Imports Windows.Storage.Streams
Imports Windows.UI.Xaml.Media.Imaging
Imports System.Runtime.InteropServices
Imports Windows.ApplicationModel.Background
Imports Windows.UI.Core
Imports Windows.UI.Core.CoreWindow
Imports System.Xml
Imports System.Runtime.Serialization
Imports System.IO
Here, you can have a detailed look at the available namespaces in Windows 8 Store apps and their uses.
Add the following code to your MainPage class:
Public arrFolderNames() As String
Public arrSubFolderNames() As String
Public arrFileNames() As String

Private PaperDuration As Integer

Private Async Sub btSelect_Click(sender As Object, e As RoutedEventArgs) Handles btSelect.Click
    lstFolders.Items.Clear()
    Dim RootFolder As StorageFolder = KnownFolders.PicturesLibrary
    Dim FolderList As IReadOnlyList(Of IStorageItem) = Await RootFolder.GetItemsAsync
    Dim Counter As Integer
    For Each folder In FolderList
        If TypeOf folder Is StorageFolder Then
            lstFolders.Items.Add(folder.Name)
            ReDim Preserve arrFolderNames(Counter)
            arrFolderNames(Counter) = folder.Path
            Counter += 1
        End If
    Next
End Sub

Private Async Sub lstFolders_SelectionChanged(sender As Object, e As SelectionChangedEventArgs) Handles lstFolders.SelectionChanged
    lstSubFolders.Items.Clear()
    Dim SubFolder As StorageFolder = Await StorageFolder.GetFolderFromPathAsync(arrFolderNames(lstFolders.SelectedIndex))
    Dim SubFolderList As IReadOnlyList(Of IStorageItem) = Await SubFolder.GetItemsAsync
    Dim Counter As Integer
    For Each Folder In SubFolderList
        If TypeOf Folder Is StorageFolder Then
            lstSubFolders.Items.Add(Folder.Name)
            ReDim Preserve arrSubFolderNames(Counter)
            arrSubFolderNames(Counter) = Folder.Path
            Counter += 1
        End If
    Next
End Sub

Private Async Sub lstSubFolders_SelectionChanged(sender As Object, e As SelectionChangedEventArgs) Handles lstSubFolders.SelectionChanged
    lstFiles.Items.Clear()
    Dim SubFolder As StorageFolder = Await StorageFolder.GetFolderFromPathAsync(arrSubFolderNames(lstSubFolders.SelectedIndex))
    Dim SubFolderList As IReadOnlyList(Of IStorageItem) = Await SubFolder.GetItemsAsync
    Dim Counter As Integer
    For Each File In SubFolderList
        If TypeOf File Is StorageFile Then
            lstFiles.Items.Add(File.Name)
            ReDim Preserve arrFileNames(Counter)
            arrFileNames(Counter) = File.Path
            Counter += 1
        End If
    Next
End Sub

Private Async Sub lstFiles_SelectionChanged(sender As Object, e As SelectionChangedEventArgs) Handles lstFiles.SelectionChanged
    Dim strPicLoc As String = arrFileNames(lstFiles.SelectedIndex)
    Dim File2 As StorageFile = Await StorageFile.GetFileFromPathAsync(strPicLoc)
    Dim src As New BitmapImage()
    src.SetSource(Await File2.OpenAsync(FileAccessMode.Read))
    imgWall.Source = src
    imgWall.Stretch = Stretch.Fill
End Sub
Here, we first obtain a list of all folders inside the Pictures library. Next, we obtain a list of subfolders within a selected folder inside the Pictures library. Lastly, we again asynchronously get a list of files inside the selected subfolder. What a mouthful! The last sub obtains the selected file and displays the picture inside an Image control. This has also changed (if you're a newbie to Windows 8 Store programming): it is not as simple as just supplying a source path for the picture (as in desktop apps). You have to open the picture and read its contents into the image control.
If you were to run your project now, you would be able to select folders, subfolders, and files, and it would look similar to Figure 3.
Figure 3 - Our program in action - isn't Alizee Jacotey just beautiful!!?
This is where things get interesting, and I will continue with the Mainpage code a bit later. We have to put each part of the puzzle first, before we can have a completed puzzle.
Add the next event to the MainPage:
Private Sub btSettings_Click(sender As Object, e As RoutedEventArgs) Handles btSettings.Click
    Dim spSettings As New Frame()
    spSettings.Navigate(GetType(BasicPage1))
    Window.Current.Content = spSettings
    Window.Current.Activate()
End Sub
The Settings button takes us to the Settings page (or Frame) and activates it.
Let's now have a look at the settings page.
Settings
Not much work here, as this page will solely be used to determine when we want the wallpaper to change. Let us add its code:
Private wpDuration As String

Private Sub btBack_Click(sender As Object, e As RoutedEventArgs) Handles btBack.Click
    Me.Frame.Navigate(GetType(MainPage), wpDuration)
End Sub

Private Sub rdOneMin_Checked(sender As Object, e As RoutedEventArgs) Handles rdOneMin.Checked
    wpDuration = "1M"
End Sub

Private Sub rdTenMin_Checked(sender As Object, e As RoutedEventArgs) Handles rdTenMin.Checked
    wpDuration = "10"
End Sub

Private Sub rdOneWeek_Checked(sender As Object, e As RoutedEventArgs) Handles rdOneWeek.Checked
    wpDuration = "1W"
End Sub

Private Sub rdOneDay_Checked(sender As Object, e As RoutedEventArgs) Handles rdOneDay.Checked
    wpDuration = "1D"
End Sub
End Class
There is not much happening here. All we have done is store whichever option was selected in a variable called wpDuration. We now need to transfer this variable back to our MainPage. We do this by overriding the MainPage's OnNavigatedTo event:
Protected Overrides Sub OnNavigatedTo(e As Navigation.NavigationEventArgs)
    MyBase.OnNavigatedTo(e)
    Dim pDur As String = TryCast(e.Parameter, String)
    Select Case pDur
        Case "1M"
            PaperDuration = 60
        Case "10"
            PaperDuration = 600
        Case "1D"
            PaperDuration = 86400
        Case "1W"
            PaperDuration = 604800
    End Select
End Sub
End Class
Once we are back at the MainPage, we determine which option buttons were selected by investigating the variable's contents. How did it know which variable to use? If you look at the settings page's Back click event, you will notice that we passed the wpDuration variable as a parameter. Inside the MainPage's OnNavigatedTo event we cast the parameter to a string, and voila - the two pages can now communicate with each other!
What is stored inside the PaperDuration variable is the number of seconds for each option: 60 for one minute, 600 for ten minutes, 86400 for once a day, and 604800 for once a week. As this program might run continuously (especially on a Windows 8 phone), this works quite nicely. Now, you may recall that we do not have any timers at our disposal. What we should do instead is make use of a background task and run it at each of these intervals, depending of course on which button was selected.
Adding a Background Task
Background on Background Tasks
Have a proper read through these articles:
- Being productive when your app is offscreen
- Being productive in the background – background tasks
- Guidelines for background tasks (Windows Store apps)
Sadly, all the code samples are in C# only (as always...).
Now that we have a decent understanding of the need for background tasks, we have to know how to create one.
Add a new project to your existing solution by clicking File, Add, Project. Select the Class Library project and give it a nice name. I have named mine GenieBackTask. Rename the default class name (Class1) to something more descriptive such as clsBack.vb. Your Solution Explorer should look like Figure 4.
Figure 4 - Solution Explorer
Add the following code to clsBack:
Imports Windows.ApplicationModel.Background
Imports Windows.Storage
Imports Windows.UI.Core
Imports Windows.UI.Core.CoreWindow
Imports System.Runtime.InteropServices
Imports System.Xml
Imports System.Runtime.Serialization

Namespace GenieBackTask
    Public NotInheritable Class clsBack
        Implements IBackgroundTask

        Public Async Sub Run(taskInstance As IBackgroundTaskInstance) Implements IBackgroundTask.Run
            Dim Deferral As BackgroundTaskDeferral = taskInstance.GetDeferral()
            Await UpdateUI()
            Deferral.Complete()
        End Sub
    End Class
End Namespace
Let us break this code down, piece by piece. Obviously, the first couple of lines are the namespaces that we need.
This part:
Namespace GenieBackTask
    Public NotInheritable Class clsBack
        Implements IBackgroundTask
Creates a namespace (for the project) and creates the class. You will notice that this class implements the IBackgroundTask interface. This interface provides a method to perform the work of a background task.
The next section:
Is the declaration of our API that will change the wallpapers.
Now things get tricky! The Run sub (which has to be included inside any Background task) is what will run when it has been triggered. The trigger we will set inside our MainPage class when we register the background task. The Run sub looks like:
Public Async Sub Run(taskInstance As IBackgroundTaskInstance) Implements IBackgroundTask.Run
    Dim Deferral As BackgroundTaskDeferral = taskInstance.GetDeferral()
    Await UpdateUI()
    Deferral.Complete()
End Sub
What happens here is the following:
- We again implement the IBackgroundTask interface, but this time its Run method.
- We obtain a Deferral object as the task will run Async.
- We call our method that will run.
- We complete the deferral.
Now the fun part! This...
...runs another Async function inside of it, and dispatches it to the current thread. We created a FileOpenPicker, which allows the user to select a file inside the PicturesLibrary. Obviously you can customize this further (as this is just an example) to locate the specific picture you need.
Finally, we apply the selected picture as a Wallpaper. Build this project now.
This is all we need for the background task. We now need to connect it to our main project, and then register and start it from there. Connect this task to the main project now by opening the Package.appxmanifest file, navigating to the Declarations tab, and entering GenieBackTask.clsBack in the Entry Point field, as displayed in Figure 5.
Figure 5 - Page Manifest
Now our main output project knows about the task. All that is left now is to register the task in code from our MainPage, and then we're done. Add the following code to the Mainpage class:
Private Sub btnSet_Click(sender As Object, e As RoutedEventArgs) Handles btnSet.Click
    Dim x = RegisterBackgroundTask("GenieBackTask.clsBack", "GenieBackTask", New TimeTrigger(PaperDuration, False))
End Sub

Public Shared Function RegisterBackgroundTask(TaskEntryPoint As String, TaskName As String, Trigger As IBackgroundTrigger) As BackgroundTaskRegistration
    For Each cur In BackgroundTaskRegistration.AllTasks
        If cur.Value.Name = TaskName Then
            Return DirectCast(cur.Value, BackgroundTaskRegistration)
        End If
    Next

    Dim Builder As New BackgroundTaskBuilder
    Builder.Name = TaskName
    Builder.TaskEntryPoint = TaskEntryPoint
    Builder.SetTrigger(Trigger)
    Dim task As BackgroundTaskRegistration = Builder.Register()
    Return task
End Function

Public Shared Sub UnregisterBackgroundTasks(TaskName As String)
    For Each cur In BackgroundTaskRegistration.AllTasks
        If cur.Value.Name = TaskName Then
            cur.Value.Unregister(True)
        End If
    Next
End Sub
In btnSet we register the background task by supplying the same entry point (as we did in the manifest), giving it a name, and specifying when this task should run (PaperDuration).
Lastly, we Unregister the task. This is needed so that it doesn't take up unnecessary resources when the app is closed.
Conclusion
This was tough, agreed. But things fall into place the more you work with a certain technology. My aim is just to guide you in the right direction, and I try to make your transition from normal desktop apps to Windows Store apps easier. I hope you have learned a thing or two today. Until next time, cheers!
http://www.codeguru.com/win_mobile/win_store_apps/setting-wallpapers-in-a-windows-8-store-app-with-vb.htm
jazs - Posted March 14, 2007

I am writing a script to run one of my programs that I run on a daily basis, and I am having problems with the $intResponse. The box that is supposed to pop up is a yes-or-no box. If you answer Yes then it is supposed to select Yes; if you select No then it is supposed to send an {ESC} keystroke. The good part is I have the {ESC} part working in the later $intResponse check. In the first $intResponse check the Yes works but not the No. If someone could help, it would be appreciated. Below is the code that I am using. Thanks in advance.

$intResponse = MsgBox(4, "Close The GL", "Close the General Ledger?")
if $intResponse = 2 then
    send("{ENTER}")
else
    send("{LEFT}")
    send("{ENTER}")
Endif
send("{ENTER}")
sleep(1000)
send("{ENTER}")
sleep(1000)
send("{ENTER}")
sleep(2000)
$intResponse = MsgBox(4, "Schedule Goodnight", "Schedule Goodnight?")
if $intResponse = 1 then
    send("{ENTER}")
else
    send("{ESCAPE}")
endif
sleep(1000)
send("{ENTER}")
https://www.autoitscript.com/forum/topic/42662-coding-problem/
How to print integer literals in binary or hex in haskell?
printBinary 5 => "0101"
printHex 5 => "05"
Which libraries/functions allow this?
I came across the Numeric module and its showIntAtBase function but have been unable to use it correctly.
> :t showIntAtBase
showIntAtBase :: (Integral a) => a -> (Int -> Char) -> a -> String -> String
The Numeric module includes several functions for showing an Integral type at various bases, including showIntAtBase. Here are some examples of use:
import Numeric (showHex, showIntAtBase)
import Data.Char (intToDigit)

putStrLn $ showHex 12 ""                    -- prints "c"
putStrLn $ showIntAtBase 2 intToDigit 12 "" -- prints "1100"
You could define your own recursive functions like:
You may also use printf of the printf package to format your output with c style format descriptors:
import Text.Printf

main = do
    let i = 65535 :: Int
    putStrLn $ printf "The value of %d in hex is: 0x%08x" i i
    putStrLn $ printf "The html color code would be: #%06X" i
    putStrLn $ printf "The value of %d in binary is: %b" i i
Output:
The value of 65535 in hex is: 0x0000ffff The html color code would be: #00FFFF The value of 65535 in binary is: 1111111111111111
Integer literals in PostScript can be either standard decimal literals or in the form base#number. base can be any decimal integer between 2 and 36; number can then use digits from 0 to base − 1. Digits above 9 are replaced by A through Z, and case does not matter.

123 % 123
8#1777 % 1023
If you import the Numeric and Data.Char modules, you can do this:
showIntAtBase 2 intToDigit 10 ""    => "1010"
showIntAtBase 16 intToDigit 1023 "" => "3ff"
This will work for any base up to 16, since this is all that intToDigit works for. The reason for the extra empty string argument in the examples above is that showIntAtBase returns a function of type ShowS, which will concatenate the display representation onto an existing string.
As of VB.NET 15 there is also support for binary literals:

Dim mask As Integer = &B00101010

You can also include underscores as digit separators to make the number more readable without changing the value:

Dim mask As Integer = &B0010_1010
Hex can be written with a 0x prefix and binary with a 0b prefix, e.g.:

> 0xff
255
> :set -XBinaryLiterals
> 0b11
3

Note that binary requires the BinaryLiterals extension.
Note that the print function always guarantees a format that's safe to use again in Haskell code, hence it puts strings in quotes and escapes all tricky characters, including newlines. If you simply want to dump a string to the terminal as-is, use putStrLn instead of print.
In computer science, an integer literal is a kind of literal for an integer whose value is represented directly in the source code; for example, in C++, 0x10ULL indicates the value 16 (because it is hexadecimal) as an unsigned long long. Haskell's Int is a fixed-width, machine-specific integer type with a minimum guaranteed range of −2^29 to 2^29 − 1. In practice, its range can be much larger: on the x86-64 version of the Glasgow Haskell Compiler, it can store any signed 64-bit integer.
- For anyone lazy like me who didn't scroll down, the printf example is much more concise and flexible, and can do other useful stuff like e.g. give a constant length string and all other printf features. Instead of above, just:
printf "%032b" 5
- @mozboz, printf in Haskell is more like a magic trick than a function to be used in serious code. The format string is parsed at runtime (which may produce runtime errors) and the whole mechanism is a bit slow.
- ` printf "The value of %d in hex is: 0x%08x" i i ` is ok because printf can be both
IO ()and
String
- Yes, indeed. I just wanted to make it obvious, that it can be used as a pure function that returns a String.
- Would it not be better to have b:(decToBIn a) at the end of the last line?
- Shouldn't 0 give back 0? decToBin' 0 = [0], maybe?
- I think you can use ("0123456789abcdef" !!) instead of (\n -> "0123456789abcdef" !! n).
- All the other answers ignore Data.Word. And let's get real here - once you do "bit hammering", e.g. using Data.Bits, you are more likely to want to see hex or binary numbers AND you are more likely not working with Int but much rather with Word8, Word16, ... +1
- Although it is a curious choice he made regarding using Word8 for the base instead of for the number... Could be improved... Even better if it worked for all Bits a.
http://thetopsites.net/article/52749385.shtml
Aug 25, 2016 03:29 AM | fromberg100
Hi,
I am wondering if it is possible to make certain validation rules execute in order, but if one fails (errors), the next one does not get executed.
For example:
public class MyViewModel
{
    [MaxLength(10)]
    [MinLength(15)]
    public string description { get; set; }
}
In this particular case the validation attributes do not make sense, but for illustration purposes let's say the length of the field "description" is 11: I get 2 errors, MaxLength and MinLength. What I expect is that if MaxLength fails, MinLength does not get validated, and I get one single error.
Is this possible?
Thanks in advance and best regards,
Fabian
Aug 25, 2016 06:12 AM | amaldevv
Looks like you may need to write a custom validation attribute for your criteria to work.
Aug 25, 2016 04:00 PM | fromberg100
Hi amaldevv,
thanks for your reply and help.
Regarding the attributes getting executed in order: good, I can fix this by implementing IModelValidatorProvider. Thanks for the link, it helps in this capacity.
On the other hand, where can I make it so that the second ValidationAttribute does not get validated if the previous one fails?
Thanks in advance and regards,
2 replies
Last post Aug 25, 2016 04:00 PM by fromberg100
https://forums.asp.net/t/2102196.aspx?Model+Validation+all+rules+executed
// C++ program to find the longest cable length
// between any two cities.
#include <bits/stdc++.h>
using namespace std;

// visited[] array to mark nodes visited
// src is starting node for DFS traversal
// prev_len is sum of cable length till current node
// max_len is pointer which stores the maximum length
// of cable value after DFS traversal
void DFS(vector< pair<int,int> > graph[], int src, int prev_len,
         int *max_len, vector<bool> &visited)
{
    // Mark the src node visited
    visited[src] = 1;

    // curr_len is for length of cable from src
    // city to its adjacent city
    int curr_len = 0;

    // Adjacent is pair type which stores
    // destination city and cable length
    pair<int, int> adjacent;

    // Traverse all adjacent
    for (int i = 0; i < graph[src].size(); i++)
    {
        // Adjacent element
        adjacent = graph[src][i];

        // If node or city is not visited
        if (!visited[adjacent.first])
        {
            // Total length of cable from src city
            // to its adjacent
            curr_len = prev_len + adjacent.second;

            // Call DFS for adjacent city
            DFS(graph, adjacent.first, curr_len, max_len, visited);
        }

        // If total cable length till now greater than
        // previous length then update it
        if ((*max_len) < curr_len)
            *max_len = curr_len;

        // make curr_len = 0 for next adjacent
        curr_len = 0;
    }
}

// n is number of cities or nodes in graph
// cable_lines is total cable_lines among the cities
// or edges in graph
int longestCable(vector< pair<int,int> > graph[], int n)
{
    // maximum length of cable among the connected
    // cities
    int max_len = INT_MIN;

    // call DFS for each city to find maximum
    // length of cable
    for (int i = 1; i <= n; i++)
    {
        // initialize visited array with false
        vector<bool> visited(n + 1, false);

        // Call DFS for src vertex i
        DFS(graph, i, 0, &max_len, visited);
    }

    return max_len;
}

// driver program to test the input
int main()
{
    // n is number of cities
    int n = 6;
    vector< pair<int,int> > graph[n + 1];

    // create undirected graph
    // first edge
    graph[1].push_back(make_pair(2, 3));
    graph[2].push_back(make_pair(1, 3));

    // second edge
    graph[2].push_back(make_pair(3, 4));
    graph[3].push_back(make_pair(2, 4));

    // third edge
    graph[2].push_back(make_pair(6, 2));
    graph[6].push_back(make_pair(2, 2));

    // fourth edge
    graph[4].push_back(make_pair(6, 6));
    graph[6].push_back(make_pair(4, 6));

    // fifth edge
    graph[5].push_back(make_pair(6, 5));
    graph[6].push_back(make_pair(5, 5));

    cout << "Maximum length of cable = " << longestCable(graph, n);
    return 0;
}
Output:
Maximum length of cable = 12
Time Complexity : O(V * (V + E)).
http://www.geeksforgeeks.org/longest-path-between-any-pair-of-vertices/
Someone recently asked me what I recommend for synchronizing worker threads and I suggested setting an event. This person's response was that you could not do that, since worker threads do not support a message pump (UI threads are required to support messages). The confusion here is that events and messages are different animals under Windows.
I have forgotten where I originally copied these examples, but I found them to be interesting because of their simplicity. If anyone is aware of the author of this code, I would appreciate hearing from you, so I can give him/her credit.
Note there is considerable support for threads in MFC that is not covered here. API's like _beginthread (a C runtime library call) would likely be replaced by MFC API's like AfxBeginThread in an MFC application.
This first example illustrates two unsynchronized threads. The main loop, which is the primary thread of a process, prints the contents of a global array of integers. The thread called "Thread" continuously populates the global array of integers.
#include <process.h>
#include <stdio.h>

int a[ 5 ];

void Thread( void* pParams )
{
    int i, num = 0;

    while ( 1 )
    {
        for ( i = 0; i < 5; i++ )
            a[ i ] = num;
        num++;
    }
}

int main( void )
{
    _beginthread( Thread, 0, NULL );

    while( 1 )
        printf("%d %d %d %d %d\n", a[ 0 ], a[ 1 ], a[ 2 ], a[ 3 ], a[ 4 ] );

    return 0;
}
Note in this sample output, the numbers in red illustrate a state where the primary thread preempted the secondary thread in the middle of populating the values of the array:
81751652 81751652 81751651 81751651 81751651
81751652 81751652 81751651 81751651 81751651
83348630 83348630 83348630 83348629 83348629
83348630 83348630 83348630 83348629 83348629
83348630 83348630 83348630 83348629 83348629
If you are running Windows 9x/NT/2000, you can run this program by clicking here. After the program begins to run, press the "Pause" key to stop the display output (this stops the primary thread's I/O, but the secondary thread continues to run in the background) and any other key to restart it.
What if your main thread needed all elements of the array to be processed prior to reading? One solution is to use a critical section.
Critical section objects provide synchronization similar to that provided by mutex objects, except critical section objects can be used only by the threads of a single process. Event, mutex, and semaphore objects can also be used in a single-process application, but critical section objects provide a slightly faster, more efficient mechanism for mutual-exclusion synchronization. There is no guarantee about the order in which threads will obtain ownership of the critical section, however, the system will be fair to all threads.
#include <windows.h>
#include <process.h>
#include <stdio.h>

CRITICAL_SECTION cs;
int a[ 5 ];

void Thread( void* pParams )
{
    int i, num = 0;

    while ( TRUE )
    {
        EnterCriticalSection( &cs );
        for ( i = 0; i < 5; i++ )
            a[ i ] = num;
        LeaveCriticalSection( &cs );
        num++;
    }
}

int main( void )
{
    InitializeCriticalSection( &cs );
    _beginthread( Thread, 0, NULL );

    while( TRUE )
    {
        EnterCriticalSection( &cs );
        printf( "%d %d %d %d %d\n", a[ 0 ], a[ 1 ], a[ 2 ], a[ 3 ], a[ 4 ] );
        LeaveCriticalSection( &cs );
    }
    return 0;
}
If you are running Windows 9x/NT/2000, you can run this program by clicking here.
A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and non-signaled when it is owned.

Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes simply open a handle to the existing mutex. There are other ways for processes to share a mutex object:

- A child process created by the CreateProcess function can inherit a handle to a mutex object if the lpMutexAttributes parameter of CreateMutex enabled inheritance.
- A process can use the DuplicateHandle function to create a duplicate handle that can be used by another process.
- A process can specify the name of an existing mutex in a call to the OpenMutex or CreateMutex function.
Generally speaking, if you are synchronizing threads within the same process, a critical section object is more efficient.
#include <windows.h>
#include <process.h>
#include <stdio.h>

HANDLE hMutex;
int a[ 5 ];

void Thread( void* pParams )
{
    int i, num = 0;

    while ( TRUE )
    {
        WaitForSingleObject( hMutex, INFINITE );
        for ( i = 0; i < 5; i++ )
            a[ i ] = num;
        ReleaseMutex( hMutex );
        num++;
    }
}

int main( void )
{
    hMutex = CreateMutex( NULL, FALSE, NULL );
    _beginthread( Thread, 0, NULL );

    while( TRUE )
    {
        WaitForSingleObject( hMutex, INFINITE );
        printf( "%d %d %d %d %d\n", a[ 0 ], a[ 1 ], a[ 2 ], a[ 3 ], a[ 4 ] );
        ReleaseMutex( hMutex );
    }
    return 0;
}
If you are running Windows 9x/NT/2000, you can run this program by clicking here.
What if we want to force the secondary thread to run each time the primary thread finishes printing the contents of the global array, so that the values in each line of output are only incremented by one?
An event object is a synchronization object whose state can be explicitly set to signaled by use of the SetEvent or PulseEvent function.

A thread can use the PulseEvent function to set the state of an event object to signaled and then reset it to non-signaled after releasing the appropriate number of waiting threads. For a manual-reset event object, all waiting threads are released. For an auto-reset event object, the function releases only a single waiting thread, even if multiple threads are waiting. If no threads are waiting, PulseEvent simply sets the state of the event object to non-signaled and returns.
#include <windows.h>
#include <process.h>
#include <stdio.h>

HANDLE hEvent1, hEvent2;
int a[ 5 ];

void Thread( void* pParams )
{
    int i, num = 0;

    while ( TRUE )
    {
        WaitForSingleObject( hEvent2, INFINITE );
        for ( i = 0; i < 5; i++ )
            a[ i ] = num;
        SetEvent( hEvent1 );
        num++;
    }
}

int main( void )
{
    hEvent1 = CreateEvent( NULL, FALSE, TRUE, NULL );
    hEvent2 = CreateEvent( NULL, FALSE, FALSE, NULL );
    _beginthread( Thread, 0, NULL );

    while( TRUE )
    {
        WaitForSingleObject( hEvent1, INFINITE );
        printf( "%d %d %d %d %d\n", a[ 0 ], a[ 1 ], a[ 2 ], a[ 3 ], a[ 4 ] );
        SetEvent( hEvent2 );
    }
    return 0;
}
If you are running Windows 9x/NT/2000, you can run this program by clicking here.
The MSDN News for July/August 1998 has a front page article on Synchronization Objects. The following table is from that article:
http://www.codeproject.com/KB/threads/sync.aspx
|
Add Tailwind to Create React App without ejecting

With version 1.4 of Tailwind CSS now including PurgeCSS as a built-in process, it has never been easier to get a clean and lean build for your next project.
Best of all, you don't need to eject a Create React App to get the full benefits of adding Tailwind to your project. In fact, it's as simple as installing Tailwind CSS, creating a PostCSS config and making a few tweaks to your npm scripts.
Create the React app
We'll start with a default Create React App to keep things simple.
npx create-react-app cra-tailwind && cd cra-tailwind
Default project clean-up
Now that we've created our default React app, let's quickly get some housekeeping out of the way before we dive into the real topic of this article.
Since all of our CSS ends up in a single file we can delete the provided CSS files.
rm src/index.css src/App.css
The src/index.js entrypoint references index.css. Open that file up and remove the import.

// src/index.js
// Remove this line
import './index.css';

In a similar way, src/App.js is still importing App.css. Remove this reference.

// src/App.js
// Remove this line...
import './App.css';
Now that we have that out of the way, it's time for the fun stuff.
Add Tailwind to the project
Install the required dependencies
Install the development dependencies from our project root.
npm install --save-dev tailwindcss postcss-cli autoprefixer
Briefly, tailwindcss is the framework itself, postcss-cli is used to build our generated CSS, and autoprefixer is used to add all required vendor prefixes in our final built CSS.
Create the Tailwind CSS

To keep things clean, I prefer to have a dedicated styles directory containing a tailwind.css file. It is also the directory within which Tailwind will generate the final build.

mkdir -p src/styles && touch src/styles/tailwind.css

Now add the @tailwind directives for base, components and utilities to src/styles/tailwind.css.

@tailwind base;

/* Your own custom base styles */

@tailwind components;

/* Your own custom component styles */

@tailwind utilities;

/* Your own custom utilities */

This tailwind.css file constitutes our project's pre-generated CSS and is where we will also write any of our extra custom CSS.
Generate the Tailwind configuration

./node_modules/.bin/tailwindcss init

This will create an empty tailwind.config.js file in your project root that can be used to dramatically alter the final build.

A brief word about Tailwind customisation

Running the Tailwind build process will effectively replace @tailwind base; with all of the framework's base classes and do the same for components and utilities. This generated output will be saved in a separate file. But the build process can do way more than just inject those styles if you take the time to dive into the customisation options.

The truly powerful feature of this framework is the ability to fully customise the variables of the build, which can dramatically change the generated CSS to meet your exact needs. Do you need new default breakpoints? Add them to the config and Tailwind will automatically generate all related classes. Need to completely override some of the colours in your build? Add them to the config and Tailwind will automatically generate all related classes.

I cannot emphasise enough just how powerful the customisation options are and the massive amount of time they can save you. So please do spend time reading up on how to customise Tailwind.
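To make this concrete, here is a hypothetical customisation sketch — the breakpoint and colour values below are my own illustration, not part of this article's setup:

```javascript
// tailwind.config.js — hypothetical customisation example
module.exports = {
  theme: {
    extend: {
      // Add an extra breakpoint; Tailwind automatically generates
      // all related responsive classes (e.g. 3xl:flex).
      screens: {
        '3xl': '1920px',
      },
      // Add a custom colour; classes like bg-brand and text-brand
      // are generated automatically.
      colors: {
        brand: '#5a67d8',
      },
    },
  },
  variants: {},
  plugins: [],
};
```

Because these values live under theme.extend, they are added alongside the defaults rather than replacing them.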
Create the PostCSS config

Start by creating the postcss.config.js configuration file in the project root.

touch postcss.config.js

Populate it with

module.exports = {
  plugins: [require('tailwindcss'), require('autoprefixer')],
};

When we run PostCSS through our build scripts it will first send our custom CSS (/src/styles/tailwind.css) through the Tailwind CLI to generate all of the classes required after applying any configuration customisations.

Once completed, PostCSS will then proceed to run autoprefixer on the output to automatically add any vendor prefixes to our CSS based upon our browserslist. Tailwind doesn't come with any vendor prefixes and this is one of the main reasons we use PostCSS to generate our Tailwind build instead of relying solely on the Tailwind CLI.
Create Build Scripts

During a build, Create React App doesn't recognise the postcss.config.js from our project root so we have to change our npm scripts to cater for this.
Install dependency for cross-platform environment variables
This is potentially optional based on your use case but I'm including it here to be kind to Windows users. The cross-env package allows us to set environment variables in our npm scripts that works across platforms.
npm install --save-dev cross-env
Option 1: Generate once at start of build process

The best part about using Tailwind is that you'll likely not write any actual CSS once you have it set up in your project. You apply the already generated Tailwind classes to your HTML and JSX. So there's little reason to watch the CSS files; instead we rely on Create React App watching the JS, JSX and other relevant files during development and reloading on those changes only.

Inside package.json change the scripts to include the following.

"scripts": {
  "build:css": "postcss src/styles/tailwind.css -o src/styles/main.css",
  "prestart": "npm run build:css",
  "start": "cross-env BROWSER=none react-scripts start",
  "prebuild": "cross-env NODE_ENV=production npm run build:css",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
},

The build:css script runs PostCSS against src/styles/tailwind.css and applies any plugins we have set up in postcss.config.js. The output is then saved in src/styles/main.css. It is this generated file that we import into our application.

We are taking advantage of npm's pre hooks to kick off a sequential build of the Tailwind generated CSS.

- Production Build: If we run npm run build it will first run the command(s) from prebuild. Notice that we're setting the NODE_ENV variable on the prebuild to ensure Tailwind runs production optimisations.
- Development Build: When developing locally we run npm run start which will first run prestart. We do not set the environment to production here as we want access to the entire Tailwind framework. Furthermore, I don't like that Create React App automatically opens my default browser so I usually set BROWSER=none ahead of running react-scripts start. I run Choosy and when the development server is ready I simply cmd-click on the link to select the browser within which I wish to test the site.
Option 2: Watch CSS files for changes (do not open default browser)

We first need to install npm-run-all to provide a cross-platform method of running multiple npm scripts in parallel. Windows, for example, does not support & when trying to chain npm run commands together.

npm install --save-dev npm-run-all

We create a specific watch:css command that will constantly watch for any changes to the tailwind.css file and regenerate the main.css file upon save. This will in turn trigger Create React App to refresh our browser to show the new changes. It's a convenient workflow and works well, but Tailwind releases us from having to do too many changes to the actual CSS so having a watcher running feels a bit wasteful.
"scripts": {
  "build:css": "postcss src/styles/tailwind.css -o src/styles/main.css",
  "watch:css": "postcss src/styles/tailwind.css -o src/styles/main.css --watch",
  "start:react": "cross-env BROWSER=none react-scripts start",
  "start": "npm-run-all build:css --parallel watch:css start:react",
  "prebuild": "cross-env NODE_ENV=production npm run build:css",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
},
As with Option 1, I have it set so that the default browser does not automatically open when running the development server.
Option 3: Watch CSS files for changes (automatically open default browser)

There is another reason I don't like the default browser to automatically open on a development build. It relates to a race condition: if Tailwind hasn't completed compiling the CSS ahead of the page loading, the page will open but it will be blank and React will not load, forcing you to manually refresh the page.

If you wish to still maintain the automatic opening of your default browser when developing the site you will either need to refresh the page after the browser loads or add a short delay ahead of react-scripts start.

"scripts": {
  "start:react": "sleep 3 && react-scripts start",
  "start": "npm-run-all build:css --parallel watch:css start:react"
},

You'll notice the only change is to the start:react script where we have added a sleep 3 delay ahead of running react-scripts start and removed the BROWSER=none environment variable. Unlike trying to run npm scripts in parallel, && is supported on Windows so no need to create a cascade of scripts for our delay.
Where to import the generated CSS

You only need to import the generated file once. I usually import it into src/App.js in a Create React App but you could also import it into index.js or wherever makes sense within your own app. Be sure that it is high enough in the cascade to ensure the styles are available when adding Tailwind classes to your JSX components.
Test the build

I've tried to replicate the default Create React App homepage using Tailwind classes with a default configuration.

Replace src/App.js with the below code.

import React from 'react';
import logo from './logo.svg';
import './styles/main.css';

function App() {
  return (
    <div className="text-center">
      <header className="bg-gray-900 min-h-screen flex flex-col items-center justify-center text-4xl text-white">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.js</code> and save to reload.
        </p>
        <a
          className="text-blue-300"
          href=""
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
      </header>
    </div>
  );
}

export default App;
We'll also want to recreate a few of the custom styles by using @apply for Tailwind classes or old-school CSS for the logo animation.

Replace src/styles/tailwind.css with the code below.

@tailwind base;

/* Your own custom base styles */

body {
  @apply antialiased;
}

@tailwind components;

/* Your own custom component styles */

@media (prefers-reduced-motion: no-preference) {
  .App-logo {
    animation: App-logo-spin infinite 20s linear;
  }
}

@keyframes App-logo-spin {
  from {
    transform: rotate(0deg);
  }
  to {
    transform: rotate(360deg);
  }
}

@tailwind utilities;

/* Your own custom utilities */
We can now start the development server to view our placeholder app.
npm run start
Once that is ready, open src/styles/main.css and look at the styles that Tailwind has generated. That CSS file is absolutely massive at almost 70,000 lines!
Optimise the production build
Update the Tailwind configuration.
The setup for running PurgeCSS used to be a little more involved requiring installing dependencies and adding a regex pattern to our PostCSS configuration. Thankfully this is now a built-in feature as of version 1.4.0 and requires nothing more than defining the file patterns that may contain our Tailwind classes.
Replace the contents of the tailwind.config.js file in your project's root.

module.exports = {
  purge: ['./src/**/*.html', './src/**/*.js', './src/**/*.jsx'],
  theme: {
    extend: {},
  },
  variants: {},
  plugins: [],
};

Now that we have the purge settings in place, Tailwind will look through those files and treeshake any unused styles from the final production build. It was for this reason that we explicitly set NODE_ENV=production in our prebuild stage.
If you're curious, read more about how Tailwind controls the file size for production builds.
Test the Production Build

If you're still running the development server hit Ctrl+C to shut it down.

We're now ready to run a production build.

npm run build

Once that completes, open the newly generated src/styles/main.css file and you should find it has shrunk down to a file containing only the styles we have thus far used in our project.
Add the generated CSS file to gitignore

Be sure to add the generated CSS file (src/styles/main.css in our example above) to your .gitignore. The generated file's contents differ completely depending on whether we are running a production or development build.

It might feel weird leaving a required file out of version control given we are importing the generated file into our App.js. This isn't an issue at all since we always generate the file ahead of the production build or running the development server.
Conclusion
Tailwind CSS truly has had a huge impact on improving my workflow when developing web apps. I encourage anybody who is still skeptical of the syntax and adding so many classes to your HTML and JSX to just give it a try. The framework keeps going from strength to strength and I cannot imagine starting a project without first reaching for Tailwind.
https://www.39digits.com/add-tailwind-to-create-react-app-without-ejecting/
|
GETLOGIN(2) OpenBSD Programmer's Manual GETLOGIN(2)
NAME
getlogin, setlogin - get/set login name
SYNOPSIS
     #include <unistd.h>

     char *
     getlogin(void);

     int
     setlogin(const char *name);

DESCRIPTION
     The getlogin() routine returns the login name of the user associated
     with the current session, as previously set by setlogin().
RETURN VALUES
     If a call to getlogin() succeeds, it returns a pointer to a null-
     terminated string in a static buffer.  If the name has not been set,
     it returns NULL.  If a call to setlogin() succeeds, a value of 0 is
     returned.  If setlogin() fails, a value of -1 is returned and an
     error code is placed in the global location errno.  Note that
     setlogin() may only be called by the super-user.
SEE ALSO
setsid(2).
HISTORY
The getlogin() function first appeared in 4.2BSD.
OpenBSD 2.6 June 9, 1993 1
http://rocketaware.com/man/man2/getlogin.2.htm
|
First, a class can be initialized by an initialization list *only if* it does not have constructors, private or protected members, base classes, or virtual functions.
Second, a constructor (like any other function) can take a variable number of arguments.
For example:
#include <stdarg.h>
const int ArraySize = 5;
class Array
{
public:
Array(int, ...);
private:
int a[ArraySize];
};
Array::Array(int num, ...)
{
va_list args;
va_start(args, num);
for (int i = 0; i < num; ++i)
a[i] = va_arg(args, int);
va_end(args);
}
int main()
{
Array a(ArraySize, 1, 2, 3, 4, 5);
return 0;
}
Note, however, that no compile-time checking will be done on the arguments represented by the "...".
https://www.experts-exchange.com/questions/10063360/assigning-an-array.html
|
Symbol servers allow developer tools on Windows to automatically find symbols. They do this so well that most developers never have to worry about the internal mechanisms. However when things go wrong it can be helpful to understand how they work, and it turns out that it is all very simple.
This article should serve as a good comparison to the process of getting symbols, especially for crashes on customer machines, for Linux. I documented that process in a four-part series:
- Symbols on Linux Part One: g++ Library Symbols – learning how to get symbols for my Linux machine
- Symbols on Linux Part Two: Symbols for Other Versions – struggling to get symbols for customer machines
- Symbols on Linux Part Three: Linux versus Windows – the promise of build IDs
- Symbols on Linux update: Fedora Fixes – build ID progress
My discussion of Windows symbol servers make use of the symbol server that I have on my laptop, for my own personal projects. Whenever I release a new version of Fractal eXtreme (64-bit optimized, multi-core, fast and fluid exploration of fractals, demo version here) I put the symbols and binaries on my symbol server so that I can trivially investigate any crash reports that I receive. This may seem like overkill for a home project, but in fact a local symbol server is just a copy of the files, arranged in a specific way for easy retrieval, and it is trivial to set up.
Finding PE files
Symbol servers store not just symbols but also PE files (DLLs and EXEs). If these aren’t already available, such as when looking at a minidump or an xperf profile, then they must be retrieved first, before the symbols. There are three pieces of information that are needed in order to retrieve a PE file from a symbol server: the file name, link time stamp, and the image size. In order to manually check whether the latest version of FractalX.exe made it into my symbol server I would extract the link time stamp and the image size from the executable like this:
> dumpbin FractalX.exe /headers | find "date stamp"
        4FFD0109 time date stamp Tue Jul 10 21:28:57 2012
> dumpbin FractalX.exe /headers | find "size of image"
          147000 size of image
The format for the path to a PE file in a symbol server share is:
"%s\%s\%s%s\%s" % (serverName, peName, timeStamp, imageSize, peName)
My symbol server is in c:\MySyms (normally it would be on a shared server, but this is my personal laptop) so the full path for the file examined above is:
c:\MySyms\fractalx.exe\4FFD0109147000\FractalX.exe
Simple enough. In my case I use symstore.exe’s /compress option (it saves a lot of space) when I add the files. Compressed files are indicated by replacing the last character with an underscore, so the actual path is this:
c:\MySyms\fractalx.exe\4FFD0109147000\FractalX.ex_
This is a good test to make sure that your PE files have been correctly added to your symbol server but it’s not a very realistic use case since we used the PE file to obtain the values needed to retrieve the PE file. The more common scenario is that you would have a minidump or an xperf ETL file and this file would contain a series of module name, link time stamp, image size triplets and these would be used at analysis time to retrieve the PE files. In the case of minidump files there is an array of MINIDUMP_MODULE structures which contain the relevant data. Note that the layout of symbol server shares can be much more complex. You should use the APIs (discussed later) to retrieve PE files – the technique above is purely for troubleshooting.
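To make the layout concrete, here is a small Python sketch — the helper name pe_path is mine, and it reproduces only the simple layout described above, not the more complex variants:

```python
def pe_path(server, pe_name, time_stamp, image_size):
    """Build the symbol-server path for a PE file.

    time_stamp and image_size are the integers from the PE header.
    The directory name is the 8-digit hex link timestamp followed by
    the hex image size; the tools store the PE name lower-cased in
    the directory component.
    """
    return "%s\\%s\\%08X%x\\%s" % (server, pe_name.lower(), time_stamp,
                                   image_size, pe_name)

print(pe_path("c:\\MySyms", "FractalX.exe", 0x4FFD0109, 0x147000))
# → c:\MySyms\fractalx.exe\4FFD0109147000\FractalX.exe
```

This matches the FractalX.exe example above (before /compress renames the final character to an underscore).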
If you have a link time-stamp and you want to convert it to a date you can use this Python one-liner batch file:
python -c "import time; print time.ctime(int('%1',16))"
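The one-liner above is Python 2; a Python 3 equivalent might look like this (the function name is my own):

```python
import time

def link_timestamp_to_date(hex_stamp):
    # The PE link timestamp is seconds since the Unix epoch, stored in hex.
    return time.ctime(int(hex_stamp, 16))

print(link_timestamp_to_date("4FFD0109"))
```

For the FractalX.exe timestamp shown earlier this prints a date in July 2012 (the exact hour depends on your local timezone).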
One extra quirk is that Microsoft’s linker/debugger toolchains lower-case the names of PE files. This means that if you use a case-sensitive file system for your symbol server (as Chrome does) then you have to upload your symbols using lower-case file names. The Chrome team discovered this the hard way. The PDB names are extracted from the PE files, with the case intact, so they just have to match.
Finding PDB files
Finding PE files is handy when analyzing customer crash dumps in order to have the assembly instructions but it’s actually more important than that. Minidump files and most profile files do not actually record enough information to retrieve PDB files. Instead the tools retrieve the PE files and then look in the PE files to get the information needed to retrieve the PDB files. Once again we can extract this information from a PE file using dumpbin:
> dumpbin FractalX.exe /headers | find "Format:"
    4FFD0109 cv 56 000B9308 B7B08 Format: RSDS, {6143E0D1-9975-4456-AC8E-F24C8777336D}, 1, FractalX.pdb
The long hexadecimal number after RSDS is a GUID, and the number after that (a 32-bit decimal number, but in this case just '1') is called the 'age'. The PDB file name is also listed here. Together these uniquely identify a particular version of a PDB file. The format for the path to a PDB file in a symbol server share is:
"%s\%s\%s%x\%s" % (serverPath, pdbName, guid, age, pdbName)
Funny thing here – notice that I use %x to print the age, but in the previous paragraph I described the age as being a decimal number. Well, the PDB age (just a measure of how many times the same PDB has been reused) is just a 32-bit number, but dumpbin prints it in decimal, and symbol servers expect it in hexadecimal. Hurray for consistency! This means that if you parse the dumpbin output you need to convert the age to an integer and then print it as hexadecimal. If you get this wrong then the bug won't show up until you encounter a PDB with an age of ten or greater. Wonderful.
As with the PE files a final underscore indicates when a file is compressed by symstore.exe. The path on my symbol server for the PDB listed above looks like this:
c:\MySyms\FractalX.pdb\6143E0D199754456AC8EF24C8777336D1\FractalX.pd_
Simple enough. The algorithm for generating the GUID and age is that whenever you do a rebuild – whenever a fresh PDB is generated – a new GUID is created and the age is set to one. Whenever you do an partial build the PDB is updated with new debug information and the age is incremented. That’s it – use the PE name, link time stamp, and image size to find the PE (if it isn’t already loaded) and then use the GUID, age, and PDB file name to find the PDB file. Note that the layout of symbol server shares can be much more complex. You should use the APIs (discussed later) to retrieve PDB files – the technique above is purely for troubleshooting.
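A Python sketch of the PDB path construction — again the helper name is mine, and note that the age must be formatted in hexadecimal:

```python
def pdb_path(server, pdb_name, guid, age):
    """Build the symbol-server path for a PDB file.

    guid: the 32 hex digits with braces and dashes stripped.
    age: an integer, appended in hexadecimal (not decimal).
    """
    return "%s\\%s\\%s%x\\%s" % (server, pdb_name, guid, age, pdb_name)

print(pdb_path("c:\\MySyms", "FractalX.pdb",
               "6143E0D199754456AC8EF24C8777336D", 1))
# → c:\MySyms\FractalX.pdb\6143E0D199754456AC8EF24C8777336D1\FractalX.pdb
```

This reproduces the FractalX.pdb path shown above (again, before compression replaces the final character with an underscore).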
Adding to a symbol server
If you ship software on Windows then you should have a symbol server. That symbol server should contain the PE files and PDB files for every product you ship. If you don’t do this then you are doing either yourself or your customers a disservice. You should also have a symbol server for all internal builds that anybody at the company might end up running. If a program might crash, and if you want to be able to investigate the crash then put the symbols on the symbol server. If you’re worried about internal builds using up too much space then put them on a separate symbol server and purge the old files occasionally. You should also make sure that your build machines are running source indexing so that when you’re debugging a crash in an old version of your software you will automatically get the right source files. Luckily I wrote about that already. Adding files to a symbol server is the height of simplicity. Set sourcedir to point at a directory containing files to add and set dest to your symbol server directory, which should be accessible to all who need the symbols. Then run these commands:
symstore add /f %sourcedir%\*.dll /s %dest% /t prodname /compress symstore add /f %sourcedir%\*.exe /s %dest% /t prodname /compress symstore add /f %sourcedir%\*.pdb /s %dest% /t prodname /compress
That’s it. Use /r if you want the files recursively added, and see the help for more information. You can download the compress.exe program if you want to compress an existing symbol server – we did this at work and saved many TB of space.
Update: a reader followed my recommendation of using compress.exe and found that it is dangerously unreliable. He shared his tests with me and I confirmed that in some cases the files created by compress.exe are corrupted. They also don’t compress as well as using the /compress option of symstore.exe. If you do use compress.exe use -ZX as the compression option as this produces the smallest files and appears to avoid the corruption problem. But be careful. Another alternative is to extract .pdb files from your existing symbol server and then resubmit them with symstore.exe /compress.
You can also make your symbol server available through https if you want, but I know nothing about how to set this up.
Using a symbol server
The precise details of how to get your development tools to use your symbol server vary, but one almost universal method is to set the _NT_SYMBOL_PATH environment variable (advanced usage here and here), to something like this:
_NT_SYMBOL_PATH=SRV*c:\symbols*c:\MySyms;SRV*c:\symbols*;SRV*c:\symbols*
This tells tools to first look in the local cache (c:\symbols) and then look in the symbol server c:\MySyms. If symbols are found in c:\MySyms then they are copied (and decompressed) to c:\symbols. If none of that works then the same process (including the same cache directory) is followed for Microsoft’s web based symbol cache, and then Chrome’s. Note that a local symbol cache is required when dealing with compressed symbols. Note that some symbol servers, such as Chrome’s and Microsofts, can be reached through https as well as http. When https is available as an option you should always use it since otherwise a man-in-the-middle attack could use malformed PDBs or source-indexing commands to execute arbitrary code when you download and use these PDBs.
Microsoft still lists their symbol server using http in some places, but https works and should be preferred.
The SRV* part of _NT_SYMBOL_PATH is important, poorly documented, and apparently poorly understood. It is my understanding, confirmed by discussions on stackoverflow, that SRV* tells symsrv.dll to treat the following paths or URLs as symbol servers instead of just a collection of loose files. So, if _NT_SYMBOL_PATH is c:\symbols then dbghelp or symsrv may recursively search the directory structure for your symbols, but if _NT_SYMBOL_PATH is SRV*c:\symbols then it will search in a very structured and efficient way. If _NT_SYMBOL_PATH is SRV*c:\symbols* then symsrv will first look in c:\symbols, using the quick and efficient symbol server algorithm, and will then (if the symbols aren’t found) do the same efficient search in Microsoft’s symbol server. You should prefer using SRV* and symbol server layout rather than unstructured symbols.
Programmatically retrieving symbols
Usually the debuggers and profilers that you use will know how to use symbol servers, but occasionally you may need to write code to download symbols – perhaps you are writing a debugger or profiler. In my case I had a web page that listed GUIDs, ages, and PDB names for dozens of Microsoft DLLs from dozens of versions of Windows for which we needed symbols. Writing code to download all of these symbols was trivial – several orders of magnitude easier than getting symbols for other versions of Linux. The short explanation of what I needed to do was “call SymFindFileInPath”. In order to demonstrate how easy it was I decided to give a slightly longer explanation. The code below (also available on github as part of UIforETW) takes a GUID, age, and pdb name and downloads the symbols from Microsoft’s symbol server. The biggest chunk of code is for parsing the GUID – the actual PDB downloading is trivial.
// Symbol downloading demonstration code.
// For more information see ReadMe.txt and this blog post:
//

#include <stdio.h>
#include <Windows.h>
#include <DbgHelp.h>
#include <string>

// Link with the dbghelp import library
#pragma comment(lib, "dbghelp.lib")

// Uncomment this line to test with known-good parameters.
//#define TESTING

int main(int argc, _Pre_readable_size_(argc) char* argv[])
{
    // Tell dbghelp to print diagnostics to the debugger output.
    SymSetOptions(SYMOPT_DEBUG);
    // Initialize dbghelp
    const HANDLE fakeProcess = reinterpret_cast<const HANDLE>(1);
    const BOOL initResult = SymInitialize(fakeProcess, NULL, FALSE);
    if (initResult == FALSE)
    {
        printf("SymInitialize failed!! Error: %u\n", ::GetLastError());
        return -1;
    }

#pragma warning(suppress : 4996) // C4996: 'getenv': This function or variable may be unsafe. Consider using _dupenv_s instead.
    const char* symbolPath = getenv("_NT_SYMBOL_PATH");
    if (symbolPath)
        printf("_NT_SYMBOL_PATH=%s\n", symbolPath);
    else
        printf("_NT_SYMBOL_PATH is not set. Symbol retrieval will probably fail.\n\n");

#ifdef TESTING
    (void)argc;
    (void)argv;
    // Set a search path and cache directory. If this isn't set
    // then _NT_SYMBOL_PATH will be used instead.
    // Force setting it here to make sure that the test succeeds.
    SymSetSearchPath(fakeProcess, "SRV*c:\\symbolstest*");

    // Valid PDB data to test the code.
    std::string gTextArg = "072FF0EB54D24DFAAE9D13885486EE09";
    const char* ageText = "2";
    const char* fileName = "kernel32.pdb";

    // Valid PE data to test the code
    fileName = "crypt32.dll";
    const char* dateStampText = "4802A0D7";
    const char* sizeText = "95000";
    //fileName = "chrome_child.dll";
    //const char* dateStampText = "5420D824";
    //const char* sizeText = "20a6000";
#else
    if (argc < 4)
    {
        printf("Error: insufficient arguments.\n");
        printf("Usage: %s guid age pdbname\n", argv[0]);
        printf("Usage: %s dateStamp size pename\n", argv[0]);
        printf("Example: %s 6720c31f4ac24f3ab0243e0641a4412f 1 "
               "chrome_child.dll.pdb\n", argv[0]);
        printf("Example: %s 4802A0D7 95000 crypt32.dll\n", argv[0]);
        return 0;
    }

    std::string gTextArg = argv[1];
    PCSTR const dateStampText = argv[1];
    PCSTR const ageText = argv[2];
    PCSTR const sizeText = argv[2];
    PCSTR const fileName = argv[3];
#endif

    // Parse the GUID and age from the text
    GUID g = {};
    DWORD age = 0;
    DWORD dateStamp = 0;
    DWORD size = 0;

    // Settings for SymFindFileInPath
    void* id = nullptr;
    DWORD flags = 0;
    DWORD two = 0;

    PCSTR const ext = strrchr(fileName, '.');
    if (!ext)
    {
        printf("No extension found on %s. Fatal error.\n", fileName);
        return 0;
    }

    if (_stricmp(ext, ".pdb") == 0)
    {
        std::string gText;
        // Scan the GUID argument and remove all non-hex characters. This allows
        // passing GUIDs with '-', '{', and '}' characters.
        for (auto c : gTextArg)
        {
            if (isxdigit(static_cast<unsigned char>(c)))
            {
                gText.push_back(c);
            }
        }

        if (gText.size() != 32)
        {
            printf("Error: PDB GUIDs must be exactly 32 characters"
                   " (%s was stripped to %s).\n",
                   gTextArg.c_str(), gText.c_str());
            return 10;
        }

        int count = sscanf_s(gText.substr(0, 8).c_str(), "%x", &g.Data1);
        DWORD temp;
        count += sscanf_s(gText.substr(8, 4).c_str(), "%x", &temp);
        g.Data2 = (unsigned short)temp;
        count += sscanf_s(gText.substr(12, 4).c_str(), "%x", &temp);
        g.Data3 = (unsigned short)temp;
        for (auto i = 0; i < ARRAYSIZE(g.Data4); ++i)
        {
            count += sscanf_s(gText.substr(16 + i * 2, 2).c_str(), "%x", &temp);
            g.Data4[i] = (unsigned char)temp;
        }
        count += sscanf_s(ageText, "%x", &age);

        if (count != 12)
        {
            printf("Error: couldn't parse the PDB GUID/age string. Sorry.\n");
            return 10;
        }

        flags = SSRVOPT_GUIDPTR;
        id = &g;
        two = age;
        printf("Looking for PDB file %s %s %s.\n", gText.c_str(), ageText, fileName);
    }
    else
    {
        if (strlen(dateStampText) != 8)
            printf("Warning!!! The datestamp (%s) is not eight characters long. "
                   "This is usually wrong.\n", dateStampText);
        int count = sscanf_s(dateStampText, "%x", &dateStamp);
        count += sscanf_s(sizeText, "%x", &size);
        flags = SSRVOPT_DWORDPTR;
        id = &dateStamp;
        two = size;
        printf("Looking for PE file %s %x %x.\n", fileName, dateStamp, two);
    }

    // SymFindFileInPath is annotated _Out_writes_(MAX_PATH + 1)
    // thus, passing less than (MAX_PATH+1) is an overrun!
    // The documentation says the buffer needs to be MAX_PATH - hurray for
    // consistency - but better safe than owned.
    char filePath[MAX_PATH+1] = {};
    DWORD three = 0;

    if (SymFindFileInPath(fakeProcess, NULL, fileName, id, two, three,
                flags, filePath, NULL, NULL))
    {
        printf("Found file - placed it in %s.\n", filePath);
    }
    else
    {
        printf("Error: symbols not found - error %u. Are dbghelp.dll and "
               "symsrv.dll in the same directory as this executable?\n",
               GetLastError());
        printf("Note that symbol server lookups sometimes fail randomly. "
               "Try again?\n");
    }

    const BOOL cleanupResult = SymCleanup(fakeProcess);
    if (cleanupResult == FALSE)
    {
        printf("SymCleanup failed!! Error: %u\n", ::GetLastError());
    }

    return 0;
}
The TESTING define uses a known-good GUID, age, name, and symbol server. Comment out that define to use this to download arbitrary symbols from the symbol servers specified in _NT_SYMBOL_PATH. If you encounter any difficulties then run this program under a debugger; dbghelp will print diagnostics to the debugger output window. The one gotcha is that dbghelp.dll and symsrv.dll have to be in the same directory as your tool – having them in your path does not work reliably.
As previously mentioned, the latest version of RetrieveSymbols (the tool whose source code is listed above) ships in UIforETW – source is here and the latest binary can be found here or in the latest UIforETW release. The symchk tool (ships in the Windows debugger toolkit) also downloads PDB files when passed a PE file – use the /v option to get information about where the PDB file was downloaded to, and other information. symchk is more convenient if you have a .dmp file or a PE file that you need symbols for, whereas RetrieveSymbols is more convenient if you have a GUID, age, and PDB file name.
Diagnosing symbol problems with windbg
If you have a minidump and its symbols are not loading then I recommend loading the minidump into windbg and using its diagnostics:
- !sym noisy – print verbose information about attempts to get symbols
- lmv m MyModule – print a record from the crash dump’s module list including its name, time stamp, image size, and where the PDB is located if it was found
- !lmi MyModule – print a module’s header information – this only works if the PE file has been loaded, which is a prerequisite for having symbols load
Dumpbin summary
- "%VS120COMNTOOLS%..\..\VC\vcvarsall.bat" – this adds dumpbin's directory to the path
- dumpbin FX.exe /headers | find "date stamp" – find the link time stamp of a PE file
- dumpbin FX.exe /headers | find "size of image" – find the image size of a PE file
- dumpbin FractalX.exe /headers | find "Format:" – find the GUID, age, and file name of a PE file's PDB file
I’m using the windows symbol servers daily, but I have one weird issue: One of our customers has a machine on which our software crashes, and the generated minidumps give me a stack-trace into MFC-dlls of which I cannot get symbols, which is highly irregular. The exact version-numbers elude me right now, but I do not understand how this could happen in the first place. Does MS have holes in their PDB-coverage? Is the minidump faulty? Is it a problem with the client’s windows installation? Magic?
That sounds peculiar. You should try loading the crashes into windbg and using “lmv m mfc100” to get more information, and !lmi also. You can at least find out whether the problem is with loading the PE file or the PDB file.
Are you shipping the MFC DLLs with your application? That’s the recommended thing to do and that would normally mean that they would be running a known version — your version.
Thank you for the suggestions. I could load the dumps into Windbg, and they seem to be fine, it’s just that I don’t have the correct versions of the dll’s themselves, (or possibly VS/windbg can’t find them due to 32bit app on 64bit dev machine). It seems shipping the mfc100u.dll ourselves would be the better option to begin with.
> VS/windbg can’t find them due to 32bit app on 64bit dev machine
No. That is never a problem. The symbol lookup algorithm doesn’t give a damn about CPU architecture. It’s all about extracting fields from the PE file and using them as search keys. x86/x64/ARM/PPC does not enter in to it.
Assuming you have your symbol path configured correctly the customer must have a version of mfc100u.dll that is not listed in Microsoft’s symbol server. This is possible, albeit very rare. You should be shipping mfc100u.dll anyway which will probably resolve both the crash and the symbol problems.
Been using “srv*shared server*msdl” configuration for ages, and did all sorts of tricks to have a local cache. I did have a backup script to copy shared server to local cache, I had microsoft client-side caching, I even thought to hack its driver to force caching a directory! (The intent was to always write to shared server, but read from local cache). I also have microsoft and our own symbols all messed up in a single directory at shared server. Oh my, i feel so ashamed now that I learned I only had to configure “srv*local cache*shared server*microsoft” to get it working out of the box. Also, didn’t know I can actually put two symbol servers in a row to avoid having a mess of everything in a single directory.
The syntax for _NT_SYMBOL_PATH is excessively messy but pretty configurable. Read the three links in the post for various other ways of having different caching policies for different symbol servers.
I generally cache everything to the same directory and then delete it occasionally. I trust that it will get repopulated as needed.
I already read them. Your post merely served as a starting point. I actually wanted to learn more about compressing an existing server (the only thing I didn’t know from your post, and I seem to have skipped the part where you config local cache through intermediate), but reading stuff ended in learning all that. Thanks 🙂
“Minidump files and most profile files do not actually record enough information to retrieve PDB files”
Not quite so. I made a debugging tool for our crash handling purposes and I actually load PDB’s by MINIDUMP_MODULE.CvRecord, which is a CV_INFO_PDB70, containing everything you need. I didn’t even save PE files for ages, and it worked just fine.
Although I vaguely remember I had a case when I was helping some friend with his minidump and he didn’t have CvRecord in it. Probably a very outdated minidump-making tool or something like that.
A problem we hit was that internally we would record full minidumps (with heap) and they had enough information to allow loading the PDBs, without finding the PEs. However when Microsoft sent us mini-minidumps (no heap) we couldn’t load the symbols. This is what forced me to learn more about how the process works so that I could configure our symbol publishing so that we could load symbols for *all* minidumps.
So, there are some cases where the PE files are unnecessary, but I prefer not to risk it 🙂
Also, are you aware of SymSrvGetFileIndexes() / SymSrvGetFileIndexInfo() ? This is a programmatical way of what you’re doing with dumpbin
I was not aware of those functions. Thanks for sharing.
Pingback: Symbols on Linux Part Three: Linux versus Windows | Random ASCII
Pingback: Symbols on Linux Part One: g++ Library Symbols | Random ASCII
Pingback: Symbols on Linux Part Two: Symbols for Other Versions | Random ASCII
Finally I got time to re-configure our symbols servers and compress it.
I downloaded the compress.exe from your link, with md5 a911550b51f759a723f40db3157572f7.
I compressed the symsrv using some batch script.
And now it’s all ruined! Many files just can’t be extracted, others can, but with warnings. 7-zip will show absolutely invalid original file sizes for every single file. Having googled the internal format of compress.exe I can confirm that header contains exactly that incorrect size. To be specific, if will always have byte 0x63 where it shouldn’t be.
I’m pretty much terrified. Even though I do have a backup, just not too handy.
Now, an experiment. Let's make a file of exactly 8465408 bytes, compress it, and try to expand it.
HANDLE file = CreateFile(_T("Zeroes.pdb"), GENERIC_WRITE, 0, 0, CREATE_ALWAYS, 0, 0);
SetFilePointer(file, 8465408, 0, FILE_BEGIN);
SetEndOfFile(file);
CloseHandle(file);
Compressing goes fine: compress.exe -R Zeroes.pdb
Expanding results in a 0 bytes file: expand -R Zeroes.pd_
Damn. I don’t know what would have happened. How are you trying to extract the files? The only way we try extracting them is with symbol server and that works. I don’t know what the format is — I don’t know that 7-zip is supposed to be able to decompress them.
Sorry…
It all started with the debugger acting WEIRD on one pdb. Then it turned out that this PDB can't be extracted at all. Give my experiment a try.
By the way, how did you compress your symbol server? There are two compression types available in compress.exe and I simply used the default one (it turns out its compression isn't as good as -Z compression).
Now it turns out even that is a lie. Compress.exe says -ZX is the default, but in fact if neither -ZX nor -Z is specified then it uses some third type of compression (which is what caused my damage). It seems that -ZX compresses better than -Z, which compresses better than the real default.
symstore /compress will compress even better than the default / -Z / -ZX. In my case:
original = 51mb
ntfs = 24.2mb
default = 18.5mb
Z = 12.9mb
ZX = 11.1mb
symsrv = 9.5mb
I don’t remember what option we used — it was over a year ago. Now we just use the /compress option to symstore. We probably used the default options, and symsrv.dll is able to decompress those.
If you’re able to find any PDB compressed back then, what is its signature? SZDD is bad default compression, MSCF is -Z, -ZX and symsrv compression. If it is SZDD, Do you have byte at offset 0x0A == byte 0x09 + 1? That’s what seems to be the bug. 4 bytes from 0xA should form a 4-byte original size.
First four bytes are MSCF. Then 0x00000000. Then 0x73 E4 0F 00 00 00 00 00.
Then 0x2C 00 00 00 00 00 00 00 03 01 01 00 01 00 00 00
MSCF means you’re lucky. I have a theory that it’s not compress.exe but some of windows DLLs are at fault, going to test it. So far Win7 x64 and Win8 x64 both have the problem.
Also, for quite a while now it looks like we're both working quite intensively on pretty much the same technologies, and by that I mean general debugging / crash handling / debugging crash dumps / working on arcane faults. I feel that it would be great to make a closer acquaintance. If interested, please send me some instant messenger contact to my email.
Theory about Windows didn’t work out. WinXP SP3 has the same bug. Probably no need to go further on that. What I really wonder is how the bug still exists, it’s been over 10 years now and it’s no good when the file can be compressed, but not expanded.
You pointed out that it’s symsrv that should be able do expand, so I’d like to clear that moment once again: it all started with symsrv. On one of the PDB’s after compression it would create a 0-byte-long PDB in my cache and fail to load any symbols. Investigating I found that the PDB can’t be decompressed by any means, and expand.exe produces the same 0-byte pdb. The next thing I found is that all of the PDB’s were compressed wrong, but most of them can still be decompressed, even though expand.exe will yield warnings. I think it’s best to incorporate that in your post. Also, compress.exe doesn’t compress as good as symstore with any flags. So it’s probably best to convert existing by renaming the symsrv and starting a recursive symstore on it. Transaction history will be lost as a downside, though, but it seems it can be restored by hand, replacing 000Admin and all .ptr files from original symbol store.
Thanks a lot for your script. It saved me a lot of time.
But, when I use it, I get error 2, which is supposed to be "file not found". Any guess why? (I didn't change a line of your code.)
What script are you talking about? The dumpbin commands? The symstore commands? The C++ code for retrieving symbols?
You need to make sure that the command that you are running is in the path. For the C++ code you need to make sure that dbghelp.dll and symsrv.dll are in the same directory as the executable you created.
I’m confused as to how my script can have saved you a lot of time if you can’t run it…
Thanks for your fast reply. I am talking about your C++ code.
Yes both dlls are in the same directory as the executable created.
And you saved me a lot of time by giving me hope 🙂 .
I can’t tell what’s going wrong. The C++ code doesn’t print numeric error codes so you must be having a failure to properly compile and run it. The best thing to do is to create a blank Win32 Command Line project using Visual Studio and then paste the code into the main source file below the #include “stdafx.h”, then build and run. But, I can’t help you debug problems with this process.
You print the error number :
Extract from your code : … %u … GetLastError());
Right you are. Well, assuming that TESTING is defined it should work. Maybe make sure that _NT_SYMBOL_PATH is not set so that it uses the symbol path mentioned in the code. And double-check that dbghelp.dll and symsrv.dll are in the executable directory. I just copied them with this syntax:
c:\temp>xcopy "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\symsrv.dll" TestSymbols\Debug
C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\symsrv.dll
1 File(s) copied
c:\temp>xcopy "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\dbghelp.dll" TestSymbols\Debug
C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\dbghelp.dll
1 File(s) copied
I had to fix up the smart quotes to get the code to compile (stupid smart quotes) but then it worked and found the requested kernel32.pdb.
You were right, shame on me … I forgot to define TESTING … shame on me …
Everything works perfectly. You rock 🙂
One other thing. How to get the GUID number (PdbSig70) ? Is there a way to calculate it from a known DLL ?
PS: Sorry, I cannot reply to your last post (don't know why).
If you have the DLL then use dumpbin — see the Finding PDB Files section. If you don’t have the DLL then you need to retrieve it first — you generally can’t get the information needed to retrieve a PDB unless you have the DLL.
Ok, but don't you think you can generate the GUID from a DLL with the dbghelp API?
I didn’t realize you wanted a programmatic way to get the GUID. Yes, it’s definitely possible. Left as an exercise for the reader…
I’m just trying to get the PDB file given a DLL file. 🙂
Do you think I shouldn't use SymFindFileInPath?
If you have a DLL you can find the GUID using dumpbin. That works. Or, you can read the dbghelp help to find out how to get the GUID programmatically. If you figure out how to do it you should share your results.
The API’s are SymSrvGetFileIndexes() and SymSrvGetFileIndexInfo(), I have already described them above.
@Alexander : Ok, I did not see it. I will give it a try.
I managed to download the symbol using the SymLoadModuleEx function directly.
One other thing: to parse the GUID, I discovered CLSIDFromString, which eases the process a lot:
GUID guid;
const wchar_t* hash = L"{57AF0B26-EF63-46C2-BFAB-A652F46CB5F7}";
HRESULT hr = CLSIDFromString(hash, &guid);
@brucedawson : Thanks a lot for your help. Keep up the good work 🙂
Thanks a lot for publishing this article!
I have a symbol server where PE files and PDB files are pushed for each build. When I open a minidump with windbg the debug symbol is found and the correct callstack is displayed.
However, when I open the same crash dump with Visual Studio I get an error message saying “no matching binary found”.
Do you have any idea why Visual Studio would not be able to find the symbols? The same version of SYMSRV.dll seems to be used by both windbg and Visual Studio.
I use the _NT_SYMBOL_PATH to tell the development tools where the symbol server is located. I am running the 64bit version of windbg 6.12.0002.633 and Visual Studio 2010.
You can try using procmon (sysinternals) to monitor access to the symbol server. Also, you should configure a local symbol server cache. This makes symbol retrieval more reliable and it means that once windbg has retrieved the symbols to the local cache, Visual Studio can retrieve them from there (assuming you specify the same cache directory for both, but it would be foolish not to).
When I’ve seen this in the past it has usually been boring problems like a misspelled symbol server name, or Visual Studio being launched before _NT_SYMBOL_PATH was set.
It looks like it is a problem with VS2010 ignoring _NT_SYMBOL_PATH. Everything works fine when I open the same minidump with VS2012. I am essentially seeing the issue described here. The workaround is to manually set the symbol server path in VS2010 (for reference, I am referring to my own symbol server, not the one from Microsoft).
I am still interested to know what is happening so I followed your advice and used procmon to monitor access to the symbol server. According to procmon the VS2010 process definitely accesses the symbol server; however, I can’t see any references to the PE file. If I run the same experiment with VS2012 and windbg, I can see the process accessing the PE file first and then the PDB file.
I am not sure what conclusion to draw from my experiment. VS2010 accesses the symbol server so it means that it knows about _NT_SYMBOL_PATH (it is the only place where it is specified). However, it doesn’t try to open the PE file. If I manually set the symbol server path in VS2010 then the debugger finds and opens the PE file. I don’t understand what the difference is between specifying the symbol server address with _NT_SYMBOL_PATH or manually in VS. In both cases the address seems to be picked up correctly, however, when I use _NT_SYMBOL_PATH the PE file is not loaded so the pdb cannot be found.
Do VS2010 and VS2012 use a different debug engine? I looked at the callstack that accesses the symbol server and VS2010 uses NatDbgDE.dll while VS2012 uses vsdebugeng.dll.
It sounds like you now know more than I do (or, I only know as much as you because I just read your comment and your link).
The main problem that we have hit is that VS (unsure of which versions, but certainly 2010) will ignore symbol server cache directories and will drop symbol server directories in randomly selected directories at randomly selected times. If VS is running as administrator this can lead to arbitrarily badly corrupted systems since having kernel32.dll directories in your path confuses Windows. If VS is not running as administrator then these files can still end up being dropped in %temp% which can then cause future VS updates to fail to install. Joy. This may be related, or maybe not.
From what I can see the symbol server cache is being used correctly. I am not going to dig any deeper for now. I have a workaround for VS2010 and hopefully I’ll move to VS2012 soon.
Thanks a lot for your help.
Pingback: Slow Symbol Loading in Microsoft’s Profiler, Take Two | Random ASCII
Pingback: Xperf and Visual Studio: The Case of the Breakpoint Hangs | Random ASCII
Pingback: Visual Studio Single Step Performance Fixes | Random ASCII
I know this is an old thread, but just a comment that is super important for those of us that have build farms and use symstore. Publishing to a symbol server needs to be serialized, and it will 'corrupt' the symstore unless you manage it. Corruption usually looks like not being able to find a particular build (at random) in the store.
“Note SymStore does not support simultaneous transactions from multiple users. It is recommended that one user be designated “administrator” of the symbol store and be responsible for all add and del transactions.”
It seems very odd to me that SymStore can’t handle simultaneous transactions. Isn’t a symbol store just a directory structure? I believe that Chrome builds its symbol store with the Google Storage equivalent of mkdir and xcopy. Maybe that is another option – skip symstore and use mkdir/xcopy instead. Or use symstore to compress the files locally and then mkdir/xcopy. Worth a try.
It would be great if somebody at Microsoft could comment on what circumstances can trigger symstore corruption, so that we don’t do bizarre workarounds that aren’t generally needed.
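To sketch why plain mkdir/xcopy can work: a symbol store is just a directory tree keyed by file name and then GUID+age. The GUID/age below are the example values from the usage text earlier; the server path is hypothetical:

```shell
rem Layout: <store>\<pdbname>\<GUID><age>\<pdbname>
mkdir \\server\symbols\chrome_child.dll.pdb\6720C31F4AC24F3AB0243E0641A4412F1
xcopy chrome_child.dll.pdb \\server\symbols\chrome_child.dll.pdb\6720C31F4AC24F3AB0243E0641A4412F1\
```

Since each publish only creates its own subdirectory, concurrent publishes of different builds should not collide, which is what makes the documented symstore serialization requirement surprising.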
Pingback: Everything Old is New Again, and a Compiler Bug | Random ASCII
Pingback: Add a const here, delete a const there… | Random ASCII
https://randomascii.wordpress.com/2013/03/09/symbols-the-microsoft-way/
How to choose your first programming language
opensource.com
The reasons for learning to program are as varied as the people who want to learn. You might have a program you want to make, or maybe you just want to jump in. So, before choosing your first programming language, ask yourself: Where do you want that program to run? What do you want that program to do?
Your reasons for learning to code should inform your choice of a first language.
In this article, I use "code," "program," and "develop" interchangeably as verbs, and "code," "program," "application," and "app" interchangeably as nouns. This reflects the language usage you may hear.
Know your device
Where your programs will run is a defining factor in your choice of language.
Desktop applications are the traditional software programs that run on a desktop or laptop computer. For these you'll be writing code that only runs on a single computer at a time. Mobile applications, known as apps, run on portable communications devices using iOS, Android, or other operating systems. Web applications are websites that function like applications.
Web development is often broken into two subcategories, based on the web's client-server architecture:
Front-end programming, which is writing code that runs in the web browser itself. This is the part that faces the user, or the "front end" of the program. It's sometimes called "client-side" programming, because the web browser is the client half of the web's client-server architecture. The web browser runs on your local computer or device.
Back-end programming, also known as "server-side" programming, in which the code you write runs on a server, a computer you don't have physical access to.
What to create
Programming is a broad discipline and can be used in a variety of fields. Common examples include:
- data science,
- web development,
- game development, and
- work automation of various types.
Now that we've looked at why and where you want to program, let's look at two great languages for beginners.
Python
Python is one of the most popular languages for first-time programmers, and that is not by accident. Python is a general-purpose language. This means it can be used for a wide range of programming tasks. There's almost nothing you can't do with Python. This lets a wide range of beginners make practical use of the language. Additionally, Python has two key design features that make it great for new programmers: a clear, English-like syntax and an emphasis on code readability.
A language's syntax is essentially what you type to make the language perform. This can include words, special characters (like ;, $, %, or {}), white space, or any combination. Python uses English for as much of this as possible, unlike other languages, which often use punctuation or special characters. As a result, Python reads much more like a natural, human language. This helps new programmers focus on solving problems, and they spend less time struggling with the specifics of the language itself.
Combined with that clear syntax is a focus on readability. When writing code, you'll create logical "blocks" of code, sections of code that work together for some related purpose. In many languages, those blocks are marked (or delimited) by special characters. They may be enclosed in {} or some other character. The combination of block-delimiting characters and your ability to write your code in almost any fashion can decrease readability. Let's look at an example.
Here's a small function, called "fun," which takes a number, x, as its input. If x equals 0, it runs another function called no_fun (which does something that's no fun). That function takes no input. Otherwise, it runs the function big_fun, using the same input, x.
This function defined in the "C" language could be written like this:
void fun(int x)
{
    if (x == 0) {
        no_fun();
    } else {
        big_fun(x);
    }
}
or, like this:
void fun(int x) { if (x == 0) {no_fun(); } else {big_fun(x); }}
Both are functionally equivalent and both will run. The {} and ; tell us where different parts of the block are; however, one is clearly more readable to a human. Contrast that with the same function in Python:
def fun(x):
    if x == 0:
        no_fun()
    else:
        big_fun(x)
In this case, there's only one option. If the code isn't structured this way, it won't work, so if you have code that works, you have code that's readable. Also, notice the difference in syntax. Other than def, the words in the Python code are English and would be clear to a broad audience. In the C language example, void and int are less intuitive.
Python also has an excellent ecosystem. This means two things. First, you have a large, active community of people using the language whom you can turn to when you need help and guidance. Second, it has a large number of preexisting libraries, which are chunks of code that perform special functions. These range from advanced mathematical processing to graphics to computer vision to almost anything you can imagine.
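As a tiny illustration of leaning on a preexisting library, here the standard library's statistics module does the math so you don't have to:

```python
# Use a preexisting library instead of writing the math yourself.
import statistics

scores = [88, 92, 79, 85, 96]
print(statistics.mean(scores))    # the average of the scores
print(statistics.median(scores))  # the middle value of the sorted scores
```

Third-party libraries from Python's wider ecosystem (for graphics, computer vision, and so on) are used in the same way once installed.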
Python has two drawbacks to it being your first language. The first is that it can sometimes be tricky to install, especially on computers running Windows. (If you have a Mac or a Linux computer, Python is already installed.) Although this issue isn't insurmountable, and the situation is improving all the time, it can be a deterrent for some people. The second drawback is for people who specifically want to build websites. While there are projects written in Python (like Django and Flask) that let you build websites, there aren't many options for writing Python that will run in a web browser. It is primarily a back-end or server-side language.
JavaScript
If you know your primary reason for learning to program is to build websites, JavaScript may be the best choice for you. JavaScript is the language of the web. Besides being the default language of the web, JavaScript has a few advantages as a beginner language.
First, there's nothing to install. You can open any text editor (like Notepad on Windows, but not a word processor like Microsoft Word) and start typing JavaScript. The code will run in your web browser. Most modern web browsers have a JavaScript engine built in, so your code will run on almost any computer and a lot of mobile devices. The fact that you can run your code immediately in a web browser provides a very fast feedback loop, which is good for new coders. You can try something and see the results very quickly.
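As a sketch of that fast feedback loop, here is a complete JavaScript program you could paste into a browser's developer console (or run with Node.js) with nothing installed:

```javascript
// A complete first program: define a function, call it, print the result.
function greet(name) {
  return "Hello, " + name + "!";
}
console.log(greet("world")); // → Hello, world!
```

Change the argument, press Enter, and you see the new result immediately.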
While JavaScript started life as a front-end language, an environment called Node.js lets you write code that runs in a web browser or on a server. Now JavaScript can be used as a front-end or back-end language. This has led to an increase in its popularity. JavaScript also has a huge number of packages that provide added functionality to the core language, allowing it to be used as a general-purpose language, and not just as the language of web development. Like Python, JavaScript has a vibrant, active ecosystem.
Despite these strengths, JavaScript is not without its drawbacks for new programmers. The syntax of JavaScript is not as clear or English-like as Python. It's much more like the C example above. It also doesn't have readability as a key design principle.
Making a choice
It's hard to go wrong with either Python or JavaScript as your first language. The key factor is what you intend to do. Why are you learning to code? Your answer should influence your decision most heavily. If you're looking to make contributions to open source, you will find a huge number of projects written in both languages. In addition, many projects that aren't primarily written in JavaScript still make use of it for their front-end component. As you're making a choice, don't forget about your local community. Do you have friends or co-workers who use either of these languages? For a new coder, having live support is very important.
Good luck and happy coding.
5 Comments
Very rightly said that it's very difficult to make a choice between Python and Javascript. Both of them are very powerful languages. Whatever language one chooses, starting with a great tutorial is one of the most important pre-requisite to hit the ground running. can be used to find the best online programming tutorials.
All the best. Don't Quit!
There is no mention of the asynchronous nature of JavaScript (aka ECMAScript). A lot of client-side development can be done without worrying about it, as the browser event loop tends to hide this complexity from the programmer. Not so in the Node.js environment, where asynchrony and callbacks are necessary from the start. Imagine the confusion of writing...
fs.open(file);
console.log("File opened.");
fs.write("Hello, world");
and receiving...
"File opened." followed by an exception for trying to write to an unopen file.
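Python readers hit the same ordering surprise once they opt into asynchrony. A minimal asyncio sketch of the scenario above, with the slow "open" simulated by a sleep (the events list just records the order in which things happen):

```python
import asyncio

events = []

async def open_file():
    # Hypothetical stand-in for a slow asynchronous file open.
    await asyncio.sleep(0.1)
    events.append('file actually opened')

async def main():
    task = asyncio.create_task(open_file())  # schedule the open...
    events.append('File opened.')            # ...but this line runs first
    await task                               # only now has the open finished

asyncio.run(main())
print(events)  # ['File opened.', 'file actually opened']
```

Just as in Node.js, the asynchronously scheduled work has not completed by the time the next line executes; the program has to await it explicitly.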
With respect to the initial questions: Where do you want that program to run? What do you want that program to do?, and staying with the choices presented, I would strongly suggest the neophyte limit use of javascript to the browser and python to the server. Javascript was initially designed for the browser and to apply it on the server side (node.js) adds conceptual complexity to be avoided at the outset.
I would never point a new programmer at Javascript. It's a needlessly inefficient, sloppy language that becomes exponentially more murky the more code you write. It's in all the browsers by fiat, not by merit; no alternative languages are even offered for honest competition.
Even Assembler would be a better choice, at least the student could appreciate first hand what makes the CPU faster or slower.
For a first language I think two things are important: learning good principles and getting positive feedback. There's a balance and I think Python does a nice job with that. There's so little overhead involved with getting started compared with the complex syntax of JavaScript - how do you explain the necessity of all of the braces and semicolons to a noob? Forget about explaining how the source code goes in a web page document that is rendered by a browser... way too much for a first exposure to code.
Plus the resources are copious for beginners in Python whether it's ebooks like or tutorials and exercises like at code.org or Khan Academy.
There's a lot out there for JavaScript also but I just don't think a beginner has the context to make it meaningful and motivating.
"In this case, there's only one option. If the code isn't structured this way, it won't work"
You are incorrect. Your fun python function can be written in at least 2 other ways.
def fun(x):
    no_fun() if x == 0 else big_fun(x)
fun = lambda x: no_fun() if x == 0 else big_fun(x)
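For completeness, here is a self-contained, runnable version of all three forms. The no_fun and big_fun helpers are hypothetical stubs, defined here only so the snippet executes:

```python
# Hypothetical stubs so the example is self-contained.
def no_fun():
    return 0

def big_fun(x):
    return x * 2

# 1. The if/else statement form used in the article:
def fun_v1(x):
    if x == 0:
        return no_fun()
    else:
        return big_fun(x)

# 2. A conditional expression inside a def:
def fun_v2(x):
    return no_fun() if x == 0 else big_fun(x)

# 3. The same conditional expression as a lambda:
fun_v3 = lambda x: no_fun() if x == 0 else big_fun(x)

print(fun_v1(0), fun_v2(3), fun_v3(5))  # → 0 6 10
```

All three behave identically for the same input; Python does allow multiple structures here, even if the statement form is the one usually taught first.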
Outliner with drag/drop reordering - part 2
How do you install modules into the app
PyQT is incompatible with Pythonista.
Turtle graphics
UI module on a Mac?
Things have not really changed on this issue. The UI module and .pyui files are still iOS only. The Mac UI platforms like tkinter, Pygame, PyQT, wxpython, etc. do not run on Pythonista or Pyto.
I have been hearing people talking about as a platform for building web+iOS+Android apps on a single platform but I have no idea how good it is.
Missing Turtle functionality?
How to use .pyui files on other platforms
How to use .pyui files on Windows, or convert them for tkinter or PyQt5
Colored text
Hi! I wanted to color my text in pythonista, but, for some reason, there’s no colorama module! Here’s the code that should work:
from colorama import Fore, Back, Style
print(Fore.RED + 'some red text')
print(Back.GREEN + 'and with a green background')
print(Style.DIM + 'and in dim text')
print(Style.RESET_ALL)
print('back to normal now')
I also tried this code:
import sys
from termcolor import colored, cprint

text = colored('Hello, World!', 'red', attrs=['reverse', 'blink'])
print(text)
cprint('Hello, World!', 'green', 'on_red')

print_red_on_cyan = lambda x: cprint(x, 'red', 'on_cyan')
print_red_on_cyan('Hello, World!')
print_red_on_cyan('Hello, Universe!')

for i in range(10):
    cprint(i, 'magenta', end=' ')

cprint("Attention!", 'red', attrs=['bold'], file=sys.stderr)
and it returns an error about neither the colorama module nor the termcolor module existing. Is it just because there is no tkinter? Also, is there any way to work around this?
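Both colorama and termcolor are essentially thin wrappers around ANSI escape sequences, so on a terminal that honors those sequences you can emit the codes directly without installing anything. This is only a sketch for a standard ANSI terminal (the colored() helper is made up here, not from any library); note that Pythonista's console reportedly does not interpret ANSI escapes, so there you would need its own console module instead:

```python
# Minimal ANSI color codes -- a workaround when colorama/termcolor
# are unavailable. Works in terminals that honor ANSI escapes.
RED = '\033[31m'
GREEN_BG = '\033[42m'
DIM = '\033[2m'
RESET = '\033[0m'

def colored(text, *codes):
    """Wrap text in the given ANSI codes and reset afterwards."""
    return ''.join(codes) + text + RESET

print(colored('some red text', RED))
print(colored('and with a green background', GREEN_BG))
print(colored('and in dim text', DIM))
print('back to normal now')
```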
Numpy
None of matplotlib's interactivity will work unless you program it yourself. Matplotlib uses different "backends" to offer interactive plots, but the Pythonista version uses the image-only backend, since it doesn't have tkinter or other native libraries.
I created a simple pythonista_backend to allow pan/zoom, it may also be possible to extend this to other types of interactions, if you want to tackle it...
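The image-only workflow the comment describes looks the same anywhere matplotlib's non-interactive Agg backend is available: render the figure, then save (or, in Pythonista, display) the resulting image rather than interacting with a window. A minimal sketch, assuming matplotlib is installed:

```python
import matplotlib
matplotlib.use('Agg')  # image-only backend: no window system required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
ax.set_title('Rendered without an interactive backend')
fig.savefig('plot.png')  # write the rendered image to disk
```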
4.3.1 doesn't have a beta? You said that every release would have a beta, ever since the release that prevented profilers from working.
If I copy all the EF5 dependencies from .NET 4.5, can I use it on .NET 4.0? Like I did with 3.5 SP1 and 3.5, copying System.Data.Entity.dll.
"Table-Valued functions in your database can now be used with Database First."
What about Model First and Code First?
Does anyone know if the support for unique constraints described here is in the Beta of 5? blogs.msdn.com/…/unique-constraints-in-the-entity-framework.aspx I'm hoping that associations based on unique constraints makes the cut and is included in EF 5.
One other thing that I would like to see is the ability to define value types as nullables in the conceptual model, that are not null in the store model. For example, say I have a datetime in the database. I want it to be NOT NULL. But, I want it to be DateTime? at the class-level. That way if I bind to the property in a UI, the default value is blank/null rather than 1/1/1900 or whatever the default value is for a non-nullable DateTime. The same thing would be true for any other value type. Not sure if this works with code first or not. I'm using database first with an EDMX. The designer won't let you set the nullability to nullable in the conceptual model and non-nullable in the store model.
@Richard Deeming: The Entity Designer doesn't have the ability to specify the contents of stored procedures or functions. Therefore it is not possible to use a "pure" Model First workflow to create a database with a TVF. However, you can use Model First to create the tables, then define a TVF in the database and finally use the option to "Update the model from the database" to bring the TVF back to the model. Code First still doesn't have the ability to specify stored procedures and function mapping. Therefore, although it is possible to use Code First against an existing database you won't be able to invoke stored procedure and function as mapped. The TVF walkthrough contains some links for workarounds in the first few paragraphs.
@John Miller2: Regarding Unique Constraints, unfortunately the answer is no. The feature didn't make the cut for EF 5. We are looking at including it in a future release depending on its relative priority. I encourage you to vote for it if it is important to you: data.uservoice.com/…/1050579-unique-constraint-i-e-candidate-key-support.
@Felipe Fujiy: Re not having a beta for 4.3.1, we are still trying to follow the rule of always having a beta, but there are a few reasons we decided to make an exception in this case:
– The scope of the changes is very small.
– It is sort of a date driven release for EF: we needed to have something out at the same time as EF 5.0 beta 1, and we wanted to publish EF 5.0 beta 1 the same day VS 11 beta and .NET 4.5 Beta went out.
– It would be a little confusing to have two betas going on at the same time.
– 4.3.1 is a patch release following the go live of a large new feature (Migrations). You can expect us to follow this pattern in the future when issues are found (although in some cases we might choose to still have a beta if the scope of the changes is very big).
Regarding using .NET 4.5 assemblies with .NET 4.0, although you might be able to get some things to work, what you are describing is not a tested or supported configuration, so I wouldn’t recommend you to do it.
As much as I hate to say it, I don't think I'm going to be making use of the new enums feature. I need it to store a string value rather than an int in the database. That way if I'm generating reports from Reporting Services, what comes out on the reports are meaningful values.
Also, it seems like the new designer should default to using DbContext if that's the new recommended API.
It would also be nice if the number of properties that you can specify for an object in the designer was increased. I.e. it would be nice to specify values for all the annotations that appear in the System.ComponentModel.DataAnnotations namespace for things like Required, RegularExpression, DisplayName, etc. It would also be nice if fields like DisplayName were generated automatically using a convention such as inserting spaces between words. And also, as I mentioned before, it should be possible to have a nullable value type in the conceptual model, but, have it be non-null on the store side.
The EF core is embedded in .NET 4.5, so will EF 5 only be available against .NET 4.5? Is there any chance of EF 5 and the EF core becoming a single, independent library? It's hard to accept that we'll have to wait for another version of .NET to receive awesome features like those mentioned above.
By the way, I'm very (I said VERY) excited about these new features!
Thank you guys.
@Jon Miller:
You can store the enum's int value in your column and handle the description with reflection: put a DescriptionAttribute on each enum member, then use reflection to read the description from the enum's members.
I missed something like what you said about the designer: the ability to add/edit DataAnnotations on properties. That would speed up the creation of entities a lot.
@Kim Tranjan: We are indeed looking at incorporating the EF functionality that today is implemented in the .NET Framework into our out-of-band releases. This is not happening in EF 5 though.
How does one update a project to V5 from the CTP? I'm trying to figure out what files need to be updated in references etc. after doing the package manager command.
@Jon Miller2: Re making the DbContext API the default for the Entity Designer, we are looking at that for VS 11 RTM but can't promise the change will make it on time.
A nice set of features.
Don't miss out on the little ones though: migration from L2S is still a nightmare because LINQ support is not as sophisticated yet (DateTime.Date, for example, is a major show-stopper).
Thanks for release!
Btw. do you have any roadmap of features you would like to implement in the near future?
Also how does one go about updating the T4 template stuff to pre-generate views?
I'm getting the following error on the old template:
(although it looks like it's still working)
Ideas?
@Jon Miller: Both enums mapped to string values and nullable properties mapped to non nullable columns are features belonging to area of simple type mapping or mapped conversions. Unfortunately EF doesn't have this feature yet.
Updated to EF5 from June 2011 CTP and now any attempt to query database results in "Sequence contains more than one element" exception thrown at navigation property configuration… Something with mappings maybe
@Diego B Vega: First of all, you guys do great work! You mention though the TVF walkthrough contains some links for workarounds for Code First in the first few paragraphs. I only see stored procedures mentioned in those links, not TVFs. The main thing about using TVFs is that they are composable. I don't think those workarounds are composable, right? Therefore they are not workarounds for using TVFs with Code First. Can we expect a proper TVF solution for Code First for EF 5.0 RTM instead of workarounds? Please also have a look at code.msdn.microsoft.com/Recursive-or-hierarchical-bf43a96e for one of my workarounds where I would like to slot in TVFs. Many thanks, Remco
@Remco Blok: Thanks for the link for the recursive query workaround.
You can use the SqlQuery method to invoke a TVF the same way you would execute any arbitrary SQL query with it. You cannot compose on top of the results of SqlQuery (basically an IEnumerable<T>) using LINQ to Entities, however you can compose the TVF using SQL within the text query you pass to SqlQuery.
Adding support for mapping functions and stored procedures using Code First, e.g. to enable using TVFs in LINQ queries or updating an entity using a stored procedure is pretty high in our prioritized backlog, but we are not planning to include the feature in EF 5.0 RTM.
@Jon Miller2, @Ladislav Mrnka: Agreed that mapping string columns to enums is a relatively common thing to want to do, and Ladislav is right that it is not supported in EF 5.0 and that the way we would like to support this in the future is to support more general property transformations in the mapping.
I am a little surprised to hear you can't map a nullable property to a non-nullable column though. In general we don't block on nullability mismatches. Could you please send an example of what you are seeing? You can use the "Email blog author" link in this page to contact us. We will pick it up from there.
What kind of TPT enhancements can be expected when running 4.3.1 on the .net 4.5 beta?
@Diego B Vega. Thanks for your response. I'm trying to think here, EF 5.0 cannot RTM before .Net 4.5 RTMs, which cannot RTM before Windows 8 RTM's, which is expected by … Q4 2012? And then sometime after that we'll see TVF support in Code First…which takes us into .. 2013? I'll have to exercise patience 😉 The reason why I want to use LINQ instead of SQL is to be database agnostic, so that I can use either a MS SQL Server provider or an Oracle provider, but only write my LINQ once. See the discussion here: social.msdn.microsoft.com/…/c81dfd60-ede5-4873-9f4d-04e240c293c2. Anyway, keep up the good work EF Team!
@Remco Blok: I don’t actually have the dates for RTM, but I understand the pain of having to wait for a feature you need. Although I cannot promise much, we are working on taking the EF stack out-of-band completely so that we can release more often. Thanks for the link.
I really don't want that, but it seems like the old CTP 5 bug is back. If an entity has more than one many-to-many relationship, EF breaks. Just reproduced it on the simplest scenario with 3 entities.
Does DBContext and Generate code from model work with this release?
I'm pulling my hair out getting it running in my current setup (Windows 8, VS11 Dev Preview).
Versioning… it's not rocket science you know.
"Microsoft.VisualStudio.Shell.10.0, Version=11.0.0.0"
Can anyone else confirm (or even better disprove) that having multiple many-to-many relationships within an entity in EF5 results in crashes?
In the Compatibility section of EF5, you say: 'EF 5 will work with Visual Studio 2010'. But if all the new features are available if you target .NET 4.5 and use Visual Studio 11, what's the point in using Visual Studio 2010?
Or the performance enhancements are available for VS2010 and .NET 4.0 with EF5?
@Pablo Barragan, I did a test: when I install EF5 on .NET 4 / VS2010, it references EntityFramework.dll v4.3.1.
@Viktar I just wrote a little app with two many-to-many relationships and it worked as expected. Could you use the "Email blog author" link and send us a repro showing how it fails for you?
@Pablo Feel free to continue to use EF 4.3.1 when targeting .NET 4. There is no real advantage to using EF5 on .NET 4, but we didn't want it to fail if you do want to use EF5 on .NET 4.
@Diego B Vega
The issue that I'm running into when attempting to set a DateTime property to nullable, while the database column is NOT NULL, is really easy to reproduce. I'm guessing the problem may only occur if you are doing database-first development. To reproduce it, create a table in SQL Server with a column whose data type is DATETIME and which is NOT NULL. Then create a new EDM and import the table. Click on the DateTime field in the designer. In the Properties window, Nullable will be set to False. If you change it to either True or (None), Visual Studio gives you the following error.
Error 3031: Problem in mapping fragments starting at line 38:Non-nullable column Test.Date in table Test is mapped to a nullable entity property.
I'm guessing maybe if I was doing model first or code first development that the problem might not happen?
@Ladislav Mrnka
I agree with what you are saying about type converters. I think it would be great if EF had a generalized way of dealing with it. I used to do a lot of programming with JavaServer faces where they had "converters" like that. Also, WPF has ValueConverters. I always liked this and wished that ASP.NET had it.
@Diego B Vega: So Code-First is *still* not a first-class citizen of the EF world?
I'm quite happy to *create* the TVFs in the database, but if I can't *use* them from EF/CF, then it's not really a viable solution for any real-world project.
@Ladislav Mrnka – We don’t have a roadmap for features just yet. We’re in planning stage at the moment and mostly focused on how we move more of the Entity Framework code out of the .NET Framework and into the NuGet package.
@lynn eriksen – The improvements are in the queries that we generate, mainly around replacing UNION with LEFT OUTER JOIN.
@OakNinja – Yes, this post walks through how to generate DbContext code from an edmx based model. blogs.msdn.com/…/ef-4-2-model-amp-database-first-walkthrough.aspx
@Mike – Yeah, the issues we are hitting are technical ones in NuGet. We are one of the first teams to be releasing on NuGet that have different assemblies for .NET 4.0 and .NET 4.5, this makes the Package Manager Console based tooling hard because the ‘tools’ folder in NuGet isn’t setup for multi-targeting yet. We’re working on a temporary fix and also working with the NuGet folks to get native support for this scenario.
Also, when updating to 5.0.0-beta1, I got the following line in the Package Manager Console output:
Failed to generate binding redirects for PROJECTNAME. 'object' does not contain a definition for 'References'
@Pablo Barragan & @Felipe Fujiy – Just to add to what Arthur said – The EF5 bits on .NET 4.0 are the same as what we shipped in EF4.3.1. This may change as we move through the releases and towards an RTM of EF 5. We aren’t planning to add any new features that will be available on .NET 4.0 but we may make bug fixes etc.
Just noticed something… When creating a new MVC 4 application in VS 11 beta (which includes EF 4.3.1, I believe) the default connection string goes to localdb. However, as soon as I install the EF 5 beta, some config is added to my Web.config file that sets it back to Sql Express. I assume the package does this to support .NET 4.0 and VS2010. Is there a any way you can detect the framework version being used and not add this stuff for .NET 4.5 so it will continue to use localdb?
In any case, thanks so much for your great work on this. I'm really appreciative of the massively increased release velocity that these out-of-band releases allow.
@Brian Sullivan – The setting in code will actually win over the config file setting. In the config file we register SQLEXPRESS if present and LocalDb if not; I assume your machine has a SQLEXPRESS instance, which is why we are setting that as the default. The line of code MVC4 puts in global.asax actually has a bug (missing a slash in the server name), which is easy enough to fix… but we recommend removing it, upgrading to EF 4.3.1 or 5.0.0-beta1, and setting LocalDb as the default in web.config. The problem with setting it in global.asax is that design-time tools (such as Migrations) have no idea this line of code exists, so they will fall back to using SQLEXPRESS. The next release of MVC4 is moving to the 4.3.1 package and will use web.config rather than global.asax to set the default.
"The context cannot be used while the model is being created"
EF5..
how to fix?
@Leandro This happens when the context is used inside a call to OnModelCreating. It's not possible to use the context for querying, updates, etc inside OnModelCreating because EF needs a model to be able to do these things. OnModelCreating is part of the process used to create this model and hence you cannot do these things until after OnModelCreating has completed and the model has been created.
@Rowan Miller It would be great if you could include the perfromance improvements in a future release for .NET 4.0. I was really excited about these improvements
I'm not running anything inside the "OnModelBuilder"!
Also already tried without the "OnModelCreating" and failed.
OnModelCreating*
It is extremely depressing that there is still no support for TVFs in Code First.
Hey Rowan,
I'm getting ready to publish modified versions of the EF4.x and EF5.x DbContext T4 templates that include support for DataAnnotations and IntelliSense code comments, derived from the EDMX file. I love the new type support in EF5, but I was wondering if it would be possible to add support for the <Documentation> tag in the designer, for both the enum itself and its members? I've confirmed that they work in the XML, but it would be nice to be able to edit them in the UI like the other EF objects as well.
Thanks!
Robert McLaws
twitter.com/robertmclaws
@Leandro Could you use the "email blog author" link and send us a repro?
@Robert I will add your request about being able to edit documentation for enum types and members to our backlog
"We are seeing a lot of great Entity Framework questions (and answers) from the community on Stack Overflow."
One of the best statements I've ever seen from Microsoft!
@Leandro: Check this: stackoverflow.com/…/how-can-i-prevent-ef-the-context-cannot-be-used-while-the-model-is-being-create I'm not sure if this problem still exists in newer versions, but previously this could happen if two contexts were used at the same time (by multiple threads) without prior initialization of the DbModel.
The problem was that it was showing the error message "The context cannot be used while the model is being created", when in fact the error was the same as the one reported under Known Issues.
You can disregard the email I sent with the error.
thanks for the support
@Jon Miller: I started a suggestion on UserVoice – you can vote for it if you think this is important: data.uservoice.com/…/2639292-support-for-simple-type-mapping-or-mapped-type-con
I keep getting this error
Could not load type 'System.ComponentModel.DataAnnotations.DatabaseGeneratedAttribute' from assembly 'EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
I'm using win8 consumer preview, vs11 beta and ef 5.0.
@Ivan: EF 5.0 is a new major version and contains breaking changes, so you will need to make changes to your 4.x program and recompile. One of those changes is that some of the data annotation attributes that were included in EntityFramework.dll 4.x moved to System.ComponentModel.DataAnnotations.dll in .NET 4.5. This attribute in particular moved to the System.ComponentModel.DataAnnotations.Schema namespace, so you'll at least need to add a using statement for that.
@Diego: I didn't even use this attribute (or any other). I created a new asp.net mvc 4 application, updated ef to 5.0beta, and scaffolded controller, views and dbcontext. When I run it, I get that exception at runtime. I added the using statement to my model, as you suggested, but the error persists. Thoughts?
@Ivan: In that case I am not sure what is going on. Would it be possible for you to send us a repro in a zip file? You can use the "email blog author" link in this page and we will reply with an email address. Thanks a lot in advance.
F# project with "Install-Package EntityFramework -IncludePrerelease" gave some errors…
In my solution I use a Solution Folder. When I run Add-Migration it throws an exception. I finally found that if I move the project out of the folder, it works fine.
It seems [DisplayFormat(ConvertEmptyStringToNull = false)] doesn't work in the new release 4.3.1!
I have a class definition:
public class Person
{
public int Id { get; set; }
[Required]
public string Name { get; set; }
[Required]
[DisplayFormat(ConvertEmptyStringToNull = false)]
public string Description { get; set; }
}
When I try to add a new Person with an empty Description, a validation exception occurs:
var person = new Person {Name="Tester", Description="" };
But in EF 4.1 there was no problem with the ConvertEmptyStringToNull setting. Please help! Thanks!
Hello Team,
I wonder how I can check which "version" the current migration is at.
And how can I remove the project's migration history so that I can make a new "Add-Migration" from the beginning?
@Leandro C. Guimarães
1. I view the version through the history table.
2. Roll back with Update-Database -TargetMigration, then delete the migration and set it up again.
This is my way.
As reported in an earlier comment, we are also getting the "Sequence contains more than one element" exception with EF5 Beta 1.
@Diego, I see from your post here twitter.com that you are aware of this problem. Could you give any more info on the cause of it, whether there are any workarounds, and when you plan to release a fix?
There are more details on the exact exception I am getting here:
stackoverflow.com/…/ef-5-beta-1-code-first-sequence-contains-more-than-one-element
We have a large solution which uses EF Code First, we can’t wait to try out the performance improvements so I’m very keen to get past this exception.
Has any progress been made on the "circular reference error when returning a serialized object" bug? Basically, the issue is that an EF class (such as one from Code First) with a virtual ICollection property will throw a "circular reference" error when serialized. A [ScriptIgnore] attribute should stop the problem from happening, but it seems to be ignored.
With the current effort made to simplify service access with the Web API etc. this would seem to require attention. Any possibility this is going to be addressed for 5.0?
Here are links describing the problem with some way to get around it:
stackoverflow.com/…/how-did-i-solve-the-json-serializing-circular-reference-error
stackoverflow.com/…/circular-reference-exception-with-json-serialisation-with-mvc3-and-ef4-ctp5w
BTW. Good news about the DataAnnotations moving out of the EntityFramework.dll !
@Viktar, @Paul2zl,
This is a known bug in EF5 Beta 1. We are going to fix this in Beta 2, which will be dropping within the next couple of weeks.
The bug only affects Independent Association (associations where the FK does not exist in your CLR classes) so you can workaround by switching to FK associations.
Apologies for the inconvenience.
Cheers,
Andrew.
@Khaled – TVF (and SPROC) support for Code First are absolutely on our backlog, but they just didn’t make it into this release. We use to help with prioritization so please be sure to vote for the features you want to see.
@Tuomas Hietanen – Can you start up a Stack Overflow thread with the exact errors you are seeing.
@Henry Zhou – Entity Framework doesn’t process ConvertEmptyStringToNull, if you have a UI layer that takes care of replacing null with an empty string that will keep EF validation happy, but EF won’t do this for you. You could override SaveChanges on your context and add logic to take care of this.
@Willc – The best option at the moment is still to disable proxy creation. We are going to take a look at what we can do in EF to solve this problem in the next version, but not in EF 5.
Um… screenshots of… console commands? So much for copying and pasting…
Um…and this bug? fixed?…/Entity-Framework-EntityFunctionsCreateDateTime-and-Leap-Year.aspx
Will EF 5 (the EF designer in VS11) support an entity model divided across several files, to improve parallel work among many developers and to decrease the number of merge conflicts caused by parallel model changes to the edmx file?
@ryuuseki – Fair point, we’ll use text in the future 🙂
@Rwing – Our team is looking at that bug at the moment, I’ll get back to you with the outcome.
@Alexey – EF5 supports multiple diagrams but those diagrams still target a single EDMX file. Multiple model files isn’t something we are planning to support in EF5. Please vote for the feature on our wishlist, we use this site when prioritizing features for the next release.
@Rwing – Diego, from our team, has followed up on the blog post.
It looks like the issue when using an abstract base class in a TPT inheritance model has not been resolved.
Is this going to be fixed in the release?
See blogs.msdn.com/…/announcing-the-microsoft-entity-framework-june-2011-ctp.aspx
and thedatafarm.com/…/entity-framework-june-2011-ctp-tpt-inheritance-query-improvements
A few questions about EF 5, since I am dying for access to the Geography data types in EF.
1. Will EF5 work in VS 11 only?
2. If so, I guess we are some time away from ready-for-prime-time release. If not, when might we go to RTM?
–mike
Working on VS 11 Beta I keep getting the error described in the linked stackoverflow post.
Creating a new project installing EF from the package manager console and all works fine.
As soon as I upgrade to the EF 5 beta, the project throws the following error when trying to SaveChanges.
Void System.Data.Objects.ObjectContextOptions.set_UseConsistentNullReferenceBehavior(Boolean)
link: stackoverflow.com/…/can-anyone-spot-why-i-keep-getting-this-error-testing-the-ef-5-beta
@Drauka: I responded to your question on Stack Overflow. It looks like you are using the wrong EntityFramework.dll for your target framework.
@Coop – EF5 will work in VS 2010 and VS 11 (there is a bug in Beta 1 that prevents the Code First Migrations commands from being used in VS 2010). To use the spatial data types you need to be targeting .NET 4.5 though, so you’ll need VS11 for that.
We don’t have a date for RTM yet but we are working on a ‘go-live’ release before the RTM of VS11 is available.
@Arthur Unfortunately your suggestions did not solve the problem.
It turned out that the problem @Drauka was having resulted from the EF June CTP failing to uninstall correctly. The Stack Overflow thread linked above contains steps to resolve this.
Can we rename the Discriminator column across TPC and TPH yet? 🙁
stackoverflow.com/…/cannot-rename-discriminator-column-in-entity-framework-4-1-code-first-database
NuGet is down, how do I get EF without NuGet?
Can someone share a pointer on how to run EF against a DB hosted at third party (in my case WinHost)? I am getting an error: Model compatibility cannot be checked because the database does not contain model metadata. Model compatibility can only be checked for databases created using Code First or Code First Migrations.
Trouble with validation of empty string.
I have a required field for which empty string is allowed, but null is forbidden. I am getting a validation error on SaveChanges(), "namespace field is required".
How do I make EDM treat empty string as a valid value?
thanks. (There are a lot of posts that bing returns that are not useful.)
Since EF5 is .NET 4.5 only, you will be leaving many people behind, unless .NET 4.5 RTM will support Windows XP. There are many WPF applications out there still running on Windows XP.
@LarsM – NuGet did have a short outage that you experienced; the NuGet team does work to make this a high-availability service. In the event of an outage, if you have installed the EntityFramework package before, you can get the .nupkg file from another project and host it in a local feed; docs.nuget.org/…/hosting-your-own-nuget-feeds
@Nikhil Singhal – Code First should just ‘trust you’ if you map to an existing database that wasn’t created by Code First. When are you hitting this error? It sounds like it is probably a bug. If you can use the ‘Email Blog Author’ link at the top right of this page to send us a full exception message and stack trace that would be great.
@mycall – EF5 will work on .NET 4.0. To get some of the new features you need to be targeting .NET 4.5 though.
@Pete Mack – That's the default behavior of Data Annotations and not specific to EF, you can change it using the AllowEmptyStrings parameter – [Required(AllowEmptyStrings = true)]
I added a new feature suggestion specifically for Code First support for TVF's here: data.uservoice.com/…/2686351-code-first-support-for-table-valued-functions.
The previous feature suggestion for TVF's had 226 votes when it was marked as completed by the EF team saying it will be included in .NET 4.5, but that excluded Code First support. Hopefully we can get those 226 votes again or more on the new feature suggestion I created specifically for Code First support.
Any news on when beta 2 will drop? I have run into the 'Sequence contains more than one element' bug (stackoverflow.com/…/whats-wrong-with-my-many-to-many-abstract-implementation-sequence-contains-mor) and I am anxious to move forward.
@Remco Blok – We’ll probably add TVF support when we add SPROC support, which is very near the top of our backlog now.
@C4702 – We’ve just wrapped up the code for Beta 2, looks like we’ll have it out in the next week (provided we don’t find any major issues with it).
Please publish a step-by-step recipe for how to enhance an existing app with an object graph in memory by applying EF 5 Code First. Most people I talk to want to apply Code First without reading Julie Lerman's books…
For example this is a European school:
Public MustInherit Class Person
Public Property Id As Integer
Public Property Name As String
Public Property FirstName As String
Private Shared mCount As Integer = 0 'count created objects into negative numbers to detect PKs defined by the DB
Public ReadOnly Property Count As Integer
Get
Return mCount
End Get
End Property
Public Sub New(name As String, firstName As String)
Me.Id = mCount 'use count as Id
mCount -= 1 'we use negative integers to find out whether and when the DB generates the primary key
Me.Name = name
Me.FirstName = firstName
End Sub
End Class
Public Class Student
Inherits Person
Public Sub New(name As String, firstName As String)
MyBase.New(name, firstName)
End Sub
End Class
Public Class Teacher
Inherits Person
Public Sub New(name As String, firstName As String)
MyBase.New(name, firstName)
End Sub
End Class
and the School class contains a List(Of Teacher) and a List(Of Student).
What IDs must be changed? What do the Fluent API calls look like for TPH/TPT/TPC? …
How do I load the EF 4.3.1 Debug Symbols / Source for debugging?
I am trying to step into an exception and I am having trouble loading the symbols so I can view some of the variable values to see where in my model the exception is occurring (appears to be building the dynamic proxy). How do I load the EF 4.3.1 Debug Symbols / Source for debugging?
It's in a winform test harness. System.Data.Entity.Edm.EdmProperty: Name: Name 'EquipmentPckgDesc' cannot be used in type 'WinForm.EquipmentPckgDesc'. Member names cannot be the same as their enclosing type.
@Johan Botha – The source code for Entity Framework currently isn’t published, which is required to be able to step in and debug. The error you pasted doesn’t seem to be related to EF though; it sounds like there is a property with the same name as its containing class in your code.
My team and I have really struggled with using the Entity Framework with SQL Azure. I thought these things "would just work" together! Turns out, there are tons of transient faults that happen with SQL Azure and EF has no way to handle them. That is, unless you want to write some really ugly, cumbersome wrapper code which doesn't fully get the job done.
My feature request is now the #1 "Azure" tagged EF feature request. data.uservoice.com/…/2426525-automatically-perform-retry-logic-for-sql-azure
Is it possible we'll see this implemented anytime soon?
Thank you very much.
Urgent!
My code was working fine using EF 4.1. After upgrading to 4.3.1, it does not work anymore.
It always throws exception:
Method System.Data.Entity.Database.SetInitializer: type argument 'WcScadaWebApp.DAL.TurbineContext' violates the constraint of type parameter 'TContext'.
when calling the method below.
protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
RegisterGlobalFilters(GlobalFilters.Filters);
RegisterRoutes(RouteTable.Routes);
Database.SetInitializer<TurbineContext>(new TurbineInitializer());
}
Could anyone please help me to figure out why?
@Matt – This feature isn't going to be in EF 5 but we are planning to do some SQL Azure related work in EF 6.
@Ren Jun – Do you have TurbineContext or TurbineInitializer defined in a separate project? If so, it sounds like they may be referencing an older version of EF, and you may need to upgrade the NuGet package in those projects too. If that's not the problem then start up a thread on Stack Overflow and we'll take a look at it.
@Rowan Miller – (From Ren Jun) Thank you very much. That was the case as you said. It was fixed after updating reference to new EF in another project.
Hi there,
I have EF 4.3.1 installed using NuGet, but I can't see "EF 4.x DbContext Generator" in the "Add New Item" choices. Can anyone advise please?
Thanks
@Kevin – You'll need to look under the 'Online' tab to see the templates.
It appears that there may be a bug in EF 4.3.1 relating to how the indexes are created. Could you guys take a look at my question stackoverflow.com/…/unhandled-exception-after-upgrading-to-entity-framework-4-3-1 and possibly give us some insight into the changes that were made to the index generation procedure from 4.2 to 4.3? Thanks!
@Jeff, Thank you for reporting this. We will include the fix for this in a future release of EF. As a workaround, you can use Migrations to create your database and remove the redundant call to Index() on the PrivateMakeUpLessons table create code. I'll post more details on your StackOverflow question.
@Brice, Thanks for the quick follow-up. This blog is the reason I chose EF over all the other ORMs out there.
EF 5.0 doesn't support Silverlight 5 yet?
Install failed. Rolling back…
Install-Package : Could not install package 'EntityFramework 5.0.0-rc'. You are trying to install this package into a project that targets 'Silverlight,Version=v5.0', but the package does not contain any assembly references that are compatible with that framework. For more information, contact the package author.
At line:1 char:16
+ Install-Package <<<< EntityFramework -IncludePrerelease
+ CategoryInfo : NotSpecified: (:) [Install-Package], InvalidOperationException
+ FullyQualifiedErrorId : NuGetCmdletUnhandledException,NuGet.PowerShell.Commands.InstallPackageCommand
@Anson, The Entity Framework is not supported on the Silverlight runtime. However, our sister technology, WCF Data Services, is. msdn.microsoft.com/…/bb931106
@Jeff, just an update: the fix for the issue on duplicate indexes will be included in EF5 RTM.
A Simple Camel Route in WildFly Swarm
Integrations using Camel are pretty flexible. When working within a JavaEE environment you will want to use CDI; these instructions will get you up and running.
To get started with WildFly Swarm and Camel, we’ll build an application that makes use of some common JavaEE features as part of a simple Camel route.
Camel is quite flexible when it comes to the environments it will integrate with. One of the decisions you need to make is which registry Camel will use to look up POJOs. Camel supports a wide variety of registries including JNDI, Spring ApplicationContext, a simple Map, or CDI. Since we are working in a JavaEE environment, we’ll make use of CDI.
With that in mind, let's take a look at the build script.
buildscript {
    repositories {
        mavenLocal()
        mavenCentral()
    }
    dependencies {
        classpath "org.wildfly.swarm:wildfly-swarm-plugin:1.0.0.Beta2"
    }
}

group 'com.matthewcasperson'
version '1.0-SNAPSHOT'

apply plugin: 'war'
apply plugin: 'wildfly-swarm'

sourceCompatibility = 1.8

def swarmVersion = '1.0.0.Beta2'
def camelVersion = '2.17.0'

repositories {
    mavenCentral()
    mavenLocal()
    maven { url '' }
    maven { url '' }
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    compile 'com.sun.xml.bind:jaxb-core:2.2.11'
    compile 'com.sun.xml.bind:jaxb-impl:2.2.11'
    compile 'org.wildfly.swarm:bootstrap:' + swarmVersion
    compile 'org.wildfly.swarm:undertow:' + swarmVersion
    compile 'org.wildfly.swarm:weld:' + swarmVersion
    compile 'org.apache.camel:camel-core:' + camelVersion
    compile 'org.apache.camel:camel-servletlistener:' + camelVersion
    compile 'org.apache.camel:camel-servlet:' + camelVersion
    compile 'org.apache.camel:camel-cdi:' + camelVersion
}

// For heroku
task stage {
    dependsOn build
}
We have included dependencies on the Swarm CDI library called Weld, as well as Undertow, which is the WildFly web server. Additionally, we have pulled in some Camel dependencies that let us interact with servlet requests and reference CDI as a bean registry.
The web.xml file is an amalgamation of the examples provided in the servlet listener and servlet documentation. We use these two pieces of the Camel library to allow routes to be called via a Servlet mapping, and to kickstart Camel when the application is started.
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app>
  <display-name>My Web Application</display-name>

  <!-- you can configure any of the properties on CamelContext,
       eg setName will be configured as below -->
  <context-param>
    <param-name>name</param-name>
    <param-value>MyCamel</param-value>
  </context-param>

  <!-- the listener that kick-starts Camel -->
  <listener>
    <listener-class>org.apache.camel.component.servletlistener.SimpleCamelServletContextListener</listener-class>
  </listener>

  <!-- Camel servlet used in the Camel application -->
  <servlet>
    <servlet-name>CamelServlet</servlet-name>
    <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class>
  </servlet>

  <servlet-mapping>
    <servlet-name>CamelServlet</servlet-name>
    <url-pattern>/camel/*</url-pattern>
  </servlet-mapping>
</web-app>
Alongside the web.xml file is a blank file called beans.xml. This is used as a marker file to enable CDI in our application.
This sample application then has two classes. The first is a pretty standard CDI bean. We have declared it as a singleton, given it a custom name of myBean, and defined a single method called sayHello.
The two parameters that are taken by the sayHello method have custom Camel annotations. The first, @Header, is used to reference a value from the Camel message header. The second, @Simple, is a Camel simple language expression which references a system property.
package com.matthewcasperson.swarmdemo;

import org.apache.camel.Header;
import org.apache.camel.language.Simple;

import javax.inject.Named;
import javax.inject.Singleton;

/**
 * A CDI bean that can be consumed by Camel
 */
@Singleton
@Named("myBean")
public class HelloBean {
    public String sayHello(
            @Header("name") final String name,
            @Simple("${sys.user.country}") final String country) {
        return "Hello " + name + ", how are you? You are from: " + country;
    }
}
The second class is where we define our Camel route. Note that we don’t have to do anything more than define this class, and the Camel CDI integration will auto-detect it. The route we have defined accepts a request from the servlet we defined in the web.xml file, and passes the incoming data to an instance of our HelloBean class. Note that we reference the bean by its custom name of myBean, and not by the class name. Our CDI bean then returns a String, which is passed back to the caller.
package com.matthewcasperson.swarmdemo;

import org.apache.camel.builder.RouteBuilder;

/**
 * A camel route that uses our CDI bean
 */
class MyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("servlet:hello").to("bean:myBean");
    }
}
Once compiled, you can call the /camel/hello endpoint, and you will get the response of the myBean bean back.
I have taken the liberty of uploading this sample project to Heroku, so you can see this Camel route in action.
The source code for this project can be found online.
|
From: christopher diggins (cdiggins_at_[hidden])
Date: 2005-06-02 09:37:42
----- Original Message -----
From: "Victor A. Wagner Jr." <vawjr_at_[hidden]>
To: <boost_at_[hidden]>; <boost_at_[hidden]>
Sent: Thursday, June 02, 2005 9:25 AM
Subject: Re: [boost] set arithmetic proposal
> At 16:16 2005-05-26, Dominic Herity wrote:
>>I've often wished that STL sets supported set arithmetic operations.
Me too.
>>Addition, multiplication and subtraction operators could be overloaded
>>with
>>set union, intersection and difference respectively.
>>
>>For example:
>> set<char> s1; // s1 <- {}
>> s1 += 'a'; // s1 <- {a}
>> set<char> s2 = s1 + 'b'; // s2 <- {a,b}
>> set<char> s3 = s2 - s1; // s3 <- {b}
>> set<char> s4 = s1 * s2; // s4 <- {a}
>>
>>Achieving the above results with the existing STL library is by comparison
>>cumbersome, error-prone and unclear.
I have a char_set class to contribute, which behaves as above. (and is
released under the boost license) at . A more recent version is
available at
char_set x1('a', 'c'); // a..c
char_set x2('d', 'f'); // d..f
char_set x3(x1 & x2); // a..f
char_set x4(char_set('a','z') | x3); // g..z
assert(!x4['a']);
assert(x4['z']);
There is also a compile-time version of the char_set which allows the
construction of compile_time sets:
For instance:
typedef char_set_range<'a', 'c'> x1_t;
typedef char_set_range<'d', 'f'> x2_t;
typedef char_set_union<x1_t, x2_t> x3_t;
typedef char_set_intersection<char_set_range<'a','z'>, x3_t> x4_t;
const x4_t X4;
assert(!X4['a']);
assert(X4['z']);
>>Although this proposal does not follow the STL design philosophy, it meets
>>a
>>real developer need. It can be implemented using global operator
>>overloads,
>>which should avoid conflicts with other uses of STL sets.
>>
>
> I guess you and I must be the only two former Pascal programmers hanging
> around boost.
Me too!
> It seems to me that "set of" from Pascal wasn't ever popular with anyone
> but those who really _really_ used them heavily (e.g. Ada dropped them
> when it was derived from Pascal).
I used them heavily for writing a Heron compiler in Delphi.
> I argued (unsuccessfully) that when the boost::dynamic_bitset<> was
> proposed that the _least_ we could do would be to fix the
> mis-characterization of operators <= and >= with regards to sets (for
> those who haven't studied sets, {b,c,d} <= {a,b,c,d} but that's not
> how it will evaluate with either std::set or boost::dynamic_bitset).
> Absent that oddity, I suspect that you could rather easily overload some
> global operators to handle what you want using any of std::set,
> std::bitset, or boost::dynamic_bitset (the backwardness of the comparison
> operators would still bother me, though). Hmmmm, maybe I should look more
> closely at the "unordered collections" (hash_set) proposal further and see
> if that correctly manages the set concept.
It might be very interesting to also overload comma to return sets, thus
allowing:
namespace true_sets
{
assert((1, 2, 3) + (4, 5) < (1, 2, 3, 4, 5, 6));
assert((1, 2) == (2, 1));
assert(range(1, 3) == (2, 1, 3));
}
-Christopher Diggins
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
AWS CodeArtifact Concepts
Here are some concepts and terms to know when you use CodeArtifact.
Topics
Domain
Repositories are aggregated into a higher-level entity known as a domain. All package assets and metadata are stored in the domain, but they are consumed through repositories. A given package asset, such as a Maven JAR file, is stored once per domain, no matter how many repositories it's present in. All of the assets and metadata in a domain are encrypted with the same AWS KMS key (KMS key) stored in AWS Key Management Service (AWS KMS).
Each repository is a member of a single domain and can't be moved to a different domain.
The domain allows organizational policy to be applied across multiple repositories, such as which accounts can access repositories in the domain, and which public repositories can be used as sources of packages.
Although an organization can have multiple domains, we recommend a single production domain that contains all published artifacts so that teams can find and share packages across their organization.
Repository
A CodeArtifact repository contains a set of package versions, each of which maps to a set of assets. Repositories are polyglot—a single repository can contain packages of any supported type. Each repository exposes endpoints for fetching and publishing packages using tools like the nuget CLI, the npm CLI, the Maven CLI (mvn), and pip. You can create up to 1000 repositories per domain.
Package
A package is a bundle of software and the metadata that is required to resolve dependencies and install the software. In CodeArtifact, a package consists of a package name, an optional namespace such as @types in @types/node, a set of package versions, and package-level metadata such as npm tags. AWS CodeArtifact supports npm, PyPI, Maven, and NuGet package formats.
Package version
A package version identifies the specific version of a package, such as @types/node 12.6.9. The version number format and semantics vary for different package formats. For example, npm package versions must conform to the Semantic Versioning specification.
Package version revision
A package version revision is a string that identifies a specific set of assets and metadata for a package version. Each time a package version is updated, a new package version revision is created. For example, you might publish a source distribution archive (sdist) for a Python package version, and later add a Python wheel that contains compiled code to the same version. When you publish the wheel, a new package version revision is created.
Upstream repository
One repository is upstream of another when the package versions in it can be accessed from the repository endpoint of the downstream repository, effectively merging the contents of the two repositories from the point of view of a client. CodeArtifact allows creating an upstream relationship between two repositories.
Asset
An asset is an individual file stored in CodeArtifact that is associated with a package version, such as an npm .tgz file or Maven POM and JAR files.
Package namespace
Some package formats support hierarchical package names to organize packages into logical groups and help avoid name collisions. For example, npm supports scopes (see the npm scopes documentation). The npm package @types/node has a scope of @types and a name of node, and there are many other package names in the @types scope. In CodeArtifact, the scope (“types”) is referred to as the package namespace and the name (“node”) is referred to as the package name. For Maven packages, the package namespace corresponds to the Maven groupID. The Maven package org.apache.logging.log4j:log4j has a groupID (package namespace) of org.apache.logging.log4j and an artifactID (package name) of log4j.

Some package formats, such as PyPI, don't support hierarchical names with a concept similar to npm scopes or Maven groupIDs. Without a way to group package names, it can be more difficult to avoid name collisions.
|
Back Up GitHub and GitLab Repositories Using Golang
package main

import (
    "flag"
    "log"
)

func main() {
    name := flag.String("name", "", "Your Name")
    flag.Parse()
    if len(*name) != 0 {
        log.Printf("Hello %s", *name)
    } else {
        log.Fatal("Please specify your name using -name")
    }
}
The first line in the program declares the package for the program. The main package is special, and any executable Go program must live in the main package. Next, the program imports two packages from the Golang standard library using the import statement:
import (
    "flag"
    "log"
)
The "flag" package is used to handle command-line arguments to programs, and the "log" package is used for logging. Next, the program defines the main() function, where program execution starts:
func main() {
    name := flag.String("name", "", "Your Name")
    flag.Parse()
    if len(*name) != 0 {
        log.Printf("Hello %s", *name)
    } else {
        log.Fatal("Please specify your name using -name")
    }
}
Unlike other functions you'll write, the main function doesn't return anything, nor does it take any arguments. The first statement in the main() function above defines a string flag, "name", with a default value of an empty string and "Your Name" as the help message. The return value of the function is a string pointer stored in the variable, name. The := is a shorthand notation for declaring a variable whose type is inferred from the value being assigned to it. In this case, it is of type *string—a reference or pointer to a string value.

The Parse() function parses the flags and makes the specified flag values available via the returned pointer. If a value has been provided to the "-name" flag when executing the program, the value will be stored in "name" and is accessible via *name (recall that name is a string pointer). Hence, you can check whether the length of the string referred to via name is non-zero, and if so, print a greeting via the Printf() function of the log package. If, however, no value was specified, you use the Fatal() function to print a message. The Fatal() function prints the specified message and terminates the program execution.
Structures, Slices and Maps
The program shown in Listing 2 demonstrates the following different things:
- Defining a struct data type.
- Creating a map.
- Creating a slice and iterating over it.
- Defining a user-defined function.
Listing 2. Structures, Slices and Maps Example
package main import ( "log" ) type Repository struct { GitURL string Name string } func getRepo(id int) Repository { repos := map[int]Repository{ 1: Repository{GitURL: "ssh://github.com/amitsaha/gitbackup", ↪Name: "gitbackup"}, 2: Repository{GitURL: "ssh://github.com/amitsaha/lj_gitbackup", ↪Name: "lj_gitbackup"}, } return repos[id] } func backUp(r *Repository) { log.Printf("Backing up %s\n", r.Name) } func main() { var repositories []Repository repositories = append(repositories, getRepo(1)) repositories = append(repositories, getRepo(2)) repositories = append(repositories, getRepo(3)) for _, r := range repositories { if (Repository{}) != r { backUp(&r) } } }
At the beginning, you define a new struct data type, Repository, as follows:
type Repository struct {
    GitURL string
    Name   string
}
The structure Repository has two members: GitURL and Name, both of type string. You can define a variable of this structure type using r := Repository{"git+ssh://git.mydomain.com/myrepo", "myrepo"}. You can choose to leave one or both members out when defining a structure variable. For example, you can leave the GitURL unset using r := Repository{Name: "myrepo"}, or you can even leave both out. When you leave a member unset, the value defaults to the zero value for that type—0 for int, the empty string for string.
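The zero-value behavior described above can be seen in a short, self-contained sketch (this example is not from the original article):

```go
package main

import "fmt"

// Repository mirrors the struct defined in the article.
type Repository struct {
	GitURL string
	Name   string
}

func main() {
	// Declaring a variable without initializing any member gives
	// each member the zero value for its type: "" for strings.
	var r Repository
	fmt.Printf("GitURL=%q Name=%q\n", r.GitURL, r.Name)

	// A struct whose members are all comparable can be compared against
	// an empty composite literal to detect a completely unset value.
	fmt.Println(r == Repository{})
}
```

Running this prints the empty members and then true, which is exactly the comparison the article's main() function relies on to skip empty entries.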
Next, you define a function, getRepo, which takes an integer as an argument and returns a value of type Repository:
func getRepo(id int) Repository {
    repos := map[int]Repository{
        1: Repository{GitURL: "git+ssh://github.com/amitsaha/gitbackup", Name: "gitbackup"},
        2: Repository{GitURL: "git+ssh://github.com/amitsaha/lj_gitbackup", Name: "lj_gitbackup"},
    }
    return repos[id]
}
In the getRepo() function, you create a map, or hash table, of key-value pairs—the key being an integer and the value being of type Repository. The map is initialized with two key-value pairs. The function returns the Repository that corresponds to the specified integer. If a specified key is not found in a map, a zero value of the value's type is returned. In this case, if an integer other than 1 or 2 is supplied, a value of type Repository is returned with both members set to empty strings.
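That lookup behavior can be illustrated with a minimal sketch (not from the article; the URL is a placeholder):

```go
package main

import "fmt"

type Repository struct {
	GitURL string
	Name   string
}

func main() {
	repos := map[int]Repository{
		1: {GitURL: "ssh://example.com/repo1", Name: "repo1"},
	}

	// A missing key yields the zero value of the map's value type,
	// so looking up key 42 returns an empty Repository.
	fmt.Println(repos[42] == Repository{})

	// The "comma ok" form distinguishes a missing key from a key
	// that was explicitly stored with a zero value.
	r, ok := repos[1]
	fmt.Println(r.Name, ok)
}
```

The article's code checks the returned value against Repository{} rather than using the comma-ok form; either approach works here because an all-empty Repository is never a legitimate entry.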
Next, you define a function, backUp(), which accepts a pointer to a variable of type Repository as an argument and prints the Name of the repository. In the final program, this function actually will create a backup of a repository. Finally, there is the main() function:
func main() {
    var repositories []Repository
    repositories = append(repositories, getRepo(1))
    repositories = append(repositories, getRepo(2))
    repositories = append(repositories, getRepo(3))

    for _, r := range repositories {
        if (Repository{}) != r {
            backUp(&r)
        }
    }
}
In the first statement, you create a slice, repositories, that will store elements of type Repository. A slice in Golang is a dynamically sized array—similar to a list in Python. You then call the getRepo() function to obtain the repository corresponding to the key 1 and store the returned value in the repositories slice using the append() function. You do the same in the next two statements. When you call the getRepo() function with the key 3, you get back an empty value of type Repository.

You then use a for loop with the range clause to iterate over the elements of the slice, repositories. The index of the element in the slice is stored in the _ variable, and the element itself is referred to via the r variable. You check whether the element is not an empty Repository variable, and if it isn't, you call the backUp() function, passing the address of the element. It is worth mentioning that there is no reason to pass the element's address; you could have passed the element's value itself. However, passing by address is a good practice when a structure has a large number of members.
$ go run listing2.go
2017/02/19 19:44:32 Backing up gitbackup
2017/02/19 19:44:32 Backing up lj_gitbackup
Goroutines and Channels
Consider the previous program (Listing 2). You call the backUp() function with every repo in repositories serially. When you actually create a backup of a large number of repositories, doing them serially can be slow. Since each repository backup is independent of any other, they can be run in parallel. Golang makes it really easy to have multiple simultaneous units of execution in a program using goroutines.

A goroutine is what other programming languages refer to as lightweight threads or green threads. By default, a Golang program is said to be executing in a main goroutine, which can spawn other goroutines. A main goroutine can wait for all the spawned goroutines to finish before finishing up using a variable of WaitGroup type, as you'll see next.
Listing 3 modifies the previous program such that the backUp() function is called in a goroutine. The main() function declares a variable, wg, of type WaitGroup, defined in the sync package, and then sets up a deferred call to the Wait() function of this variable. The defer statement is used to execute any function just before the current function returns. Thus, you ensure that you wait for all the goroutines to finish before exiting the program.
Listing 3. Goroutines Example

package main

import (
    "log"
    "sync"
)

// Repository and getRepo() are unchanged from Listing 2.
type Repository struct {
    GitURL string
    Name   string
}

func getRepo(id int) Repository {
    repos := map[int]Repository{
        1: Repository{GitURL: "ssh://github.com/amitsaha/gitbackup", Name: "gitbackup"},
        2: Repository{GitURL: "ssh://github.com/amitsaha/lj_gitbackup", Name: "lj_gitbackup"},
    }
    return repos[id]
}

func backUp(r *Repository, wg *sync.WaitGroup) {
    defer wg.Done()
    log.Printf("Backing up %s\n", r.Name)
}

func main() {
    var wg sync.WaitGroup
    defer wg.Wait()

    var repositories []Repository
    repositories = append(repositories, getRepo(1))
    repositories = append(repositories, getRepo(2))
    repositories = append(repositories, getRepo(3))

    for _, r := range repositories {
        if (Repository{}) != r {
            wg.Add(1)
            go func(r Repository) {
                backUp(&r, &wg)
            }(r)
        }
    }
}
The other primary change in the main() function is how you call the backUp() function. Instead of calling this function directly, you call it in a new goroutine as follows:
wg.Add(1)
go func(r Repository) {
    backUp(&r, &wg)
}(r)
You call the Add() function with an argument of 1 to indicate that you'll be creating a new goroutine that you want to wait for before you exit. Then, you define an anonymous function taking an argument, r, of type Repository, which calls the function backUp() with an additional argument: a reference to the variable wg—the WaitGroup variable declared earlier.
Consider the scenario where you have a large number of elements in your repositories list—a very realistic scenario for this backup tool. Spawning a goroutine for each element in the repository can easily lead to having an uncontrolled number of goroutines running concurrently. This can lead to the program hitting per-process memory and file-descriptor limits imposed by the operating system.
Thus, you would want to regulate the maximum number of goroutines spawned by the program and spawn a new goroutine only when the ones executing have finished. Channels in Golang allow you to achieve this and other synchronization operations among goroutines. Listing 4 shows how you can regulate the maximum number of goroutines spawned.
Listing 4. Channels Example

package main

import (
    "log"
    "sync"
)

type Repository struct {
    GitURL string
    Name   string
}

func getRepo(id int) Repository {
    repos := map[int]Repository{
        1:  Repository{GitURL: "ssh://github.com/amitsaha/gitbackup", Name: "gitbackup"},
        2:  Repository{GitURL: "ssh://github.com/amitsaha/lj_gitbackup", Name: "lj_gitbackup"},
        3:  Repository{GitURL: "ssh://github.com/amitsaha/gitbackup", Name: "gitbackup"},
        4:  Repository{GitURL: "ssh://github.com/amitsaha/lj_gitbackup", Name: "lj_gitbackup"},
        5:  Repository{GitURL: "ssh://github.com/amitsaha/gitbackup", Name: "gitbackup"},
        6:  Repository{GitURL: "ssh://github.com/amitsaha/lj_gitbackup", Name: "lj_gitbackup"},
        7:  Repository{GitURL: "ssh://github.com/amitsaha/gitbackup", Name: "gitbackup"},
        8:  Repository{GitURL: "ssh://github.com/amitsaha/lj_gitbackup", Name: "lj_gitbackup"},
        9:  Repository{GitURL: "ssh://github.com/amitsaha/gitbackup", Name: "gitbackup"},
        10: Repository{GitURL: "ssh://github.com/amitsaha/lj_gitbackup", Name: "lj_gitbackup"},
    }
    return repos[id]
}

func backUp(r *Repository, wg *sync.WaitGroup) {
    defer wg.Done()
    log.Printf("Backing up %s\n", r.Name)
}

func main() {
    var wg sync.WaitGroup
    defer wg.Wait()

    var repositories []Repository
    repositories = append(repositories, getRepo(1))
    repositories = append(repositories, getRepo(2))
    repositories = append(repositories, getRepo(3))
    repositories = append(repositories, getRepo(4))
    repositories = append(repositories, getRepo(5))
    repositories = append(repositories, getRepo(6))
    repositories = append(repositories, getRepo(7))
    repositories = append(repositories, getRepo(8))
    repositories = append(repositories, getRepo(9))
    repositories = append(repositories, getRepo(10))

    // Create a channel of capacity 5
    tokens := make(chan bool, 5)

    for _, r := range repositories {
        if (Repository{}) != r {
            wg.Add(1)
            // Get a token
            tokens <- true
            go func(r Repository) {
                backUp(&r, &wg)
                // release the token
                <-tokens
            }(r)
        }
    }
}
You create a channel of capacity 5 and use it to implement a token system. The channel is created using make:
tokens := make(chan bool, 5)
The above statement creates a "buffered channel"—a channel with a capacity of 5 that can store only values of type bool. If a buffered channel is full, writes to it will block, and if a channel is empty, reads from it will block. This property allows you to implement your token system.
Before you spawn a goroutine, you write a boolean value, true, into the channel ("taking" a token) and then read it back once you are done ("releasing" the token). If the channel is full, it means the maximum number of goroutines is already running and, hence, your attempt to write will block and a new goroutine will not be spawned. The write operation is performed via:
tokens <- true
After the control is returned from the
backUp()
function, you read a
value from the channel and, hence, release the token:
<-tokens
The above mechanism ensures that you never have more than five goroutines running simultaneously, and each goroutine releases its token before it exits so that the next goroutine may run. The file, listing5.go in the GitHub repository mentioned at the end of the article uses the runtime package to print the number of goroutines running using this mechanism, essentially allowing you to verify your implementation.
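To see the token mechanism in isolation, here is a small self-contained sketch. The function name and the atomic counters are illustrative, not part of gitbackup; they exist only so the program can report the peak number of goroutines that ran at once, which should never exceed the channel's capacity.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// maxConcurrent runs `jobs` goroutines, capped at `limit` concurrent
// ones via a buffered channel of tokens, and returns the observed
// peak number of simultaneously running goroutines.
func maxConcurrent(jobs, limit int) int32 {
	var wg sync.WaitGroup
	tokens := make(chan bool, limit)
	var running, peak int32

	for i := 0; i < jobs; i++ {
		wg.Add(1)
		tokens <- true // take a token; blocks once `limit` are taken
		go func() {
			defer wg.Done()
			n := atomic.AddInt32(&running, 1)
			// Record the peak concurrency seen so far.
			for {
				p := atomic.LoadInt32(&peak)
				if n <= p || atomic.CompareAndSwapInt32(&peak, p, n) {
					break
				}
			}
			atomic.AddInt32(&running, -1)
			<-tokens // release the token
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&peak)
}

func main() {
	fmt.Println("peak concurrent goroutines:", maxConcurrent(20, 5))
}
```

Because only five tokens exist, the reported peak is always between 1 and 5, no matter how many jobs are queued.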
gitbackup—Backing Up GitHub and GitLab Repositories
In the example programs so far, I haven't explored using any third-party
packages. Whereas Golang's built-in tools completely support having an
application using third-party repositories, you'll use a tool called
gb for developing your "gitbackup" project. One main reason I like
gb is how it's really easy to fetch and
update third-party
dependencies via its "vendor" plugin. It also does away with the need
to have your go application in your
GOPATH, a
requirement that the built-in go tools assume.
Next, you'll fetch and build
gb:
$ go get github.com/constabulary/gb/...
The compiled binary
gb is placed in the directory
$GOPATH/bin. You'll
add
$GOPATH/bin to the
$PATH environment variable and start a new shell
session and type in
gb:
$ gb
gb, a project based build tool for the Go programming language.

Usage:
        gb command [arguments]
..
Next, install the gb-vendor plugin:
$ go get github.com/constabulary/gb/cmd/gb-vendor
gb works on the notion of projects. A project has an
"src" subdirectory
inside it, with one or more packages in their own sub-directories. Clone
the "gitbackup" project from and
you will notice the following directory structure:
$ tree -L 1 gitbackup
gitbackup
|--src
|  |--gitbackup
|     |--main.go
|     |--main_test.go
|     |--remote.go
..
The "gitbackup" application is composed of only a single package, "gitbackup", and it has two program files and unit tests. Let's take a look at the remote.go file first. Right at the beginning, you import third-party repositories in addition to a few from the standard library:
- github.com/google/go-github/github: this is the Golang interface to the GitHub API.
- golang.org/x/oauth2: used to send authenticated requests to the GitHub API.
- github.com/xanzy/go-gitlab: Golang interface to the GitLab API.
You define a struct of type
Response, which matches
the
Response structure
implemented by both the GitHub and GitLab libraries above. The struct
Repository describes each repository that you fetch from either GitLab
or GitHub. It has two string fields: GitURL, representing the git clone
URL of the repository, and Name, the name of the repository.
The
NewClient() function accepts the service name
(
github or
gitlab)
as a parameter and returns the corresponding client, which then will be
used to interface with the service. The return type of this function is
interface{}, a special Golang type indicating that this function can
return a value of any type. Depending on the service name specified, it
either will be of type
*github.Client or
*gitlab.Client. If a different
service name is specified, it will return nil. To be able to fetch
your list of repositories before you can back them up, you will need to
specify an access token via an environment variable.
The token for GitLab is specified via the
GITLAB_TOKEN environment
variable and for GitHub via the
GITHUB_TOKEN environment variable. In
this function, you check if the correct environment variable has been
specified using the
Getenv() function from the os package. The function
returns the value of the environment variable if specified and an
empty string if the specified environment variable wasn't found. If
the corresponding environment variable isn't found, you log a message
and exit using the
Fatal() function from the log package.
The
NewClient() function is used by the
getRepositories() function, which returns a slice of
Repository objects obtained via an API call to
the service. There are two conditional blocks in the function to account
for the two supported services. The first conditional block handles
repository listing for GitHub via the
Repositories.List() function
implemented by the github.com/google/go-github package. The first
argument to this function is the GitHub user name whose repositories
you want to fetch. If you leave it as an empty string, it returns the
repositories of the currently authenticated user. The second argument
to this option is a value of type
github.RepositoryListOptions, which
allows you to specify the type of repositories you want returned via the
Type field. The call to the function
Repositories.List() is as follows:
repos, resp, err := client.(*github.Client).Repositories.List("", &options)
Recall that the
NewClient() function returns a value
of type
interface{},
which is an empty interface. Hence, if you attempt to make your function
call as
client.Repositories.List(), the compiler will complain with an
error message:
# gitbackup
remote.go:70: client.Repositories undefined (type interface {} is interface with no methods)
So, you need to perform a "type assertion" through which you get access
to the underlying value of client, which is either of the
*github.Client or
*gitlab.Client type.
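Here is a minimal, self-contained illustration of that assertion. The githubClient and gitlabClient types are stand-ins invented for this sketch, not the real *github.Client and *gitlab.Client.

```go
package main

import "fmt"

type githubClient struct{ name string }
type gitlabClient struct{ name string }

// newClient mirrors the article's NewClient(): its static return type
// is interface{}, hiding the concrete client type.
func newClient(service string) interface{} {
	switch service {
	case "github":
		return &githubClient{name: "github"}
	case "gitlab":
		return &gitlabClient{name: "gitlab"}
	}
	return nil
}

func main() {
	c := newClient("github")

	// Type assertion: recover the concrete type before calling methods
	// or accessing fields on it.
	gh := c.(*githubClient)
	fmt.Println(gh.name)

	// The two-value form reports failure instead of panicking.
	if _, ok := c.(*gitlabClient); !ok {
		fmt.Println("not a *gitlabClient")
	}
}
```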
You query the list of repositories from the service in an infinite loop
indicated by the
for loop:
for {
    // This is an infinite loop
}
The function returns three values: the first is a list of repositories,
the second is an object of type
Response, and the
third is an error value. If the
function call was successful, the value of
err is
nil. You then iterate
over each of the returned objects, create a
Repository object containing
two fields you care about and append it to the slice repositories. Once you
have exhausted the list of repositories returned, you check the
NextPage
field of the
resp object to check whether it is equal to 0. If it is
equal to 0, you know there isn't anything else to read; you break from
the loop and return from the function with the list of repositories you
have so far. If you have a non-zero value, you have more repositories,
so you set the
Page field in the
ListOptions structure to this value:
options.ListOptions.Page = resp.NextPage
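Schematically, the whole pagination loop looks like the sketch below. Here page() is an invented stand-in for the Repositories.List() call, returning canned data, and a NextPage of 0 signals the last page, as in the go-github response object.

```go
package main

import "fmt"

// response mimics the relevant field of the API response object.
type response struct{ NextPage int }

// page simulates one API call: it returns the repositories on page n
// and the number of the next page (0 means there are no more pages).
func page(n int) ([]string, response) {
	data := map[int][]string{
		1: {"gitbackup"},
		2: {"lj_gitbackup"},
	}
	next := n + 1
	if next > 2 {
		next = 0
	}
	return data[n], response{NextPage: next}
}

// fetchAll loops until NextPage is 0, accumulating all repositories.
func fetchAll() []string {
	var repos []string
	current := 1
	for {
		items, resp := page(current)
		repos = append(repos, items...)
		if resp.NextPage == 0 {
			break
		}
		current = resp.NextPage
	}
	return repos
}

func main() {
	fmt.Println(fetchAll())
}
```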
The handler for the "gitlab" service is almost the same as the "github" service with one additional detail. "gitlab" is an open-source project, and you can have a custom installation running on your own host. You can handle it here via this code:
if len(gitlabUrl) != 0 {
    gitlabUrlPath, err := url.Parse(gitlabUrl)
    if err != nil {
        log.Fatalf("Invalid gitlab URL: %s", gitlabUrl)
    }
    gitlabUrlPath.Path = path.Join(gitlabUrlPath.Path, "api/v3")
    client.(*gitlab.Client).SetBaseURL(gitlabUrlPath.String())
}
If the value in
gitlabUrl is a non-empty string, you
assume that you need to
query the GitLab hosted at this URL. You attempt to parse it first using
the
Parse() function from the "url" package and exit with an error
message if the parsing fails. The GitLab API lives at <DNS of gitlab
installation>/api/v3, so you update the
Path object of the parsed URL
and then call the function
SetBaseURL() of the
gitlab.Client to set
this as the base URL.
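The URL manipulation on its own can be sketched like this (apiBase() is an illustrative name; the real code feeds the result straight into SetBaseURL()):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// apiBase parses a GitLab host URL and appends the API root, api/v3,
// to whatever path the URL already has.
func apiBase(gitlabURL string) (string, error) {
	u, err := url.Parse(gitlabURL)
	if err != nil {
		return "", err
	}
	u.Path = path.Join(u.Path, "api/v3")
	return u.String(), nil
}

func main() {
	base, err := apiBase("https://git.mydomain.com")
	fmt.Println(base, err)
}
```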
Next, let's look at the main.go file. First though,
you should learn where "gitbackup" creates the backup of
the git repositories. You can pass the location via the
-backupdir
flag. If not specified, it defaults to $HOME/.gitbackup. Let's
refer to it as
BACKUP_DIR. The repositories are backed up in
BACKUP_DIR/gitlab/ or BACKUP_DIR/github. If a repository is not found in
BACKUP_DIR/<service_name>/<repo>, you know you'll have to make a new clone of
the repository (
git clone). If the repository
exists, you update it (
git
pull). This operation is performed in the
backUp() function in main.go:
func backUp(backupDir string, repo *Repository, wg *sync.WaitGroup) {
    defer wg.Done()

    repoDir := path.Join(backupDir, repo.Name)
    _, err := os.Stat(repoDir)
    if err == nil {
        log.Printf("%s exists, updating. \n", repo.Name)
        cmd := exec.Command("git", "-C", repoDir, "pull")
        err = cmd.Run()
        if err != nil {
            log.Printf("Error pulling %s: %v\n", repo.GitURL, err)
        }
    } else {
        log.Printf("Cloning %s \n", repo.Name)
        cmd := exec.Command("git", "clone", repo.GitURL, repoDir)
        err := cmd.Run()
        if err != nil {
            log.Printf("Error cloning %s: %v", repo.Name, err)
        }
    }
}
The function takes three arguments: the first is a string that points
to the location of the backup directory, followed by a reference to a
Repository object and a reference to a
WaitGroup. You set up a deferred
call to
Done() on the
WaitGroup. The next two lines then check whether
the repository already exists in the backup directory using the
Stat()
function in the os package. This function will return a nil error value
if the directory exists, so you execute the
git pull command by
using the
Command() function from the exec package. If the directory
doesn't exist, you execute a
git clone command instead.
The
main() function sets up the flags for the
"gitbackup" program:
- backupdir: the backup directory. If not specified, it defaults to $HOME/.gitbackup.
- github.repoType: GitHub repo types to back up; "all" will back up all of your repositories. Other options are "owner" and "member".
- gitlab.projectVisibility: visibility level of GitLab projects to clone. It defaults to "internal", which refers to projects that can be cloned by any logged-in user. Other options are "public" and "private".
- gitlab.url: DNS of the GitLab service. If you are creating a backup of your repositories on a custom GitLab installation, you can just specify this and ignore specifying the "service" option.
- service: the service name for the Git service from which you are backing up your repositories. Currently, it recognizes "gitlab" and "github".
In the
main() function, if the
backupdir is not specified, you default
to use the $HOME/.gitbackup/<service_name> directory. To find the home
directory, use the package github.com/mitchellh/go-homedir. In
either case, you create the directory tree using the
MkdirAll() function
if it doesn't exist.
You then call the
getRepositories() function defined in remote.go
to fetch the list of repositories you want to back up. Limit the
maximum number of concurrent clones to 20 by using the token system I
described earlier.
Let's now build and run the project from the clone of the "gitbackup" repository you created earlier:
$ pwd
/Users/amit/work/github.com/amitsaha/gitbackup
$ gb build
..
$ ./bin/gitbackup -help
Usage of ./bin/gitbackup:
  -backupdir string
        Backup directory
  -github.repoType string
        Repo types to backup (all, owner, member) (default "all")
  -gitlab.projectVisibility string
        Visibility level of Projects to clone (default "internal")
  -gitlab.url string
        DNS of the GitLab service
  -service string
        Git Hosted Service Name (github/gitlab)
Before you can back up repositories from either GitHub or GitLab, you need to obtain an access token for each. To be able to back up a GitHub repository, obtain a GitHub personal access token from here with only the "repo" scope. For GitLab, you can get an access token from https://<location of gitlab>/profile/personal_access_tokens with the "api" scope.
The following command will back up all repositories from github:
$ GITHUB_TOKEN=my$token ./bin/gitbackup -service github
Similarly, to back up repositories from a GitLab installation to a custom location, do this:
$ GITLAB_TOKEN=my$token ./bin/gitbackup -gitlab.url ↪git.mydomain.com -backupdir /mnt/disk/gitbackup
See the README at here to learn more, and I welcome improvements to it via pull requests. In the time between the writing of this article and its publication, gitbackup has changed a bit. The code discussed in this article is available in the tag. To learn about the changes since this tag in the current version of the repository, see my blog post.
Conclusion
I covered some key Golang features in this article and applied them to write a tool to back up repositories from GitHub and GitLab. Along the way, I explored interfaces, goroutines and channels, passing command-line arguments via flags and working with third-party packages.
The code listings discussed in the article are available here. See the Resources section to learn more about Golang, GitHub and the GitLab API.
Resources
Getting Started with Golang and gb:
Golang by Example:
Golang Type Assertions:
GitHub Repos API:
GitLab Projects API:
Golang Interface for GitHub:
Golang Interface for GitLab:
gb:
https://www.linuxjournal.com/content/back-github-and-gitlab-repositories-using-golang?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+LinuxJournalHowtos+%28Linux+Journal+-+HOWTOs%29
This page contains an archived post to the Jini Forum made prior to February 25, 2002.
If you wish to participate in discussions, please visit the new
Artima Forums.
help me..
Posted by hamid khan on May 10, 2001 at 8:52 AM
Assalamalaykum. I'm hamid khan and I want to know more about JTAPI (as I am a beginner). I know Java programming, but JTAPI is new for me. I am going to work on this technology very soon, so please (if you don't mind) guide me if you can; I'll be very thankful to you. What is the procedure like? What do I have to do to import this telephony package? I'm not getting the proper books for it, so please suggest some good books on it. I've read on the net about the books Essential JTAPI by Spencer Roberts and Understanding Java Telephony. How are these books? Should I buy these books or search for some other writers? Please reply. Thanks. Khudahafiz, hamid

import javax.telephony.*;
import javax.telephony.events.*;
import MyOutCallObserver;

/*
 * Places a telephone call from 560750 to 667461
 */
public class Outcall {

    public static final void main(String args[]) {

        /*
         * Create a provider by first obtaining the default implementation of
         * JTAPI and then the default provider of that implementation.
         */
        Provider myprovider = null;
        try {
            System.out.println("ok");
            JtapiPeer peer = JtapiPeerFactory.getJtapiPeer(null);
            System.out.println("ok");
            myprovider = peer.getProvider(null);
        } catch (Exception excp) {
            System.out.println("Can't get Provider: " + excp.toString());
            System.exit(0);
        }

        /*
         * We need to get the appropriate objects associated with the
         * originating side of the telephone call. We ask the Address for a
         * list of Terminals on it and arbitrarily choose one.
         */
        Address origaddr = null;
        Terminal origterm = null;
        try {
            origaddr = myprovider.getAddress("560750");

            /* Just get some Terminal on this Address */
            Terminal[] terminals = origaddr.getTerminals();
            if (terminals == null) {
                System.out.println("No Terminals on Address.");
                System.exit(0);
            }
            origterm = terminals[0];
        } catch (Exception excp) {
            // Handle exceptions;
        }

        /*
         * Create the telephone call object and add an observer.
         */
        Call mycall = null;
        try {
            mycall = myprovider.createCall();
            mycall.addObserver(new MyOutCallObserver());
        } catch (Exception excp) {
            // Handle exceptions
        }

        /*
         * Place the telephone call.
         */
        try {
            Connection c[] = mycall.connect(origterm, origaddr, "667461");
        } catch (Exception excp) {
            // Handle all Exceptions
        }
    }
}
https://www.artima.com/legacy/jini/vision/messages/135.html
Welcome to our free XML Schema tutorial. This tutorial is based on Webucator's Introduction to XML Schema course.
Namespaces are used to group elements and attributes that relate to each other in some special way. A namespace is identified by a unique URI (Uniform Resource Identifier). Note that, although it is possible that an XML schema is kept at this URI, it is not required. This can be a bit confusing. It is important to understand that a namespace is a set of rules that can be enforced by an application in whatever way the application wishes.
As an example, modern HTML editors understand the XHTML namespace. It is unlikely that these editors ever visit the URI that holds the XHTML namespace. Instead, these applications have built-in functionality to support the namespace. The main reason a URI is used is to provide a unique variable name to hold the namespace. Namespace authors should use URIs that they own to prevent conflicts with each other.
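As a concrete illustration, here is a tiny document that declares the well-known XHTML namespace URI and binds it to a prefix (the surrounding doc element is hypothetical):

```xml
<!-- The xmlns:xhtml attribute binds the prefix "xhtml" to the namespace URI. -->
<doc xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <xhtml:p>This paragraph element belongs to the XHTML namespace.</xhtml:p>
</doc>
```

An editor that recognizes the URI can apply its XHTML support to the prefixed elements without ever fetching anything from that address.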
https://www.webucator.com/tutorial/learn-xml-schema/namespaces/overview-reading.cfm
|
Software Engineer
One track lover/Down a two-way lane
You’re part of a preternaturally enlightened dev team, and have set aside an entire day for nothing but code review. However, after the first 2 hours, you realise that you have forgotten your glasses and have just been staring at colourful blurs all morning. What do you do?
The correct answer is to go back home and get your glasses, since you only live a 10 minute walk away and it’s a nice day. But assume that as you left the house this morning you discovered that a swarm of particularly belligerent hornets had just completed construction on a nest in your glasses wardrobe and that they didn’t look keen on being disturbed. WHAT THEN.
The new correct answer is of course to avoid embarrassment by pretending that you are wearing contact lenses, and remembering that you can tell a surprising amount about a file without actually reading any of it.
We can all agree that concerns should absolutely be separated. And of course any class should only ever be responsible for doing one thing. But this
UserCreator object you’ve created here is probably taking things a bit too far. If this is all that a
UserCreator has to do then
Users can create themselves for now. Otherwise what was once a simple
User.new becomes an unnecessarily hellish nightmare of grepping halfway round the world through multiple tiny files whenever you want to change anything or understand what the foo is going on.
Looking at this rather large method disguised as a class, I can see that it is technically all very DRY and that there’s nothing that can be factored in the literal sense of the word. But something tells me you aren’t a unit tester. And whilst I can work out that that 20 line block in the middle is used to decide which users we need to send emails to if you give me an afternoon and some strong coffee, I would humbly suggest that you stick it inside a
def users_to_send_emails_to so that I don’t have to.
OK, so in this class your methods are much shorter, and this is probably progress. However, you can and do have too much of a good thing. Whilst the Ruby interpreter doesn’t care about you leaping between methods every other line, most human interpreters do. I’m as happy as the next person to scroll around a file a bit, but when I start having to write a stack trace on my arm to remember where I came from, it’s probably time to munge some of those methods back together.
I see you are keen to make sure this class occupies exactly the right namespace. Nice one, namespacing is good. But by the time you get to the 6th level, it seems likely that you are trying to cram too much stuff into a small space. Consider either not splitting namespaces quite so finely (yes I can see that those 2 Helper classes could have their own Helper namespace, but are they really hurting anything in the next one up?), or decoupling and splitting some code into an entirely different base namespace altogether.
Clarificatory comments, thumbs up! Code that requires an accompanying multi-chapter essay to understand, thumbs down.
Look closely at the second method down. If a method needs 8 arguments in order to know what job to do and how to do it, that method is way overworked. Spread the load and take some of the weight off its shoulders with a springtime refactoring. Split it into two (or more), or maybe it makes more sense to give some of these arguments to the instance in its initializer. Could you deal with 8 arguments simultaneously? Then don’t expect your methods to.
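As an illustrative sketch (the class, method, and arguments here are invented for this example, not taken from any real codebase), here is roughly what that refactoring might look like: the stable settings move into the initializer, so the method itself only takes what varies per call.

```ruby
# Before: one overworked method juggling eight arguments at once.
def send_report(user, format, locale, from, reply_to, cc, retries, verbose)
  # ...far too much to keep in your head at once
end

# After: the stable configuration lives on the instance.
class ReportSender
  def initialize(format:, locale:, from:, reply_to: nil)
    @format, @locale, @from, @reply_to = format, locale, from, reply_to
  end

  # Only the per-call details remain as arguments.
  def send_to(user, cc: [], retries: 1)
    "#{@format} report to #{user} (cc: #{cc.size}, retries: #{retries})"
  end
end

sender = ReportSender.new(format: "pdf", locale: "en", from: "ops@example.com")
puts sender.send_to("alice")
```

The call sites get shorter too: one configured sender can be reused for every user instead of repeating the same five settings on each call.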
So that’s how to do code review when you’ve forgotten your glasses or have been staring directly into the sun for longer than medically recommended. If I was better at programming then I’m sure I could have come up with some far more subtle examples. On the other hand, one could argue (and I fully intend to do so) that triviality can sometimes be interesting, and is almost always more important than we would like to think. However clear, simple and well-patterned your design may be, it’s all for naught if you construct it out of mud and toenails.
Kindly translated into Spanish by David Rojo.

Discussion on Hacker News
http://robertheaton.com/2014/06/20/code-review-without-your-eyes/
|
We want API clients to be able to get the UCD server version, such as "6.1.1.1.628316", as shown on the
Help -> About popup.
Today we're setting a system property whose value is the server version.
Is there an out of the box API call that can give us the version number without us having to set this system property? I don't see anything in the Knowledge Center for the supported API, so I'm expecting it to be in the unsupported API, if anywhere.
Thanks! Matt
Answer by abhipatti (2677) | May 12, 2015 at 01:34 PM
As per my knowledge there is no Supported or Unsupported API to get that information.
When you click Help -> About, the popup does not fetch the version string such as "6.1.1.628316" via a REST call, so I believe there is no API for it.
You can do something like this using CURL
curl -X GET -k | awk -F '\>' '/productVersion/ {print $2;}' | awk -F '
https://developer.ibm.com/answers/questions/190904/can-the-api-tell-me-the-ucd-server-version.html?sort=newest
|
Hello.
As far as I know, current implementation of DRL VM doesn't support Stack
Overflow Error (SOE).
I have found it out using following test:
public class Stack {
    static int depth = 0;

    public static void func() {
        depth++;
        func();
    }

    public static void main(String[] args) {
        try {
            func();
        } catch (Throwable th) {
            System.out.println("First SOE depth = " + depth);
            System.out.println("Caught = " + th);
        }
    }
}
I'm experimenting with adding Stack Overflow Error (SOE) support using a protected memory page on the stack. I see and use two main schemes of exception processing.
1. If the top frame is unwindable (regular java code), then the signal handler or vectored exception handler throws a regular java exception, as can happen now for NullPointerException.
2. If the top frame is non-unwindable (for example, JNI native code), the VM sets the exception for the current thread and continues its execution from the interrupted place. Code which works in non-unwindable mode should periodically check whether an exception has been raised.
During my experiments I found some problems in the current VM implementation (including the JIT):
1. Some parts of the VM which use deeply recursive calls in non-unwindable mode (the JIT compiler, the verifier) don't check whether a StackOverflowError has occurred. I added a check that there are 256 Kbytes of free stack before starting compilation, but I'm afraid that is not always enough.
2. If a StackOverflowError is thrown during the first two instructions of a compiled method, the JIT's "unwind" function cannot always unwind the frame correctly.
Are there any ideas how to fix them?
I have some code developed locally, and can submit it to JIRA if anyone is
interested in trying it.
Thanks.
Pavel Afremov.
http://mail-archives.apache.org/mod_mbox/harmony-dev/200606.mbox/%3C783bf8b0606221051g557ac491xc55244d9f770457d@mail.gmail.com%3E
|
in reply to
LibXML, Namespaces, xpath expression - This should be simple.
Hi
There was something wrong with your example XML: the MyImportantNode elements were closed before defining the attributes.
Assuming that was an error in posting, I think that your confusion comes from a number of reasons:
In 1 it is something to do with the way you are chaining findnodes and string_value. If you replace it with:
say "[1a]" . $nodeybits->findvalue('//@GeneralID');
you'll get Random1Random2Random3. If you want to deal with each value, you'll need a loop:
say "[1b]" . $_->findvalue('.')
foreach $nodeybits->findnodes('*/@GeneralID');
Having said that, it's a bit confusing that you're using nodeybits (the MiddleTag node) but then running XPath expressions beginning with "//", which will start at the root.
2 does what I'd expect, given the above: the first tag that matches that will be the root element, and calling string_value on that will return the entire text of the file.
The remaining ones are all because you're using $nodeybits. This is not an XPathContext object. For the examples you've given, you could use $xpc. But presumably this is cut down from a bigger program where you're doing XPath relative to the node you're looking at in the loop.
EDIT see ikegami's reply
my $inner_xpc = XML::LibXML::XPathContext->new($nodeybits);
$inner_xpc->registerNs( theNS => '');
# [3]
print $inner_xpc->findnodes('//theNS:CannotGetTagA')->string_value . "\n";
Worked for me and you could use $inner_xpc for your remaining queries, being careful to be consistent about using the namespace prefix (in 5, you have theNS:RootTag but miss it off for all the other element names).
HTH
FalseVinylShrub
Disclaimer: Please review and test code, and use at your own risk... If I answer a question, I would like to hear if and how you solved your problem.
Creating a new xpc for every findnodes is not the way to go.
my $inner_xpc = XML::LibXML::XPathContext->new($nodeybits);
$inner_xpc->registerNs( theNS => '');
$inner_xpc->findnodes(...)
can be written as
$xpc->findnodes(..., $nodeybits)
I apologise for the time taken to reply but I had assumed the site would email me by default if I got a reply..and as I hadn't, assumed no one had responded. (Possible server issue my end)
Setting up the namespace in the way shown worked perfectly.
This site should show PayPal options for each submit. I'd happily send each Monk a dollar or two for the mental sanity you afforded me.
Thank you so much.
PS: Matt Seargent was right. LibXML is super quick when used properly.
PerlMonks doesn't send emails.
Unless it's changed, PerlMonks has no expenses — hosting is donated by Pair — so receiving money would actually be a problem. On the other hand, the Perl Foundation does take donations, I believe. Some of that money is going towards fixing the more complicated bugs
http://www.perlmonks.org/?node_id=924006
|
thestienMember
Content count46
Joined
Last visited
Community Reputation102 Neutral
About thestien
- RankMember
Repositioning circles on collision
thestien replied to CryoGenesis's topic in For Beginners
Unity - Assets and tools to get started
thestien replied to jefferytitan's topic in For Beginners

hi jefferytitan, have you been on the asset store in unity and downloaded all the free bits and pieces on there? also the tutorials on the unity website come with some very good assets. i found a FPS tutorial but it's not listed when i visit the website, so maybe google "unity 3d FPS tutorial"; that has a few nice models and stuff. the ones from the asset store and tutorials should be good enough to show people; although they may not all be in the same style, they can just act as placeholders until you find the right kind of models you need.
Examples of component design in simple game such as pong
thestien replied to thestien's topic in General and Gameplay Programming

Hi FLeBlanc, i wish i was your boss because i would give you a promotion and a pay rise. thank you for doing that for me, you have gone well above the call of duty. now it's in c++ i can understand your code fully and also use it as a base for future programs i make. i also think i'm beginning to understand the benefits of learning this way over oo. and now i have the fun of learning and expanding on your example; i'm sure it will open up a whole new rabbit hole for me to fall into, but at least now i have a map and a candle to find my way out. perhaps i'll need to learn map reading also lol.

Hi Beernutts, that looks very cool and i will have a good read of it soon. at the moment i think it's quite a lot of info in one project to wrap my head around, but once i finish teething on these examples then i can have a good chew at yours. i can't thank you enough and i'm sure i will be back to call on your knowledge soon.
Examples of component design in simple game such as pong
thestien replied to thestien's topic in General and Gameplay Programming

Cor, i know one thing: Lua is a bit complicated to understand. i've read a few basic love2d tutorials and i'm still struggling to translate your code. also i tried to run your script but i must be doing something wrong as it says no code to run, any ideas? Many thanks again.
Examples of component design in simple game such as pong
thestien replied to thestien's topic in General and Gameplay Programming

thank you so much, that was a very kind thing to do. the first post was more than enough but the second one is truly amazing. i think i understand exactly how this works now and will be studying this thread for a while, but now i have the tools and the theory to move away from inheritance and focus on components. i have looked on google for this kind of example, one that is simple enough to understand but shows exactly what is going on; i just hope it will help future noobs like me, well i know it will. thank you for helping me.

edit: i have never used Lua and i have been reading a little to try to understand your code. it seems like an awesome language. i am going to test your code and then try to convert it to c++, but i think i may try to learn Lua one day.
Examples of component design in simple game such as pong
thestien replied to thestien's topic in General and Gameplay Programming

Hi FLeBlanc, all i can say is wow!! or perhaps thank you very much. i think i will need to re-read your reply a few times but that is exactly what i needed. the reason i said pong is because it is a simple game and i have made a version using inheritance, but i have been intrigued by components and now i think i'm really beginning to grasp it. i could see the power of it before but not how to implement it. there are lots of phrases in your reply i will need to google, but this is more than enough for me to get started on designing a component type game. well, thanks again, this has been an amazing response; i appreciate your time and effort.
Examples of component design in simple game such as pong
thestien replied to thestien's topic in General and Gameplay Programming

hi Kauna, thanks for your reply. i'm really just trying to understand how i would go about using a component system. the problem is i am struggling to find good examples, and the ones i do find are a bit complex; i'm sure i half understand it but not quite how to implement it, and seeing as pong is a very simple game i figured it would only need a few components and it would be easy to see how it works. so really it's just: how does component design work and how do i start to make one? my above post isn't an example of my design, rather a template to use to show me what component design really is. thanks again for your help.
Any tutorials on creating start-up menus?
thestien replied to kelicant's topic in General and Gameplay Programming
Hi Kelicant, I think the Lazy Foo tutorials are quite good for beginning game programming. There isn't a specific menu tutorial, but he has an article on game states that could be quite useful for you. The tutorials use C++ and SDL and are quite simple in the way they are written, and at the bottom of every tutorial are the source code and any needed files. There are also tutorials for setting up SDL depending on your operating system and compiler. Just search lazyfoo in Google; it should be the first result that comes up.
Examples of component design in simple game such as pong
thestien replied to thestien's topic in General and Gameplay Programming
Hi Guys, I would like to re-phrase my question into an actual question. #1: do you have any example of the simplest version of a component system in the context of an actual game? I have looked on Google and only really seem to find snippets which I don't fully follow. #2: based on my first post, am I in the right ballpark in terms of how I perceive component-based design? #3: do you know any good books or web pages aimed at idiots like me who cannot grasp the concept fully, or ones that implement it in a working program? Many thanks; your help and wisdom is much appreciated
DirectX 9 teething problems
thestien replied to chris2307's topic in General and Gameplay Programming
I have had similar problems before. I think that maybe your sprite isn't being loaded; maybe the path name is wrong or the sprite is in the wrong folder. I think this is where the problem is: [code] D3DXCreateTextureFromFile(d3ddev, "arrow.png", &sprite); [/code] Maybe try to get the code to break at this point; if it fails to load, then you can see whether the sprite is being loaded.
Examples of component design in simple game such as pong
thestien posted a topic in General and Gameplay Programming
Hi Guys n Girls, First [code] class cEntity { cPosition* coords; cMove* mover; cDraw* drawer; cCollision* collider; }; [/code] Now whether that is right or wrong I'm not sure, but after that I am stuck as to how to implement it. Do I simply make a container of components and fill it from each type of different cEntity I have, so I could end up with: [code] //(); } [/code]
- I am sorry for a trivial response below, but I must have posted this as you posted your reply; I am at work, so I am slowly typing the replies. That is a very complete and concise reply and I can see how it is perfectly correct. Thank you for your help, not only in fixing my problem but in increasing my understanding and helping me to become a better programmer, as now I can have a project that has many files instead of everything crammed into one main.cpp. I had come across include guards and #pragma once before and tried to use them, but now that I know how to do it properly I don't think I will have any problems anymore :) well, not with #include's. Thank you for your help, it is much appreciated. EDIT: To reply to rip-off, the reply below was for Olof Hedman. Ah ok cool, I will try that later, thanks for the help. Does that mean that I should put global variables in the .cpp? [code] globals.h void myFunction(); globals.cpp int myInt; const int myOtherInt = 1; void myFunction() { //do stuff } [/code]
- OK, I have re-read the first reply and have tried to change all of my names and even tried #pragma once. Still no joy though; it must be something I'm doing. I mean, I know it works, as they all include SDL files and those don't conflict. [code] CObject.h #ifndef _OBject_ #define _OBject_ #include "globals.h" #endif CObject.h #include "globals.h" #ifndef _OBject_ #define _OBject_ #endif CObject.h #ifndef OBject_ #define OBject_ #include "globals.h" #endif CObject.h #include "globals.h" #ifndef OBject_ #define OBject_ #endif CObject.h #pragma once #include "globals.h" CObject.h #include "globals.h" #pragma once [/code] I have 3 classes, a globals.h and a main, so I have 4 .h files and 4 .cpp files. I have tried to change each one as per the above example and each time I still get the multiple definitions; I only get problems with "globals.h". Where should I put the includes? The header files for my classes don't need to know about the globals, but the .cpp's do. Oh, I almost forgot, I don't know if this will be relevant, but my globals.h has both declarations and definitions; should I separate them into a .h and a .cpp? Many thanks again
- Hi guys, thanks for the replies. I have tried both of these ways but it still gives the same errors; I changed _COBJECT_ to COBJECT_ etc. Is the problem where I am including them? I mean, should I include them in the .h or .cpp files? I find that if I don't include it in my main.cpp then I don't get the error (although I get lots of other errors which are caused by not including globals.h), but I need it in main. Any ideas? Thanks again guys
Include problems
thestien posted a topic in General and Gameplay Programming
Hi Guys, I always seem to get this problem, even after searching the web, and I feel it's time to get closure. I have a few functions that I want several classes to all use, but because all the classes are linked in some way, it always says these functions are already defined. [code] -------------------------- globals.h somefunction(); --------------------------- CObject.h #include "globals.h" class CObject { }; ------------------------- CBullet.h #include "CObject.h" #include "globals.h" class CBullet : public CObject { }; --------------------------- CPlayer.h #include "CObject.h" #include "globals.h" class CPlayer : public CObject { }; main.cpp #include "CObject.h" #include "CBullet.h" #include "CPlayer.h" #include "globals.h" [/code] I can get the inheritance to work, but I want the classes to use stuff from the globals, like constants and functions. Normally I would just stick all the classes and my globals in my main.cpp, but then it's not easy to sort my code out when I have problems or need to change it. Many thanks in advance
https://www.gamedev.net/profile/196461-thestien/?tab=friends
Introduction: This project is truly useful to individuals who love mountain and adventure trips.
Without a doubt, a golden period of digital computing, the IoT era, is upon us. As gadget and programming enthusiasts, we believe the Raspberry Pi, the micro Linux PC, has captured the imagination of the public, bringing with it a burst of innovative approaches. So what are the possibilities if we have a Raspberry Pi and a 3-axis accelerometer at hand? Let's find out! In this project, we will sense the acceleration on 3 axes, X, Y and Z, using a Raspberry Pi and the ADXL345, a 3-axis accelerometer. So let's set off on this journey to build a system to measure 3-dimensional acceleration, or G-force.
Step 1: Basic Hardware We Require
The issues were fewer for us since we have a lot of parts lying around to work from. Nonetheless, we know how difficult it can be for others to gather the right part, at the right time, from the right place, and that effort is worth every penny. So we will help you in all areas. Read the following to get a complete parts list.
1. Raspberry Pi
The initial step was acquiring a Raspberry Pi board. This tiny, low-powered computer provides a cheap and generally simple base for electronics projects, the Internet of Things (IoT), smart cities, and school education.
2. I2C Shield for Raspberry Pi
The main thing the Raspberry Pi is genuinely missing is an I²C port. So for that, the TOUTPI2 I²C adapter lets you use the Raspberry Pi with MULTIPLE I²C devices. It's available at the DCUBE Store.
3. 3-axis accelerometer, ADXL345
Manufactured by Analog Devices, the ADXL345 is a low-power 3-axis accelerometer with high-resolution 13-bit measurement at up to ±16 g. We acquired this sensor from the DCUBE Store.
4. Connecting Cable
We had the I2C connecting cable available at the DCUBE Store.
5. Micro USB cable
The least complicated part, yet the most stringent in terms of power requirements, is the Raspberry Pi! The easiest way to power up the Raspberry Pi is via the Micro USB cable.
6. Web Access is a Need
Web access can be enabled through an Ethernet (LAN) cable connected to a local network and the web. Alternatively, you can connect to a wireless network using a USB wireless dongle, which will require configuration.
7. HDMI Cable/Remote Access
With an HDMI cable on board, you can hook it up to a digital TV or to a monitor. Want to save money? The Raspberry Pi can be accessed remotely using different methods, such as SSH and access over the web. You can use the PuTTY open-source software.
Step 2: Connecting the Hardware
Make the circuit according to the schematic shown. Draw up an outline and follow the design carefully.
Connection of the Raspberry Pi and I2C Shield
First of all, take the Raspberry Pi and place the I2C Shield on it. Press the Shield gently over the GPIO pins of the Pi and we are done with this step, as easy as pie (see the snap).
Connection of the Sensor and Raspberry Pi
Take the sensor and connect the I2C cable to it. For proper operation of this cable, please remember that an I2C output ALWAYS connects to an I2C input. The same must be followed for the Raspberry Pi, with the I2C shield mounted over its GPIO pins.
We recommend using the I2C cable, as it removes the need for reading pinouts and soldering, and the frustration caused by even the smallest blunder. With this basic plug-and-play cable, you can install, swap out devices, or add more devices to an application easily. This makes things uncomplicated.
Note: The brown wire should always follow the Ground (GND) connection between the output of one device and the input of another device.
Web Network is Key
To make our project a success, we require a web connection for our Raspberry Pi. For this, you have alternatives like connecting an Ethernet (LAN) cable to the home network. Additionally, as an option, a convenient route is to use a WiFi adapter. Sometimes, for this, you need a driver to make it work, so prefer one with Linux in the description.
Power Supply
Plug the Micro USB cable into the power jack of the Raspberry Pi. Light it up and we are good to go.
Connection to Screen
We can have the HDMI cable connected to a monitor. Sometimes you have to access a Raspberry Pi without connecting it to a screen, or you might need to view some information from it from somewhere else. There are innovative and cost-effective ways to do so. One of them is using SSH (remote command-line login). You can also use the PuTTY software for that.
Step 3: Python Coding for Raspberry Pi
The Python Code for the Raspberry Pi and ADXL345 Sensor is accessible in our Github Repository.
Before going ahead to the code, ensure you read the guidelines given in the Readme document and set up your Raspberry Pi accordingly. It will only take a minute to do so.
An accelerometer is a device that measures proper acceleration; proper acceleration is not the same as coordinate acceleration (rate of change of velocity). Single- and multi-axis models of the accelerometer are accessible to identify magnitude and direction of the proper acceleration, as a vector quantity, and can be utilized to sense orientation, coordinate acceleration, vibration, shock, and falling in a resistive medium.
The code is laid out before you in the most straightforward form imaginable, and you should have no problems.
# dcubestore.com #
import smbus
import time

# Get I2C bus (bus 1 on current Raspberry Pi models)
bus = smbus.SMBus(1)

The rest of the listing configures the sensor and prints the output on the monitor. After a few moments, it will show all of the parameters. After making sure that everything works smoothly, you can take this project into a bigger one.
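The essential step when reading the ADXL345 is combining each axis's two data-register bytes into a signed count. Here is a minimal sketch of that conversion (the helper names are ours; the 10-bit two's-complement layout and the roughly 3.9 mg/LSB scale for the default ±2 g range follow the ADXL345 datasheet):

```python
def raw_to_counts(lsb, msb):
    """Combine one axis's two data bytes (e.g. DATAX0 then DATAX1)
    into a signed 10-bit count."""
    value = ((msb & 0x03) << 8) | lsb
    if value > 511:              # two's complement over 10 bits
        value -= 1024
    return value

def counts_to_g(counts, scale=0.0039):
    """Convert raw counts to g; 0.0039 g/LSB assumes the default +/-2 g range."""
    return counts * scale

# On the Pi itself, the six bytes would come from the sensor, e.g.:
#   data = bus.read_i2c_block_data(0x53, 0x32, 6)   # DATAX0..DATAZ1
#   x = counts_to_g(raw_to_counts(data[0], data[1]))
print(counts_to_g(raw_to_counts(0, 2)))   # a full-scale negative sample
```

Only the pure math is executed here, so the sketch can be checked without the sensor attached.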
Step 5: Applications and Features
The ADXL345 is a small, thin, ultralow-power, 3-axis accelerometer with high-resolution (13-bit) measurement at up to ±16 g. The ADXL345 is well suited for cell phone applications. It measures the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion or shock. Other applications include handsets, medical instrumentation, gaming and pointing devices, industrial instrumentation, personal navigation devices, and hard disk drive (HDD) protection.
Step 6: Conclusion
We hope this project motivates further experimentation. This I2C sensor is extraordinarily flexible, cheap and accessible. Since it's an extremely adaptable system, there are interesting ways you can extend this project and improve on it.
For example, you can start with the idea of an inclinometer using the ADXL345 and Raspberry Pi. In the above project, we have used basic calculations. You can extend the code for G-values, angles of slope (or tilt), and elevation or depression of an object with respect to gravity. Then you can explore the advanced options, like rotation angles for roll (front-to-back axis, X), pitch (side-to-side axis, Y) and yaw (vertical axis, Z). This accelerometer reports 3-D G-forces, so you could use this sensor in whatever ways you can think of.
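The roll and pitch mentioned above can be derived from a single static acceleration sample with basic trigonometry. Here is a sketch; the function name and sign conventions are our own choices, and it assumes the device is at rest so that gravity is the only acceleration acting on it:

```python
import math

def roll_pitch(x, y, z):
    """Tilt angles in degrees from one 3-axis sample (any consistent unit),
    assuming gravity is the only acceleration present."""
    roll = math.degrees(math.atan2(y, z))                   # about the X axis
    pitch = math.degrees(math.atan2(-x, math.hypot(y, z)))  # about the Y axis
    return roll, pitch

# Flat on the table: Z carries gravity, both angles are zero.
print(roll_pitch(0.0, 0.0, 1.0))
```

During motion or shock, the gravity-only assumption breaks down, so readings taken while the device is moving should not be interpreted as tilt.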
For your convenience, we have an interesting video tutorial on YouTube that may help your investigation. We hope this project inspires further exploration. Keep thinking! And remember to follow us, as more is always coming up.
4 Discussions
Please sir help me out..
I want the whole set with a graphical display for my SUV as I have to go to many hill areas of north India ,will you please make a set of it for my personal use.It will help me a lot...A I am not so much this kind of tech guy so please help me in this...I am waiting for your reply.
my contact details :- S.Giri
Mob No :- 8446632095
What is used for the graphical interface in the first picture (gauges)?
My question too!
http://www.instructables.com/id/3-Axis-Accelerometer-ADXL345-With-Raspberry-Pi-Usi/
A closer look at Python's xml.dom module
Level: Introductory
David Mertz (mertz@gnosis.cx), President, Gnosis Software, Inc.
01 Jul 2000
In this article, David Mertz examines in greater detail the use of the high-level xml.dom module for Python discussed in his previous column. Working with xml.dom is illustrated with clarifying code samples and explanations of how to code many of the elements that go into a complete XML document processing system.
What is Python? What is XML?
Python is a freely available, high-level, interpreted
language developed by Guido van Rossum. It combines a clear
syntax with powerful, but optional, object-oriented semantics.
Python is available for almost every computer platform, and has strong portability
between platforms.
XML is a simplified dialect of the Standard Generalized Markup
Language (SGML). You might use XML for document formats, data
interchange (such as database dumps and EDI files), messages for
interprocess communication between programs, architectural diagrams
(like CAD formats), and many other purposes. You can create a set of
tags to capture any sort of structured information you might want to
represent, which is why XML is growing in popularity as a common
standard for representing diverse information.
The Document Object Model
The xml.dom module is probably the most powerful tool
available to a Python programmer when working with XML
documents. Unfortunately, the documentation provided by the
XML-SIG is currently a bit sparse. Some of this gap is filled
in by the W3C's language-neutral DOM specification. But it
would be nice for Python programmers to have a quick-start
guide to the DOM that is specific to the Python language. This
article aims to provide such a guide. As in the previous column, the sample quotations.dtd files are used in some of the
samples, and are available with the article code-sample
archive.
It is worth getting a sense of exactly what DOM is. The
official explanation is a good one:
The Document Object Model is a platform- and language-neutral
interface that will allow programs and scripts to dynamically
access and update the content, structure and style of
documents. The document can be further processed and the
results of that processing can be incorporated back into the
presented page. (World Wide Web Consortium DOM Working
Group)
DOM works by converting an XML document to a
tree -- or forest -- representation. The World Wide Web Consortium (W3C) specification gives
as an illustration a DOM version of an HTML table.
DOM defines a set of methods to traverse, prune, reorganize,
output, and manipulate a tree like this at a level of
abstraction higher, and more convenient, than the underlying
linearity of an XML document.
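The examples in this column use the PyXML-era xml.dom package, but essentially the same W3C interface survives in today's standard library as xml.dom.minidom, so the tree model can be tried without any extra installs. A tiny taste:

```python
from xml.dom.minidom import parseString

# Parse a small XML string into a DOM tree.
doc = parseString(
    '<quotations>'
    '<quotation source="Guido van Rossum">Python is fun</quotation>'
    '</quotations>')
root = doc.documentElement      # the single <quotations> trunk node
quote = root.childNodes[0]      # its first (and only) child element
print('%s -> %s' % (quote.tagName, quote.firstChild.data))
```

Node objects expose the familiar DOM members: childNodes, firstChild, getAttribute(), and so on.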
Convert HTML to XML
Valid HTML is almost, but not quite, valid XML. The two main
differences are that XML tags are case-sensitive, and that all
XML tags require an explicit close, whereas a closing tag is optional for
some HTML tags; an empty XML element may close itself, for example: <img src="X.png" />.
A simple example of using xml.dom is using the
HtmlBuilder() class to convert HTML to XML.
"""Convert a valid HTML document to XML
USAGE: python try_dom1.py < infile.html > outfile.xml
"""
import sys
from xml.dom import core
from xml.dom.html_builder import HtmlBuilder
# Construct an HtmlBuilder object and feed the data to it
b = HtmlBuilder()
b.feed(sys.stdin.read())
# Get the newly-constructed document object
doc = b.document
# Output it as XML
print doc.toxml()
The HtmlBuilder() class is kind enough to implement
some of the underlying xml.dom.builder template functionality
it inherits, and its source is worth looking at. However, even
where we implement template functions ourselves, the outlines
of a DOM program will be similar. In the general case, we will
build a DOM instance by some means, and then operate on that
instance. The .toxml() method of a DOM instance is a simple
way to produce a string representation of the DOM instance (in
the above case, simply to print it out once generated).
Convert a Python object to XML
A Python programmer can achieve a great deal of power and
generality by exporting an arbitrary Python object instance as
XML. This allows us to handle Python objects in exactly the
manner we are accustomed to, with the option of eventually
using our instance attributes as tags in the generated XML.
With just a few lines (derived from the building.py example)
we can convert Python "native" objects to DOM objects, with
recursion on those attributes that are contained objects.
"""Build a DOM instance from scratch, write it to XML
USAGE: python try_dom2.py > outfile.xml
"""
import types
from xml.dom import core
from xml.dom.builder import Builder
# Recursive function to build DOM instance from Python instance
def object_convert(builder, inst):
# Put entire object inside an elem w/ same name as the class.
builder.startElement(inst.__class__.__name__)
for attr in inst.__dict__.keys():
if attr[0] == '_': # Skip internal attributes
continue
value = getattr(inst, attr)
if type(value) == types.InstanceType:
# Recursively process subobjects
object_convert(builder, value)
else:
# Convert anything else to string, put it in an element
builder.startElement(attr)
builder.text(str(value))
builder.endElement(attr)
builder.endElement(inst.__class__.__name__)
if __name__ == '__main__':
# Create container classes
class quotations: pass
class quotation: pass
# Create an instance, fill it with hierarchy of attributes
inst = quotations()
inst.title = "Quotations file (not quotations.dtd conformant)"
inst.quot1 = quot1 = quotation()
quot1.text = """'"is not a quine" is not a quine' is a quine"""
quot1.source = "Joshua Shagam, kuro5hin.org"
inst.quot2 = quot2 = quotation()
quot2.text = "Python is not a democracy. Voting doesn't help. "+\
"Crying may..."
quot2.source = "Guido van Rossum, comp.lang.python"
# Create the DOM Builder
builder = Builder()
object_convert(builder, inst)
print builder.document.toxml()
The function object_convert() has a few limitations. For
example, it is impossible to produce a quotations.dtd
conformant XML document with the above procedure: #PCDATA text
cannot be placed directly inside a quotation class, but only
within an attribute of the class (such as .text). One simple
workaround would be to have object_convert() handle an
attribute named, for example, .PCDATA in a special manner. The
conversion to DOM could be made more sophisticated in various
ways, but the beauty of the approach is that we can start with
entirely "Pythonic" objects, and convert them in a
straightforward manner to XML documents.
It is also worth noting that elements at the same level in the
produced XML document will not occur in any obvious order. For
example, on the author's system, using the particular version
of Python he does, the second quotation defined in the source
appears first in the output. But this could change between
versions and systems. Attributes of Python objects are not
inherently ordered to start with, so this behavior makes sense.
This behavior is what we want and expect for data relating to a
database-system, but is obviously not what we would want for a
novel we marked up as XML (unless, perhaps, we wanted an update
on William Burroughs' "cut-up" method).
Convert an XML document to a Python object
It is just as easy to generate a Python object out of an XML
document as the reverse process was. In many cases, we might
well be satisfied with using xml.dom methods. But in other
situations, it is nice to use identical techniques with objects
generated from XML documents as with all our "generic" Python
objects. In the below code, for example, the function
pyobj_printer() might have been a function we already used
to handle an arbitrary Python object.
"""Read in a DOM instance, convert it to a Python object
"""
from xml.dom.utils import FileReader
class PyObject: pass
def pyobj_printer(py_obj, level=0):
"""Return a "deep" string description of a Python object"""
from string import join, split
import types
descript = ''
for membname in dir(py_obj):
member = getattr(py_obj,membname)
if type(member) == types.InstanceType:
descript = descript + (' '*level) + '{'+membname+'}\n'
descript = descript + pyobj_printer(member, level+3)
elif type(member) == types.ListType:
descript = descript + (' '*level) + '['+membname+']\n'
for i in range(len(member)):
descript = descript+(' '*level)+str(i+1)+': '+ \
pyobj_printer(member[i],level+3)
else:
descript = descript + membname+'='
descript = descript + join(split(str(member)[:50]))+'...\n'
return descript
def pyobj_from_dom(dom_node):
"""Converts a DOM tree to a "native" Python object"""
py_obj = PyObject()
py_obj.PCDATA = ''
for node in dom_node.get_childNodes():
if node.name == '#text':
py_obj.PCDATA = py_obj.PCDATA + node.value
elif hasattr(py_obj, node.name):
getattr(py_obj, node.name).append(pyobj_from_dom(node))
else:
setattr(py_obj, node.name, [pyobj_from_dom(node)])
return py_obj
# Main test
dom_obj = FileReader("quotes.xml").document
py_obj = pyobj_from_dom(dom_obj)
if __name__ == "__main__":
print pyobj_printer(py_obj)
The focus here should be on the function pyobj_from_dom(),
and specifically on the xml.dom method .get_childNodes()
which is where the real work happens. In pyobj_from_dom(),
we extract any text directly wrapped by a tag, and put it in
the reserved attribute .PCDATA. For any nested tags
encountered, we create a new attribute with a name matching the
tag, and assign a list to the attribute so we can potentially
include multiple occurrences of the tag within the parent block.
By using a list, of course, we maintain the order in which tags
were encountered within the XML document.
Aside from using our old pyobj_printer() generic function (or
more likely, something more sophisticated and robust), we can
now access elements of py_obj using normal attribute
notations.
>>> from try_dom3 import *
>>> py_obj.quotations[0].quotation[3].source[0].PCDATA
'Guido van Rossum, '
Rearrange a DOM tree
One of the great virtues of DOM is that it allows a programmer
to manipulate an XML document in a non-linear fashion. Each
block surrounded by matching open/close tags is simply a "node"
in the DOM tree. While the nodes are maintained in a list-like
fashion to preserve order information, there is nothing special
or immutable about the order. We can easily prune off a node,
and graft it back in somewhere else in the DOM tree (even at a
different level, if the DTD allows this). Or add new nodes,
delete existing nodes, etc.
"""Manipulate the arrangement of nodes in a DOM object
"""
from try_dom3 import *
#-- Var 'doc' will hold the single <quotations> "trunk"
doc = dom_obj.get_childNodes()[0]
#-- Pull off all the nodes into a Python list
# (each node is a <quotation> block, or a whitespace text node)
nodes = []
while 1:
try: node = doc.removeChild(doc.get_childNodes()[0])
except: break
nodes.append(node)
#-- Reverse the order of the quotations using a list method
# (we could also perform more complicated operations on the list:
# delete elements, add new ones, sort on complex criteria, etc.)
nodes.reverse()
#-- Fill 'doc' back up with our rearranged nodes
for node in nodes:
# if second arg is None, insert is to end of list
doc.insertBefore(node, None)
#-- Output the manipulated DOM
print dom_obj.toxml()
Performing the rearrangement of quotations in the above few
lines would have posed a considerable problem if we viewed an
XML document as simply a text file, or even if we used a
sequential-oriented module like xmllib or xml.sax. With
DOM, the problem is not much more difficult than any other
operation we might perform on a Python list.
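For readers on current Python versions, the same prune-and-graft pattern can be tried with the standard library's xml.dom.minidom; the toy document here is our own:

```python
from xml.dom.minidom import parseString

doc = parseString('<quotations><q>a</q><q>b</q><q>c</q></quotations>')
root = doc.documentElement

# Pull every child node off into a plain Python list...
nodes = []
while root.childNodes:
    nodes.append(root.removeChild(root.firstChild))

# ...rearrange it with ordinary list operations...
nodes.reverse()

# ...and graft the nodes back in the new order.
for node in nodes:
    root.appendChild(node)

print(root.toxml())
```

As in the article's example, nothing about the nodes themselves changes; only their position in the tree does, and arbitrarily complex reorderings reduce to list manipulation.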
http://www.ibm.com/developerworks/linux/library/l-python2/
Article updated on July 27, 2016.
Welcome to part 3 of the introduction to the HTML Template Language (HTL), formerly known as Sightly. In this part I want to give you some more use-cases and examples that you can use in your components.
Interested in the others parts? Here they are: part 1, part 2, part 4, part 5.
HTL Arrays
Here a sample around arrays:
<!--/* Accessing a value */-->
${properties['jcr:title']}

<!--/* Printing an array */-->
${aemComponent.names}

<!--/* Printing the array, separating items by ; */-->
${aemComponent.names @ join=';'}

<!--/* Dynamically accessing values */-->
<ul data-sly-list="${aemComponent.names}">
    <li>${properties[item]}</li>
</ul>
HTL Comparisons
Here some use-cases on comparing values
<div data-sly-test="${properties['jcr:title'] == 'TEST'}">TEST</div>
<div data-sly-test="${properties['jcr:title'] != 'TEST'}">NOT TEST</div>
<div data-sly-test="${properties['jcr:title'].length > 3}">Title is longer than 3</div>
<div data-sly-test="${properties['jcr:title'].length >= 0}">Title is longer or equal to zero</div>
<div data-sly-test="${properties['jcr:title'].length > aemComponent.MAX_LENGTH}">
    Title is longer than the limit of ${aemComponent.MAX_LENGTH}
</div>
HTL Use-API
In my second article I explained that you can call methods from your custom-classes via the data-sly-use notation.
In this example I will show that you can also pass in parameters from your components.
<!--/* firstName/lastName values here are just example parameters */-->
<div data-sly-use.aemComponent="${'MyComponent' @ firstName='John', lastName='Doe'}">
    ${aemComponent.fullname}
</div>
Java-code in the Use-API:
import com.adobe.cq.sightly.WCMUsePojo;

public class MyComponent extends WCMUsePojo {
    // firstName and lastName are available via Bindings
    public String getFullname() {
        return get("firstName", String.class) + " " + get("lastName", String.class);
    }
}
In the example above you define your parameters in the Use-API, and you are not restricted to a type.
Via the get() method that is available via the WCMUsePojo class, you can get the value of a binding.
I hope you enjoyed this part, more parts to come 🙂
Read-on
Here are the other articles of my introduction series:
- Introduction part 1
- Introduction part 2
- Introduction part 3 (current page)
- Introduction part 4
- Introduction part 5
Other posts on the topic:
And here other resources to learn more about it:
Unable to navigate to part 1 & 2 posts of this series.
@akash268 Thanks for flagging! They work now.
http://blogs.adobe.com/experiencedelivers/experience-management/htl-intro-part-3/
concordance/concordance.1
-Learn IR from other remotes. Use <filename>.
+Learn IR from other remotes. Use <filename>. Supports also files with
multiple key names, i.e. when you checked multiple commands on the Logitech
website. For each command, you have the choice to skip to the next or
previous command or to learn the IR code from the original remote.
.TP
I see no need to specify that learn-ir supports multiple commands. It should
have always, and most users are only going to be confused by this. This now
supports the format the site can send - it's lack of full support before was
simply a bug. In other words, I think this can stay as it was.
concordance/concordance.c
+ if( flag) {
+ t.c_lflag |= ICANON;
Minor spacing issue in the "if" there.
+#define read_key getchar
Please explain (in a comment) that your set_canon() function changes the
behavior of getchar() - this is not at all clear.
+#define USE_DEFAULT '\n'
This name is horrible, how about, NEWLINE or SYSTEM_NEWLINE?
Also, Windows doesn't give '\r', it gives '\r\n', whereas Unix gives just
\n. So your stuff works kinda by accident, but that should probably be fixed.
+ /*
+ * full dump of new IR signal:
+ */
Typo?
+ "[U]pload new code, [R]etry
same key, [N]ext key, [Q]uit",
Does it make sense to maybe put this prompt string in a variable and use
that instead? I think it would make the code much more readable.
- err = learn_ir_commands(data, size, !options.noweb);
+ err = _learn_ir_commands(data, size, &options);
This doesn't need to start with an underscore. None of the mode functions do.
In fact, nothing here should start with an underscore, it's not a library,
nothing's private. However, please do rename _printburst and _printspace and
_dump_new_code to something like print_ir_burst, print_ir_space, and
dump_ir_code.
Also, I notice you do index++ at the beginning, and then have all these
cases where it you index-- or index -= 2. It seems more logical to leave
index at the current index, and only ++ in the event of success. If you
fail, you don't do anything (nuking the --), if you get a "p" you just --.
Is there any reason not to change to this approach?
--
Andreas Schulz wrote:
> TODO:
> =====
> While I managed to adapt the python bindings on my own, I would appreciate
> any support to adapt the PERL bindings as well.
Perl bindings are a bitch. I'll apply the patch without them and do the
binding support myself.... but do you think you can do me a favor and flesh
out the perl driver to use these functions? (test.pl) That would save me time.
> Any volunteers for testing on a Macintosh?
My mac died... I may be able to use one at work... I'll get it tested before
a release, but our mac support isn't solid enough for me to hold up applying
the patch though.
Thanks for the detailed comments - awesome!
I have some minor style comments below nothing huge. The only big question I
have really crosses between patch 1/2 and 2/2:
Why aren't you using a callback in learn_from_remote() (which would, I
assume pass through to LearnIR)? I realize one wasn't there before, but I
think we can all agree the code that was there before was crappy. =)
You sorta fake it up in concordance.c, but LearnIR() has a lot of knowledge
of progress, it seems and a better progress meter could be given to the user.
(I haven't finished reviewing path 2/2 yet, that's coming shortly).
===============================================
> -#include <string.h>
> +#include <string>
>> fixed VC++ warning about deprecated include
I can't reproduce it now, but I seem to remember this doing weird things in
linux for old c-string functions. Have you tested this on Linux?
+ /*
+ * to be really paranoid, we would have to narrow the search range
+ * further down to next <PARAMETER>...</PARAMETER>, but IMHO it
+ * should be safe to assume that Logitech always sends sane files:
+ */
Spacing issue there. Well, two. The 4th line is inconsistent (but right),
and the 2nd, 3rd and 4th are wrong.
I noticed you use a do-while instead of a while-do. Is there a reason? If
they're the same upon entrance to the function, should you still run?
Do-while should only be used in that scenario (despite its regular use in
our code).
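To make the reviewer's point concrete (sketched in Python rather than the project's C++, purely as an illustration): a do-while runs its body once even when there is no work on entry, while a while-do checks first.

```python
def while_do(work_items):
    """Check-first loop: process only while items remain."""
    runs = 0
    while work_items:
        work_items.pop()
        runs += 1
    return runs

def do_while(work_items):
    """Run-first loop: the body always executes at least once."""
    runs = 0
    while True:
        if work_items:
            work_items.pop()
        runs += 1
        if not work_items:
            break
    return runs

# With an empty list on entry the two loops differ:
# while_do([]) does 0 iterations, do_while([]) does 1.
```

So a do-while is only appropriate when "run at least once" is actually intended, which is the scenario described above.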
+ std::list<string> key_list;
Just do a:
using namespace std;
instead.
Normally I wouldn't nitpick on this, but since you'll be changing this
anyway would you mind switching these:
return 0;
+ /* C++ should take care of key_name and key_list */
Put the comment before the return?
+ delete [] ir_signal; /* allocated by new[] -> delete[] */
You're inconsistent here - within the same line as well as with your previous
code. This should be "delete[] ir_signal".
libconcord/remote.cpp
+ if (err != 0) { return err; }
Please don't do this - multilines. I spent a long time cleaning this stuff up.
libconcord/bindings/python/libconcord.py
Stephen - can you please do a code review on this? I'm actually learning
python, but I'm not yet qualified to do a code review.
--
http://sourceforge.net/p/concordance/mailman/concordance-devel/?viewmonth=200808&viewday=28
Bikeshed alternatives: scoped keywords, std-qualified keywords, named keywords.
"We have to go with the odd ones, as all of the good ones are already taken". This was said a few times at different WG21 meetings. It was in a reference to keywords. C++ is a mature language with large existing codebases and an attempt to introduce a new keyword into the language will necessarily break existing code.
Quick sampling of some of the proposed keywords from Concepts, Modules,
Transaction Memory, Pattern Matching, and Coroutines papers (N3449,
N4134,
N4361,
N4466,
N4513,
[PatternMatch]) in private and public codebases reveals that identifiers
await,
concept,
requires,
synchronized,
module,
inspect,
when are used as names of variables, fields, parameters, namespaces
and classes.
This paper explores the idea of adding soft keywords to the C++ language. This will enable new language features to select the best possible keyword names without breaking existing software. The idea is simple. A soft keyword is a named entity implicitly defined in the std or std::experimental namespaces that participates in name lookup and name hiding according to existing language rules. If a name lookup finds an implicit declaration of a soft keyword, it is treated in the same way as other context-dependent keywords resolved by name lookup, such as typedef-name, namespace-name, class-name, etc.
In the example below, yield is a soft keyword implicitly defined in the std namespace.
namespace N1 { void yield(int); } auto coro2() { using std::yield; yield(2); // yields value 2 N1::yield(3); // invokes N1::yield } auto coro3() { using namespace N1; yield(1); // invokes N1::yield std::yield(3); // yields value 3 } auto coro4() { using namespace N1; using namespace std; yield(4); // error: ambiguous }
A drawback of the simple model described in the introduction is that, without a using declaration or using directive, the developer always needs to qualify the soft keyword with std::. This is troublesome, as people would have to remember which keywords are the soft keywords and which are the good old "hard ones". This can be alleviated by adding a paragraph to section 3.5 [basic.lookup] stating:
(5) if an unqualified name lookup [basic.lookup.unqual] or an argument-dependent name lookup [basic.lookup.argdep] fails to find a declaration and the identifier being looked up is a soft keyword identifier, it is treated as the corresponding context-dependent keyword.
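This fallback rule has a working precedent in another language: Python resolves built-in names the same way - ordinary scopes are searched first, and the built-in meaning applies only when that lookup fails. A small sketch, offered as an analogy only:

```python
# Python's builtins already follow the proposed fallback rule: an
# unqualified name is first looked up in local/global scope, and only
# if that fails does it resolve to the built-in meaning.
import builtins

def with_user_definition():
    len = lambda seq: -1          # user code hides the "soft" name
    return len([1, 2, 3])         # finds the local definition: -1

def with_fallback():
    return len([1, 2, 3])         # ordinary lookup fails, builtin used: 3

def with_qualification():
    len = lambda seq: -1
    return builtins.len([1, 2, 3])  # explicit qualification, like std::yield
```

User code that defines the name keeps its meaning; code that does not gets the keyword-like meaning for free.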
With this addition, we get a near-perfect keyword experience. In the following example, module is a soft keyword.
module A; // OK. Lookup did not find any Xyz::Pcmf *module; // OK bool FileHandleList::Find(LPCWSTR file) { FileHandleCachePaths::iterator module = _Find(file); // OK return module != m_hInstPaths.end(); // OK }
If a grammar construct utilizing a particular soft keyword can be interpreted as a function call when used in a template definition as a dependent name, the current rules will result in the construct being treated as a function call. This preserves the property that a template can be correctly parsed prior to instantiation. That means that for some constructs, in templates, one must use explicitly qualified soft keywords, unless there is a preceding using directive or declaration.
In the examples below, inspect and when are soft keywords.
template <typename T> double area(const Shape& s, T u) { inspect (s) { // OK: not a dependent name when Circle: return 2*pi*radius(); when Square: return height()*width(); default: error(“unknown shape”); } std::inspect(u) { // must be qualified, otherwise will be parsed as a function call when X: return 1.0; } }
Similarly, with the yield soft keyword, qualification will be needed in some cases.
template <typename T> auto g() { T v; T* p; yield v; // yield expression (not a dependent name, not a function call expr) yield(5); // yield expression (not a dependent name) std::yield(v); // yield expression (not a dependent name, since qualified) std::yield *p; // yield expression (not a dependent name, since qualified) yield *p; // operator * call, yield is not a soft keyword yield(v); // function call, yield is a dependent name }
This is unfortunate, but developers are already trained to deal with two-phase lookup in templates and take care of it by inserting typename, template, and this-> as needed. Soft keywords add one more annoyance they have to deal with, unless we can take advantage of modules.
However, the situation is not as bleak as it may seem. Modules get us to a perfect keyword experience, as they allow free use of using directives/declarations without exporting those directives/declarations outside of the module.
module A; using namespace std; template<typename T, typename U> export void f(T& x, U xx) { inspect (x,xx) { // OK: not a dependent name, as the lookup finds std::inspect when {int* p,0}: p=nullptr; when {_a,int}: … // _a is a placeholder matching everything // shorthand for auto _a } }
If someone finds using directive too broad, one can define a module with all of their favorite soft keywords exported in using declarations as follows:
module Cpp17keywords; export { using std::inspect; using std::when; using std::await; using std::yield; ... }
and now, any module can take advantage of using unqualified soft keywords by
having
import Cpp17keywords; declaration.
Can this break existing code? Yes. If a source file uses using namespace std and defines an entity with a name matching the soft keyword xyz in the global namespace, or in another namespace X that is available for unqualified name lookup due to using namespace X, then the lookup will be ambiguous. The fix would be to explicitly qualify the name in question with ::xyz or X::xyz.
We can also avoid breaking existing code by altering paragraph 2 of section [namespace.udir] as follows (changes are in bold):
A using-directive specifies that the names in the nominated namespace can be used in the scope in which the
using-directive appears after the using-directive. During unqualified name lookup (3.4.1), the names appear
as if they were declared in the nearest enclosing namespace which contains both the using-directive and the
nominated namespace. This affects all names except the names of the soft keywords.
One may ask: why should we do this? We don't guard against introducing new library functions in the std namespace, so why should we do this for keywords? For functions, a library can rely on overloading and SFINAE to remove function names from consideration and reduce the chance of collision. We don't have this ability for keywords. Nevertheless, the authors feel ambivalent about this rule and would like committee guidance.
What about tools? Would soft keywords confuse them? Not necessarily. If we introduce new constructs to the language, tools need to adapt to them.
Precise tools already have to rely on name lookup to figure out whether X * y; is a declaration of a variable of type pointer to X or a multiplication of X and y. Thus, they should be able to distinguish between identifiers and soft keywords.
Imprecise tools rely on heuristics to decide how to parse without having complete knowledge of all the symbols. In that case, they would have to use heuristics depending on the construct. For example, if inspect(x) is followed by {, then the heuristic would be that inspect is a keyword; otherwise, assume a function name.
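Such a heuristic could look like this toy, deliberately imprecise classifier (a sketch in Python, not any real tool's implementation):

```python
# Heuristic from the text: treat `inspect` as a keyword when its
# parenthesized argument list is followed by `{`, and as a function
# call otherwise. Regex-based and imprecise on purpose.
import re

def classify_inspect(snippet):
    m = re.search(r"\binspect\s*\([^)]*\)\s*(\{?)", snippet)
    if not m:
        return "not found"
    return "keyword" if m.group(1) == "{" else "function call"

print(classify_inspect("inspect (s) { when Circle: ... }"))  # keyword
print(classify_inspect("std::inspect(u);"))                  # function call
```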
A version of this proposal was implemented in a non-shipping version of the Microsoft C++ compiler.
Here is a very rough sketch of how the wording might look for soft keywords. As an illustration, I use the soft keywords yield and await.
Add yield and await to Table 2 (Identifiers with special meaning).
In paragraph 3 add the text in bold.
An entity is a value, object, reference, function, enumerator, type,
class member, template, template specialization, namespace, parameter
pack, soft keyword, or
this.
Add the following paragraph after paragraph 4.
If an unqualified name lookup [basic.lookup.unqual] or an argument-dependent name lookup [basic.lookup.argdep] fails to find a declaration and the identifier being looked up is a soft keyword identifier, it is treated as the corresponding context-dependent keyword.
In paragraph 1, add the text in bold.
The name lookup rules apply uniformly to all names (including typedef-names (7.1.3), namespace-names (7.3), soft-keyword-names (3.12),
and class-names (9.1)) ...
Soft keywords yield and await are implicitly declared in the std::experimental namespace. In the grammar productions, yield-soft-keyword-name and await-soft-keyword-name represent context-dependent keywords resulting from name lookup according to the rules in 3.4 [basic.lookup].
[Note: This is an illustration of soft keywords used in grammar production]
await-expression:
await-soft-keyword-name
cast-expression
N4134: Resumable Functions v2
N4361: C++ Extensions for Concepts
N3449: Open and Efficient Type Switch for C++
N4466: Wording for Modules
N4513: Working Draft, Technical Specification for C++ Extensions for Transactional Memory
[PatternMatch]: Presentation from the evening session at Urbana 2014
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0056r0.html
Arch Remove
Description
The Remove tool allows you to do two kinds of operations:
- Remove a subcomponent from an Arch object, for example remove a box that has been added to a wall, as in the Arch Add example.
- Subtract a shape-based object from an Arch component such as an Arch Wall or Arch Structure.
The counterpart of this tool is the
Arch Add tool.
A box subtracted from a wall, leaving a hole in it.
Usage
- Select the objects to be subtracted; the last object selected must be the Arch object from which the other objects will be subtracted.
- Press the Arch Remove button, or choose Arch → Remove from the top menu.
Scripting
See also: Arch API and FreeCAD Scripting Basics.
The Remove tool can be used in macros and from the Python console by using the following function:
removeComponents(objectsList, host=None)
- Removes the given objects in objectsList from their parents.
- If a host object is specified, this function will try adding the objects in objectsList as holes to the host.
Example:
import FreeCAD, Draft, Arch

Line = Draft.makeWire([FreeCAD.Vector(0, 0, 0), FreeCAD.Vector(2000, 2000, 0)])
Wall = Arch.makeWall(Line, width=150, height=3000)
Box = FreeCAD.ActiveDocument.addObject("Part::Box", "Box")
Box.Length = 900
Box.Width = 450
Box.Height = 2000
FreeCAD.ActiveDocument.recompute()
Draft.rotate(Box, 45)
Draft.move(Box, FreeCAD.Vector(1000, 700, 0))
Arch.removeComponents(Box, Wall)
https://wiki.freecadweb.org/Arch_Remove
For those creating dynamic text fields in Flash, it often comes as a surprise that dynamic text fields do not behave as expected when modifying the alpha parameter of the field or its parent. Often even I forget that telling a movie clip to transition out based on a tween of its alpha property doesn't modify the text field. Without getting into the details of why this occurs, let's jump straight to one solution for quickly making the alpha property work: setting its Blend Mode!
The Quick Fix
- Import the BlendMode class
import flash.display.BlendMode;
- Set the blendMode of your TextFields to Layer
this._panel.currentTimeTxt.blendMode = BlendMode.LAYER;
That is it!! Now you will be able to modify the alpha parameter of the field or its parents, and the dynamic text fields will behave as expected.
http://mdbitz.com/2009/10/06/code-snippets-enabling-alpha-for-dynamic-text-fields-in-flash/
This guide shows you how to read multiple tables from a Microsoft SQL Server database, using the Multi Table source. Use the Multi Table source when you want your pipeline to read from multiple tables. If you want your pipeline to read from a single table, see Reading from a Microsoft SQL Server table.
The Multi Table source outputs data with multiple schemas and includes a table name field that indicates the table the data came from. When using the Multi Table source, use one of the multi-table sinks: BigQuery Multi Table or GCS Multi File.
Before you begin
- Enable the Cloud Data Fusion, Cloud Storage, BigQuery, and Cloud Dataproc APIs.
- Create a Cloud Data Fusion instance.
- Ensure that your SQL Server database can accept connections from Cloud Data Fusion. To do this securely, we recommend that you create a private Cloud Data Fusion instance.
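The Multi Table source tags every record with the table it came from. A minimal sketch of that behavior in plain Python with sqlite3 standing in for SQL Server (names such as read_multi_table are illustrative, not part of Cloud Data Fusion):

```python
# Sketch of what a multi-table source emits: each record carries a
# table-name field so a multi-table sink can route it to the right
# destination. Uses an in-memory sqlite3 database for illustration.
import sqlite3

def read_multi_table(conn, tables, tag_field="tablename"):
    records = []
    for table in tables:
        cur = conn.execute(f"SELECT * FROM {table}")
        cols = [d[0] for d in cur.description]
        for row in cur:
            rec = dict(zip(cols, row))
            rec[tag_field] = table          # the table-name field
            records.append(rec)
    return records

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")
conn.execute("INSERT INTO orders VALUES (10, 9.5)")
rows = read_multi_table(conn, ["users", "orders"])
```

Note how records from different tables have different schemas; only the added tag field is common to all of them.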
Navigate to the Cloud Data Fusion UI
When using Cloud Data Fusion, you use both the Google Cloud console and the separate Cloud Data Fusion UI. In the Google Cloud console, you can create a Google Cloud project, and create and delete Cloud Data Fusion instances. In the Cloud Data Fusion UI, you can use the various pages, such as Studio or Wrangler, to use Cloud Data Fusion features.
In the Google Cloud console, open the Instances page.
In the Actions column for the instance, click the View Instance link.
In the Cloud Data Fusion UI, use the left navigation panel to navigate to the page you need.
Store your SQL Server password as a secure key
Add your SQL Server password as a secure key to encrypt on your Cloud Data Fusion instance. Later in this guide, you will ensure that your password is retrieved using Cloud KMS.
In the top-right corner of any Cloud Data Fusion page, click System Admin.
Click the Configuration tab.
Click Make HTTP Calls.
In the dropdown menu, choose PUT.
In the path field, enter
namespaces/namespace-id/securekeys/password.
In the Body field, enter
{"data":"your_password"}. Replace your_password with your SQL Server password.
Click Send.
Ensure that the Response you get is status code
200.
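The secure-key call above can also be prepared programmatically. This sketch only constructs the PUT request; the instance endpoint and namespace are placeholders, and you would send the request with an authorized HTTP client:

```python
# Build the same secure-key PUT described in the UI steps above.
# The api_base value below is a placeholder, not a real endpoint.
import json

def build_securekey_request(api_base, namespace_id, password):
    path = f"{api_base}/namespaces/{namespace_id}/securekeys/password"
    body = json.dumps({"data": password})
    return "PUT", path, body

method, url, body = build_securekey_request(
    "https://example-instance/api/v3",   # placeholder instance endpoint
    "default",
    "your_password",
)
```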
Get the JDBC driver for SQL Server
using the Hub
In the Cloud Data Fusion UI, click Hub in the upper right.
In the search bar, type "Microsoft SQL Server JDBC Driver".
Click Microsoft SQL Server JDBC Driver.
Click Download. Follow the download steps shown.
Click Deploy. Upload the Jar file from the previous step.
Click Finish.
using Studio
Visit Microsoft.com.
Choose your download and click Download.
In the Cloud Data Fusion UI, click the menu menu and navigate to the Studio page.
Click the + button.
Under Driver, click Upload.
Click to select the JAR file, located in the "jre7" folder.
Click Next.
Configure the driver by typing a Name and Class name.
Click Finish.
Deploy the Multiple Table Plugins
In the Cloud Data Fusion web UI, click Hub in the upper right.
In the search bar, type "Multiple table plugins".
Click Multiple Table Plugins.
Click Deploy.
Click Finish.
Click Create a Pipeline.
Connect to SQL Server
In the Cloud Data Fusion UI, click the menu menu and navigate to the Studio page.
In Studio, click to expand the Source menu.
Click Multiple Database Tables.
Hold the pointer over the Multiple Database Tables node and click Properties.
Under Reference name, provide a reference name that will be used to identify your SQL Server source.
Under JDBC Connection String, provide the JDBC connection string. For example,
jdbc:sqlserver://mydbhost:1433. Learn more.
Click Validate.
Click the X button.
What's next
- Learn more about Cloud Data Fusion.
- Follow one of the tutorials.
https://cloud.google.com/data-fusion/docs/how-to/reading-from-sqlserver-multi
Details
- Type:
Sub-task
- Status: Resolved
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: HA branch (HDFS-1623)
- Labels: None
- Hadoop Flags: Reviewed
Description.
Activity
- All
- Work Log
- History
- Activity
- Transitions
Oops, I had missed a git add in the previous upload
Looks like a pretty good first draft, Todd. Since this isn't intended for commit, I didn't review it for style. One question:
Given that the lagLength is set to 1000, will there be a problem if an FSEditLogOp is serialized to more than 1000 bytes? Or if an EOF is encountered in the middle of a serialized op? I suspect some exception might get thrown in this case, which would cause the tailer to fail.
There are two aspects to this problem.
- How does the standby get and read the edit logs?
- How does it synchronize the edit logs with the updates coming from the datanodes?
I am taking following approach.
Access to logs:
---------------
- A new configuration parameter will be added, which tells the locations of edit logs that standby can use to read. This configuration will have subset of locations that are configured as edit log locations on the primary.
- The standby will use JournalSet implementation of the JournalManager interface to read the transactions. (Refer
HDFS-1580, HDFS-2158).
- If no transactions are available EditLogInputStream throws NoMoreTransactionsException, the standby just sleeps for a short timeout, and retries again.
- On failover, the standby reads the edit logs one last time after the primary has been fenced, in the following steps:
- close the edit log input stream.
- open the edit log input stream.
- read
A close in the first step could be useful since the edit log is stored on NFS, to make sure that the latest changes are visible.
Updates from datanodes
----------------------
Standby maintains latest generation stamp, based on the records from the editlog. This is used for processing updates from datanodes as follows:
- The standby receives block report/block received/block deleted (pending
HDFS-395) message from the datanode.
- If the block's GS is less than the current GS of the standby, the standby will process it.
- If the block's GS is greater than the current GS of the standby, the standby will buffer this block received/deleted message and wait until it sees the corresponding generation stamp in the edit log; only then does it process the buffered message.
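The gating rule in the two bullets above can be sketched as a small queue (ordinary Python with assumed names, not the actual patch):

```python
# Sketch of the genstamp-gating rule: messages with a generation stamp
# the standby has not yet seen in the edit log are buffered, and
# drained once the edit log catches up.
from collections import defaultdict

class PendingDatanodeMessages:
    def __init__(self):
        self.current_gs = 0
        self.pending = defaultdict(list)   # gs -> buffered messages
        self.processed = []

    def on_block_received(self, block_gs, msg):
        if block_gs <= self.current_gs:
            self.processed.append(msg)          # standby already knows this GS
        else:
            self.pending[block_gs].append(msg)  # wait for the edit log

    def on_editlog_genstamp(self, new_gs):
        self.current_gs = max(self.current_gs, new_gs)
        for gs in sorted(g for g in list(self.pending) if g <= self.current_gs):
            self.processed.extend(self.pending.pop(gs))

q = PendingDatanodeMessages()
q.on_block_received(1, "blk_a@gs1")   # gs 1 > current 0: buffered
q.on_editlog_genstamp(1)              # edit log catches up: drained
q.on_block_received(1, "blk_b@gs1")   # processed immediately
```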
Hi Jitendra. If you plan to use the generation stamp to synchronize the DN's block info with the edit stream, I think we need to add nextGenerationStamp calls in a few places. In particular, in allocateBlock, we don't bump the generation stamp, so the second block of a new file will have the same GS as the first block if no other actions happen.
Have you enumerated the various coordinations that we need to consider? The above deals with allocateBlock (namespace op) vs blockReceived (dn op), but I wonder if there are other places we need to synchronize the DN action after some NN action and aren't bumping the GS.
One thing I was considering was threading transaction IDs through the various operations - for example one possibility is for the active NN to send the current txid to the DN in every DatanodeCommand. Then, any reports from DN->NN include that txid, and the standby can block until it's hit that txid. Using txid instead of generation stamp means we don't have to consider each type of operation as a special case. Thoughts?
I and Jitendra considered Transaction ID before looking at GS. Transaction ID does not work because there are three parties involved - client, datanode and namenode.
Take this example:
- DN1 sends heartbeat to primary NN at txid T and learns about T.
- A client meanwhile creates a file at T+1 and allocates block at T+2.
- DN1 now is unable to send heartbeat or communicate with primary NN. Hence it is stuck at transaction T.
- The client completes writing a block to DN1. DN1 reports this to the backup node as block received with T. At this point, if the SNN has reached T but has not processed T+1 or T+2, it tries to handle BR(T), because it can. However, it fails to process it without knowledge of the file.
We could get around this, if client also is tracking transactions and sends it to the datanode, adding unnecessary complexity and changes.
> I think we need to add nextGenerationStamp calls in a few places.
I agree.
> Have you enumerated the various coordinations that we need to consider?
IMO, we need to consider synchronization with edit logs for any message that Datanode sends to the standby, i.e. for every method in DatanodeProtocol. I think we need synchronization in only those methods that are referring to blocks. Here is the list of all methods and my classification based on synchronization needed or not.
- registerDatanode :
- I think no synchronization is needed, because there is no corresponding datanode info coming from edit logs.
- reportBadBlocks:
- Synchronization is not needed because the blocks being reported bad must have been reported earlier in a block report or a block received message by the datanode. Therefore if the block is not found in the block map of the standby, it only means its a deleted block.
- commitBlockSynchronization:
- Synchronization is needed for the same reason as in block received case.
- blockReport:
- Synchronization is needed because standby may not even have seen a block that is reported in block report.
- blockReceived:
- Synchronization is needed because standby may not even have seen a block that is reported in block received.
- sendHeartbeat :
- No synchronization is needed with edit logs.
- errorReport:
- Standby can just ignore this?
- versionRequest:
- Standby can just ignore this?
- processUpgradeCommand:
- Ignored by Standby.
From the list above, it seems to me that coordination is only required for block related info received from datanode vs that received in edit logs. Therefore using generation stamp is a good choice because all blocks have a generation stamp. Is that a valid conclusion?
Considering the txid approach, it seems it won't work. Consider following case:
Standby receives a block received message and doesn't find the block in its map. It is possible for two reasons:
a) the standby hasn't seen the edit log for the allocate block.
b) the standby has seen and processed an allocate block and also a delete for that block.
The standby needs to be able to distinguish between the above two possibilities to correctly process the block received.
Now it may be possible that the allocate and/or delete happened after the last command from the namenode, and the last transaction id known to the datanode is older than the allocate/delete. Then the standby won't know how to process the received block.
This is a very early version of the patch.
In this patch I am letting the datanode call block if the generation stamp is not up to date. I will upload a patch very soon which won't block the call and will instead queue the message. The patch is not review-ready yet, but it indicates the approach being taken.
General direction looks good. I've a few comments.
There's a comma in configuration key.
In FSEditLog, check the states before transitioning them.
I'm not sure the tailer will work as is. What happens if you open an in-progress input stream with this? As I understand it, you'll end up with lastTxnId in the middle of the segment.
In stopReadingEditLogs(), instead of doing the start stop, to ensure up to dateness, you could have a call on EditLogTailer#applyLatestUpdates(). Then EditLogTailerThread could call this in the loop also.
The attached patch doesn't block the call, rather stores the unprocessed messages.
Im not sure the tailer will work as is. What happens if you open an inprogress input stream with this? As I understand it, you'll end up with lastTxnId in the middle of the segment.
It will be fixed if FileJournalManager doesn't return in-progress segments. The standby will lag a bit, but upon a failover it will catch up. Need to evaluate the impact on the number of stored datanode messages.
In FSEditLog, check the states before transitioning them.
initJournals checks the state. Do you mean I should rather check the state in initJournalsForRead/Write?
initJournals checks the state.
Ah, so it does. Ignore that comment then.
- can we move EditLogTailer to the ha package?
- should probably have some brief javadoc there. Also, if it is a public class, it needs InterfaceAudience/Stability annotations
- why do we sleep 60 seconds between tails? I think keeping the standby as up to date as possible is important. Though it's not an immediate goal of today's project, we should keep in mind the secondary goal of serving stale reads from the standby.
- using the terminology "sharedEditDirs" implies that we only support directories here. Instead, shouldn't we call it "sharedEditUris"? Same with the configs, etc.
- the code in stopReadingEditLogs seems really race-prone. We need better inter-thread coordination than just sleeping.
- The name waitForGenStamp implies that it waits for something, but in fact this is just isGenStampInFuture
- need license on PendingDataNodeMessages
- need javadoc in lots of spots - explain the purposes of the new class, etc
- getMaxGsInBlockList could be moved to a static method in BlockListAsLongs
- DataNodeMessage and subclasses: make the fields final
- needs unit tests - there are some in my earlier patch for this issue that could be used to verify EditLogTailer, I think.
I want to also do some thinking on synchronization for the datanode messages, etc. Will write more later.
Thanks for early review Todd.
The patch is still in works. To reduce the amount of memory required to store the pending messages I am considering following two approaches.
1) Instead of storing the entire block report, storing only those blocks that have newer gs. This will reduce the memory required to store pending messages.
2) Allow reading segments from the middle, but only in following two cases
1) The segment is finalized
2) The segment is in progress and a threshold of time has passed, just to avoid opening in_progress file too frequently.
Cool, I'll look forward to your next revision. Let me know if I can help in any way.
Another thing I was considering while reading your patch is that it would be nice if the messages went through the same code path regardless of whether the NN is in standby or active mode. That way we have fewer code paths to debug. Does that seem feasible?
Here's an updated version of the previous patch provided by Jitendra. It does the following things:
- Rebased on current
HDFS-1623branch.
- Adds infrastructure to MiniDFSCluster to be able to start HA NNs.
- Adds a test based roughly on Todd's earlier test.
- Fixes up a few corner cases.
- Should address all of Todd's feedback.
Todd, could you please take a look at this? The one piece of feedback I didn't take was changing sharedEditsDirs to sharedEditsUris, since doing so would be inconsistent with all the other places in the code where we refer to those collections as "dirs."
Also, I should have mentioned, this test doesn't exercise the DN message portion at all. I intend to write some tests for that and update the patch accordingly next.
- Why is getMaxGsInBlockList static? seems it could just be a member function of BlockListAsLongs
- Storage has a javadoc @param shouldLock but the parameter doesn't seem to be in the method signature.
FSEditLog.java:
- There's a typo "UNITIALIZED" instead of "UNINITIALIZED" in the javadoc in FSEditLog
- The comment before sharedEditsDirs in FSEditLog should be javadoc-style
- In the FSEditLog state machine, how does the transition from OPEN_FOR_READING work when going active? The javadoc could use a little bit more there (do we go first to CLOSED?)
- Similar to above - would be good to add Preconditions.checkState checks in initJournalsForWrite and initSharedJournalsForRead - it's not obvious what state they should be to avoid orphaning open streams, etc.
- Assertion in logEdit: can you add a text error message like "bad state: " + state so that if the assertion fails we're left with more useful info?
FSN:
- Do you really need to make all of the FSN methods VisibleForTesting? We have a class called NameNodeAdapter which you can probably use to "reach in" without changing visibility. Or, why not just make a non-HA DFSClient talking to the first NN in the test case?
- editLogTailer should be defined up higher in the file, no?
- it seems like the recoverUnclosedStreams should happen just changing to writer mode, rather than at stopReadingEditLogs (doesn't seem obvious that this method would mutate the dir state). Otherwise when we clean-shutdown the standby, it might try to move around some logs, no?
- Why does matchEditLogs accept null now? It's called with the result of FileUtil.listFiles which never returns null
- A couple spurious changes in FileJournalManager, JournalManager
MiniDFSCluster:
- the builder method should be haEnabled not setHaEnabled to match the pattern of the other methods
- Need a license on TestEditLogTailer
Thanks a ton for your review, Todd. Here's an updated patch which addresses all of your comments except the following..
Because in the case of pre-transactional ELFIS, we pass in INVALID_TXID, even though the log is not yet in progress.
I'm still working on producing an updated patch which also adds testing for the pending DN message queues.....
How about update the generation stamp after logging the new block? The pipeline will be setup with the new generation stamp and block received will also contain the new gen-stamp.
How about update the generation stamp after logging the new block
The issue is that we need the new block to have the new genstamp, so we have to call nextGenerationStamp before we create the new block. Alternatively, we could split up nextGenerationStamp into two parts – one which increments, and another which logs. But then we may have an issue if there is a crash so the edit log includes the block allocation but not the SET_GENERATION_STAMP.
I've temporarily worked around this by changing the FSEditLogLoader code to only call notifyGenStampUpdate in OP_ADD, not in OP_SET_GENSTAMP. So, the new block is added to the block manager before it's notified as OK to handle the DN messages.
BUT – there's another issue with using genstamps as our "gating" mechanism for the DN messages - comment to follow.
I got this patch in conjunction with HDFS-1108 and HDFS-1971 to properly replicate the creation of a new file, but then moved on to working on setReplication and ran into issues there. The issue I'm seeing is this:
The solution I'm thinking of is that we have to track the transaction ID when we send comments to DNs. So, if a setReplication command at txid=123 causes invalidation of two blocks, we'd send the INVALIDATE command with "txid=123". Then, when the DN does delete these blocks, it would ack back with that txid to both NNs. The SBNN wouldn't process this message until it had loaded that txid.
A bit of a simplification from this would be that any command being processed from an NN will include the NN's txid, which the DN records in BPOfferService as "latestCommandTxId". Then, any calls to the NN would include this txid. This is a bit more conservative than tracking it with each block command, but probably less prone to bugs.
I plan to take a pass at implementing this latter approach.
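The gating scheme described above can be sketched in miniature like this (a hypothetical Python model of the idea, with invented names; the real implementation would live in the NameNode's Java code):

```python
import heapq

class StandbyMessageQueue:
    """Buffer DN block messages until the standby NN has applied the
    transaction that triggered them (hypothetical sketch, not HDFS code)."""

    def __init__(self):
        self.loaded_txid = 0   # highest txid applied from the edit log so far
        self.pending = []      # min-heap of (txid, message)

    def on_dn_message(self, txid, message):
        # A DN tags each report with the latest command txid it has seen.
        if txid <= self.loaded_txid:
            self.process(message)
        else:
            heapq.heappush(self.pending, (txid, message))

    def on_edits_applied(self, new_txid):
        # After tailing more edits, drain any messages that are now safe.
        self.loaded_txid = new_txid
        while self.pending and self.pending[0][0] <= new_txid:
            _, message = heapq.heappop(self.pending)
            self.process(message)

    def process(self, message):
        print("processing", message)
```

The key property is that a message tagged with txid N is never processed before the standby has read edits through N, regardless of arrival order.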
Another issue that we have to tackle before this can provide hot standby is this:
Thoughts?
I chatted with ATM offline about this. In our opinion it makes the most sense to commit this patch pretty much as-is, even though we know there are some issues as described above. Then, we can open separate follow-on JIRAs for each of the distinct issues:
- one JIRA for dealing with the standby losing block locations because OP_ADD and OP_CLOSE replace the BlockInfos
- one JIRA to make the standby not manage replication and invalidation queues until it enters active mode
- one JIRA to figure out if we need to track txids through the NN<->DN messaging instead of just using the genstamps as Jitendra has done in the patch here.
Having this basic tailing infrastructure committed will let us start to investigate and fix the above issues in parallel and make faster progress.
Agreed?
Agree, let's get this one in and have a separate jira for each; no reason they need to be closed out first, and this way we can attack them in parallel.
+1 to the latest patch. Nit: you can remove the diff against HAUtil.
Before committing I ran the rest of the edit-log tests and discovered a couple small issues. This patch has the following fixes:
- fix TestEditLog to init journals for write to fix failing test cases
- fix FSNamesystem to not use /tmp/hadoop-name as a default shared edits directory (fixes a failing TestEditLog case)
- add shared edits dirs to hdfs-default.xml
- fix MiniDFSCluster to not fail if running HA tests on a clean build dir (Files.deleteRecursively was failing if the dir didn't exist)
- remove empty diff in HAUtil
- rebase on new tip of branch
Since the above changes are pretty simple, I'll commit under Eli's +1 tomorrow morning.
Delta lgtm
Committed to HA branch. Thanks Jitendra and Aaron for the original revs of the patch, and Eli for reviewing.
Here's a sketch of some basic code to do this. The approach is similar to what's done in the AvatarNode – a thread follows the most recent edit file with a predefined "lag" behind the file length.
This is by no means done, but wanted to get some code started.
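The lag-following idea can be illustrated with a tiny helper (a hypothetical Python sketch of the approach, not the attached code): read only up to a point that stays a fixed distance behind the file's current end, so the tailer never consumes a record the writer may still be appending.

```python
import os

def read_new_edits(path, pos, lag_bytes):
    """Return (data, new_pos): the bytes between the last position read
    and a point lag_bytes behind the current end of the file.
    A tailer thread would call this in a loop with a short sleep."""
    size = os.path.getsize(path)
    safe_end = size - lag_bytes
    if safe_end <= pos:
        return b"", pos        # nothing safely readable yet
    with open(path, "rb") as f:
        f.seek(pos)
        data = f.read(safe_end - pos)
    return data, safe_end
```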
https://issues.apache.org/jira/browse/HDFS-1975?attachmentOrder=desc
really appreciate the effort, but you see if you used OCR during scanning/converting, then the document would have been in text instead of pictures and the file would not have been more than 20-30Mb.
thanks for the share
you can use teamviewer , its a remote assistance software and there is an option of file transfer as well which is much faster than any messenger ,
Hello PWs, here I charge refrigerant into the AC system. You have already seen how I flushed and reinstalled the whole system; the only things left now are the final leak test, changing the dryer, and charging the gas. It is a very, very easy task and you can easily do it yourself if you have proper tools; there is nothing high-tech in it. Still, I haven't seen any AC shop doing all this the proper way. I have also made a few blunders, but I had no other option. First, you should use nitrogen gas instead of air to pressure-test the system; as you know there is no moisture in nitrogen, so it is the best choice for a pressure check, but that was not possible for me, so I used compressed air, which is full of moisture. Second, you should use a leak-detecting dye for the leak test. I tried my best to find it but failed; most shopkeepers had never even heard of such a thing, so again I had no option other than to pressurize with air (instead of nitrogen) and then use a surf (soap) solution to check for leaks. If you find any leaking O-ring, change it with an original one; a local one won't last long, especially when the system heats up.

One thing I wasn't able to share with you is the expansion valve service. I took pics of it but unfortunately lost some of my data while transferring it from one PC to another. The expansion valve is the alternative to an orifice tube, which reduces pressure. I have never seen a car AC system here with an orifice tube, though such systems exist; anyhow, all Pakistani cars come with expansion valve systems. The valve sometimes gets stuck and needs service or replacement. It is installed inside the evaporator casing under the dashboard; simply remove it and flush it. I tested mine by putting its sensor in cold water; it was working fine, so I haven't changed it. If it needs changing, buy a 100% identical one. In my case the AC system was filled with the steel mesh of the broken compressor, which means it really needed some serious flushing from the inside, and for that I used CTC, which you guys have already seen. I guess you can use carburetor cleaner as well, as it also evaporates very quickly and leaves no residue. Always remove all the O-rings before flushing with CTC, as it can damage the rubber. One more thing: you can easily flush the evaporator and condenser with the compressor still installed in the car. It's a little tricky but possible; still, I recommend you buy an external compressor, because you can't charge refrigerant without one.

When I charged the gas the ambient temperature was 28°C, which means my low-side and high-side pressures should be between 40-50 and 175-210 psi respectively. I don't know how AC shops charge gas without looking at the ambient temperature. One thing you will hear from them is that the low-side pressure must be around 20 psi; I don't know the logic behind that. Many AC shops also connect only one gauge, on the low side, feeling no need to check the high side; again, no logic to that. Always remember to take the high- and low-side gauge readings while the engine is running at a stable RPM of about 1500. It is very important.

Low-pressure gauge: when the reading is between 25 and 40 psi with the A/C running, stop charging. The system is fully charged and should be cooling normally. DO NOT add any more refrigerant. If the gauge is over 50 psi, you have overcharged the system with too much refrigerant.

High-pressure gauge: when the reading gets up to around 250 psi, stop charging. The system is fully charged and should be cooling normally. DO NOT add any more refrigerant.

Reading the two gauges together:
- If the low side and the high side are both low, that indicates a low charge.
- If the low side is low and the high side is high, that indicates a blockage in the expansion valve.
- If the low side is high and the high side is low, and the gauge needles are vibrating, there is a fault in the compressor valves.
- If the low side and the high side are both high, the system is overcharged (stop pressure is over 100 psi).
- The closer the low side is to 30, the colder the car will be; likewise the closer the high side is to 150.

One more thing: be careful about the compressor oil. You must use the right oil in the right quantity; the oil grade and quantity are usually stamped on the back of the compressor. So better to buy your own oil, take it to the AC shop, and ask the mechanic to first flush the compressor with the new oil, then remove all the oil from it and fill in new, clean oil in the exact quantity. Pour the oil very slowly into the intake port (low side) of the compressor; this is where the large line entered. While pouring in the refrigerant oil, rotate the hub and clutch slowly to let the oil enter the compressor. Once that is completed, lay the compressor down on the hub for 10 minutes to let the oil seep in and lubricate the front seals, to prevent leaks on startup.

I have seen AC mechanics use old compressor oil pulled from kitchen fridge and freezer compressors. That oil has a typical smell, and an experienced person can easily find out what the hell it is. It can't be used in R134a units, and it also can't be mixed with the oil that is already in the compressor, so I suggest pulling the compressor out, flushing it with new oil first, and then filling it with new oil. That's it for now; this is the last major post of this thread. A few more will come soon, like compression and engine oil pressure checks. Anyhow, check the pics; they will mostly speak for themselves. Best of luck!
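For what it's worth, the gauge-reading rules above can be written down as a little lookup; this is just my own sketch of the rules in this post (the psi cut-offs are the illustrative ones mentioned here, not from any service manual):

```python
def diagnose(low_psi, high_psi, needles_vibrating=False,
             low_range=(25, 40), high_range=(175, 250)):
    """Rough A/C gauge diagnosis following the rules in the post above."""
    if low_psi < low_range[0] and high_psi < high_range[0]:
        return "low refrigerant charge"
    if low_psi < low_range[0] and high_psi > high_range[1]:
        return "blockage in the expansion valve"
    if low_psi > low_range[1] and high_psi < high_range[0]:
        return ("faulty compressor valves" if needles_vibrating
                else "compressor not pumping")
    if low_psi > low_range[1] and high_psi > high_range[1]:
        return "system overcharged"
    if low_range[0] <= low_psi <= low_range[1] and \
       high_range[0] <= high_psi <= high_range[1]:
        return "charge looks normal"
    return "readings inconclusive; recheck at a stable ~1500 rpm"
```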
cant you make some video of how to recharge AC gas? pictures are hard to understand for someone totally noob on how to do it! video would be better and easy for you and for followers!
You can also shroud the compressor so air can flow to the back of it from its rear, this shroud also serves purpose to shield the compressor from the blazing heat of the exhaust manifold.
In older cars specially Hondas of the 80s and 90s that came from Japan, the compressor had such a shroud made of sheet metal, The shroud air intake used to face the radiator fan air draft path, with engine running and the car on a lift, you could test this by actually feeling the air being exhausted sideways from the rear of the compressor.
Sadly our local zabardast A/C mechanics threw out those shrouds (aka shields) because it used to cut their hands.
I did a few Mehran A/C installations once, in which the A/C compressor was mounted in front of the engine like this one and the alternator at the back, I also made a shroud for it, it really helped keeping the compressor working happily all day long.
btw - nice work - proper A/C workings, I wish more and more people follow such practice so we can actually have better quality service from these chor-bazaar commercial shops.
From where can we buy this aluminium tape?

You can buy it from AC spare parts shops; it is also called DUCT tape.
Many thanks. Well, about the compressor shield: I think there isn't enough space, because as you know it's not fitted with the original compressor and the clearance between the compressor and condenser is very small, but I will try to do something about it. There is a plastic cover installed below the compressor which protects all the fan belts against mud and water; it's not installed on mine, so I guess the compressor will get some air from there. It is installed on my '07 Cultus, and I noticed that there the compressor is completely covered, so the only air it gets is from the condenser fan, which has a negative effect and heats up the compressor. I guess removing that plastic shield can also improve compressor cooling.

About mechanics: I don't know what's wrong with me and my temperament, but whenever I go to any mechanic or workshop I just end up in a mess with them, so I always avoid going to workshops and prefer buying and collecting tools and using my own skill. You can see how complicated these jobs are: new-tech engines, EFI, VVT-i, and many other things. It's sometimes hard even for us, so how can these illiterate mechanics master all that? They can hardly write their own names, and I can never allow any mechanic to even open my car's bonnet. Anyhow, thanks for the guidance and help. That was the last work on this car; now I am planning to do a compression and oil pressure check. The car just flies on the road and the engine acceleration is really impressive, but I still want to test it.
Today I just noticed that my thread has crossed 50K views; that's really amazing!
^With exactly 50 pages atm! @op, man, this thread is an amazing job, WAY BETTER than the Cultus's original manual book I BET. Keep up the good work of spreading wisdom!
This is a good manual if someone has a little skill. The only deficiency in this thread/manual is engine overhauling. As goodfriend already said, I didn't think of posting it on PW, so I didn't take the pictures. I will suggest again: overhaul the engine of some other Cultus, help another Cultus owner, and complete this manual.
can we use aluminium baking sheet if aluminium tape is not available?
Well, it's a mistake that I haven't saved any pics of that. Later I even offered my friends to overhaul their Cultus engines just for the pics, but they refused as theirs are already fit and don't need an overhaul. Yes, you can use the aluminium foil used in the kitchen, but the tape is available at every fridge spare parts store for just Rs. 150.
what about the transmission oil. have you changed that?
Yes, I have changed it with Guard gear oil. Next time I will prefer Caltex; Caltex is not as thick as ZIC or Guard.
That mudshield for the belts is necessary for two reasons: one, obviously, is that it protects the belts from mud, water, etc.; secondly, it ensures that all air is exhausted from the engine bay under the bulkhead firewall when the car moves forward. All the plastic panels that go under the car are put on for this reason too.
You should see our mechanics work on diesel engines or even European-origin petrol engines; they usually completely screw it up and then try to prove themselves right. The old BMW 316i timing belt replacement requires an E-Torx socket to undo the cam pulley, then you need a tool to lock the cam at TDC; the tool is just a plate of steel with a square cutout that can hold the cam's cutout. The cam pulley has some freeplay to compensate for machining tolerances, hence the need for a cam lock tool. Our mechanics try to set it by "andaaza" and end up with a badly running engine or damage.
Same with the poor Fiat uno Diesel.
I can imagine your agony when you see such monkeys with tools trying to work on engines. And I hate it when, instead of investing in tools that would ease their work, they damage the car or something else. E.g., on most MB and BMW engines there are some really oddly spaced bolts and nuts that you cannot reach with a ratchet; it is required that you use an offset spinner that looks like a bone. It takes half a minute to undo such bolts with such a tool. Our dimagh se farigh mechanics refuse to invest in such items.
It is partly our local people's fault too: they insist on "jugaarbazi" rather than correct repair, and they wait for failure and then do maintenance rather than doing preventive maintenance. It's the same everywhere: HVAC, cars, buildings, etc.
@Xulfiqar
Fiat Uno 1.7D! I have sweet memories of that car. The only major issue I was having when I bought it was that it used to give excessive whitish-blue smoke at cold start, but only at idle. I was unable to get it solved even after a lot of struggle, which included consulting a Fiat authorised mechanic, the most famous diesel laboratory in Lahore, and my routine mechanic. But then one day the problem luckily got solved within an hour, for just Rs. 500. At the time my car was at a workshop for a bumper paint job, and in the adjacent workshop an HTV mechanic who used to work on Bedford, Fiat, Mazda, Nissan UD, etc. was visiting some of his friends. He saw the car giving excessive smoke and himself asked my mechanic why they didn't fix it. My mechanic said, "The fault is beyond our limits, so we didn't." Then the HTV mechanic offered to fix the issue himself; my mechanic called me to get a go-ahead, which I allowed, and believe me, that guy fixed it in minutes.
That HTV mechanic also did it based on "andaza", but obviously he was at least capable enough to diagnose the root cause. I must say that Raje Motors is responsible for making this famous brand and model a big flop in Pakistan, because they neither imported sufficient spare parts for the car nor hired and developed good servicemen.
It's the dilemma of our nation that we take short-term corrective measures instead of preventive ones. :(
Sir, this thread is a user manual for Cultus owners... I guess this thread will have 100,000 views soon!!!
https://www.pakwheels.com/forums/t/cultus-engine-and-body-overhaul-and-rebuild/190779?page=50
Reliable integer root function?
Does Sage have an integer_root(x, n) function which reliably (!) returns floor(root(x,n)) for n-th roots? I think it should be offered as a Sage function if not.
This seems to work:
def integer_root(x, n): return gp('sqrtnint(%d,%d)' %(x,n))
Integer n-th root of x, where x is non-negative integer.
// Related: question 10730.
Edit: The answer of castor below shows a second way to define such a function:
def integer_root(x, n): return ZZ(x).nth_root(n, truncate_mode=1)[0]
Which version will be faster?
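For comparison, here is a third way that needs neither PARI nor ZZ: a plain-Python Newton iteration on integers (my own sketch; I have not benchmarked it against the two definitions above):

```python
def integer_root(x, n):
    """Return floor(x^(1/n)) for integers x >= 0, n >= 1, using only
    integer arithmetic (Newton's method with an overestimating start)."""
    if x < 0 or n < 1:
        raise ValueError("need x >= 0 and n >= 1")
    if n == 1 or x in (0, 1):
        return x
    # Start at 2^ceil(bit_length/n), which is always >= the true root.
    r = 1 << -(-x.bit_length() // n)
    while True:
        t = ((n - 1) * r + x // r ** (n - 1)) // n
        if t >= r:   # the Newton step stopped decreasing: r is the floor root
            return r
        r = t
```

Because it never leaves integer arithmetic, it is exact even for inputs far beyond what floating-point root functions can represent.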
This exists at least for square roots: it is called sqrtrem() and also provides the remainder.
https://ask.sagemath.org/question/30375/reliable-integer-root-function/?sort=votes
Updated.
Tracing Binaries to Builds
Planning the Activity:
- ReplaceInFile. This is a code activity whose sole purpose is to replace all occurrences of a regular expression in a text file with a specified string.
- UpdateVersionInfo. This is a XML activity that extracts the version component from the current build number, finds the files matching a specification (e.g. “AssemblyInfo.*”) within a specified directory, then uses the ReplaceInFile activity to update the version information in those files to match the build number. This activity will also use the GetBuildDetail and FindMatchingFiles activities provided with TFS Build.
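Stripped of the workflow plumbing, the core logic of these two activities amounts to something like the following (a Python sketch for illustration only; the actual activities are C# code activities and XAML, and the version regex shown here is an assumption):

```python
import re
from pathlib import Path

def replace_in_file(path, pattern, replacement):
    """Core of ReplaceInFile: substitute every regex match in a text file."""
    text = Path(path).read_text()
    Path(path).write_text(re.sub(pattern, replacement, text))

def update_version_info(sources_dir, file_spec, build_number,
                        version_regex=r"\d+\.\d+\.\d+\.\d+"):
    """Core of UpdateVersionInfo: pull the version component out of the
    build number, then stamp it into every file matching file_spec."""
    match = re.search(version_regex, build_number)
    if not match:
        raise ValueError("build number %r has no version component" % build_number)
    version = match.group(0)
    for path in Path(sources_dir).rglob(file_spec):
        replace_in_file(path, version_regex, version)
    return version
```

The real activities layer argument validation, build logging, and TFS's GetBuildDetail/FindMatchingFiles calls around this same flow.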
Developing the Activity:
- Right click on the toolbox and select Choose Items…
- The Choose Toolbox Items dialog box will open. Select the System.Activities Components tab.
- Click Browse and select Microsoft.TeamFoundation.Build.Workflow.dll from the following location:
<Program Files (x86)>\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies
- Verify that the activities appear in the list and that they are checked, then click OK. The TFS Build activities should now appear in the toolbox.
- Update Version Info <Sequence>
- Validate Arguments <Sequence>
- Validate SourcesDirectory <If>
- String.IsNullOrEmpty(SourcesDirectory) Or (Not Directory.Exists(SourcesDirectory)) <Condition>
- Throw ArgumentException <Then>
- Validate FileSpec <If>
- String.IsNullOrEmpty(FileSpec) <Condition>
- Throw ArgumentException <Then>
- Validate RegularExpression <If>
- String.IsNullOrEmpty(RegularExpression) <Condition>
- Throw ArgumentException <Then>
- Get the Build <GetBuildDetail>
- BuildDetail <Result>
- Extract Version Info <Assign>
- VersionInfo <To>
- New System.Text.RegularExpressions.Regex(RegularExpression).Match(BuildDetail.BuildNumber).Value <Value>
- Form Qualified Spec <Assign>
- FileSpecToMatch <To>
- Path.Combine(SourcesDirectory, “**”, FileSpec) <Value>
- Find Matching Files <FindMatchingFiles>
- FileSpecToMatch <MatchPattern>
- MatchingFiles <Result>
- Handle Matching Files <If>
- MatchingFiles.Any() <Condition>
- <Then>
- Process Matching Files <Sequence>
- Log Version to Set <WriteBuildMessage>
- Enumerate Matching Files <ForEach<System.String>>
- Update Version Info in File <Sequence>
- Log Update <WriteBuildMessage>
- Update Version in File <ReplaceInFile>
- Warn No Matches Found <WriteBuildWarning>
Testing the Activity
It’s a good idea to setup some unit tests to validate your custom activities outside the context of the build process. To get started, create a new unit test project and add a Workflow Activity to serve as your test workflow.
- Add a new C# Test Project to your solution
- Add a project reference to your Activity Library
- Add references to Microsoft.TeamFoundation.Build.Client and Microsoft.TeamFoundation.Build.Workflow
- Add a new C# Workflow Activity item to that project.
Integrating the Activity
- Open Team Explorer
- Right click on the Builds folder and select New Build Definition
- Setup your build definition as desired
- Select the Process tab and click
- Click New to open the New Build Process Template dialog
- Click Select an Existing XAML file, then click Browse
- Browse for the build process template XAML file you checked into version control and click OK
The XAML in your custom build process template will be parsed and you should see your new arguments appear in the build process parameter list as shown below:
Deploying the Activity

Conclusion
Thanks Jim! This is very timely and a good opportunity for us to get started with Team Build 2010 customization. Bob
Nice step-by-step. What if any are the plans for the RTM release for more activities and an improved build process template?
Excellent post Jim! Very nice that you took the time and implemented a "real-world" customization. This is one activity that most team build users will want to use in their build processes.
/Jakob
I’m having issues with this, no matter what I try – I end up with this error message.
TFB210503: An error occurred while initializing a build for build definition ReedBuildActivities: Cannot create unknown type ‘{clr-namespace:BuildActivities}ReedDeploymentActivity’.
I’ve removed all the custom functionality / references from my task (of type CodeActivity<bool>) and it still fails.
I’ve tried restarting the build service, to work around the dll caching issue too.
Also – these seems to be an issue if you include supporting DLL’s into the source-control path that the build controller looks at (in my case, a third party Javascript compression DLL that supports my build activity).
I’m assuming it’s because these extra DLLs don’t support the "CodeActivity" contract – (you get a weird error about missing endpoints, and the build controller just stops)
Also – Be careful, as these new activities target the .Net 4.0 Client Profile framework – meaning you won’t be able to utilize System.Web, etc, until you target the full framework (I’m not sure if working with the full framework is even possible, as I’ve not been able to get anything to work).
Any help with these issues would be appreciated.
Cheers,
Dave
Hi! I couldn’t find a FileCopy workflow activity in the toolbox (beta2)… is there any way to copy files (not folders, I saw the CopyFolder activity and is not what I need) in a build definition?
Thanks!
Dave, any extra assemblies _should_ get loaded when your activity assembly is loaded. It sounds like your activity isn't getting loaded. Do you have the Build Controller's custom assembly path set correctly? Have you checked the fusion logs to see why the load is failing?
BTW – I’m running into a similar issue that Andres has reported with my build controller not finding the custom assembly even though it’s configured to the correct version control path. Here’s the error from the build report:
TFB210503: An error occurred while initializing a build for build definition TeamBuildTestingTest for QueueNewBuild Activity: Cannot create unknown type ‘{clr-namespace:CommunityTeamFoundation.Build.Workflow.Activities}QueueNewBuild’.
We’ve been experiencing the same problems as Dave (not finding custom assembly) and found a comment on which refers to this blog and includes the following step:
Add the namespace xmlns:<ns>="clr-namespace:<ActivityNamespace>;assembly=<AssemblyName>" to your build process’ root element
I edited the xaml file for the build definition to add the assembly=<AssemblyName> to the xmlns entry and I don’t get the message any more – however I didn’t see anything in Jim’s blog to indicate that I needed to do this – have I missed something
Jim, thanks for the great post. I have been working through the issues, but I am stumped on this one: The argument validator fails because the sourcesDirectory is not set. Where is that supposed to be set? Is it manual or automatic? Thanks for any help.
Hi Jim,
I try to do the same than you, but I have a problem.
I want to edit my workflow definition (checkout it from TFS et open it with Visual Studio Editor) / I have my activity build in my toolbox but I can’t drag n drop it to the workflow (i have a black circle). I Can drag n drop TFS / MSBuild activities but not mine.
I think i need to reference it somewhere: How do you reference your custom Acitivty in the XAML Build definition / where do you deploy it (locally)?
Regards,
Florent.
Hi Jim,
I followed your tutorial, create my own activity, add it to the toolbox. All is working fine here.
My problem is that I can’t drag / drop it from toolbox to builddefinition (DefaultTemplate.XAML). I can drag n drop out of the box activity, but not mine.
Any idea?
Regards,
Florent.
@David Jensen: there’s a “SourcesDirectory” variable that gets set by the Agent Scope Activity in the default build process template. I bind the SourcesDirectory parameter on my UpdateVersionInfo activity to that variable. It should be configured that way in the sources I provided in the .zip file. Let me know if you still have issues.
@Florent: You’ll notice that, in the sample I provided, I had branched a copy of the default build process template into a folder in the ActivityPack.Tests unit test project. That gives the workflow designer the project context it needs to load the activities you’re building. It also lets you modify your template without affecting any production builds that may be running on TFS.
Alternatively, you can right click on the Toolbox and select the Choose Items… command, select the System.Activities tab, then click Browse… to select your custom activity assembly. Generally, however, I’ve found this to be less reliable than working with a copy of the XAML file in a project (with Build Action set to None and Copy to Output Directory set to Do not copy).
Jim,
I am trying to figure out how to plug in some post complilation activities to a build using the 2010 DefaultTemplate instead of resorting to the UpgradeTemplate.
Essentially, our application consists of a solution file referencing a Web Site project and multiple class libraries, which we build, precompile, and aspnet_merge the web assemblies for as part of our build process. We do some other things at this stage as well, such as cleaning up the build drop files, deploying the application to a server (db and web), executing NUnit tests, etc. In the past, we've utilized a series of <Exec /> commands for these activities in the TFSBuild.proj files (we're moving from TFS 2005) and leveraged some commands from imported targets such as MSBuild.Community.Tasks.Targets and Microsoft.TeamFoundation.Build.targets.
So my question is: what is the best way to approach this problem? Should I follow what you are describing here and plug this into our build solution to be called after the projects have been built? I've been trying to experiment with this approach using a combination of your sample ActivityPack project in conjunction with some code from an article written by Aaron Hallberg for Beta 1, but have had no success integrating that into this project. I am getting lots of frustrating compilation and validation errors on that Beta 1 code that I have so far been unable to resolve. In this example, he is invoking a script, which in theory should work for me to perform these types of actions.
Are there other build templates available that could help with this? Am I on the right track?
Reference : Writing Custom Activities for TFS Build 2010 (Beta 1)
Hi Jim,
Thanks for the beautiful article.
A couple of days back, we upgraded from TFS 2010 RC to TFS 2010.
We are using TFS build templates for our automated builds. And we added some custom activities to the default build process template. I am just curious to know if there are any changes in default build template from TFS 2010 RC version and TFS 2010 latest release?
Will my RC version’s Build template (which has a couple of custom activities in it) work as-is with the latest version of the TFS 2010? Any source where I can find out the list of differences between RC version and latest version of TFS 2010, if any?
Thanks,
Manish Gupta
cse.manish@gmail.com
As with Dan, I am finding myself having to add assembly=<AssemblyName> to the auto-generated xmlns:local. Which would be OK, except that I have to do it *every time* I modify the template.
Is there a way around this? It's driving me crazy.
Thanks for the great article by the way: it's pretty much the only usable article available online currently.
I have a similar issue as Florent. I can't drag and drop the activity. However, I can't understand your answers. Can you elaborate more? Thanks!
Hi Jim,
Very good article. I have set this up within my environment and I am currently have a couple of minor issues.
1.) After making the custom changes to the default build template, which I did indeed branch to my other project, I then checked that version in to the location where the production templates are. Now if I open that new default build template from that area, all looks good except the custom activity "UpdateVersionInfo". Instead of showing the activity, it shows a XAML error saying that it could not be displayed because "UpdateVersionInfo" could not be found in assembly "XYZ". That assembly was indeed built and was checked into source control under a separate area where I have my custom activity assemblies. In addition, I properly set the controller's activity assembly path to that location. Can one NOT see a custom activity from outside the project in which it was modified? If so, how does one customize it for production and not test? I thought the reason for branching was to customize for testing purposes and then check in to the production area and customize (variable-wise) for the real builds.
2.) When I execute a build that uses the new custom build template and code activity I get the error "Please specify a valid sources directory to search for matching files Parameter name: SourcesDirectory". Where do I set this variable in the process and what do I set it to with regards of PRODUCTION?
Thanks,
Sean Evans
sevans@bizzuka.com
https://blogs.msdn.microsoft.com/jimlamb/2010/02/12/create-a-custom-wf-activity-to-sync-version-and-build-numbers/
OpenCV 3.4 compilation: import cv2 crashes in Python
I compiled OpenCV 3.4, and configuration, compilation, etc. all run just fine.
However, when I try to run import cv2 in a Python file, this results in:
******************************************************************
* FATAL ERROR:                                                   *
* This OpenCV build doesn't support current CPU/HW configuration *
*                                                                *
* Use OPENCV_DUMP_CONFIG=1 environment variable for details      *
******************************************************************
Required baseline features:
SSE - OK
SSE2 - OK
SSE3 - OK
SSSE3 - OK
SSE4.1 - OK
POPCNT - OK
SSE4.2 - OK
AVX - NOT AVAILABLE
OpenCV(3.4.1) Error: Assertion failed (Missing support for required CPU baseline features. Check OpenCV build configuration and required CPU/HW setup.
This happens if I compile OpenCV with -D ENABLE_AVX=ON, but the system should have AVX features. I am running a headless Ubuntu 16.04 with kernel 4.13 on a server that has an Intel Sandy Bridge CPU that supports SSE, SSE2, SSE3, SSSE3, SSE4, SSE4.1, SSE4.2, and AVX.
If I set the cmake configuration with -D ENABLE_AVX=OFF (which I would prefer not to, since it should be considerably slower), I cannot even compile OpenCV: CMake complains about several errors during configuration.
What could be the possible reasons?
My desktop computer also uses an Intel Sandybridge CPU and the same configuration runs just fine there, although I am using Arch Linux there.
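One quick sanity check, independent of the answer, is to look at what the kernel itself reports in /proc/cpuinfo rather than what the hardware is supposed to support. A small helper to parse the flags line (my own sketch):

```python
def cpu_flags(cpuinfo_text):
    """Parse the first 'flags' line of a /proc/cpuinfo dump into a set."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# On the affected machine one would run, e.g.:
#   flags = cpu_flags(open("/proc/cpuinfo").read())
#   print("avx" in flags)
```

If "avx" is missing from the flags even on Sandy Bridge hardware, a hypervisor masking the feature from the guest is a common culprit on servers.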
http://answers.opencv.org/question/189234/opencv-34-compilation-import-cv2-crashes-in-python/
NAME
VOP_GETPAGES, VOP_PUTPAGES -- read or write VM pages from a file
SYNOPSIS
#include <sys/param.h>
#include <sys/vnode.h>
#include <vm/vm.h>

int VOP_GETPAGES(struct vnode *vp, vm_page_t *ma, int count, int reqpage, vm_ooffset_t offset);

int VOP_PUTPAGES(struct vnode *vp, vm_page_t *ma, int count, int sync, int *rtvals, vm_ooffset_t offset);

DESCRIPTION

The arguments are:

vp       The file to access.
ma       Pointer to the first element of an array of pages representing a contiguous region of the file to be read or written.
count    The number of bytes that should be read into the pages of the array.
sync     VM_PAGER_PUT_SYNC if the write should be synchronous.
rtvals   An array of VM system result codes indicating the status of each page written by VOP_PUTPAGES().
reqpage  The index in the page array of the requested page; i.e., the one page which the implementation of this method must handle.
offset   Offset in the file at which the mapped pages begin.

The status of the VOP_PUTPAGES() method is returned on a page-by-page basis in the array rtvals[]. The possible status values are as follows:

VM_PAGER_OK     The page was successfully written. The implementation must call vm_page_undirty(9) to mark the page as clean.
VM_PAGER_PEND   The page was scheduled to be written asynchronously. When the write completes, the completion callback should call vm_object_pip_wakeup(9) and vm_page_io_finish(9) to clear the busy flag and awaken any other threads waiting for this page, in addition to calling vm_page_undirty(9).
VM_PAGER_BAD    The page was entirely beyond the end of the backing file. This condition should not be possible if the vnode's file system is correctly implemented.
VM_PAGER_ERROR  The page could not be written because of an error on the underlying storage medium or protocol.
VM_PAGER_FAIL   Treated identically to VM_PAGER_ERROR.
VM_PAGER_AGAIN  The page was not handled by this request.

The VOP_GETPAGES() method is expected to release any pages in ma that it does not successfully handle, by calling vm_page_free(9). When it succeeds, VOP_GETPAGES() must set the valid bits appropriately. VOP_GETPAGES() must keep reqpage busy. It must unbusy all other successfully handled pages and put them on the appropriate page queue(s). For example, VOP_GETPAGES() may either activate a page (if its wanted bit is set) or deactivate it (otherwise), and finally call vm_page_wakeup(9) to arouse any threads currently waiting for the page to be faulted in.
RETURN VALUES
If it successfully reads ma[reqpage], VOP_GETPAGES() returns VM_PAGER_OK; otherwise, it returns VM_PAGER_ERROR.
AUTHORS

This manual page was written by Doug Rabson and then substantially rewritten by Garrett Wollman.
http://manpages.ubuntu.com/manpages/oneiric/man9/VOP_PUTPAGES.9freebsd.html