dict_next
Name
dict_next — Fetch the next key/value pair
Synopsis
#include "ecdict.h"
int dict_next(a, i, key, val);
    ECDict a;
    ECDict_Iterator *i;
    const char **key;
    const char **val;
Description
If there is another key/value pair in the dictionary, fetch it.
- a
The ECDict. An ECDict is a typedef of an ec_hash_table.
- i
The iterator. An ECDict_Iterator is a typedef of an ec_hash_iter.
- key
The current key.
- val
The value associated with the
key.
Returns

1 if there is a next key. Otherwise, 0 is returned. When 1 is returned:

- *key points to the next key
- *val points to the associated value
Warning
When a key or value is returned, the memory is owned by the dictionary. Your memory can become invalid if something else removes an entry from the dictionary after you have queried it.
It is legal to call this function in any thread.
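The return convention above implies the usual exhaustion loop: keep calling dict_next until it returns 0. The sketch below is illustrative only; it assumes d and it were obtained from the library's own dictionary and iterator creation calls, which this page does not cover.

#include <stdio.h>
#include "ecdict.h"

/* Sketch: print every key/value pair in a dictionary.
   `d` and `it` are assumed to be initialized elsewhere. */
void dump_dict(ECDict d, ECDict_Iterator *it)
{
    const char *key;
    const char *val;

    while (dict_next(d, it, &key, &val)) {
        /* The strings are owned by the dictionary (see the Warning):
           copy them if they must outlive later removals. */
        printf("%s = %s\n", key, val);
    }
}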
It does.
However I changed something since my last email, now I'm getting a different error. Now I get:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dccollins/local/src/yt-hg/yt/data_objects/grid_patch.py", line 147, in __getitem__
    self.get_data(key)
  File "/Users/dccollins/local/src/yt-hg/yt/data_objects/grid_patch.py", line 190, in get_data
    self._generate_field(field)
  File "/Users/dccollins/local/src/yt-hg/yt/data_objects/grid_patch.py", line 135, in _generate_field
    self[field] = self.pf.field_info[field](self)
  File "/Users/dccollins/local/src/yt-hg/yt/data_objects/field_info_container.py", line 308, in __call__
    dd = self._convert_function(data)
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
That leads me to believe this is operator error-- I'll poke around more and get back to you.
Thanks, d.
On Mon, Jun 20, 2011 at 8:58 PM, Matthew Turk matthewturk@gmail.com wrote:
Hi Dave,
Does it show up in the list of detected fields when debug mode is on?
-Matt
On Mon, Jun 20, 2011 at 7:53 PM, david collins antpuncher@gmail.com wrote:
Hi--
I have (a probably stupid) problem. I have a field that I'm writing out to some grids. The field is called 'AvgElec0', and only exists on level>0 grids (non-root-grids). I've defined this field
def _AEx(field, data):
    return data['AvgElec0'][:,:-1,:-1]
add_field('AEx', function=_AEx, validators=[ValidateSpatial(0)], take_log=False, not_in_all=True)
(the slice is for the centering of the field). When I do something like pf.h.grids[1]['AEx'] I get a key error, "AvgElec0," even though double checking the field is in fact in that grid. If I change the code so it's written on all levels, the same pf.h.grids[1]['AEx'] works fine, as one would expect. Has the not_in_all behavior changed? Might I be doing something stupid?
Thanks, d.
-- Sent from my computer.
Yt-dev mailing list Yt-dev@lists.spacepope.org
-- Sent from my computer. | https://mail.python.org/archives/list/yt-dev@python.org/message/JNGW6OTETIC53HUUZ6C6R2OLCEAZKLJ4/ | CC-MAIN-2019-51 | refinedweb | 364 | 53.17 |
- Name (count): a string that identifies the tag.
- Parameter List ((users)): may accept zero or more arguments.
- Body: an optional body can be supplied to some tags using a colon and a closing tag.
There can be many different usages of these four elements depending on the tag's implementation. Let's look at a few examples of how Leaf's built-in tags might be used:
#(variable)
#extend("template"):
    I'm added to a base template!
#endextend
#export("title"):
    Welcome to Vapor
#endexport
#import("body")
#count(friends)
#for(friend in friends):
    <li>#(friend.name)</li>
#endfor
Leaf also supports many expressions you are familiar with in Swift.
- +
- >
- ==
- ||
- etc.
#if(1 + 1 == 2):
    Hello!
#endif
#if(index % 2 == 0):
    This is even index.
#else:
    This is odd index.
#endif
Context¶
In the example from Getting Started, we used a [String: String] dictionary to pass data to Leaf. However, you can pass anything that conforms to Encodable. It's actually preferred to use Encodable structs since [String: Any] is not supported. This means you can not pass in an array, and should instead wrap it in a struct:
struct WelcomeContext: Encodable {
    var title: String
    var numbers: [Int]
}
return req.view.render("home", WelcomeContext(title: "Hello!", numbers: [42, 9001]))
That will expose title and numbers to our Leaf template, which can then be used inside tags. For example:

<h1>#(title)</h1>
#for(number in numbers):
    <p>#(number)</p>
#endfor

Conditions¶

Leaf's #if tag can test a variable; if the variable exists in the context, it evaluates to true:

#if(title):
    The title is #(title).
#endif
You can also write comparisons, for example:
#if(title == "Welcome"):
    This is a friendly web page.
#else:
    No strangers allowed!
#endif
If you want to use another tag as part of your condition, you should omit the # for the inner tag. For example:

#if(count(users) > 0):
    You have users!
#else:
    There are no users yet :(
#endif
You can also use #elseif statements:

#if(title == "Welcome"):
    Hello new user!
#elseif(title == "Welcome back!"):
    Hello old user
#else:
    Unexpected page!
#endif

Loops¶

If you pass an array of items into the context, Leaf can loop over them with its #for tag. For example, we could render a context that carries a list of planets:

req.view.render("solarSystem", SolarSystem())
We could then loop over them in Leaf like this:
Planets:
<ul>
#for(planet in planets):
    <li>#(planet)</li>
#endfor
</ul>
This would render a view that looks like:
Planets:
- Venus
- Earth
- Mars
Extending templates¶
Leaf's #extend tag allows you to copy the contents of one template into another. When using this tag, you should always omit the template file's .leaf extension.
Extending is useful for copying in a standard piece of content, for example a page footer, advert code or table that's shared across multiple pages:
#extend("footer")
This tag is also useful for building one template on top of another. For example, you might have a layout.leaf file that includes all the code required to lay out your website – HTML structure, CSS and JavaScript – with some gaps in place that represent where page content varies.
Using this approach, you would construct a child template that fills in its unique content, then extends the parent template that places the content appropriately. To do this, you can use the #export and #import tags to store and later retrieve content from the context.
For example, you might create a child.leaf template like this:

#extend("master"):
    #export("body"):
        <p>Welcome to Vapor!</p>
    #endexport
#endextend
We call #export to store some HTML and make it available to the template we're currently extending. We then render master.leaf and use the exported data when required along with any other context variables passed in from Swift. For example, master.leaf might look like this:

<html>
    <head>
        <title>#(title)</title>
    </head>
    <body>#import("body")</body>
</html>
Here we are using #import to fetch the content passed to the #extend tag. When passed ["title": "Hi there!"] from Swift, child.leaf will render as follows:

<html>
    <head>
        <title>Hi there!</title>
    </head>
    <body><p>Welcome to Vapor!</p></body>
</html>
Other tags¶
#count¶
The #count tag returns the number of items in an array. For example:
Your search matched #count(matches) pages.
#lowercased¶
The #lowercased tag lowercases all letters in a string.
#lowercased(name)
#uppercased¶
The #uppercased tag uppercases all letters in a string.
#uppercased(name)
#capitalized¶
The #capitalized tag uppercases the first letter in each word of a string and lowercases the others. See String.capitalized for more information.
#capitalized(name)
#contains¶
The #contains tag accepts an array and a value as its two parameters, and returns true if the array in parameter one contains the value in parameter two.
#if(contains(planets, "Earth")):
    Earth is here!
#else:
    Earth is not in this array.
#endif
#date¶
The #date tag formats dates into a readable string. By default it uses ISO8601 formatting.
render(..., ["now": Date()])
The time is #date(now)
You can pass a custom date formatter string as the second argument. See Swift's DateFormatter for more information.
The date is #date(now, "yyyy-MM-dd")
#unsafeHTML¶
The #unsafeHTML tag acts like a variable tag, e.g. #(variable). However, it does not escape any HTML that variable may contain:
The time is #unsafeHTML(styledTitle)
Note
You should be careful when using this tag to ensure that the variable you provide it does not expose your users to an XSS attack. | https://docs.vapor.codes/leaf/overview/ | CC-MAIN-2022-21 | refinedweb | 850 | 65.93 |
Auto-tuning a convolutional network for Mobile GPU¶
Author: Lianmin Zheng
Auto-tuning for a specific device is critical for getting the best performance. This is a tutorial about how to tune a whole convolutional network.
The operator implementation for Mobile GPU in TVM is written in template form. The template has many tunable knobs (tile factor, vectorization, unrolling, etc). We will tune all convolution, depthwise convolution and dense operators in the network. We also release pre-tuned parameters for some arm devices. You can go to Mobile GPU Benchmark to see the results.
Install dependencies¶
To use the autotvm package in tvm, we need to install some extra dependencies. (change “3” to “2” if you use python2):
pip3 install --user psutil xgboost tornado
To make tvm run faster during tuning, it is recommended to use cython as FFI of tvm. In the root directory of tvm, execute (change “3” to “2” if you use python2):
pip3 install --user cython
sudo make cython3
Now return to python code. Import packages.
import os

import numpy as np

import nnvm.testing
import nnvm.compiler
import tvm
from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
from tvm.contrib.util import tempdir
import tvm.contrib.graph_runtime as runtime
Define network¶
First we need to define the network in nnvm symbol API.
We can load some pre-defined networks from nnvm.testing. We can also load models from MXNet, ONNX and TensorFlow (see the NNVM Compiler Tutorials for more details).
def get_network(name, batch_size):
    """Get the symbol definition and random weight of a network"""
    input_shape = (batch_size, 3, 224, 224)
    output_shape = (batch_size, 1000)

    if "resnet" in name:
        n_layer = int(name.split('-')[1])
        net, params = nnvm.testing.resnet.get_workload(num_layers=n_layer, batch_size=batch_size)
    elif "vgg" in name:
        n_layer = int(name.split('-')[1])
        net, params = nnvm.testing.vgg.get_workload(num_layers=n_layer, batch_size=batch_size)
    elif name == 'mobilenet':
        net, params = nnvm.testing.mobilenet.get_workload(batch_size=batch_size)
    elif name == 'squeezenet_v1.1':
        net, params = nnvm.testing.squeezenet.get_workload(batch_size=batch_size, version='1.1')
    elif name == 'inception_v3':
        input_shape = (1, 3, 299, 299)
        net, params = nnvm.testing.inception_v3.get_workload(batch_size=batch_size)
    elif name == 'custom':
        # an example for custom network
        from nnvm.testing import utils
        net = nnvm.sym.Variable('data')
        net = nnvm.sym.conv2d(net, channels=4, kernel_size=(3,3), padding=(1,1))
        net = nnvm.sym.flatten(net)
        net = nnvm.sym.dense(net, units=1000)
        net, params = utils.create_workload(net, batch_size, (3, 224, 224))
    elif name == 'mxnet':
        # an example for mxnet model
        from mxnet.gluon.model_zoo.vision import get_model
        block = get_model('resnet18_v1', pretrained=True)
        net, params = nnvm.frontend.from_mxnet(block)
        net = nnvm.sym.softmax(net)
    else:
        raise ValueError("Unsupported network: " + name)

    return net, params, input_shape, output_shape
Start RPC Tracker¶
TVM uses RPC session to communicate with ARM boards. During tuning, the tuner will send the generated code to the board and measure the speed of code on the board.
To scale up the tuning, TVM uses RPC Tracker to manage distributed devices. The RPC Tracker is a centralized master node. We can register all devices to the tracker. For example, if we have 10 phones, we can register all of them to the tracker and run 10 measurements in parallel, accelerating the tuning process.
Register devices to RPC Tracker¶
Now we can register our devices to the tracker. The first step is to build tvm runtime for the ARM devices.
For Linux: Follow this section Build TVM Runtime on Device to build tvm runtime on the device. Then register the device to tracker by
python -m tvm.exec.rpc_server --tracker=[HOST_IP]:9190 --key=rk3399
(replace [HOST_IP] with the IP address of your host machine)
For Android: Follow this readme page to install the tvm rpc apk on the android device. Make sure you can pass the android rpc test. Then you have already registered your device. During tuning, you have to go to developer options and enable "Keep screen awake during charging", and charge your phone to keep it stable.
After registering devices, we can confirm it by querying rpc_tracker
python -m tvm.exec.query_rpc_tracker --host=0.0.0.0 --port=9190
For example, if we have 2 Huawei mate10 pro, 11 Raspberry Pi 3B and 2 rk3399, the output can be
Queue Status
----------------------------------
key          total  free  pending
----------------------------------
mate10pro    2      2     0
rk3399       2      2     0
rpi3b        11     11    0
----------------------------------
You can register multiple devices to the tracker to accelerate the measurement in tuning.
Set Tuning Options¶
Before tuning, we should apply some configurations. Here I use an RK3399 board as an example. In your setting, you should modify the target and device_key accordingly. Set use_android to True if you use an android phone.
#### DEVICE CONFIG ####
target = tvm.target.create('opencl -device=mali')

# Replace "aarch64-linux-gnu" with the correct target of your board.
# This target host is used for cross compilation. You can query it by :code:`gcc -v` on your device.
target_host = 'llvm -target=aarch64-linux-gnu'

# Also replace this with the device key in your tracker
device_key = 'rk3399'

# Set this to True if you use android phone
use_android = False

#### TUNING OPTION ####
network = 'resnet-18'
log_file = "%s.%s.log" % (device_key, network)
dtype = 'float32'

tuning_option = {
    'log_filename': log_file,

    'tuner': 'xgb',
    'n_trial': 1000,
    'early_stopping': 450,

    'measure_option': autotvm.measure_option(
        builder=autotvm.LocalBuilder(
            build_func='ndk' if use_android else 'default'),
        runner=autotvm.RPCRunner(
            device_key, host='localhost', port=9190,
            number=10,
            timeout=5,
        ),
    ),
}
Note
How to set tuning options
In general, the default values provided here work well.
If you have enough time budget, you can set n_trial, early_stopping larger, which makes the tuning run longer.
If your device runs very slow or your conv2d operators have many GFLOPs, consider setting the timeout larger.

Begin Tuning¶

Now we can extract tuning tasks from the network and begin tuning. Here we provide a simple utility function to tune a list of tasks in sequential order.

# You can skip the implementation of this function for this tutorial.
def tune_tasks(tasks,
               measure_option,
               tuner='xgb',
               n_trial=1000,
               early_stopping=None,
               log_filename='tuning.log',
               try_winograd=True):
    if try_winograd:
        for i in range(len(tasks)):
            try:  # try winograd template
                tsk = autotvm.task.create(tasks[i].name, tasks[i].args,
                                          tasks[i].target, tasks[i].target_host, 'winograd')
                tasks.append(tsk)
            except Exception:
                pass

    # create tmp log file
    tmp_log_file = log_filename + ".tmp"
    if os.path.exists(tmp_log_file):
        os.remove(tmp_log_file)

    for i, tsk in enumerate(reversed(tasks)):
        prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))

        # create tuner
        if tuner == 'xgb' or tuner == 'xgb-rank':
            tuner_obj = XGBTuner(tsk, loss_type='rank')
        elif tuner == 'ga':
            tuner_obj = GATuner(tsk, pop_size=50)
        elif tuner == 'random':
            tuner_obj = RandomTuner(tsk)
        elif tuner == 'gridsearch':
            tuner_obj = GridSearchTuner(tsk)
        else:
            raise ValueError("Invalid tuner: " + tuner)

        # do tuning
        tuner_obj.tune(n_trial=min(n_trial, len(tsk.config_space)),
                       early_stopping=early_stopping,
                       measure_option=measure_option,
                       callbacks=[
                           autotvm.callback.progress_bar(n_trial, prefix=prefix),
                           autotvm.callback.log_to_file(tmp_log_file)])

    # pick best records to a cache file
    autotvm.record.pick_best(tmp_log_file, log_filename)
    os.remove(tmp_log_file)

Finally, we launch tuning jobs and evaluate the end-to-end performance.

def tune_and_evaluate(tuning_opt):
    # extract workloads from nnvm graph
    print("Extract tasks...")
    net, params, input_shape, out_shape = get_network(network, batch_size=1)
    tasks = autotvm.task.extract_from_graph(net, target=target, target_host=target_host,
                                            shape={'data': input_shape}, dtype=dtype,
                                            symbols=(nnvm.sym.conv2d, nnvm.sym.dense))

    # run tuning tasks
    print("Tuning...")
    tune_tasks(tasks, **tuning_opt)

    # compile kernels with history best records
    with autotvm.apply_history_best(log_file):
        print("Compile...")
        with nnvm.compiler.build_config(opt_level=3):
            graph, lib, params = nnvm.compiler.build(
                net, target=target, target_host=target_host,
                shape={'data': input_shape}, params=params, dtype=dtype)

        # export library
        tmp = tempdir()
        if use_android:
            from tvm.contrib import ndk
            filename = "net.so"
            lib.export_library(tmp.relpath(filename), ndk.create_shared)
        else:
            filename = "net.tar"
            lib.export_library(tmp.relpath(filename))

        # upload module to device
        print("Upload...")
        remote = autotvm.measure.request_remote(device_key, 'localhost', 9190, timeout=10000)
        remote.upload(tmp.relpath(filename))
        rlib = remote.load_module(filename)

        # upload parameters to device
        ctx = remote.context(str(target), 0)
        module = runtime.create(graph, rlib, ctx)
        data_tvm = tvm.nd.array((np.random.uniform(size=input_shape)).astype(dtype))
        module.set_input('data', data_tvm)
        module.set_input(**params)

        # evaluate
        print("Evaluate inference time cost...")
        ftimer = module.module.time_evaluator("run", ctx, number=1, repeat=30)
        prof_res = np.array(ftimer().results) * 1000  # convert to millisecond
        print("Mean inference time (std dev): %.2f ms (%.2f ms)" %
              (np.mean(prof_res), np.std(prof_res)))

Sample Output¶

One sample output is listed below. Tuning takes about 3 hours on a 32T AMD Ryzen Threadripper.
Extract tasks...
Tuning...
[Task  1/17]  Current/Best:   25.30/  39.12 GFLOPS | Progress: (992/1000) | 751.22 s Done.
[Task  2/17]  Current/Best:   40.70/  45.50 GFLOPS | Progress: (736/1000) | 545.46 s Done.
[Task  3/17]  Current/Best:   38.83/  42.35 GFLOPS | Progress: (992/1000) | 1549.85 s Done.
[Task  4/17]  Current/Best:   23.31/  31.02 GFLOPS | Progress: (640/1000) | 1059.31 s Done.
[Task  5/17]  Current/Best:    0.06/   2.34 GFLOPS | Progress: (544/1000) | 305.45 s Done.
[Task  6/17]  Current/Best:   10.97/  17.20 GFLOPS | Progress: (992/1000) | 1050.00 s Done.
[Task  7/17]  Current/Best:    8.98/  10.94 GFLOPS | Progress: (928/1000) | 421.36 s Done.
[Task  8/17]  Current/Best:    4.48/  14.86 GFLOPS | Progress: (704/1000) | 582.60 s Done.
[Task  9/17]  Current/Best:   10.30/  25.99 GFLOPS | Progress: (864/1000) | 899.85 s Done.
[Task 10/17]  Current/Best:   11.73/  12.52 GFLOPS | Progress: (608/1000) | 304.85 s Done.
[Task 11/17]  Current/Best:   15.26/  18.68 GFLOPS | Progress: (800/1000) | 747.52 s Done.
[Task 12/17]  Current/Best:   17.48/  26.71 GFLOPS | Progress: (1000/1000) | 1166.40 s Done.
[Task 13/17]  Current/Best:    0.96/  11.43 GFLOPS | Progress: (960/1000) | 611.65 s Done.
[Task 14/17]  Current/Best:   17.88/  20.22 GFLOPS | Progress: (672/1000) | 670.29 s Done.
[Task 15/17]  Current/Best:   11.62/  13.98 GFLOPS | Progress: (736/1000) | 449.25 s Done.
[Task 16/17]  Current/Best:   19.90/  23.83 GFLOPS | Progress: (608/1000) | 708.64 s Done.
[Task 17/17]  Current/Best:   17.98/  22.75 GFLOPS | Progress: (736/1000) | 1122.60 s Done.
Compile...
Upload...
Evaluate inference time cost...
Mean inference time (std dev): 128.05 ms (7.74 ms)
Total running time of the script: ( 0 minutes 0.002 seconds)
Gallery generated by Sphinx-Gallery | https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html | CC-MAIN-2019-09 | refinedweb | 1,500 | 54.79 |
Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char txt[]) that prints all occurrences of pat[] in txt[]. You may assume that n > m.
Naive Pattern Searching:
Slide the pattern over text one by one and check for a match. If a match is found, then slides by 1 again to check for subsequent matches.
C
// C program for Naive Pattern Searching algorithm
#include <stdio.h>
#include <string.h>

void search(char* pat, char* txt)
{
    int M = strlen(pat);
    int N = strlen(txt);

    /* A loop to slide pat[] one by one */
    for (int i = 0; i <= N - M; i++) {
        int j;

        /* For current index i, check for pattern match */
        for (j = 0; j < M; j++)
            if (txt[i + j] != pat[j])
                break;

        if (j == M) // if pat[0...M-1] = txt[i, i+1, ...i+M-1]
            printf("Pattern found at index %d\n", i);
    }
}

// Driver program to test the above function
int main()
{
    char txt[] = "AABAACAADAABAAABAA";
    char pat[] = "AABA";
    search(pat, txt);
    return 0;
}
Python
# Python program for Naive Pattern Searching
def search(pat, txt):
    M = len(pat)
    N = len(txt)

    # A loop to slide pat[] one by one
    for i in xrange(N - M + 1):
        # For current index i, check for pattern match
        j = 0
        while j < M and txt[i + j] == pat[j]:
            j += 1

        if j == M:  # if pat[0...M-1] == txt[i, i + 1, ...i + M-1]
            print "Pattern found at index " + str(i)

# Driver program to test the above function
txt = "AABAACAADAABAAABAA"
pat = "AABA"
search(pat, txt)

# This code is contributed by Bhavya Jain
Java
// Java program for Naive Pattern Searching
public class NaiveSearch {

    public static void search(String txt, String pat)
    {
        int M = pat.length();
        int N = txt.length();

        /* A loop to slide pat one by one */
        for (int i = 0; i <= N - M; i++) {
            int j;

            /* For current index i, check for pattern match */
            for (j = 0; j < M; j++)
                if (txt.charAt(i + j) != pat.charAt(j))
                    break;

            if (j == M) // if pat[0...M-1] = txt[i, i+1, ...i+M-1]
                System.out.println("Pattern found at index " + i);
        }
    }

    public static void main(String[] args)
    {
        String txt = "AABAACAADAABAAABAA";
        String pat = "AABA";
        search(txt, pat);
    }
}
// This code is contributed by Harikishore
Output:
Pattern found at index 0 Pattern found at index 9 Pattern found at index 13
What is the best case?
The best case occurs when the first character of the pattern is not present in text at all.
txt[] = "AABCCAADDEE"; pat[] = "FAA";
The number of comparisons in best case is O(n).
What is the worst case?
The worst case of Naive Pattern Searching occurs in following scenarios.
1) When all characters of the text and pattern are same.
txt[] = "AAAAAAAAAAAAAAAAAA"; pat[] = "AAAAA";
2) Worst case also occurs when only the last character is different.
txt[] = "AAAAAAAAAAAAAAAAAB"; pat[] = "AAAAB";
Number of comparisons in worst case is O(m*(n-m+1)). Although strings which have repeated characters are not likely to appear in English text, they may well occur in other applications (for example, in binary texts). The KMP matching algorithm improves the worst case to O(n). We will be covering KMP in the next post. Also, we will be writing more posts to cover all pattern searching algorithms and data structures.
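To make the counting argument concrete, here is a small Python 3 sketch (separate from the Python 2 listing above) that counts the character comparisons performed by the naive loop:

```python
def count_comparisons(pat, txt):
    """Count character comparisons made by the naive search."""
    M, N = len(pat), len(txt)
    comparisons = 0
    for i in range(N - M + 1):
        for j in range(M):
            comparisons += 1           # one txt[i+j] vs pat[j] comparison
            if txt[i + j] != pat[j]:
                break
    return comparisons

# Worst case: all characters equal, so each of the (n-m+1) = 14
# alignments costs m = 5 comparisons: 5 * 14 = 70.
print(count_comparisons("A" * 5, "A" * 18))

# Best case: pat's first character never occurs in txt, so each of the
# (n-m+1) = 9 alignments fails after a single comparison.
print(count_comparisons("FAA", "AABCCAADDEE"))
```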
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. | https://www.geeksforgeeks.org/searching-for-patterns-set-1-naive-pattern-searching/ | CC-MAIN-2018-09 | refinedweb | 441 | 79.19 |
Homework 2
Due by 11:59pm on Tuesday, 2/16. See Lab 0 for instructions on submitting
assignments.
Using OK: If you have any questions about using OK, please refer to this guide.
Readings: You might find the following references useful:
Required questions

Accumulate
Show that both summation and product are instances of a more general function, called accumulate, with the following signature:

from operator import add, mul

def accumulate(combiner, base, n, term):
    """Return the result of combining the first N terms in a sequence.

    The terms to be combined are TERM(1), TERM(2), ..., TERM(N).
    COMBINER is a two-argument function.  Treating COMBINER as if it were
    a binary operator, the return value is
    BASE COMBINER TERM(1) COMBINER TERM(2) ... COMBINER TERM(N)
    """
    "*** YOUR CODE HERE ***"

Question 3: Filtered Accumulate
Show how to extend the accumulate function to allow for filtering the results produced by its term argument. The function filtered_accumulate has the following signature:

def true(x):
    return True

def false(x):
    return False

def odd(x):
    return x % 2 == 1

def filtered_accumulate(combiner, base, pred, n, term):
    """Return the result of combining the terms in a sequence of N terms
    that satisfy the predicate PRED.

    >>> filtered_accumulate(add, 0, true, 5, identity)   # 0 + 1 + 2 + 3 + 4 + 5
    15
    >>> filtered_accumulate(add, 11, false, 5, identity) # 11
    11
    >>> filtered_accumulate(add, 0, odd, 5, identity)    # 0 + 1 + 3 + 5
    9
    >>> filtered_accumulate(mul, 1, odd, 5, square)      # 1 * 1 * 9 * 25
    225
    >>> # Do not use while/for loops or recursion
    >>> from construct_check import check
    >>> check(HW_SOURCE_FILE, 'filtered_accumulate',
    ...       ['While', 'For', 'Recursion', 'FunctionDef'])
    True
    """
    "*** YOUR CODE HERE ***"
    return _______
filtered_accumulate(combiner, base, pred, n, term) takes the following arguments:

- combiner, base, term and n: the same arguments as accumulate.
- pred: a one-argument predicate function applied to the values of term.

Implement filtered_accumulate with a single return statement containing a call to accumulate. Do not write any loops, def statements, or recursive calls to filtered_accumulate.
Hint: It may be useful to use one line if-else statements, otherwise known as ternary operators. The syntax is described in the Python documentation:

    The expression x if C else y first evaluates the condition, C rather than x. If C is true, x is evaluated and its value is returned; otherwise, y is evaluated and its value is returned.
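For instance (the names here are just for illustration), a conditional expression picks one of two values inside a single expression:

```python
x = 5
parity = "odd" if x % 2 == 1 else "even"
print(parity)  # odd

# Because the whole thing is an expression, it can appear directly
# inside a return statement or another call.
print(len("abc") if x > 0 else 0)  # 3
```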
Use OK to test your code:
python3 ok -q filtered_accumulate
Question 4: Repeated
Implement repeated(f, n):

- f is a one-argument function that takes a number and returns another number.
- n is a non-negative integer.

repeated returns another function that, when given an argument x, will compute f(f(....(f(x))....)) (apply f a total of n times).

def repeated(f, n):
    "*** YOUR CODE HERE ***"
Hint: You may find it convenient to use compose1 from the textbook:

def compose1(f, g):
    """Return a function h, such that h(x) = f(g(x))."""
    def h(x):
        return f(g(x))
    return h
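To see how composition chains applications (increment and double are made-up helpers for this illustration):

```python
def compose1(f, g):
    """Return a function h, such that h(x) = f(g(x))."""
    def h(x):
        return f(g(x))
    return h

def increment(x):
    return x + 1

def double(x):
    return x * 2

# h(x) = double(increment(x)), so h(3) = double(4) = 8
h = compose1(double, increment)
print(h(3))  # 8
```

Note that order matters: compose1(increment, double) would instead compute increment(double(x)).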
Use OK to test your code:
python3 ok -q repeated
Question 5:

Question 6: Ping-pong

Do not use any assignment statements; however, you may use def statements.

Hint: If you're stuck, try implementing pingpong first using assignment and a while statement. Any name that changes value will become an argument to a function in the recursive definition.

Question 8:
Extra questions
Extra questions are not worth extra credit and are entirely optional. They are designed to challenge you to think creatively!
Question 9: Y combinator!
Question 10: | https://inst.eecs.berkeley.edu/~cs61a/sp16/hw/hw02/ | CC-MAIN-2018-05 | refinedweb | 521 | 52.49 |
howdy,
this is my first experience with class objects. i created this class however i have some questions about why it works.
by pure luck i got the "GetName()" and "SetName()" functions to work by adding the pointer ref's - something i saw in a book - but i dont know what they (the pointers) do.
in the "Label()" function i can name objects but i dont think i am creating objects. how would i create multiple objects of this class from user input.
this is my first try with classes so any other suggestions would be appreciated.

Thanks

Code:
#include <iostream.h>
#include <math.h>
using namespace std;

class SBeam
{
public:
    char SBeamName;
    SBeam();
    ~SBeam();
    char* GetName();
    char* SetName();
    char Label();
private:
    int Depth();
    char ItsName[6];
    int SbeamDepth;
    int depth;
};

SBeam::SBeam()
{
}

SBeam::~SBeam()
{
}

char* SBeam::SetName()
{
    cout << "Enter Beam Name: \n";
    cin.getline(ItsName, 6);
    Depth();
    return ItsName;
}

char* SBeam::GetName()
{
    return ItsName;
}

int SBeam::Depth()
{
    cout << "Enter Beam Depth: ";
    cin >> depth;
    return depth;
}

char SBeam::Label()
{
    cout << "The Beam Label: " << GetName() << ", " << depth << "!\n";
    return 0;
}

int main()
{
    SBeam B1;
    B1.SetName();
    B1.Label();
    return 0;
}
M.R. | http://cboard.cprogramming.com/cplusplus-programming/9832-first-class-try.html | CC-MAIN-2016-18 | refinedweb | 189 | 72.87 |
Assume that we want to find all li elements whose class name begins with a known string:

soup.find_all("li", {"class": KNOWN_STRING})
soup.select("li[class^="+KNOWN_STRING)
I would use regex in this approach.
import re soup.find_all('li', {'class': re.compile(r'regex_pattern')})
Because you have a known string but an arbitrary (I'm assuming unknown) number you can use a regular expression to define the pattern of what you expect the string to be. Example:
re.compile(r'^KNOWN_STRING[0-9]+$')
This would find all known strings with one or more numbers at the end. See this for more about regular expressions in Python.
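To see which class strings such a pattern accepts, you can test it with the re module alone (the class names below are made up; substitute your own known string):

```python
import re

# Hypothetical class attributes as they might appear in the page
classes = ["item42", "item7", "itemX", "otheritem3", "item"]

pattern = re.compile(r'^item[0-9]+$')  # "item" plus one or more digits
matches = [c for c in classes if pattern.match(c)]
print(matches)  # ['item42', 'item7']
```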
Would this be correct given two digits in the id? soup.find_all('li', {'class': re.compile(r'^TheMatch v-1 c-[0-9][0-9]+$')}). I assume that it wouldn't.
For two digits at the end you would do:
soup.find_all('li', {'class': re.compile(r'^TheMatch v-1 c-[0-9]{2}$')})
The + just means one or more of the previous regular expression. What I did was specify in brackets ({2}) after the regular expression the number of instances I was expecting to be there (2).
A new issue has been created in JIRA.
---------------------------------------------------------------------
View the issue:
Here is an overview of the issue:
---------------------------------------------------------------------
Key: JELLY-137
Summary: Tag libraries should fail on unknown tag name
Type: Improvement
Status: Unassigned
Priority: Major
Project: jelly
Components:
core / taglib.core
Versions:
1.0-beta-5
Assignee:
Reporter: Hans Gilde
Created: Fri, 10 Sep 2004 10:14 PM
Updated: Fri, 10 Sep 2004 10:14 PM
Description:
If a tag can't be found by a tag library, it's output as XML. This is fine for tags where
Jelly can't find a TagLibrary for the namespace.
But, if a TagLibrary is found but the tag isn't, I think that there should be an error either
at parse time or run time. For most libraries, I don't think that users want to output XML
if they misspell a tag name.
The TagLibrary always has the opportunity to return a tag that outputs the XML. So, for the
cases that do need to output XML if there's no tag found, there's a solution.
--------------------------------------------------------------------- | http://mail-archives.apache.org/mod_mbox/commons-dev/200409.mbox/%3C581233012.1094879677547.JavaMail.apache@nagoya%3E | CC-MAIN-2017-43 | refinedweb | 178 | 78.38 |
Copyright © 2014 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
HTML Imports are a way to include and reuse HTML documents in other HTML Working Draft. If you wish to make comments regarding this document, please send them to public-webapps@w3.org (subscribe, archives) with a Subject: prefix of
[html-imports]..
HTML Imports, or just imports from here on, are HTML documents that are linked as external resources from another HTML document. The document that links to an import is called an import referrer. For any given import, an import referrer ancestor is its import referrer or any import referrer ancestor of its import referrer.
An import referrer which has its own browsing context is called the master document. From the perspective of the import referrer, an import is represented as a Document, called the imported document. The imported documents don't have a browsing context.
Document, called the imported document. The imported documents don't have browsing context.
The set of all imports associated with the master document forms an import map of the master document. The map stores imports as its items, with their import locations as keys. The import map is empty at the beginning. New items are added to the map as the import fetching algorithm specifies.
The "import" link type

To enable declaring imports, a new link type is added:

<link rel="import" href="/imports/heart.html">

The import attribute on HTMLLinkElement
partial interface LinkImport {
    readonly attribute Document? import;
};
HTMLLinkElement implements LinkImport;
On getting, the import attribute must return null, if:

- link does not represent an import
- the link element is not in a Document

Otherwise, the attribute must return the imported document for the import, represented by the link element.
The same object must be returned each time.
Here's how one could access the imported document, mentioned in the previous example:
var link = document.querySelector('link[rel=import]');
var heart = link.import;
// Access DOM of the document in /imports/heart.html
var pulse = heart.querySelector('div.pulse');

The state of "has an import that is blocking scripts" can change each time an existing import is completely loaded or a new import load is started. The HTML parser re-checks whether to unblock at each such timing.
Each document has an import link list, each item of which consists of link (the link element) and location (a URL). Also, an item is optionally marked as cycle. The list is empty at the beginning, and items are added as the import request algorithm specifies.
An imported document has zero or more import ancestors. An import ancestor is a document. If the import link list of document A contains a non-cycle item whose location points to document B, A is an import ancestor of B. B is also called the import parent of the Document. The import ancestor relation is transitive: if document C is an import ancestor of document B and document B is an import ancestor of document A, C is an import ancestor of document A.

An imported document also has one or more import predecessors. An import predecessor is a document. If the URL of document A is located before the URL of document B in the import link list of B's import parent, A is an import predecessor of B. The import predecessor relation is transitive: if document A is an import predecessor of document B and B is a predecessor of document C, A is an import predecessor of C.
The documents that are in either the import ancestors or the import predecessors of document A, or are linked from the non-cycle linking structure of the import link lists, form a directed acyclic graph (DAG). Each node of the DAG is a document and each edge is a link. It cannot be a tree because more than one link can point to the same import. Any cycle is marked by the import request algorithm and excluded from dependency calculation. The edges of each node are ordered in terms of the import link list. The import predecessor selection is aware of this order.
The difference between the import referrer and the import parent is that the import referrer reflects the state of the node tree, while the import parent is built by the algorithm described in this document.
When user agents attempt to obtain a linked import, they must also run the import request algorithm, which is equivalent to running these steps:
LINK: the link element that creates an external resource link to the import.
All imports linked from documents that are the master document or documents in the import map must be loaded using potentially CORS-enabled fetch with the mode set to "Anonymous".
When an import is fetched, the user agent must run the import fetching algorithm, which must be equivalent to running these steps:
LINK: the link element which makes the external resource link to the import.
IMPORT: a Document, the document's address of which is LOCATION.
EOF character
The loading attempt must be considered successful if IMPORT is not null on the algorithm completion, and failed otherwise.
The link element fires a simple event called load for a successful loading attempt. For a failed attempt, it fires a simple event named error.
Content Security Policy imports.
All import dependents must be loaded before DOMContentLoaded is fired. See Bug 23526.
Insert the following step between step 2 and step 3 of the document.write() method. Imports must be considered as input sources of the style processing model of the master document.
Between declarations from different documents, the document order in terms of order of appearance is defined based on the document order of the link elements of their import referrer ancestors which are in the same document. If there is more than one such document, the comparison result in the first document, in document order, wins.
Events in imports. | http://www.w3.org/TR/html-imports/ | CC-MAIN-2014-35 | refinedweb | 922 | 55.44 |
Continuing on from Part II, we will discuss the third type of binding available to ActiveX controls within InfoPath. This one is called "Node Binding" (which corresponds to "Field or Group (any data type)" in the drop-down of the custom control wizard).
What happens if you want your ActiveX control to interact with data that other parts of your form will interact with? For example, you want your ActiveX control to get and set data which a repeating table elsewhere also sets and gets from. Stream binding writes to a separate namespace, so no part of the InfoPath solution will be able to interact with that data. Node binding is the solution to this. Node binding, as the name indicates, allows the ActiveX control to connect directly to the DOM. The ActiveX control is passed an IXMLDOMNode pointer, and anything done by the ActiveX control gets reflected in the InfoPath form DOM. So this type of binding allows you to bind to any part of the DOM, whether it be a leaf node, the root node, or anything in between.
As an example, let's say you had a repeating table containing values that you would like to graph. With a graphing ActiveX control, you could bind it to the same node as the repeating table. The chart control could then pull the data and create a graph appropriately, and any changes to the repeating table would get reflected in the graphs.
The only downside to this type of binding is that you will most likely have to write these types of controls yourself. With simple binding and stream binding, you may be able to find controls you can use out of the box, but controls that expose a property for an IXMLDOMNode are very few.
This ends the three-part series on the types of binding for custom controls in InfoPath. I hope this has been helpful. Please use the comment form below if you have any questions or need any clarifications.
Find all pairs of numbers whose sum is equal to a given number in C++
This C++ program finds all the pairs of numbers in an array whose sum is equal to a specific number, using a nested loop and conditional statements. This problem is a common application of arrays, which are useful in many places. Following is a short and simple solution to the task.
How to find all the pairs of numbers in an array whose sum is equal to a given number
Problem statement
There is an array of size n, where n is the number of elements in it. Find all the pairs of numbers whose sum is equal to a number given by the user.
- The size of the array is given by the user.
- Array elements can be in any order.
Problem approach
- Declare an array of size n.
- Store the array elements as per the user input.
- Take the target number as input from the user.
- Find the required pairs of numbers and print them on the screen.
Program/Source code
The following C++ program finds all the pairs of numbers in an array whose sum is equal to a given number. It was written and successfully compiled in CodeBlocks v16.01.
/* C++ program to find the pairs of numbers present in an array whose sum is equal to a given number.
** Give different sizes and values for the array to get different results.
** Enter a valid sum, otherwise nothing will be printed on screen. */
#include <iostream>
using namespace std;

int main()
{
    int size, num;
    cout << "Enter the size of array: ";
    cin >> size;
    int array[size];                       // array declaration (variable-length array, a g++ extension)
    cout << "Enter array elements: ";
    for (int i = 0; i < size; i++)
        cin >> array[i];                   // input array values
    cout << "Enter the number whose pairs are to be found: ";
    cin >> num;
    for (int i = 0; i < size - 1; i++) {   // nested loop over all pairs
        for (int j = i + 1; j < size; j++) {
            if (array[i] + array[j] == num)
                cout << array[i] << " and " << array[j] << endl;  // print pairs
        }
    }
    return 0;
}
Output Example
Enter the size of array: 8
Enter array elements: 5 6 4 1 5 3 9 8
Enter the number whose pairs are to be found: 9
5 and 4
6 and 3
4 and 5
1 and 8

Process returned 0 (0x0)   execution time : 26.536 s
Press any key to continue.
Program explanation
- Initialize an array and store values in it.
- Take from the user the number whose pairs are to be found.
- Use nested loop statements and conditional statements to find the pairs.
- Print the pairs on the screen.
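The nested-loop solution above runs in O(n²). For large arrays, a hash set brings the search down to O(n) on average. The following sketch is in Python rather than C++ purely to show the idea compactly; the function name is illustrative:

```python
def find_pairs(values, target):
    """Return all (earlier, later) pairs from values summing to target, O(n) expected."""
    seen = set()
    pairs = []
    for v in values:
        complement = target - v
        if complement in seen:
            pairs.append((complement, v))  # complement appeared earlier in the array
        seen.add(v)
    return pairs

# Same data as the sample run above: finds 5&4, 4&5, 6&3, and 1&8.
print(find_pairs([5, 6, 4, 1, 5, 3, 9, 8], 9))
```

The pairs come out in a different order than the nested loop prints them, but the same four pairs are found, with each element of a pair drawn from a distinct array position.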
Also read
Find nature of roots and actual roots of Quadratic equation in C++ | https://www.codespeedy.com/find-all-pairs-of-number-whose-sum-is-equal-to-a-given-number-in-cpp/ | CC-MAIN-2019-43 | refinedweb | 448 | 65.25 |
Bugs item #3611474, was opened at 2013-04-20 13:07
Message generated for change (Tracker Item Submitted) made by tombert
You can respond by visiting:
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: 69. Other
Group: current: 8.6.0
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Thomas Perschak (tombert)
Assigned to: Nobody/Anonymous (nobody)
Summary: wish crash when modifying auto_load procedure
Initial Comment:
Win7, MinGW gcc 4.7.2
For debugging I put a "puts" into the "auto_load" procedure like this:
proc auto_load {cmd {namespace {}}} {
## @tombert crash!
puts "hello"
global auto_index auto_path
...
When starting the "wish" it crashes immediately. The "tclsh" works fine.
thx!
----------------------------------------------------------------------
You can respond by visiting: | http://sourceforge.net/p/tcl/mailman/tcl-bugs/?viewmonth=201304&viewday=20 | CC-MAIN-2014-23 | refinedweb | 146 | 55.95 |
Voronoi Module
The freud.voronoi module contains tools to characterize Voronoi cells of a system.
class freud.voronoi.Voronoi(box, buff)
Compute the Voronoi tessellation of a 2D or 3D system using qhull. This uses scipy.spatial.Voronoi, accounting for periodic boundary conditions.
Module author: Benjamin Schultz <baschult@umich.edu>
Module author: Yina Geng <yinageng@umich.edu>
Module author: Mayank Agrawal <amayank@umich.edu>
Module author: Bradley Dice <bdice@bradleydice.com>
Since qhull does not support periodic boundary conditions natively, we expand the box to include a portion of the particles' periodic images. The buffer width is given by the parameter buff. The computation of Voronoi tessellations and neighbors is only guaranteed to be correct if buff >= L/2, where L is the longest side of the simulation box. For dense systems with particles filling the entire simulation volume, a smaller value for buff is acceptable. If the buffer width is too small, then some polytopes may not be closed (they may have a boundary at infinity), and these polytopes' vertices are excluded from the list. If either the polytopes or volumes list that is computed differs in size from the array of positions used in the freud.voronoi.Voronoi.compute() method, try recomputing using a larger buffer width.
- Parameters
  - box (freud.box.Box) – Simulation box.
  - buff (float) – Buffer width.
- Variables
  - buffer (float) – Buffer width.
  - nlist (NeighborList) – Returns a weighted neighbor list. In 2D systems, the bond weight is the "ridge length" of the Voronoi boundary line between the neighboring particles. In 3D systems, the bond weight is the "ridge area" of the Voronoi boundary polygon between the neighboring particles.
  - polytopes (list[numpy.ndarray]) – List of arrays, each containing Voronoi polytope vertices.
  - volumes ((\(N_{cells}\),) numpy.ndarray) – Returns an array of volumes (areas in 2D) corresponding to Voronoi cells.
compute
Compute Voronoi diagram.
- Parameters
  - positions ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate Voronoi diagram for.
  - box (freud.box.Box) – Simulation box (Default value = None).
  - buff (float) – Buffer distance within which to look for images (Default value = None).
computeNeighbors
Compute the neighbors of each particle based on the Voronoi tessellation. One can include neighbors from multiple Voronoi shells by specifying numShells in getNeighbors(). An example of computing neighbors from the first two Voronoi shells for a 2D mesh is shown below.
Retrieve the results with getNeighbors().
Example:

from freud import box, voronoi
import numpy as np

vor = voronoi.Voronoi(box.Box(5, 5, is2D=True))
pos = np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0],
                [1, 0, 0], [1, 1, 0], [1, 2, 0],
                [2, 0, 0], [2, 1, 0], [2, 2, 0]], dtype=np.float32)
first_shell = vor.computeNeighbors(pos).getNeighbors(1)
second_shell = vor.computeNeighbors(pos).getNeighbors(2)
print('First shell:', first_shell)
print('Second shell:', second_shell)
Note
Input positions must be a 3D array. For 2D, set the z value to 0.
- Parameters
  - positions ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate Voronoi diagram for.
  - box (freud.box.Box) – Simulation box (Default value = None).
  - buff (float) – Buffer distance within which to look for images (Default value = None).
  - exclude_ii (bool, optional) – True if pairs of points with identical indices should be excluded (Default value = True).
computeVolumes
Computes volumes (areas in 2D) of Voronoi cells.
New in version 0.8.
Must call freud.voronoi.Voronoi.compute() before this method. Retrieve the results with the volumes attribute.
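In 2D, the "volume" of a cell is simply the area of the polytope returned for it. As a quick sanity check independent of freud, the area of any cell can be recomputed from its vertices with the shoelace formula (a pure-Python sketch; the vertices are assumed to be ordered around a simple polygon):

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon from ordered (x, y) vertices."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i][:2]            # freud vertices may carry a trailing z of 0
        x1, y1 = vertices[(i + 1) % n][:2]
        total += x0 * y1 - x1 * y0
    return abs(total) / 2.0

# A unit-square cell, like those of a square lattice with spacing 1:
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # → 1.0
```

Comparing such recomputed areas against the volumes attribute is one way to spot a buffer width that was chosen too small.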
getNeighbors
Get well-sorted neighbors from cumulative Voronoi shells for each particle by specifying numShells.
Must call computeNeighbors() before this method.
plot
Plot Voronoi diagram.
- Parameters
  - ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)
- Returns
  - Axis with the plot.
- Return type
  - matplotlib.axes.Axes
Open Hangouts Chat in your browser.
- Go to the room to which you want to add a bot.
- From the menu at the top of the page, select Configure webhooks.
- Under Incoming Webhooks, click ADD WEBHOOK.
- Name the new webhook 'Quickstart Webhook' and click SAVE.
- Copy the URL listed next to your new webhook in the Webhook Url column.
- Click outside the dialog box to close.
Step 2: Create the Python script
Create a file named quickstart.py in your working directory and copy the following code:
from httplib2 import Http
from json import dumps

#
# Hangouts Chat incoming webhook quickstart
#
def main():
    url = '<INCOMING-WEBHOOK-URL>'
    bot_message = {'text': 'Hello from Python script!'}
    message_headers = {'Content-Type': 'application/json; charset=UTF-8'}
    http_obj = Http()
    response = http_obj.request(
        uri=url,
        method='POST',
        headers=message_headers,
        body=dumps(bot_message),
    )
    print(response)

if __name__ == '__main__':
    main()
Be sure to replace the value for the
url variable in the code snippet
with the URL provided to you by Hangouts Chat when you registered the incoming
webhook. (If you need to find the URL again, go to the Hangouts Chat room,
select Configure webhooks, and then copy the URL associated with your new
incoming webhook in the Webhook Url column.)
Step 3: Run the sample
Run the sample by running the following command from your working directory:
$ python3 quickstart.py
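The quickstart uses httplib2, but nothing about the webhook requires it; any HTTP client that can POST JSON works. Below is a hedged sketch using only the standard library's urllib (the URL shown is a placeholder, not a real webhook):

```python
import json
import urllib.request

def build_request(url, text):
    """Build (but do not send) the POST request for a Hangouts Chat message."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json; charset=UTF-8"},
        method="POST",
    )

# Replace the placeholder URL with your real webhook URL, then send with
# urllib.request.urlopen(req).
req = build_request("https://example.invalid/webhook", "Hello from urllib!")
print(req.get_method(), len(req.data))
```

The payload format is the same simple {"text": ...} JSON object used in the quickstart script.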
Troubleshooting
403: The caller does not have permission
Incoming webhooks for rooms will fail with this error if not all users in your domain have bots enabled. See the webhooks documentation for more information. | https://developers.google.com/hangouts/chat/quickstart/incoming-bot-python | CC-MAIN-2019-47 | refinedweb | 256 | 63.59 |
A stopped instance retains its attached persistent disks, its internal IPs, and its MAC addresses. However, stopping an instance shuts down the guest OS, and the instance loses its application state. Essentially, a stopped instance resets to its power-on state and no data is saved. Stop an instance if you want to change the machine type, add or remove attached disks, change the minimum CPU platform, add or remove GPUs, or apply sizing recommendations.
A stopped instance does not incur charges, but all of the resources that are attached to the instance continue to incur charges. For example, you are charged for persistent disks and external IP addresses according to the price sheet, even if an instance is stopped. To stop being charged for attached resources, you can reconfigure a stopped instance to not use those resources, and then delete the resources.
If you need to retain the guest OS and application state, suspend the instance.
Restrictions
You cannot stop an instance with a local SSD attached. Compute Engine does not prevent you from shutting down an instance from inside the guest operating system if the instance has a local SSD, so take precautions.
Local SSDs
You cannot stop an instance that has a local SSD attached. Instead, you must migrate your critical data off of the local SSD to a persistent disk or to another instance before you delete the instance completely. Compute Engine does not prevent you from shutting down the guest operating system on an instance with a local SSD, so take precautions.

Console

In the Google Cloud Console, go to the VM instances page.
Select one or more instances that you want to stop.
Click Stop.
gcloud
Use the
instances stop command and specify one or more instances that you
want to stop.
gcloud compute instances stop example-instance-1 example-instance-2
API
In the API, construct a
POST request to stop an instance.
A TERMINATED instance still exists with its configuration settings and instance metadata, but it loses its in-memory data and virtual machine state. Any resources that are attached to the terminated instance remain attached until you manually detach those resources or delete the instance.
After the instance is in the TERMINATED state, you can restart the instance or delete it. You can also leave an instance in a TERMINATED state indefinitely. However, if you do not plan to restart the instance, delete it instead.
Stopping an instance through the OS
Permissions required for this task
To perform this task, you must have the following permissions:
compute.instances.setMetadata on the instance if using instance-level public SSH keys
compute.projects.setCommonInstanceMetadata on the project if using project-wide SSH keys
Optionally, you can stop an instance from within the guest operating system.
Permissions required for this task
To perform this task, you must have the following permissions:
compute.instances.start on the instance
To start a stopped instance, use the instances().start method. This boots up a stopped virtual machine instance that is currently in the TERMINATED state.

The start method restarts an instance in a TERMINATED state, whereas methods such as reset() and sudo reboot only work on instances that are currently running. Almost all instances can be restarted, as long as the instance is in a TERMINATED state.
Console
In the Google Cloud Console, go to the VM instances page.
Select the boxes next to one or more instances to start.
Click Start.
gcloud
To start your instance using gcloud compute:
gcloud compute instances start example-instance
API
In the API, make a POST request to the following URI, replacing the project, zone, and instance name appropriately:
To restart your instance using the client libraries, construct a request
to the
instances().start method:
def restartInstance(auth_http, gce_service):
    request = gce_service.instances().start(project="myproject",
                                            zone="us-central1-a",
                                            instance="example-instance")
    response = request.execute(auth_http)
    print response
For more information about this method, see the instances().start reference documentation.
Restarting an instance that has encrypted disks
Permissions required for this task
To perform this task, you must have the following permissions:
compute.instances.startWithEncryptionKey on the instance
If the instance you want to restart uses customer-supplied encryption keys, you must provide those keys when trying to restart the instance.
Console
In the Google Cloud Console, go to the VM instances page.
Click the name of the instance that you want to start. This opens the instance details page.
Click Start.

gcloud

If you are using an RSA-wrapped key, use the gcloud beta component:
gcloud compute instances start INSTANCE_NAME \
    --csek-key-file ENCRYPTION_KEY
Replace the following:
INSTANCE_NAME: the name of the instance
ENCRYPTION_KEY: the encryption key that you use to encrypt persistent disks that are attached to the instance
API
In the API, construct a POST request to start the instance, and provide the encryption key for each attached disk that is encrypted with a customer-supplied encryption key.
Resetting an instance
Permissions required for this task
To perform this task, you must have the following permissions:
compute.instances.reset on the instance
You can perform a reset on a running instance by using the Reset button in the Cloud Console, the instances reset command in gcloud, or by making a POST request in the API.
Console
In the Google Cloud Console, go to the VM instances page.
Select one or more instances to reset.
Click Reset.
gcloud
To reset your instance using gcloud compute:
gcloud compute instances reset example-instance
API
In the API, make a POST request to the following URI, replacing the project, zone, and instance name appropriately:
To reset your instance using the client libraries, construct a request to the instances().reset method:
def resetInstance(auth_http, gce_service):
    request = gce_service.instances().reset(project="myproject",
                                            zone="us-central1-a",
                                            instance="example-instance")
    response = request.execute(auth_http)
    print response
gcloud compute instances delete followed by gcloud compute instances create: This is a completely destructive restart that initializes the instance with any information passed into gcloud compute instances create. You can then select any new images or other resources you'd like to use. The restarted instance will probably have a different IP address. This method potentially swaps the physical machine hosting the instance.
What's next
- Learn how to schedule instances to start and stop automatically.
- Use the interactive serial console to troubleshoot your instance.
- Learn how to change the machine type. | https://cloud.google.com/compute/docs/instances/stop-start-instance?hl=ar | CC-MAIN-2021-17 | refinedweb | 1,033 | 53.31 |
Have you ever noticed how procedural many things in nature appear? Clouds are a perfect example of this. Even though all clouds are unique, they really just look like slight variations of one another. Wanna make some clouds so you can stay glued to your computer without having to go outside to look at them? I will show you how to procedurally generate clouds and render them using OpenGL.
We will use Perlin Noise to make clouds. Ken Perlin developed Perlin Noise in the 80s. A Perlin Noise function is a seeded pseudo random number generator. The noise function will always give the same result for the same seed. The random values between two seeds will also smoothly interpolate between one another. These two features make Perlin Noise perfect for the procedural generation of anything with a pseudo random appearance. Clouds are an ideal example! Ken Perlin has written a great tutorial on Perlin Noise. It is available at.
I will not explain too much math in this tutorial. The code will be without any optimization or unusual techniques, so you can grasp the general steps involved in a fast breezy manner. I am assuming you know how to initialize OpenGL and already understand texture mapping. The code presented is easily portable.
Let us get started!
A basic noise map 32x32
float map32[32 * 32];
We start by declaring an array of size 32*32 as our basic noise map. This noise map is a building block to form.....
The cloud map
float map256[256 * 256];
The cloud map will hold our cloud.
Random noise generator
Next, we set random noise values ranging from -1 to 1 into our 32*32 map. First, we need a noise generator. Here is a popular one.
float Noise(int x, int y, int random)
{
    int n = x + y * 57 + random * 131;
    n = (n << 13) ^ n;
    return (1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) * 0.000000000931322574615478515625f);
}
Note the function takes in a random integer to generate different noise patterns.
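Because the generator is pure integer arithmetic, its two key properties, determinism and bounded range, are easy to check outside of C. The following Python port is an assumption-laden sketch: it masks intermediate values to mimic the 32-bit integer wraparound the C code relies on:

```python
def noise(x, y, seed):
    """Python port of the C Noise() above, masking to mimic 32-bit wraparound."""
    n = (x + y * 57 + seed * 131) & 0xFFFFFFFF
    n = ((n << 13) ^ n) & 0xFFFFFFFF
    m = (n * (n * n * 15731 + 789221) + 1376312589) & 0x7FFFFFFF
    return 1.0 - m * 0.000000000931322574615478515625  # the constant is exactly 2**-30

print(noise(3, 7, 42) == noise(3, 7, 42))  # → True: same seed, same value
```

Since m is masked to 31 bits and the constant is 2⁻³⁰, the result always lies in (-1, 1], exactly the range the tutorial claims.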
Set noise to map
Now our function to set noise for the 32*32 noise map:
void SetNoise(float *map)
{
  float temp[34][34];
We declare a temporary array to make our function cleaner. Why does the array hold 34*34 elements instead of 32*32? This is because we will need extra elements for side and corner mirroring, as we shall see soon.
  int random = rand() % 5000;
  for (int y=1; y<33; y++)
    for (int x=1; x<33; x++)
    {
      temp[x][y] = 128.0f + Noise(x, y, random)*128.0f;
    }
Here we insert the noise values one by one into the temporary array. Each time the function is called, a different set of noise values is generated. The color values of our cloud range from 0 to 256.
Seamless cloud
  for (int x=1; x<33; x++)
  {
    temp[0][x] = temp[32][x];
    temp[33][x] = temp[1][x];
    temp[x][0] = temp[x][32];
    temp[x][33] = temp[x][1];
  }
  temp[0][0] = temp[32][32];
  temp[33][33] = temp[1][1];
  temp[0][33] = temp[32][1];
  temp[33][0] = temp[1][32];
We mirror the side and corner elements so our final cloud will be seamless without any ugly borders showing.
Smooth...
  for (int y=1; y<33; y++)
    for (int x=1; x<33; x++)
    {
      float center = temp[x][y]/4.0f;
      float sides = (temp[x+1][y] + temp[x-1][y] + temp[x][y+1] + temp[x][y-1])/8.0f;
      float corners = (temp[x+1][y+1] + temp[x+1][y-1] + temp[x-1][y+1] + temp[x-1][y-1])/16.0f;
      map32[((x-1)*32) + (y-1)] = center + sides + corners;
    }
}
Finally, we take the values from the center, corners and sides. We weigh and average them up to give the noise a smoother look. This is our first smoothing process.
Making the noise less blocky
The current noise map consists of a bunch of pixels with random values. There is too much noise change between neighboring pixels, though. To solve this problem we will use a second smoothing process known as interpolation. We will average the value of each pixel value with that of its neighbors' values.
float Interpolate(float x, float y, float *map)
{
  int Xint = (int)x;
  int Yint = (int)y;
  float Xfrac = x - Xint;
  float Yfrac = y - Yint;
Parameter x and y are float indices between two neighboring integer indices. We obtain them by scaling integer indices with an octave factor, this we shall see later.
  int X0 = Xint % 32;
  int Y0 = Yint % 32;
  int X1 = (Xint + 1) % 32;
  int Y1 = (Yint + 1) % 32;
Next, we define neighboring integer indices. We applied modulus because the noise map row and column consist of only 32 dots.
  float bot = map[X0*32 + Y0] + Xfrac * (map[X1*32 + Y0] - map[X0*32 + Y0]);
  float top = map[X0*32 + Y1] + Xfrac * (map[X1*32 + Y1] - map[X0*32 + Y1]);
  return (bot + Yfrac * (top - bot));
}
Finally we get noise values from the neighboring indices, average them with delta and return a blunted noise value located at float x and y indices.
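The math in Interpolate is plain bilinear interpolation. A minimal Python sketch of the same idea on a tiny 2x2 grid (wrapping indices as the C code does with % 32) makes the behavior easy to verify:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by fraction t."""
    return a + t * (b - a)

def bilinear(grid, x, y):
    """Bilinear sample of a square 2D grid at fractional (x, y), wrapping edges."""
    size = len(grid)
    x0, y0 = int(x) % size, int(y) % size
    x1, y1 = (x0 + 1) % size, (y0 + 1) % size
    fx, fy = x - int(x), y - int(y)
    bottom = lerp(grid[x0][y0], grid[x1][y0], fx)
    top = lerp(grid[x0][y1], grid[x1][y1], fx)
    return lerp(bottom, top, fy)

grid = [[0.0, 0.0], [4.0, 4.0]]       # one all-0 row and one all-4 row
print(bilinear(grid, 0.5, 0.5))       # → 2.0, halfway between the rows
```

At integer coordinates the function returns the grid values exactly; in between, values blend smoothly, which is what removes the blockiness from the raw noise.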
Overlap the octaves
Now we will make a couple of noise layers called octaves. The first octave is a blowup of a single 32*32 noise map to a 256*256 map. The second octave is a blowup of four 32*32 maps to four 128*128 maps which are tiled together. This process goes on for higher octaves.
The octaves are then overlapped together to give our cloud more turbulence. We will use four octaves for our cloud. You can use more octaves if you like.
void OverlapOctaves(float *map32, float *map256)
{
  for (int x=0; x<256*256; x++)
  {
    map256[x] = 0;
  }
We start working with the 256*256 map by clearing its old values.
  for (int octave=0; octave<4; octave++)
    for (int x=0; x<256; x++)
      for (int y=0; y<256; y++)
      {
        float scale = 1 / pow(2, 3-octave);
        float noise = Interpolate(x*scale, y*scale, map32);
Here we scale the x and y indices with the values of 1/8, 1/4, 1/2 and 1 consisting of four octaves. The scaled x, y indices and 32*32 map are then sent as parameters for interpolation to return a smoother noise value.
        map256[(y*256) + x] += noise / pow(2, octave);
      }
}
The octaves are added together with the proper weight factors.
You could replace pow(2, i) with 1<<i (a bit shift), which is faster.
Filter the noise with exponential function
This is the last function we need to get a cloud! We use an exponential filter to make the 256*256 noise map look more like a cloud.
void ExpFilter(float *map)
{
  float cover = 20.0f;
  float sharpness = 0.95f;

  for (int x=0; x<256*256; x++)
  {
    float c = map[x] - (255.0f-cover);
    if (c<0) c = 0;
    map[x] = 255.0f - ((float)(pow(sharpness, c))*255.0f);
  }
}
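To get a feel for what the filter does, here is the same per-pixel mapping as a standalone Python function (parameter defaults mirror the C code above). Values at or below 255 - cover clamp to 0, which becomes clear sky, and only the brightest noise survives as cloud:

```python
def exp_filter(value, cover=20.0, sharpness=0.95):
    """Per-pixel mapping of ExpFilter above, for one 0-255 noise value."""
    c = max(value - (255.0 - cover), 0.0)
    return 255.0 - (sharpness ** c) * 255.0

print(exp_filter(235.0))              # → 0.0: exactly at the cover threshold
print(round(exp_filter(255.0), 1))    # brightest noise maps to roughly 163.6
```

Raising cover produces more cloud coverage; lowering sharpness toward 0 makes the surviving clouds denser and more sharply edged.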
Putting it all together
float map32[32 * 32];
float map256[256 * 256];

void Init()
{
  SetNoise(map32);
}

void LoopForever()
{
  OverlapOctaves(map32, map256);
  ExpFilter(map256);
}
Moving & rendering the clouds in OpenGL
At last the code to render the cloud in OpenGL:
void DrawGLScene()
{
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glLoadIdentity();

  LoopForever();                  // Our cloud function

  char texture[256][256][3];      // Temporary array to hold texture RGB values

  for(int i=0; i<256; i++)        // Set cloud color value to temporary array
    for(int j=0; j<256; j++)
    {
      float color = map256[i*256+j];
      texture[i][j][0]=color;
      texture[i][j][1]=color;
      texture[i][j][2]=color;
    }

  unsigned int ID;                // Generate an ID for texture binding
  glGenTextures(1, &ID);          // Texture binding
  glBindTexture(GL_TEXTURE_2D, ID);

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

  gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, texture);

  glMatrixMode(GL_TEXTURE);       // Let's move the clouds from left to right
  static float x;
  x+=0.01f;
  glTranslatef(x,0,0);

  glEnable(GL_TEXTURE_2D);        // Render the cloud texture
  glBegin(GL_QUADS);
    glTexCoord2d(1,1); glVertex3f(0.5f, 0.5f, 0.);
    glTexCoord2d(0,1); glVertex3f(-0.5f, 0.5f, 0.);
    glTexCoord2d(0,0); glVertex3f(-0.5f, -0.5f, 0.);
    glTexCoord2d(1,0); glVertex3f(0.5f, -0.5f, 0.);
  glEnd();

  SwapBuffers(hDC);
}
Animating the clouds
That will be in part 2. I hope you enjoyed this tutorial. Let us take a break and play some games! | https://www.gamedev.net/articles/programming/general-and-gameplay-programming/simple-clouds-part-1-r2085/ | CC-MAIN-2017-47 | refinedweb | 1,395 | 64 |
GXT JavaDocs:
GXT FAQ & Wiki:
Buy the Book on GXT:
Follow me on Twitter:
Hmmm, thinking about what you say, I see it's true that my app has bad code: the call to validate on this passwordField (while it should only validate the confirmPassword).
I'll change this option and see what happens.
About setAllowScoreFail(boolean), I don't know if it is an option others need. Maybe it is useful considering that if someone adds an "informative" PasswordField to a FormPanel and then uses the FormPanel.isValid() method, the PasswordField's isValid will be called.
I guess what you guys are talking about is a confirm field connected to the password field.
I am using your nice widget already.
An additional feature would be to extend this field with an extra field letting the user retype the password, enabled via a flag attribute and a built-in validator. Just an idea.
Code:
final TextField<String> retype = new TextField<String>();
retype.setFieldLabel("Retype");
retype.setAllowBlank(false);
retype.setPassword(true);
retype.setValidator(new Validator<String, Field<String>>() {
    public String validate(Field<String> field, String value) {
        if (pwdfld.getValue().equals(value))   // pwdfld = your password TextField
            return null;
        else
            return "Value do not match the password!"; // I know bad english :D
    }
});
It just came to my mind, and I am using such an extra field myself.
I have a problem with the PasswordField: you should also override the setValue() and getValue() methods, because the FormBinding is not working...
The field is not getting updated.
Code of the Formbinding method:
Code:
/**
 * Updates the field's value with the model value.
 */
public void updateField() {
    Object val = model.get(property);
    if (convertor != null) {
        val = convertor.convertModelValue(val);
    }
    field.setValue(val);
}
The binding to that field is still not working. I am trying to find out why.
EDIT2: OK, thinking about it, the problems were the set/get for the value... but also, for the binding, the getName/setName which is used to bind the property. I overrode them as well... BUT the real problem (I think) is the event listener of the FormBinding. I will have to bind it manually... and use your getInputField() method, right? This listener listens for the Change event of the field.
Code:
/**
 * Creates a new binding instance.
 *
 * @param field the bound field for the binding
 */
public FieldBinding(Field field, String property) {
    this.field = field;
    this.property = property;
    changeListener = new Listener<FieldEvent>() {
        public void handleEvent(FieldEvent be) {
            onFieldChange(be);
        }
    };
    modelListener = new ChangeListener() {
        public void modelChanged(ChangeEvent event) {
            if (event.type == ChangeEventSource.Update)
                onModelChange((PropertyChangeEvent) event);
        }
    };
}
Edit3: OK, solved. I had to add a custom binding and use the getInputField() method:
Code:
binding = new FormBinding(form, true);
binding.setStore(form.getStore());
binding.addFieldBinding(new FieldBinding(form.getPassword().getInputField(), "password"));
binding.bind(form.getStore().getAt(0));
Sorry for the confusion.
Regex problem
Hi gslender,
I was having an issue with the regex in this widget in GXT 1.2.1 and GWT 1.5.3, so I changed it to JavaScript-compatible regex.
Not sure if anyone else is or will have issues, but this is what worked for me:
Code:
boolean lowletters = false;
// LETTERS
if (value.matches("^.*[a-z].*$")) {  // at least one lower case letter
    score += 2;
    lowletters = true;
}
boolean upletters = false;
if (value.matches("^.*[A-Z].*$")) {  // at least one upper case letter
    score += 5;
    upletters = true;
}
boolean numbers = false;
// NUMBERS
if (value.matches("^.*[0-9].*$")) {  // at least one number
    score += 5;
    numbers = true;
}
boolean specials = false;
// SPECIAL CHAR
if (value.matches("^.*[-!\\\"#$%&'()*+,./:;<=>?@\\[\\]\\^_`{|\\\\}~].*$")) {
    // at least one of -!"#$%&'()*+,./:;<=>?@[]^_`{|\}~
    score += 5;
    specials = true;
}
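The same four character-class checks need no regex at all, which sidesteps the Java-versus-JavaScript regex question entirely. A Python sketch of the scoring idea follows; the weights mirror the snippet above, but the function itself is illustrative, not part of GXT:

```python
def score_password(value):
    """Score character variety: lower +2, upper +5, digit +5, special +5."""
    specials = set("-!\"#$%&'()*+,./:;<=>?@[]^_`{|\\}~")
    score = 0
    if any(ch.islower() for ch in value):
        score += 2
    if any(ch.isupper() for ch in value):
        score += 5
    if any(ch.isdigit() for ch in value):
        score += 5
    if any(ch in specials for ch in value):
        score += 5
    return score

print(score_password("aB3!"))  # → 17: all four classes present
```

Iterating over characters once avoids four full regex scans of the value and behaves identically in any runtime.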
Mmm, so I wonder which Regex is supposed to work?
It seems odd that Java regex does not produce the same outcome in GWT. I know its target is JavaScript, but I would have thought Google would have handled the regex differences - does anyone know for sure what is supposed to happen in Java vs GWT etc.?
Some of the reading I did to pinpoint this statedSince your java
source is churned into javascript by gwt, a regex string literal in
your source code will become a regex string literal -in javascript-.
true - but did you also read this... !!
...and the GWTShell executes your *Java* code, so the regexps have to
be Java-regexp-compatible for your app to work in Hosted Mode.
In a few words:
- if you use JavaScript-only regexp constructs, your app will fail in
Hosted Mode, so developping and debugging will become a pain
- if you use Java-only regexp constructs, your app will run in Hosted
Mode but will fail in "web mode"
So you should base your developments on the JavaScript regexp syntax;
and if it fails in Hosted Mode, then find an alternate that's still
JavaScript-compaible and happens to also be Java-compatible. If you
base your devs on the Java regexp syntax, you'll only notice the
incompatibilities when testing in web mode, which generally happen
late in the development process...GXT JavaDocs:
GXT FAQ & Wiki:
Buy the Book on GXT:
Follow me on Twitter:
bad class file: C:\Program Files\Google Web Toolkit\PasswordField\PasswordField.jar(ext/ux/pwd/client/PasswordField.class)
class file has wrong version 50.0, should be 49.0
Please remove or make sure it appears in the correct subdirectory of the classpath.
import ext.ux.pwd.client.PasswordField;
1 error
I'm using Netbeans and gxt1.2.1 with gwt1.5.3
I added the <inherits name='ext.ux.pwd.PasswordField'/>
and the lib! I'm a bit lost can someone help please?
this might be due to the version of the JVM/JDK - you are using v5.x and the lib was built using v6.x
I'll try and rebuild for v5.x and repostGXT JavaDocs:
GXT FAQ & Wiki:
Buy the Book on GXT:
Follow me on Twitter: | http://www.sencha.com/forum/showthread.php?39893-ext.ux.pwd.PasswordField&p=273201&viewfull=1 | CC-MAIN-2013-48 | refinedweb | 976 | 57.57 |
#include causes the contents of another file to be compiled as if they actually appeared in place of the #include directive. The way this substitution is performed is simple, the Preprocessor removes the directive and substitutes the contents of the named file. Compiler allows two different types of #include’s, first is standard library header include, syntax of which is as follows,
#include <filename.h>
Here, filename.h in angle brackets causes the compiler to search for the file in a series of standard locations specific to implementation. For example, gcc compiler on Linux, searches for standard library files in a directory called /usr/include.
Other type of #include compiler supports is called local include, whose syntax is as follows,
#include "filename.h"
filename in double quotes “” causes compiler to search for the file first in the current directory and if it’s not there it’s searched in the standard locations as usual. Nevertheless, we can write all our #include in double quotes but this would waste compiler’s some time while trying to locate a standard library include file. Though, this doesn’t affect runtime efficiency of program, however, it slows down compilation process. For ex.,
#include "stdio.h"
Notice here that stdio.h is a standard library file which compiler, first, should required to search in the current directory before locating it in standard location. A better reason why library header files should be used with angle brackets is the information that it gives the reader. The angle brackets make it obvious that
#include <strig.h>
references a library file. With the alternate form
#include "string.h"
it’s not clear if string.h is a library header or local file with same name is being used.
Remember that header files are suffixed with .h extension by convention. We can write our own header files and include them in programs. For ex.,
/* sll.h */ #include <stdio.h> #include <stdlib.h> #define T 1 #define I 2 #define S 3 #define D 4 #define Q 5 #define TRAVERSE "enter 1 to TRAVERSE the list..." #define INSERT "enter 2 to INSERT new value in the list..." #define SEARCH "enter 3 to SEARCH a value in the list..." #define DELETE "enter 4 to DELETE a value from the list..." #define QUIT "enter 5 to QUIT the program..." typedef struct NODE { struct NODE *link; int value; } Node; /* function declarations */ int traverse(Node **); void insert(Node **, const int); int search(Node **, const int); int delete(Node **, const int);
Notice here that header file named sll.h contained declarations to be used in a particular program. The fact that everything in the header file is compiled each time it’s #include’d suggests that each header file should only contain declarations for one set of functions or data. It’s better to contain several header files, each containing the declarations appropriate for a particular function or module than put up all declarations for the entire program in one giant header file.
Sanfoundry Global Education & Learning Series – 1000 C Tutorials. | https://www.sanfoundry.com/c-tutorials-file-inclusion/ | CC-MAIN-2018-22 | refinedweb | 506 | 56.76 |
Using Coding Katas, BDD and VS2010 Project Templates: Part 1
- |
-
-
-
-
-
-
Read later
Reading List
This three-part series on using coding katas in practice Behavior Driven Development was written by the late Jamie Phillips, a well-known member of Boston's Agile and .NET communities. When we saw the first draft of this article we were all eager to publish it, but he passed away before we could finish the editing process. With the permission of wife Diana, we proudly present his final work.
I’m good, but couldn’t I be better?
Using Coding Katas, BDD and VS2010 Project Templates.
Regardless of your technical expertise, knowledge and experience there are always opportunities to strengthen your skills and be the “black belt” of your “coding dojo”. Through the use of Coding Katas to practice Behavior Driven Development and implementing a new VS2010 project template, we can hone our skills and “get better”.
This three part article will cover the topics of Coding Katas, Behavior Driven Development (BDD) and Project Template Wizard in VS 2010 and is intended to take the reader on a recent journey of discovery that improved my skills even after 8 years of writing C# code.
Part 1: Coding Katas
Even for seasoned software developers there are always areas that we can improve when it comes down to actually coding the design that we want to produce for any given requirement. The fact of the matter is that many times we make those adjustments to our coding best practice on code that we intended to go in to production; which is a natural state of affairs on our line of work. But if you project that idea into a different scenario all together and look at it from the point of view of martial arts; you wouldn’t exactly want to start adjusting your particular style of martial arts in a defensive scenario on the street! You might come out worse off than you intended. Instead, you would practice and refine your movements on a repetitive basis and hone your skills to the point that they became second nature. You would no longer have to think about the style or form, but the situation at hand. In many martial arts, practitioners will work on the same form (movements) repeatedly in order to engrain those movements into their memory, thus making it second nature to the practitioner.
Coding Katas (a term coined by Dave Thomas co-author of The Pragmatic Programmer) refers to a similar theory in software development. Here the developer takes a benign problem (such as the Fibonacci series) and works on producing throw away code that will resolve the problem, concentrating on the style and technique used more than the actual resolution of the problem. It has been pointed out that this approach to self improvement is for the “sandal-wearing hippies of the software world” and maybe they are right, however the instant payback that I had when I started practicing Katas surprised even my skeptical self – although as a practitioner of Tai Chi I did relate to the idea. So where does BDD and VS 2010 Project Template Wizard fit in to all of this? Well, quite simply it was a journey, an evolution if you like.
My first port of call was the concept of Coding Kata and the implementation of the Bowling Kata as I saw it being demonstrated at a session of TechEd 2010. David Starr and Ben Day gave an excellent interactive session that went through the Bowling Kata and dissected the “movements” into small chunks so that the audience could appreciate the test, code, refactor cycle that was involved. And that was it! As a Unit Test evangelist it made perfect sense! What better way to build up good working practices than to actually practice the steps that lead to robust code! As soon as I could (actually during the session) I started trying it out. My first attempts were not great as I was still concentrating on the problem itself and not the how I was writing the code; and that was my first lesson. In order to really reap the benefits of Katas you need to repeat the same Katas until you feel that the whole, test, code, refactor cycle is as tight and complete as you will get. Once you have done that then you can move on to another Kata.
The Kata that I first worked on was the Bowling Kata as shown to me at TechEd by Dave and Ben and originates from Uncle Bob (Robert C. Martin). The idea is to create a mechanism to score a game of ten pin-bowling. Without going in to too much detail of the scoring (you can easily search for it online), a player has 2 opportunities in each go (Frame) to knock down all of the pins (10 pins to be exact – hence the name “10 Pin Bowling”). If they knock them all down in one throw of the ball, it is considered a Strike. If they use both throws of the Frame to knock down all the pins, then it is considered a Spare. A game consists of 10 Frames, with Frame 10 being an opportunity to throw the ball 3 times if the player manages a strike or spare within the first 2 throws. I created the following “matrix” of scenarios to help me figure out how to score a game:
From this I can start to take a look at the Use Case scenarios for writing a 10 Pin Bowling Game Score Engine:
- Use Case 1: When bowling all zeros, the score should equal 0.
- Use Case 2: When bowling all ones, the score should equal 20.
(we did not throw a Strike or a Spare in the final frame – so no bonus ball)
- Use Case 3: When bowling all strikes, the score should equal 300.
- Use Case 4: When bowling all spares, the score should equal 150.
- Use Case 5: When bowling a strike in the first frame and 8 pins in frame 2 and the remainder are zero, the score should equal 26.
For those of you that are familiar with SCRUM and TFS, this would typically be listed in as Conditions of Acceptance in the Product Backlog Item. Now we have our Use Cases we can start to look at writing the unit tests that we want to create for each scenario, and in true TDD fashion, without having the code of the engine itself written. The starting point of the Kata is to create a Unit Test project and take the first scenario and write the test for it:
/// <summary> /// Unit test methods for testing the bowling game engine /// </summary> [TestClass]
public class BowlingTests { /// <summary> /// Given that we are playing bowling
/// When I bowl all gutter balls
/// Then my score should be 0 /// </summary> [TestMethod] public void Bowl_all_gutter_balls() { // Arrange // Given that we are playing bowling Game game = new Game(); // Act
// when I bowl all gutter balls for (int i = 0; i < 10; i++) { game.roll(0); game.roll(0); } // Assert
// then my score should be 0 Assert.AreEqual(0, game.score()); } }
In order for this code to compile, the game engine class (Game) and the methods, roll and score, need to be created (hence they are shown red in the code snippet). The methods don’t necessarily have to do anything and nor should they – after all, in TDD we fail first.
Coding Tip:
In VS 2010 we can now create classes whilst in the code by simply hitting CTRL + . (Period) when the caret is next to an undefined keyword (as in the case above with Game):
The advantage of this method of creating the class in this way is that the focus remains on the code where you are as opposed to other methods that force the focus on the newly created class. This same keyboard shortcut can also be used for methods as well:
So if we have approached TDD in its truest sense, our implementation class should look like this:
public class Game { public void roll(int p) { throw new NotImplementedException(); } public int score() { throw new NotImplementedException(); } }
And of course when we run the unit test it will fail (fail first, remember?):
So now we go back and implement only what is needed to pass the test:
public class Game { public void roll(int p) { // throw new NotImplementedException();
} public int score() { return 0; } }
Now, when we run all of the unit tests, we should be green:
We then take on the next scenario and write the test first:
/// <summary> /// Given that we are playing bowling /// When I bowl all single pins /// Then my score should be 20 /// </summary> [TestMethod] public void Bowl_all_single_pins() { // Arrange
// Given that we are playing bowling
Game game = new Game(); // Act
// when I bowl all single pins for (int i = 0; i < 10; i++) { game.roll(1); game.roll(1); } // Assert
// then my score should be 20 Assert.AreEqual(20, game.score()); }
When we run this test, it will fail (obviously):
So we return to the implementation and make the necessary changes, making sure we don’t break the previous tests:
public class Game { private int[] _rolls = new int[21]; private int _currentFrame = 0; public void roll(int pinsKnockedDown) { _rolls[_currentFrame++] = pinsKnockedDown; } public int score() { int retVal = 0; for (int index = 0; index < _rolls.GetUpperBound(0); index++) { retVal += _rolls[index]; } return retVal; } }
We run all of our tests to check the changes made:
This cycle continues throughout the Kata until all scenarios have been coded and all of the tests pass:
At this stage you can consider your Kata complete if you have refactored as much as possible, removing duplicate code, whilst retaining the integrity of the functionality – i.e. your tests keep passing after each refactor.
To truly benefit from the Kata, it is worth noting that the same Kata should be repeated for a few sessions until you are confident that you have refactored and optimized your code as much as possible. You must always start from scratch (or at least the minimum base for your environment), the reason is that we are doing the Katas to build up the coding strengths and in order to do that; you must go through all of the “movements” to get to your end product – a working solution – and therefore jumping steps in your approach is counterproductive as you are not getting the full benefit of the exercise. You will see, as I explain later on in this series, that there will be commonalities when you change Katas and those commonalities can actually be “baked in” to your approach through generic macros or better still – Project Templates.
Rate this Article
- Editor Review
- Chief Editor Action
Hello stranger!You need to Register an InfoQ account or Login or login to post comments. But there's so much more behind being registered.
Get the most out of the InfoQ experience.
Tell us what you think
Wrong:1~21
Right:1~10
Re: a little bug
by
Leandro Tramma | https://www.infoq.com/articles/BDD-Katas-1/ | CC-MAIN-2017-47 | refinedweb | 1,844 | 58.86 |
Represent your model as a json object, and use redis as backend
Represent your model as a json object, using redis as backend.
This tool was thought for implementing complex models on a fast backend as redis.
To install with npm, type
npm install redis-modelize
Inside your project, type
var rm = require('redis-modelize');var model = rm.init(modelObj, {prefix: 'optional:some:redis:prefix'});var redis-client = rm.client;
rm.client part of the code returns the instance of redis client that this module uses internally.
modelObj is the object that describes your model. For example:
//Define your modelvar modelObj = {global: {keywords: {type: 'set'}}};//modelizevar model = rm.init(modelObj, {prefix: 'prefix'});//Now you have methods to interact with redisnew model.gobal().setKeywords(['any', 'array', 'of', 'keywords'], function(err, resp) {/*this is a callback*/});
The hierarchy of the model object is:
modelObj => namespace => property
So in this example, global is the namespace, and keywords is the property. When you modelize, you get a constructor with each namespace defined, and get several methods for each property.
As you can imagine, what setKeywords method of the example really does under the hood, is
redis.sadd('prefix:global:keywords', ['any', 'array', 'of', 'keywords'], callback);
How does it know that you want to use sadd?
Because we defined that the property keywords is of type set.
Posible values for type are (the redis types): string, set, list, zset and hash.
Depending on which type you define your property, are the ammount of methods you get to manipulate it.
The last parameter of all methods is a callback function with the signature
function(err, resp).
Ok, this example is kind of too simple and really it's an overkill when you can simply use redis module.. lets make things a little more complicated.
Imagine we have
var modelObj = {user: {_obj: {type: 'hash',reverse: ['email'],props: {fullname: {mandatory: true},email: {mandatory: true},password: {mandatory: true}phone: {mandatory: false}}},projects: {type: 'set', refs: true}},project: {_obj: {type: 'hash',props: {name: {mandatory: true},user: {refs: true},description: {mandatory: false}}}},global: {keywords: {type: 'set'}}}var model = rm.init(modelObj);
Now things get interesting...
Fist of all, the only type that requires different ammount of parameters is hash.
Parameters for properties of type:
{refs: true|false}. If refs is true, your property will get two extra methods, called 'getRef'+<field name with first letter in upper case> and 'setRef'+<field name with first letter in upper case>. I'll explain them later.
Of coure, you can add any parameter you like, but they will not be considered.
Ok, now lets explain the _obj property that you see in user and project.
The _obj property is the only one that has special meaning. You can think of it as the constructor of a class.
With the model of the previous example, you could do the following:
new model.user({fullname: 'John Doe',email: 'john.doe@example.com',password: hashlib.md5('password')});
Simple, isn't it? But there are some special considerations of the _obj property:
model.user.reverse('john.doe@example.com', function(err, id) {var johnDoeUser = this});
model.user.reverse('john.doe@example.com', function(err, id) {//'this' references an instance of user.this.get(function(err, userFields) {console.log(userFields.fullname); //Prints 'John Doe'});});
Ok, suppose now that you know John Doe's id is 1... an other way of getting an instance of user with John Doe's data is
new model.user(1, function(err, id) { ... });
This means, that if the parameter you pass to the constructor is not an object, it takes it as an identifier.
Fields of hash objects also get methods to manipulate them. In this example, you could add phone number to John doing
//set phone number within callback functionnew model.user(1, function(err, id) {this.setPhone('555-555555');});//or set phone number directlynew model.user(1).setPhone('555-555555');
If a property or a field was defined with refs: true, it means that it gets two special methods: setRef+ and getRef+. What it means is that the property/field references an object in the model. You could use it like this:
//add a project to John Doevar jdUser = new model.user(1);var awproj = new model.project({name: 'Awesome Project', description: 'This project will be awesome'}, function(err, id) {this.setRefUser(jdUser);jdUser.setRefProjects(this);});
If you later do
awproj.getRefUser(function(err, refUserInstance){ ... }) the second argument of the callback function is an instance of the referenced user. | https://www.npmjs.com/package/redis-modelize | CC-MAIN-2015-22 | refinedweb | 747 | 50.02 |
wow im glad i joined this forum haha. ok so my newest problem is with arrays.
#1. somehow the array im getting is -8,5,8,9,9,3,4,6,0 instead of 0-6. i dont even know how its getting 9 slots.
#2. i dont know how to make it so you can pick 1 slot that you want to view. i gave it my best shot but the way I have it my program crashed when you try to choose a slot.
thank you. heres my code:
Code:#include <iostream> using namespace std; int main () { int a; int b; int x; int array [7]; int *z; z = array; cout<< "press 1 to see the whole array, press 2 to pick 1 slot.\n"; cin>> a; switch (a) { case 1: cout<< "this be yo' array dawg:\n"; for (x = 0; x < 7; x++) { cout<< array [x]; break; case 2: cout<< "which slot would you like to view? 0-6\n"; cin>> b; cout<< array [*z]; break; default: cout<< "NO\n"; break; } } cin.get(); } | https://cboard.cprogramming.com/cplusplus-programming/120605-arrays.html | CC-MAIN-2017-22 | refinedweb | 176 | 95.71 |
Sun Studio Express Program December 2006 Build available
( Dec 19 2006, 03:03:22 PM PST )
What About Binary Compatibility?
»What will happen if I try to run my application on a newer Solaris OS release?
»What will happen if I try to compile my application using one version of the Sun Studio compilers and link it with libraries compiled with earlier compiler versions?
Important questions to consider, and luckily both Solaris and the Sun Studio compilers guarantee a certain degree of binary compatibility between releases.
Here are some things to keep in mind:
Modules created by the f95 compiler are not guaranteed to be compatible with future releases.
Specific Questions I get often:
»Can I compile my application on Solaris 10 and run it on Solaris 9 and Solaris 8?
No. It might work, but because it was compiled on Solaris 10, it may use system interfaces that did not exist on Solaris 8 and 9 or that changed in Solaris 10.
»Can I compile my application on Solaris 8 and run it on Solaris 9 and Solaris 10?
Yes! This is what binary compatibility is all about. (See above)
»Can I compile and build my shared library on Solaris 10 and use it on Solaris 9 and Solaris 8?
No. It might work, but because it was compiled on Solaris 10, it may use system interfaces that did not exist on Solaris 8 and 9 or that changed in Solaris 10.
»If I compile the code in my shared library using the Sun Studio 11 compilers, can my customers who are still using Forte 6 Update 1 compilers link with these shared libraries?
No. You must always link with the same compiler used to create the newest objects in your application or library. So, if Sun Studio 11 compilers are used to compile the code in a shared library, Sun Studio 11 compilers must be used when linking with that shared library.
October 23-27 meeting of ISO SC22 WG14, the C programming language committee (Part 3 of 3)
Highlights of the October 23-27 meeting of ISO SC22 WG14, the C programming language committee:
Here are the details on Decimal Floating Point, Bounds Checking, and a couple of DRs.
Details (Part III):
DTR 24732, Decimal Floating Point.
The National Body comments on the draft were reviewed in detail. This led to some minor changes to the draft. A small review committee will incorporate these changes. The WG14 convenor will then forward that draft to SC22 for final ballot.
IBM is the only company known to have hardware support for Decimal Floating Point at this time.
This document is a proposed Part II to TR24731, Bounds Checking via dynamic allocation. There is no invention here. All of the functions already exist in implementations, particularly in POSIX. The document is organized the same way as Part I. The intent of this document is to provide programmers with an additional mechanism to address issues of buffer overflow, etc., as does Part I. A key difference is that the mechanisms are via dynamic memory allocation, and heavily dependent on malloc. There are some minor changes that need to be made, the document is still a working draft, but it has sufficient substance to be ready for CD registration. Small editorial group to go over the document.
DR 329 (N1181) Math functions and directed rounding
The result is DBL_MIN*DBL_EPSILON, a subnormal number. But, if the implementation does not support subnormal numbers, such as IBM S/360 hex floating-point, then it is either zero or DBL_MIN, depending upon the current rounding direction mode. Hence, the sentence "Thus, the remainder is always exact." in footnote 204 in C99+TC1+TC2 (N1124) is wrong. This problem also applies to remquo and fmod.
Issue for non-754 implementations.
Remove the sentence from the footnote; move the rest of the edits to Annex F instead of Section 7.
DR 332 (N1194) gets is generally unsafe
Committee consensus to have words crafted to deprecate gets. This is a political issue, in response to lots of bashing of the committee for being nonresponsive to security issues; that is of course not true, but WG14 wants to soften the rhetoric.
Proposed Technical Corrigendum:
Add to subclause 7.26.9 (future library directions for <stdio.h>):
The gets function is obsolescent, and its use is deprecated.
[Note: Rationale wording might be useful.] [Editorial note: add a forward reference to this from gets subclause 7.19.7.7.]
Reviewed state in Portland 10/06: Expedited; will be included in TC3 pending the disposition of this DR (DR 332) by WG14.
This effort is essentially in the 'stalled' proposal list within the Evolution Working Group (EWG) in C++ (WG21/J16). Tom believes it solves real problems with the preprocessor. There is opposition within this committee to changing the C Standard.
October 23-27 meeting of ISO SC22 WG14, the C programming language committee (Part 2 of 3)
Highlights of the October 23-27 meeting of ISO SC22 WG14, the C programming language committee:
Here are the details on augmenting the interface of malloc, modeling sequence points to aid C++ work on concurrency, and committee discussions on revising the C standard.
Details (Part II):
Tom Plum Report #2, WG21/N1085 C++ Proposal to augment the interface of malloc, et al.
WG21/N1085 is a proposal by Howard Hinnant. Ulrich wants to add alignment considerations to this C++ proposal. POSIX is now using a function called posix_memalign() that seems suitable for the original proposal. That function came from an X/Open function named memalign(). Some of the members of J11 believe a function of this sort would be useful for C as well. Ulrich expressed concern that C++ might put a different spin on this. Bill Plauger prefers that we not push this work on to C++. Sees it as simply adding to layers of complexity. It's not clear whether or not C++ will even ever adopt this. Bill Plauger sees it as low on the list of things to do.
After much debate and several straw polls we agreed to this Liaison Report
WG14 urges WG21 to incorporate into the next C++ revision ("C++0x"):
N1188 is a paper that explains the C language sequence point model, and may be suitable as an addition to the C99 rationale. The difficulty seems to be in determining all the possible orderings of sequence point, which makes teaching it difficult.
Making use of the model here is important in providing liaison input to C++ for their concurrency model. We want to make sure our underlying model does not get broken by C++ concurrency. We have plenty of time to get this in the C99 rationale, but there is more urgency in nailing down our model for C++ concurrency.
Ulrich believes that the state of C compiler development is such that the C Standard is well behind the technology being used by the community. Virtually all major C compiler developers have developed extensions to the language that go well beyond the Standard. We can either subsume ourselves to C++, or plan on revising the C Standard to adopt existing technologies. Bill Plauger pointed out that our pace has been deliberate, and that adoption of C99 has been slow. We probably should consider reopening the Standard soon, and look at adopting things like multi-threading, security features, and others. Ulrich believes we should focus on existing practice and features that are in wide use, to minimize the risk of standardizing features that no one will use. Doug Gwyn believes that C will have longevity in embedded programming, but that if we work on a 5-year schedule we should probably consider starting now. There are a number of things that C can probably do better than other languages. Round-table discussion on whether or not we should consider revising the C Standard. David Keaton believes there are real commercial needs that C can address, such as security. General feeling that making a decision to revise the C Standard with a focus on existing practice would be a good thing. Many developers make use of extensions to the language, some do not. Invention of new feature sets is not a good idea. John Benito: No one is saying no. If we are going to do this we will need to work on our charter, and proceed from there.
The next step. John Benito (WG14 Convenor): We need to start putting a charter together that will define the scope of a revision to the C Standard. Tom Plum is in favor of using the wiki as a vehicle to propose ideas. Bill Plauger suggests using a full day in London to process such a list. Start by generating a list, filter it based on criteria, then let the world know what we are doing. We are not ruling out proposals from the outside.
October 23-27 meeting of ISO SC22 WG14, the C programming language committee (Part 1 of 3)
Highlights of the October 23-27 meeting of ISO SC22 WG14, the C programming language committee:
I'll cover the other topics in some future postings
( Nov 03 2006, 03:44:58 PM PST )
Sun Studio 11 Training Now Available!
This web based training introduces Solaris software platform developers to Sun Studio software. It provides developers with an overview of how to compile, edit, debug, and analyze performance with Sun Studio.
( Sep 05 2006, 11:34:58 AM PDT )
Resolving problems creating reproducible binaries using Sun Studio Compilers.
By default, the compilers globalize static symbols using a randomly generated prefix that is guaranteed to be unique for each compilation. If you are in an environment that requires the ability to reproduce identical object files, this becomes a problem.
Both C and C++ have an undocumented flag (-xglobalstatic) that can be
passed to the compiler front-end which will force the use of a static
globalization prefix based on the source filepath, instead of the usual
algorithm that guarantees a unique prefix. To pass the flag to cc use
the -W0 option as follows:
cc -W0,-xglobalstatic
To pass the flag to CC use the -Qoption ccfe option as follows:
CC -Qoption ccfe -xglobalstatic
The drawback to using the filepath to generate the globalization prefix
is the increased risk of a namespace collision at link time between
static data with the same name that has been globalized from two files
with the same filepaths. Though rare, such collisions occur more often than with a randomly generated prefix.
If you do run into namespace collisions, you might need to assign
the globalization prefix. For C this can be done with the
wizard option -W0,-xp<prefix>, for example:
-W0,-xp\$XAqalkBBa5_D2Mo
For C++ this can be done with the wizard options
-Qoption ccfe -prefix -Qoption ccfe <prefix>, for example:
-Qoption ccfe -prefix -Qoption ccfe \$XA0ZlkBtDTxEGkV.
( Jul 24 2006, 02:45:48 PM PDT )
Linux Technology Preview, Build 24 - June 2006
Sun Studio Express Program June 2006 Build
With the June 2006 Build, the Sun Studio Express Program is featuring the Data Race Detection Tool (DRDT), a cool new tool for data race detection in OpenMP, threaded, and parallel programs. DRDT works with code written using the POSIX thread API, the Solaris Operating System(R) thread API, OpenMP directives, Sun parallel directives, Cray(R) parallel directives, or a mix of these.
So go check out the Sun Studio Express Program at:
Some of the other features being introduced with the June 2006 Build include:
Report bugs on Sun Studio Compilers and Tools at bugs.sun.com
You can now report bugs on Sun Studio Compilers and Tools at bugs.sun.com.
Here is how:
The real story on the cc command and the -xregs=frameptr option:
C99 inline function and the Sun C Compiler
Note, the C standard says that inline is only a suggestion to the C compiler. The C compiler can choose not to inline anything, and attempt to call the actual function.
The Sun C compiler does not inline C function calls unless optimizing at -xO3 or above; that inlining is done by the backends, and then only if the backend's heuristics decide it is profitable to do so. The Sun C compiler gives no way to force a function to be inlined.
For static inline functions it is simple. Either a function defined with the inline function specifier is inlined at a reference or a call is made to the actual function. The compiler can choose which to do at each reference. The Sun C compiler decides if it is profitable to inline at -xO3 and above. When not profitable to inline, or at an optimization of less than -xO3 a reference to the actual function will be generated. If any reference to the actual function is generated the function definition will be generated in the object code. Note if the address of the function is taken, the actual function will be generated in the object code.
Extern inline functions are more complicated. There are two types of extern inline functions, an inline definition which never provides an extern (global) definition of the function and an extern inline function which always provide a global definition of the function. To quote the C99 standard:
[#7] EXAMPLE The declaration of an inline function with
external linkage can result in either an external
definition, or a definition available for use only within
the translation unit. A file scope declaration with extern
creates an external definition. The following example shows
an entire translation unit.
inline double fahr(double t)
{
return (9.0 * t) / 5.0 + 32.0;
}
inline double cels(double t)
{
return (5.0 * (t - 32.0)) / 9.0;
}
extern double fahr(double); // creates an external definition
The definition of fahr is an external definition because fahr
is also declared with extern, but the definition of cels is an
inline definition. Because cels has external linkage and is
referenced, an external
definition has to appear in another translation unit (see
6.9); the inline definition and the external definition are
distinct and either may be used for the call.
So, for an inline definition, the programmer is required to
supply an extern definition of the function in another translation-unit for references to the function that are not inlined.
For an inline definition, the compiler must not create a global definition of the function. That means any reference to an inline definition that is not inlined must be a reference to a global function defined elsewhere. Put another way, the object file produced by compiling this translation unit will not contain a global symbol for the inline definition. And any reference to the function that is not inlined will be to an extern (global) symbol provided by some other object file or library at link time.
For an extern inline function declared by a file scope declaration with the extern storage-class-specifier (i.e. the function definition and/or prototype), the compiler must provide a global definition of the function in the resulting object file. The compiler can choose to inline any references to that function seen in the translation unit where the function definition has been provided, or the compiler can choose to call the global function.
The behavior of any program that relies on whether or not a function call is actually inlined, is undefined.
Note also an inline function with external linkage may not declare or reference a static variable anywhere in the translation-unit.
Definition of translation-unit: A source file and all of its includes, recursively.
Like it does for static functions, the Sun C compiler decides if it is profitable to inline a reference to an inline definition or an extern inline function at -xO3 and above. When not profitable to inline, or at an optimization of less than -xO3 a reference to the global function will be generated. Likewise a reference to the address of the function is always a reference to the global function.
The rules for C++ differ: a function which is inline anywhere must be inline everywhere and must be defined identically in all the translation units that use it.
The GNU C rules differ and are described in the GNU C manual.
To obtain behavior from the Sun C compiler that is compatible with
gcc's implementation of extern inline functions for most programs, use
the -features=no%extinl flag. When this flag is specified the Sun
C compiler will treat the function as if it was declared as a static
inline function.
The one place this is not compatible will be when the address of the function is taken. With gcc this will be an address of a global function, and with Sun's compiler the local static definition address will be used.
Finding the canonical path to an executable
Below is a coding example of how an executable can determine the canonical path to itself on the file system. Compiler drivers like cc, CC, and f95 need to do this in order to exec the component executables that compile a program, such as the compiler front-end, an optimizer, and a linker.
% cat findself.c
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#ifndef MAXPATHLEN
#define MAXPATHLEN 1024
#endif
/* find_run_directory - find executable file in PATH
* PARAMETERS:
* cmd filename as typed by user
* cwd where to return working directory
* dir where to return program's directory
* run where to return final resolution name
* RETURNS:
* returns zero for success,
* -1 for error (with errno set properly).
*/
int
find_run_directory (char *cmd, char *cwd, char *dir, char **run)
{
    char *s;

    if (!cmd || !*cmd || !cwd || !dir) {
        errno = EINVAL; /* stupid arguments! */
        return -1;
    }
    if (*cwd != '/')
        if (getcwd (cwd, MAXPATHLEN - 1) == NULL)
            return -1; /* can't get working directory */
    if (strchr (cmd, '/') != NULL) {
        if (realpath(cmd, dir) == NULL) {
            int lerrno = errno;
            if (chdir(cwd) == 0)
                errno = lerrno;
            return -1;
        }
    } else {
#ifdef __linux__
        /* getexecname() not available on Linux; note that readlink()
         * does not NUL-terminate, so terminate the buffer by hand */
        ssize_t len = readlink("/proc/self/exe", dir, MAXPATHLEN - 1);
        if (len != -1)
            dir[len] = '\0';
        if (len == -1) {
#else
        if (realpath(getexecname(), dir) == NULL) {
#endif
            int lerrno = errno;
            if (chdir(cwd) == 0)
                errno = lerrno;
            return -1;
        }
    }
    s = strrchr (dir, '/');
    *s++ = 0;
    if (run) /* user wants resolution name */
        *run = s;
    return 0;
}

char current_working_directory[MAXPATHLEN];
char run_directory[MAXPATHLEN];
char * run_exec_name = NULL;

int
main(int argc, char **argv)
{
    if ( !find_run_directory (argv[0],
                              current_working_directory,
                              run_directory, &run_exec_name) ) {
        (void)printf("argv[0] = %s\n"
                     "cwd = %s\n"
                     "run_dir = %s\n"
                     "run_exec = %s\n",
                     argv[0],
                     current_working_directory,
                     run_directory,
                     run_exec_name);
    } else {
        (void) printf("%s\n", "Unable to find run directory.");
    }
    exit (0);
}
% cc findself.c -O -o prod/bin/findself
% ls
bin findself.c prod
% ls bin
findself
% ls -laF bin
total 6
drwxr-xr-x 2 me staff 512 Mar 5 10:37 ./
drwxr-xr-x 4 me staff 512 Mar 5 10:37 ../
lrwxrwxrwx 1 me staff 20 Mar 5 10:37 findself -> ../prod/bin/findself*
% ls -laF prod/bin
total 22
drwxr-xr-x 2 me staff 512 Mar 5 10:37 ./
drwxr-xr-x 3 me staff 512 Mar 5 10:37 ../
-rwxr-xr-x 1 me staff 8992 Mar 5 10:37 findself*
% bin/findself
argv[0] = bin/findself
cwd = /home/me/blog
run_dir = /home/me/blog/prod/bin
run_exec = findself
% prod/bin/findself
argv[0] = prod/bin/findself
cwd = /home/me/blog
run_dir = /home/me/blog/prod/bin
run_exec = findself
% setenv PATH /home/me/bin:${PATH}
% findself
argv[0] = findself
cwd = /home/me/blog
run_dir = /home/me/blog/prod/bin
run_exec = findself
%
( May 08 2006, 06:36:27 PM PDT ) Permalink Comments [0]
Special Math Functions Draft Technical Report
There were three major items to come out of the Berlin ISO C standard committee meeting which will move forward in the process of becoming Technical Reports.
Linux Technology Preview #3
While in Berlin at the ISO C standards meeting Sun announced:
Linux Compiler Technology Preview (TP) #3
The announcement went out to the Sun Studio Forum for Linux users.
TP #3 has some interesting features: | http://blogs.sun.com/dew/ | crawl-002 | refinedweb | 3,320 | 52.49 |
Dear all,
I am trying to use SerialPort with .NETMF 4.1 to build software for the FEZ Panda II.
What must I do?
Thank you very much,
Francesco
Hello,
After searching on MSDN, it seems that even the latest version of the SDK does not include the Microsoft.SPOT.Hardware.SerialPort namespace:
However, there is a class named SerialPort under System.IO.Ports; is that the class you are looking for?
By the way, for issues regarding the .NET Micro Framework, the .NET Micro Framework forum is more appropriate; the current forum is for .NET Framework Base Classes.
Regards.
Trying to compile some RL2 samples , missing IContext and IContextConfig
Hi there,
I've just started looking at RL2 - and having trouble compiling the samples I found on this site - these 2 lines are unresolved in all the samples I've seen - but not sure what im missing
import robotlegs.bender.framework.context.api.IContext;
import robotlegs.bender.framework.context.api.IContextConfig;
- I have only included 2.0.0b5.swc - do I need anything else?
Thanks
Comments are currently closed for this discussion. You can start a new one.
Keyboard shortcuts
Generic
Comment Form
You can use
Command ⌘ instead of
Control ^ on Mac
Support Staff 1 Posted by creynders on 30 Mar, 2013 10:30 AM
The API's changed a little:
2 Posted by mike on 30 Mar, 2013 10:28 PM
Many thanks!
creynders closed this discussion on 31 Mar, 2013 10:31 AM. | http://robotlegs.tenderapp.com/discussions/robotlegs-2/1233-trying-to-compile-some-rl2-samples-missing-icontext-and-icontextconfig | CC-MAIN-2019-22 | refinedweb | 146 | 63.8 |
* Yann Ylavic:
>> readdir is thread-safe. There used to be this idea that fdopendir
>> could be implemented like this:
>>
>> DIR *
>> fdopendir (int fd)
>> {
>> return (DIR *) fd;
>> }
>>
>> And readdir would use a static buffer for the directory entry (like
>> gethostbyname) instead of something which is allocated as part of the
>> opaque the DIR * object (similar to a FILE *). Such systems may
>> indeed have existed at one point, but I doubt that you can compile APR
>> on them. It's unlikely that these systems will support readdir_r
>> because it is much to recent an interface.
>
> Right, modern readdir()s seem to be thread-safe but with regard to
> different directories only, at least Linux' man page states:
> ."
It's been a bit of a struggle to get this right. People are really
concerned about the case where multiple threads read from the same
directory stream. But how often does that happen in practice?
>> For systems which use a buffer embedded in the DIR * object (and I'm
>> not aware of any which don't), readdir is as thread-safe as memcpy,
>> although some manual pages claim it is not. This is very likely an
>> editorial mistake because thread safety guarantees for functions which
>> operate on separate objects are still an evolving concept.
>
> Are you thinking of the above editorial?
I meant this part:
Preliminary: | MT-Unsafe race:dirstream | AS-Unsafe lock | AC-Unsafe
lock | See POSIX Safety Concepts.
<>
“MT-Unsafe race:dirstream” doesn't make much sense because we don't
have this as a category for memcpy because for some reason, it is
“obvious” for memcpy that it's only thread-safe if it is called for
completely separate arguments.
I think the Solaris manual also does not mark readdir as thread-safe,
implicitly suggesting to use readdir_r in multi-threaded programs.
But this suggestion isn't helpful on Solaris, either.
>> Just stop using readdir_r. I know that many people are invested in
>> that interface for various reasons, but sometimes, you just have to
>> delete pointless code and get on with it.
>
> I'm not sure we can use readdir() blindly though, what if multiple
> threads use it on the same dir?
Then the current implementation is already broken because apr_dir_read
does not perform any locking: The call to readdir_r can write to the
d_name buffer, and the reads of the d_name in apr_dir_read constitute
a data race.
If you mean “the same directory on the file system”: What counts is
the same DIR * object. If two objects iterate through the same
directory, this does not matter because each DIR * object is required
to keep its own iteration position. | http://mail-archives.apache.org/mod_mbox/apr-dev/201703.mbox/%3C87tw6hr0v0.fsf@mid.deneb.enyo.de%3E | CC-MAIN-2017-43 | refinedweb | 438 | 59.84 |
Building an Image Viewer
Computer vision is the technology that enables computers to achieve a high-level understanding of digital images and videos, rather than only treating them as bytes or pixels. It is widely used for scene reconstruction, event detection, video tracking, object recognition, 3D pose estimation, motion estimation, and image restoration.
OpenCV (open source computer vision) is a library that implements almost all computer vision methods and algorithms. Qt is a cross-platform application framework and widget toolkit for creating applications with graphical user interfaces that can run on all major desktop platforms, most embedded platforms, and even mobile platforms.
These two powerful libraries are used together by many developers to create professional software with a solid GUI in industries that benefit from computer vision technology. In this book, we will demonstrate how to build these types of functional application with Qt 5 and OpenCV 4, which has friendly graphical user interfaces and several functions associated with computer vision technology.
In this first chapter, we will start by building a simple GUI application for image viewing with Qt 5.
The following topics will be covered in this chapter as follows:
- Designing the user interface
- Reading and displaying images with Qt
- Zooming in and out of images
- Saving a copy of images in any supported format
- Responding to hotkeys in a Qt application
Technical requirements
Ensure that you at least have Qt version 5 installed and have some basic knowledge of C++ and Qt programming. A compatible C++ compiler is also required, that is, GCC 5 or later on Linux, Clang 7.0 or later on macOS, and MSVC 2015 or later on Microsoft Windows.
Since some pertinent basic knowledge is required as a prerequisite, the Qt installation and compiler environment setup are not included in this book. There are many books, online documents, or tutorials available (for example, GUI Programming with C++ and Qt5, by Lee Zhi Eng, as well as the official Qt library documentation) to help teach these basic configuration processes step by step; users can refer to these by themselves if necessary.
With all of these prerequisites in place, let's start the development of our first application—the simple image viewer.
All the code for this chapter can be found in our code repository at.
Designing the user interface
The first part of building an application is to define what the application will do. In this chapter, we will develop an image viewer app. The features it should have are as follows:
- Open an image from our hard disk
- Zoom in/out
- View the previous or next image within the same folder
- Save a copy of the current image as another file (with a different path or filename) in another format
There are many image viewer applications that we can follow, such as gThumb on Linux and the Preview app on macOS. However, our application will be simpler than those. As part of our preplanning, we used Pencil to draw a wireframe of the application prototype.
The following is a wireframe showing our application prototype:
As you can see in the preceding diagram, we have four areas in the main window: the MenuBar, the ToolBar, the Main Area, and the Status Bar.
The menu bar has two menu options on it—the File and View menus. Each menu will have its own set of actions. The File menu consists of the following three actions as follows:
- Open: This option opens an image from the hard disk.
- Save as: This option saves a copy of the current image as another file (with a different path or filename) in any supported format.
- Exit: This option exits the application.
The View menu consists of four actions as follows:
- Zoom in: This option zooms in to the image.
- Zoom out: This option zooms out of the image.
- Prev: This option opens the previous image in the current folder.
- Next: This option opens the next image in the current folder.
The toolbar consists of several buttons that can also be found in the menu options. We place them on the toolbar to give the users shortcuts to trigger these actions. So, it is necessary to include all frequently used actions, including the following:
- Open
- Zoom in
- Zoom out
- Previous image
- Next image
The main area is used to show the image that is opened by the application.
The status bar is used to show some information pertaining to the image that we are viewing, such as its path, dimensions, and its size in bytes.
You can find the source file for this design in our code repository on GitHub:. The file merely resides in the root directory of the repository, named WireFrames.epgz. Don't forget that it should be opened using the Pencil application.
Starting the project from scratch
In this section, we will build the image viewer application from scratch. No assumptions are made as to what integrated development environment (IDE) or editor you are using. We will just focus on the code itself and how to build the application using qmake in a Terminal.
First, let's create a new directory for our project, named ImageViewer. I use Linux and execute this in a Terminal, as follows:
$ pwd
/home/kdr2/Work/Books/Qt5-And-OpenCV4-Computer-Vision-Projects/Chapter-01
$ mkdir ImageViewer
$
Then, we create a C++ source file named main.cpp in that directory with the following content:
#include <QApplication>
#include <QMainWindow>
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
QMainWindow window;
window.setWindowTitle("ImageViewer");
window.show();
return app.exec();
}
This file will be the gateway to our application. In this file, we first include dedicated header files for a GUI-based Qt application provided by the Qt library. Then, we define the main function, as most C++ applications do. In the main function, we define an instance of the QApplication class, which represents our image viewer application while it is running, and an instance of QMainWindow, which will be the main UI window, and which we designed in the preceding section. After creating the QMainWindow instance, we call some methods of it: setWindowTitle to set the title of the window and show to let the window emerge. Finally, we call the exec method of the application instance to enter the main event loop of the Qt application. This will make the application wait until exit() is called, and then return the value that was set to exit().
Once the main.cpp file is saved in our project directory, we enter the directory in the Terminal and run qmake -project to generate the Qt project file, as follows:
$ cd ImageViewer/
$ ls
main.cpp
$ qmake -project
$ ls
ImageViewer.pro main.cpp
$
As you can see, a file named ImageViewer.pro is generated. This file contains
many directives and configurations of the Qt project and qmake will use this
ImageViewer.pro file to generate a makefile later. Let's examine that project file. Its content is listed in the following snippet after we omit all the comment lines that start with #, as follows:
TEMPLATE = app
TARGET = ImageViewer
INCLUDEPATH += .
DEFINES += QT_DEPRECATED_WARNINGS
SOURCES += main.cpp
Let's go through this line by line.
The first line, TEMPLATE = app, specifies app as the template to use when generating the project. Many other values are allowed here, for example, lib and subdirs. We are building an application that can be run directly, so the value app is the proper one for us. Using other values are beyond the scope of this chapter; you can refer to the qmake manual at yourself to explore them.
The second line, TARGET = ImageViewer, specifies the name of the executable for the application. So, we will get an executable file named ImageViewer once the project is built.
The remaining lines define several options for the compiler, such as the include path, macro definitions, and input source files. You can easily ascertain which line does what based on the variable names in these lines.
Now, let's build the project, run qmake -makefile to generate the makefile, and then run make to build the project, that is, compile the source to our target executable:
$ qmake -makefile
$ ls
ImageViewer.pro main.cpp Makefile
$ make
g++ -c -pipe -O2 -Wall -W -D_REENTRANT -fPIC -DQT_DEPRECATED_WARNINGS -DQT_NO_DEBUG -DQT_GUI_LIB -DQT_CORE_LIB -I. -I. -isystem /usr/include/x86_64-linux-gnu/qt5
main.cpp:1:10: fatal error: QApplication: No such file or directory
#include <QApplication>
^~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:395: main.o] Error 1
$
Oops! We get a big error. This is because, with effect from Qt version 5, all the native GUI features have been moved from the core module to a separate module, the widgets module. We should tell qmake that our application depends on that module by adding the line greaterThan(QT_MAJOR_VERSION, 4): QT += widgets to the project file. Following this modification, the content of ImageViewer.pro appears as follows:
TEMPLATE = app
TARGET = ImageViewer
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
INCLUDEPATH += .
DEFINES += QT_DEPRECATED_WARNINGS
SOURCES += main.cpp
Now, let's build the application again by issuing the qmake -makefile and make commands in the Terminal as follows:
$ qmake -makefile
$ make
g++ -c -pipe -O2 -Wall -W -D_REENTRANT -fPIC -DQT_DEPRECATED_WARNINGS -DQT_NO_DEBUG -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I. -I. -isystem /usr/include/x86_64-linux-gnu/qt5 -isystem /usr/include/x86_64-linux-gnu/qt5/QtWidgets
g++ -Wl,-O1 -o ImageViewer main.o -lQt5Widgets -lQt5Gui -lQt5Core -lGL -lpthread
$ ls
ImageViewer ImageViewer.pro main.cpp main.o Makefile
$
Hooray! We finally get the executable file, ImageViewer, in the project directory. Now, let's execute it and see what the window looks like:
As we can see, it's just a blank window. We will implement the full user interface as per our designed wireframe in the next section, Setting up the full user interface.
Setting up the full user interface
Let's proceed with the development. In the preceding section, we built a blank window and now we are going to add the menu bar, toolbar, image display component, and the status bar to the window.
First, instead of using the QMainWindow class, we will define a class ourselves, named MainWindow, which extends the QMainWindow class. Let's see its declaration in mainwindow.h:
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = nullptr);
~MainWindow();
private:
void initUI();
private:
QMenu *fileMenu;
QMenu *viewMenu;
QToolBar *fileToolBar;
QToolBar *viewToolBar;
QGraphicsScene *imageScene;
QGraphicsView *imageView;
QStatusBar *mainStatusBar;
QLabel *mainStatusLabel;
};
Everything is straightforward. Q_OBJECT is a crucial macro provided by the Qt library. If we want to declare a class that has customized signals and slots of its own, or that uses any other facility from the Qt meta-object system, we must incorporate this macro in our class declaration, or, more precisely, in the private section of our class, as we just did. The initUI method initializes all widgets that are declared in the private section. The imageScene and imageView widgets will be placed in the main area of the window to display images. The other widgets are self-explanatory given their types and names, so I will not say too much about them in order to keep the chapter concise.
Another key aspect is the implementation of the initUI method in mainwindow.cpp as follows:
void MainWindow::initUI()
{
this->resize(800, 600);
// setup menubar
fileMenu = menuBar()->addMenu("&File");
viewMenu = menuBar()->addMenu("&View");
// setup toolbar
fileToolBar = addToolBar("File");
viewToolBar = addToolBar("View");
// main area for image display
imageScene = new QGraphicsScene(this);
imageView = new QGraphicsView(imageScene);
setCentralWidget(imageView);
// setup status bar
mainStatusBar = statusBar();
mainStatusLabel = new QLabel(mainStatusBar);
mainStatusBar->addPermanentWidget(mainStatusLabel);
mainStatusLabel->setText("Image Information will be here!");
}
As you can see, at this stage, we don't create every item and button for the menu and toolbar; we just set up the main skeleton. In the preceding code, the imageScene variable is a QGraphicsScene instance. Such an instance is a container for 2D graphical items. According to its design, it only manages graphics items but doesn't have a visual appearance. In order to visualize it, we should create an instance of the QGraphicsView class with it, which is why the imageView variable is there. In our application, we use these two classes to display images.
After implementing all the methods of the MainWindow class, it's time to compile the sources. Before doing this, a number of changes need to be made to the ImageViewer.pro project file, as follows:
- We have just written a new source file, and it should be made known to qmake:
# in ImageViewer.pro
SOURCES += main.cpp mainwindow.cpp
- The header file, mainwindow.h, has a special macro, Q_OBJECT, which indicates that it has something that cannot be dealt with by a standard C++ preprocessor. That header file should be correctly handled by a Qt-provided preprocessor named moc, the Meta-Object Compiler, to generate a C++ source file that contains some code relating to the Qt meta-object system. So, we should tell qmake to check this header file by adding the following line to ImageViewer.pro:
HEADERS += mainwindow.h
OK. Now that all of that is complete, let's run qmake -makefile and make again, and then run the new executable. You should see the following window:
Well, so far so good. Now, let's go on to add the items that should appear in the menus as we intended. In Qt, each item in a menu is represented by an instance of QAction. Here, we take the action of opening a new image as an example. First, we declare a pointer to a QAction instance as a private member of our MainWindow class:
QAction *openAction;
Then, in the body of the initUI method, we create the action as a child widget of the main window by calling the new operator, and add it to the File menu as follows:
openAction = new QAction("&Open", this);
fileMenu->addAction(openAction);
Fortunately, buttons on the toolbar can also be represented by QAction, so we can add openAction directly to the file toolbar:
fileToolBar->addAction(openAction);
As mentioned previously, we have seven actions to create: open, save as, exit, zoom in, zoom out, previous image, and next image. All of them can be added in the same way as we added the open action. Also, given that many lines of code are required to add these actions, we can do a little refactoring of the code—create a new private method named createActions, insert all the code of the action into that method, and then call it in initUI.
All actions are now created in a separate method, createActions, after the refactoring. Let's compile the sources and see what the window looks like now:
Great! The window looks just like the wireframe we designed, and we can now expand the menu by clicking on the items on the menu bar!
Implementing the functions for the actions
In the previous section, we added several actions to the menu and toolbar. However, if we click on these actions, nothing happens. That's because we have not written any handler for them yet. Qt uses a signal and slot connection mechanism to establish the relationship between events and their handlers. When users perform an operation on a widget, a signal of that widget will be emitted. Then, Qt will ascertain whether there is any slot connected with that signal. The slot will be called if it is found. In this section, we will create slots for the actions we have created in the preceding sections and make connections between the signals of the actions to these slots respectively. Also, we will set up some hotkeys for frequently used actions.
The Exit action
Take Exit action as an example. If users click it from the File menu, a signal named triggered will be emitted. So, let's connect this signal to a slot of our application instance in the MainWindow class's member function, createActions:
connect(exitAction, SIGNAL(triggered(bool)), QApplication::instance(), SLOT(quit()));
The connect method takes four parameters: the signal sender, the signal, the receiver, and the slot. Once the connection is made, the slot on the receiver will be called as soon as the signal of the sender is emitted. Here, we connect the triggered signal of the Exit action with the quit slot of the application instance to enable the application to exit when we click on the Exit action.
Now, to compile and run, click the Exit item from the File menu. The application will exit as we expect if everything goes well.
Opening an image
The quit slot of QApplication is provided by Qt, but if we want to open an image when clicking on the open action, which slot should we use? In this scenario, there's no slot built-in for this kind of customized task. We should write a slot on our own.
To write a slot, first we should declare a function in the body of the class, MainWindow, and place it in a slots section. As this function is not used by other classes, we put it in a private slots section, as follows:
private slots:
void openImage();
Then, we give this slot (also a member function) a simple definition for testing:
void MainWindow::openImage()
{
qDebug() << "slot openImage is called.";
}
Now, we connect the triggered signal of the open action to the openImage slot of the main window in the body of the createActions method:
connect(openAction, SIGNAL(triggered(bool)), this, SLOT(openImage()));
Now, let's compile and run it again. Click the Open item from the File menu, or the Open button on the toolbar, and the message "slot openImage is called." will be printed in the Terminal.
We now have a testing slot that works well with the open action. Let's change its body, as shown in the following code, to implement the function of opening an image from disk:
QFileDialog dialog(this);
dialog.setWindowTitle("Open Image");
dialog.setFileMode(QFileDialog::ExistingFile);
dialog.setNameFilter(tr("Images (*.png *.bmp *.jpg)"));
QStringList filePaths;
if (dialog.exec()) {
filePaths = dialog.selectedFiles();
showImage(filePaths.at(0));
}
Let's go through this code block line by line. In the first line, we create an instance of QFileDialog, whose name is dialog. Then, we set many properties of the dialog. This dialog is used to select an image file locally from the disk, so we set its title as Open Image, and set its file mode to QFileDialog::ExistingFile to make sure that it can only select one existing file, rather than many files or a file that doesn't exist. The name filter Images (*.png *.bmp *.jpg) ensures that only files with the extension mentioned (that is, .png, .bmp, and .jpg) can be selected. After these settings, we call the exec method of dialog to open it. This appears as follows:
If the user selects a file and clicks the Open button, a non-zero value will be returned by dialog.exec. Then, we call dialog.selectedFiles to get the path of the files that are selected as an instance of QStringList. Here, only one selection is allowed; hence, there's only one element in the resulting list: the path of the image that we want to open. So, we call the showImage method of our MainWindow class with the only element to display the image. If the user clicks the Cancel button, a zero value will be returned by the exec method, and we can just ignore that branch because that means the user has given up on opening an image.
The showImage method is another private member function we just added to the MainWindow class. It is implemented as follows:
void MainWindow::showImage(QString path)
{
imageScene->clear();
imageView->resetMatrix();
QPixmap image(path);
imageScene->addPixmap(image);
imageScene->update();
imageView->setSceneRect(image.rect());
QString status = QString("%1, %2x%3, %4 Bytes").arg(path).arg(image.width())
.arg(image.height()).arg(QFile(path).size());
mainStatusLabel->setText(status);
}
In the process of displaying the image, we add the image to imageScene and then update the scene. Afterward, the scene is visualized by imageView. Given the possibility that there is already an image opened by our application when we open and display another one, we should remove the old image, and reset any transformation (for example, scaling or rotating) of the view before showing the new one. This work is done in the first two lines. After this, we construct a new instance of QPixmap with the file path we selected, and then we add it to the scene and update the scene. Next, we call setSceneRect on imageView to tell it the new extent of the scene—it is the same size as the image.
At this point, we have shown the target image in its original size in the center of the main area. The last thing to do is display the information pertaining to the image on the status bar. We construct a string containing its path, dimensions, and size in bytes, and then set it as the text of mainStatusLabel, which had been added to the status bar.
Let's see how this image appears when it's opened:
Not bad! The application now looks like a genuine image viewer, so let's go on to implement all of its intended features.
Zooming in and out
OK. We have successfully displayed the image. Now, let's scale it. Here, we take zooming in as an example. With the experience from the preceding actions, we should have a clear idea as to how to do that. First, we declare a private slot, which is named zoomIn, and give its implementation as shown in the following code:
void MainWindow::zoomIn()
{
imageView->scale(1.2, 1.2);
}
Easy, right? Just call the scale method of imageView with a scale rate for the width and a scale rate for the height. Then, we connect the triggered signal of zoomInAction to this slot in the createActions method of the MainWindow class:
connect(zoomInAction, SIGNAL(triggered(bool)), this, SLOT(zoomIn()));
Compile and run the application, open an image with it, and click on the Zoom in button on the toolbar. You will find that the image enlarges to 120% of its current size on each click.
Zooming out just entails scaling the imageView with a rate of less than 1.0. Please try to implement it by yourself. If you find it difficult, you can refer to our code repository on GitHub ().
With our application, we can now open an image and scale it for viewing. Next, we will implement the function of the saveAsAction action.
Saving a copy
Let's look back at the showImage method of MainWindow. In that method, we created an instance of QPixmap from the image and then added it to imageScene by calling imageScene->addPixmap. We didn't hold any handler of the image out of that function; hence, now we don't have a convenient way to get the QPixmap instance in the new slot, which we will implement for saveAsAction.
To solve this, we add a new private member field, QGraphicsPixmapItem *currentImage, to MainWindow to hold the return value of imageScene->addPixmap and initialize it with nullptr in the constructor of MainWindow. Then, we find the line of code in the body of MainWindow::showImage:
imageScene->addPixmap(image);
To save the returned value, we replace this line with the following one:
currentImage = imageScene->addPixmap(image);
Now, we are ready to create a new slot for saveAsAction. The declaration in the private slot section is straightforward, as follows:
void saveAs();
The definition is also straightforward:
void MainWindow::saveAs()
{
if (currentImage == nullptr) {
QMessageBox::information(this, "Information", "Nothing to save.");
return;
}
QFileDialog dialog(this);
dialog.setWindowTitle("Save Image As ...");
dialog.setFileMode(QFileDialog::AnyFile);
dialog.setAcceptMode(QFileDialog::AcceptSave);
dialog.setNameFilter(tr("Images (*.png *.bmp *.jpg)"));
QStringList fileNames;
if (dialog.exec()) {
fileNames = dialog.selectedFiles();
if(QRegExp(".+\\.(png|bmp|jpg)").exactMatch(fileNames.at(0))) {
currentImage->pixmap().save(fileNames.at(0));
} else {
QMessageBox::information(this, "Information", "Save error: bad format or filename.");
}
}
}
First, we check whether currentImage is nullptr. If true, it means we haven't opened any image yet. So, we open a QMessageBox to tell the user there's nothing to save. Otherwise, we create a QFileDialog, set the relevant properties for it, and open it by calling its exec method. If the user gives the dialog a filename and clicks the open button on it, we will get a list of file paths that have only one element in it as our last usage of QFileDialog. Then, we check whether the file path ends with the extensions we support using a regexp matching. If everything goes well, we get the QPixmap instance of the current image from currentImage->pixmap() and save it to the specified path. Once the slot is ready, we connect it to the signal in createActions:
connect(saveAsAction, SIGNAL(triggered(bool)), this, SLOT(saveAs()));
To test this feature, we can open a PNG image and save it as a JPG image by giving a filename that ends with .jpg in the Save Image As... file dialog. Then, we open the new JPG image we just saved, using another image view application to check whether the image has been correctly saved.
Navigating in the folder
Now that we have completed all of the actions in relation to a single image, let's go further and navigate all the images that reside in the directory in which the current image resides, that is, prevAction and nextAction.
To know what constitutes the previous or next image, we should be aware of two things as follows:
- Which is the current one
- The order in which we count them
So, first we add a new member field, QString currentImagePath, to the MainWindow class to save the path of the current image. Then, we save the image's path while showing it in showImage by adding the following line to the method:
currentImagePath = path;
Then, we decide to count the images in alphabetical order according to their names. With these two pieces of information, we can now determine which is the previous or next image. Let's see how we define the slot for prevAction:
void MainWindow::prevImage()
{
QFileInfo current(currentImagePath);
QDir dir = current.absoluteDir();
QStringList nameFilters;
nameFilters << "*.png" << "*.bmp" << "*.jpg";
QStringList fileNames = dir.entryList(nameFilters, QDir::Files, QDir::Name);
int idx = fileNames.indexOf(QRegExp(QRegExp::escape(current.fileName())));
if(idx > 0) {
showImage(dir.absoluteFilePath(fileNames.at(idx - 1)));
} else {
QMessageBox::information(this, "Information", "Current image is the first one.");
}
}
First, we get the directory in which the current image resides as an instance of QDir, and then we list the directory with name filters to ensure that only PNG, BMP, and JPG files are returned. While listing the directory, we use QDir::Name as the third argument to make sure the returned list is sorted by filename in alphabetical order. Since the current image we are viewing is also in this directory, its filename must be in the filename list. We find its index by calling indexOf on the list with a regexp, which is generated by QRegExp::escape, so that it can exactly match its filename. If the index is zero, this means the current image is the first one in this directory. A message box pops up to give the user this information. Otherwise, we show the image whose filename is at the position of index - 1 to complete the operation.
Before you test whether prevAction works, don't forget to connect the signal and the slot by adding the following line to the body of the createActions method:
connect(prevAction, SIGNAL(triggered(bool)), this, SLOT(prevImage()));
Well, it's not too hard, so attempt the work of nextAction yourself or just read the code for it in our code repository on GitHub.
Responding to hotkeys
At this point, almost all of the features are implemented as we intended. Now, let's add some hotkeys for frequently used actions to make our application much easier to use.
You may have noticed that, when we create the actions, we occasionally add a strange & to their text, such as &File and E&xit. Actually, this is a way of setting shortcuts in Qt. In certain Qt widgets, using & in front of a character will automatically create a mnemonic (a shortcut) for that character. Hence, in our application, if you press Alt + F, the File menu will be triggered, and while the File menu is expanded, we can see the Exit action on it. At this time, you press Alt + X, and the Exit action will be triggered to let the application exit.
Now, let's give the most frequently used actions some single key shortcuts to make using them more convenient and faster as follows:
- Plus (+) or equal (=) for zooming in
- Minus (-) or underscore (_) for zooming out
- Up or left for the previous image
- Down or right for the next image
To achieve this, we add a new private method named setupShortcuts in the MainWindow class and implement it as follows:
void MainWindow::setupShortcuts()
{
QList<QKeySequence> shortcuts;
shortcuts << Qt::Key_Plus << Qt::Key_Equal;
zoomInAction->setShortcuts(shortcuts);
shortcuts.clear();
shortcuts << Qt::Key_Minus << Qt::Key_Underscore;
zoomOutAction->setShortcuts(shortcuts);
shortcuts.clear();
shortcuts << Qt::Key_Up << Qt::Key_Left;
prevAction->setShortcuts(shortcuts);
shortcuts.clear();
shortcuts << Qt::Key_Down << Qt::Key_Right;
nextAction->setShortcuts(shortcuts);
}
To support multiple shortcuts for one action, for example, + and = for zooming in, for each action we make an empty QList of QKeySequence, and then add each shortcut key sequence to the list. In Qt, QKeySequence encapsulates a key sequence as used by shortcuts. Because QKeySequence has a non-explicit constructor with int arguments, we can add Qt::Key values directly to the list and they will be converted to instances of QKeySequence implicitly. After the list is filled, we call the setShortcuts method on each action with the filled list, and this way setting shortcuts will be easier.
Add the setupShortcuts() method call at the end of the body of the createActions method, then compile and run; now you can test the shortcuts in your application and they should work well.
Summary
In this chapter, we used Qt to build a desktop application for image viewing, from scratch. We learned how to design the user interface, create a Qt project from scratch, build the user interface, open and display images, respond to hotkeys, and save a copy of images.
In the next chapter, we will add more actions to the application to allow the user to edit the image with the functions provided by OpenCV. Also, we will add these editing actions in a more flexible way by using the Qt plugin mechanism.
Questions
Try these questions to test your knowledge of this chapter:
- We use a message box to tell users that they are already viewing the first or last image while they are trying to see the previous one before the first image, or the next one after the last image. But there is another way to deal with this situation—disable prevAction when users are viewing the first image, and disable nextAction when users are viewing the last image. How is this implemented?
- Our menu items or tool buttons only contain text. How could we add an icon image to them?
- We use QGraphicsView.scale to zoom in or out of an image view, but how do you rotate an image view?
- What does moc do? What actions do the SIGNAL and SLOT macros perform? | https://www.packtpub.com/product/qt-5-and-opencv-4-computer-vision-projects/9781789532586 | CC-MAIN-2020-40 | refinedweb | 5,252 | 60.95 |
Subject: [boost] [fixed_point] Request for interest in a binary fixed point library
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2012-04-04 17:33:15
Hi,
the recent discussion on the MultiplePrecission Arithmetic library has
show that some people has its ow fixed point library.
Is there an interest in a Boost library having as base?
I have started a prototype (there is no doc yet
but the basic goal is quite close to the N3352 C++ proposal). This
prototype should allow you to play a little bit with.
If yes, what would you like to be changed, added or improved in n3352?
Next follows some design decisions that IMO need to be decided before hand.
* Should integers and reals be represented by separated classes?
* Should signed and unsigned be represented by separated classes?
* Should the library use a specific representation for signed numbers
(separated sign, 2-complement? Let the user choose?
* Should the library provide arbitrary range and resolution and allocators?
* Should the library be open to overflow and rounding or just implement
some of the possible policies? and in this case which ones?
* Should fixed_point be convertible to/from integer/float/double?
* Could the result of an arithmetic operation have more range and
resolution than his arguments?
* Is there a need for a specific I/O?
* is there a need for radix other than 2 (binary)?
* Should the library implement the basic functions, or should it
imperatively implement the C++11 math functions? Could a first version
just forward to the c++11 math functions?
* Should the library support just one of the know ways to name a
fixed-point, a U(a,b), nQm, ...? Provide some ways to move from one to
another?
* Could expect the same/better performances respect to hand written code?
* What should be the size used by a fixed_point instance? _fast? _least?
Should the user be able to decide which approach is better for his needs?
* Which should be the namespace? boost? boost/fixed_point?
boost/binary_fixed_point? boost/bfp?
...
* others you can think of.
Please, replay if you are interested, have some experience in the
domain, good ideas, ..
Best regards,
Vicente
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | http://lists.boost.org/Archives/boost/2012/04/191987.php | CC-MAIN-2014-42 | refinedweb | 381 | 68.97 |
What is MEF?
Managed Extensibility Framework (MEF) is a library in Silverlight 4 (Now, it's
part of Silverlight 4) for building rich internet applications which improves
the flexibility, maintainability and testability of large application.
MEF helps us to build modular based application. We can also load modules
dynamically. Why is MEF?
Really, I was fed up with these frameworks, when MEF has been released. Because,
it was not the matter of learning, but the matter of confusion. Which
framework should I use for modular programming? Which one should I use for
dynamically loading module/assembly? Should I use PRISM or MEF?
The Composite Application Library (PRISM) is basically a huge toolbox. Using
PRISM, a lot can be done. It supports for modularity, dynamic loading, region
management etc. But, According to me, PRISM is bit complicated. So, if I want to
do a modular program, then it is simply waste of time to go PRISM.
MEF can be used for modular, dynamically downloading programs without learning
tons of design concepts. Even PRISM v4 supports for MEF. Basic principles of MEF
There are three major parts.
So, now our parts are ready to be loaded. Now,
the actual work is to build the composition. This can be done using the CompositionInitializer Class. At runtime, the compositioninitializer builds a package and maps with the particular
import and export tag names given.
Consider a Silverlight example. Here, I would like to display my name using MEF.
Step 1 : Create a Silverlight 4
application. Step 2 : Create a class. Write a property and use export key over that.
So, here I want to display my name so, I am using EXPORT over firstname
property. (Here, property name may be anything. But, export key name should be
same for both Export and Import. public class AravindTest {
public string
Midlename
{
get {
return "Hi";
}
set {
}
}
[Export("MyName")]
public string
{
get {
return "Aravind";
}
set {
}
}
} Step 3 : In the main page, I am using AravindTest module to import. I
have created one property called Name with keyword Import("MyName").
Also, I have used CompositionInitializer to build a package. public partial class MainPage : UserControl {
[Import("MyName")]
public string
Name{get;set;}
public MainPage()
{
InitializeComponent();
CompositionInitializer.SatisfyImports(this);
MessageBox.Show(Name);
}
} Step 4 :
So, when I run the application, the result will be:
©2014
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/UploadFile/aravindbenator/mef-silverlight-implementation/ | CC-MAIN-2014-52 | refinedweb | 392 | 52.26 |
If the previous lesson, we wrote our first C# program. In that lesson, we mentioned that class names and method names are case-sensitive. But what is the difference between a class and a method? In object-oriented programming, we can consider that everything we work with is some kind of object. As developers, we want to be able to use and manipulate these objects. To define these objects, we categorize our code blocks into different components like Classes and Methods.
What is a Class in C#?
In C#, a Class can be considered as a template or blueprint for an object. It defines which properties and functions our object needs, so we can create the object and use it. Let’s consider an example where we write a program to represent a soccer team . We might create a Class called SoccerTeam. In this class, we would define fields that all soccer teams have – a name, a stadium, and number of wins, losses, and draws. Our pseudocode might look something like this:
Class SoccerTeam{ Field1 name; Field2 stadium; Field3 wins; Field4 losses; Field5 draws; }
What is a Method in C#?
A Method is a function inside our Class that contains a series of statements. Like the Class, a Method is its own code block, but Method code blocks are located inside their respective Class code block. They can define actions to be made on the Class.
For example, we might want to create functions that allow us to add match results to our team’s record or view the team’s statistics. These function definitions would go inside our SoccerTeam Class code block and are called Methods. Our pseudocode might now look something like this:
Class { Field1 name; Field2 stadium; Field3 wins; Field4 losses; Field5 draws; Method1 AddMatch(arguments) { Do something; } }
A Complete C# Class Example
Let’s put it all together by studying a real example based on our SoccerTeam class.
using System; namespace SoccerTeams { public class SoccerTeam { //Private fields private string name; private string stadium; //Private fields with initial values private int wins = 0; private int losses = 0; private int draws = 0; //Property that takes private fields and computes win ratio public double WinRatio { get { int matches = this.wins + this.losses + this.draws; return (double)this.wins / matches; } } //Method for adding matches to SoccerTeam public void AddResult (int goalsFor, int goalsAgainst) { if (goalsFor > goalsAgainst) { this.wins++; } else if (goalsFor == goalsAgainst) { this.draws++; } else { this.losses++; } } //Override ToString() class so we can get a nice visual of what our class contains public override string ToString() { return this.name + " plays at " + this.stadium + ": " + "W" + this.wins + " L" + this.losses + " D" + this.draws; } //Constructor which defines how the class is initialized //Takes two arguments (name and stadium) public SoccerTeam(string n, string s) { this.name = n; this.stadium = s; } } class Program { static void Main() { //Instantiate object SoccerTeam ocsc = new SoccerTeam("Orlando City SC", "Orlando City Stadium"); //Simulate some matches ocsc.AddResult(4, 2); //win ocsc.AddResult(2, 2); //draw ocsc.AddResult(1, 0); //win ocsc.AddResult(0, 1); //loss //Print record by calling WinRatio property. Console.WriteLine("Win Ratio: " + ocsc.WinRatio); //Print what is in our class Console.WriteLine(ocsc.ToString()); //Wait for user input Console.ReadLine(); } } }
In this example, we have defined a class called SoccerTeam. Within this Class, we have private Fields for the team’s name, stadium, and number of wins, losses, and draws (lines 7-14). We then define a Property called WinRatio (lines 16-24) that allows us to take these private Fields, make some computations, and use the result in another class. A Property is a mechanism that allows us to read, write, or compute values of private Fields.
Next, we created a Method for adding match results to our SoccerTeam (lines 27-39). This Method takes two arguments (goalsFor and goalsAgainst) and determines whether the match was a win, a loss, or a draw for our SoccerTeam.
Creating an Instance of our Class
Remember, the Class itself is only a template for what the Object contains. We don’t actually create the Object until we instantiate it in our Program class by using the new operator on Line 61. After this, we hard-code some match results and add them to our class by calling the .AddResult() method (lines 63-68).
Finally, we access the public WinRatio property and write it and some other interesting information to the console. When we run the program, we will see information about our Class Instance in the console.
Another C# Class vs Method Example
Remember in our first C# program, we used
Console.ReadLine(); and
Console.WriteLine('');. Console is actually a class defined in the .NET Core framework, and it consists of several methods and properties. ReadLine() and WriteLine() are two such methods of the Console class.
In fact, if you look closely at our first C# program example, you will see that all our code was written inside the Main method of our Program class. you were organizing your code into classes and methods without even realizing it!
The Bottom Line
In this tutorial, we have tried to answer your questions about Classes, Methods, and other common C# terms. You may not be able to write a full C# program yet, but hopefully now you have a better understanding of the difference between a Class and a Method. These concepts will become more clear as you continue to work through the tutorials. In the coming lessons, you can expect to discover more about data types, variables, and simple C# statements.
Do you have any questions about this C# Class vs Method example? Let me know in the comments. | https://wellsb.com/csharp/beginners/understanding-csharp-classes-and-methods/ | CC-MAIN-2020-16 | refinedweb | 942 | 65.73 |
This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
The missing library dialog should give you clearer options for how to resolve
the issue.
Some logical options would be:
1. Remove the library reference (my option for the case that prompted this
report, as the library reference was obsolete anyway)
2. Create a new library reference (avoids typos, and avoids us having to realize
that the library name is case-sensitive (not sure why it is case sensitive
either, really)
3. Manage Libraries - For those who want the current, expert only option
Reassigning to "projects" for evaluation.
May be duplicate.
I totally agree on this one. I just manned a booth at Boston NB Day and many developers have problems with library
references in a team environment. If a library reference is defined, we should show them what is in the library and let
them define the reference finding the necessary jars.
Ideally we should define a "shared" library reference that lives with the project. I know namespace other issues would
need to be addressed but this might also allow projects to be more portable.
For shared libs, see issue #44035, already prepared but did not make it into 6.0.
Shared libs are already implemented, right? Can we close this issue? Milosi?
jbecicka: not sure. While sharable libraries are there, there still can be unresolved library references.
I personally do not believe that shared libs are the only thing needed here. The dialog really should be quite a bit
more friendly.
reproducible with 6.7m3, don't see how to remove unresolved reference for example.
But it's nt necessary to replace reference with exactly the same jar, so may be #2 was fixed.
Not sure what #3 means, but dialog do not allow anything except resolve reference.
Bug prior to 7.0, not touched for the last 2 years --> P4.
IN NB 7.3 a API was created which allows modules to supply code and UI for resolving project problems (ProjectProblemsProvider). Now only the UI has to be designed.
I will try to address it into NB 7.3. | https://bz.apache.org/netbeans/show_bug.cgi?id=91036 | CC-MAIN-2020-16 | refinedweb | 370 | 67.76 |
Quoting Vasiliy Kulikov (segoon@openwall.com):> Actually, what concerns me is not ptrace, but symlink/hardling> protection. There is no interaction between namespaces in case of> containers via symlinks in the basic case. In case of ptrace I don't> think the child ns may weaken the parent ns - child ns may not access> processes of the parent namespace and everything it may ptrace is> already inside of this ns.Oh, yes. If you're saying the symlink protection shouldn't beper-pidns, I agree it seems an odd fit.How about a version of this patch leaving symlink protectionout of pidns (maybe in user ns), and just putting ptraceprotection per-pidns?-serge | https://lkml.org/lkml/2011/11/23/270 | CC-MAIN-2017-34 | refinedweb | 113 | 62.68 |
902/difference-between-action-and-actions-in-selenium
Action is an interface :
public interface Action
Action Interface represents a single user-interaction action.
and Actions is a Class that extends Object class
public class Actions
extends java.lang.Object
Use this class rather than using the Keyboard or Mouse directly..
Example:-
Let's Assume we want to Sent text in Caps in Text field. We need to Press SHIFT and then send Text and then we will release from SHIFT key.
Performing all the task at a time using Selenium API we will use Actions class and Action interface.
1) Actions actions = new Actions(webdriver object);
Since we need to perform all action one by one like this "actions.
2) keyDown(element, Keys.SHIFT) + sendKeys(“Text_In_UpperCase”) + keyUp(Keys.SHIFT)".
we can clubbed all action together as below.
3) actions.keyDown(element, Keys.SHIFT).sendKeys(“Text_In_UpperCase”).keyUp(Keys.SHIFT);
Now,We need to build this sequence using the build() method of Actions class and get the composite action.
4) Action action = actions.build();
Keep in mind that the build method always returns “Action type object” so we need to create reference of Action Interface and hold all builder's Actions.
And finally, perform the actions sequence using perform() method of Action Interface.
5) action.perform();
In short we can use:-
actions.keyDown(element, Keys.SHIFT).sendKeys(“Text_In_UpperCase”).keyUp(Keys.SHIFT).build().perform();
The simple difference is that, getClass() returns ...READ MORE
@Before and @After are called for every ...READ MORE
Hello, talking about the definition.
Assert: If the assert condition ...READ MORE
Both sleep() and setSpeed() are used
My upgrade to Internet Explorer 10 was ...READ MORE
For Selenium Standalone Server use this:
profile.setPreference("browser.helperApps.neverAsk.saveToDisk", "application/java-archive");
and ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/902/difference-between-action-and-actions-in-selenium | CC-MAIN-2020-34 | refinedweb | 301 | 51.75 |
You may have noticed that I occationally link to small video clip demonstrating my pygame display.
In this snippet I'll show how those videos are made.
As a teaser, take a look at this link (unfortunately MediaFire's servers are currently quite slow, so it may take several minutes to download – I migth migrate to another platform):
mod: alternatively:
The video is (surprise) made from a series of individual screen dumps.
The individual pictures are later collected and converted to a video file.
To generate the screen dumps and name them individually, I use a small module called 'video'.
Inside the module a Python generator takes care of the file name generation (sorry for the redundant expression):
import pygame pygame.init() def make_video(screen): _image_num = 0 while True: _image_num += 1 str_num = "000" + str(_image_num) file_name = "image" + str_num[-4:] + ".jpg" pygame.image.save(screen, file_name) print("In generator ", file_name) # delete, just for demonstration pygame.time.wait(1000) # delete, just for demonstration yieldWhen the generator is first initiated (see below), the name of the display (here: screen) is passed and a counter '_image_num' is initiated.
When the generator is later called from main (by use of next(), see below) it enters its endless loop creating file names in the format ”image0001.jpg” where the number part grows by one in each pass.
When the generator code gets to the 'yield' statement, it passes control back to the main code and only resumes upon a 'next()' command in the main.
Line 13 and 14 are only in for demonstration purpose, you should delete them before use in real code.
The main code looks like this:
from video import make_video import pygame pygame.init() screen = pygame.display.set_mode((400, 300)) screen.fill((200, 100, 50)) save_screen = make_video(screen) # initiate the video generator video = False # at start: video not active while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() # toggle video on/off by clicking 'v' on keyboard # elif event.type == pygame.KEYDOWN and event.key == pygame.K_v: video = not video # game code here # pygame.display.flip() if video: next(save_screen) # call the generator print("IN main") # delete, just for demonstrationIn line 1 the generator is imported.
In line 8 the generator is initiated, a local variable 'save_screen' now points to the generator; the generator gets the display name ('screen') as argument and initiates as described above.
In line 9, a local boolean is set to False; later we can toggle this between True and False and start/stop the video dump.
Line 11 is the game loop used in most games and recreational programs. I hope line 12-14 is known.
Line 17/18 toggles the 'video' between True and False by clicking on 'v' on the keyboard.
Line 24/25 calls the generator by use of the keyword 'next' with 'save_screen' as argument.
If you download the two code snippets, please remember to place then in the same directory and name the first one 'video'.
When you run the main code (and click on 'v'), you will get nice prints in the console demonstrating that the code flips between main and module. After running the code, you will find a number of image files in the directory, named 'image0001.jpg, … , image####.jpg'.
All you need to do now is convert the pictures into a video. There are many programs you can find on the internet to do this conversion, both free and paid. Some uses compression and some not. Some allows various kind of subsequent manipulation. I don't have an overview and am not an expert in this.
To do a small 10 seconds video, however, I use a free program called MakeAVI. It is very simple and needs no further intro.
To share the videos you need to upload them somewhere and make public a link to the location. Until now I have used MediaFire for this, but many sites offer this, Github, Sourceforge, Dropbox, .. You'll find you own way.
Please remember to share some good game moments with us here on DIC - or use this to demonstrate a problem you need help with. | http://www.dreamincode.net/forums/topic/400620-video-sequences-from-your-pygame-display/ | CC-MAIN-2018-09 | refinedweb | 691 | 72.76 |
This is the discussion place for DTMLCookbook = how to do simple tasks with DTML for a ZWiki :
This page is for the discussion of the development of the document: DTMLCookbook
PageMaintainers: FloK, DeanG
OpenQuestions? :
What's the difference betweeen:
&dtml-last_editor; and &dtml.url_quote-last_editor
FlorianKonnertz, 2002/12/16 14:24 GMT (via web):
Necessary changes when using a DynamicDNS? redirect:
- Add a DtmlDocument? or DtmlMethod? with the id
real_url(in my example). This must be the absolute URL and is different in each SubWiki.
- Change in all dtml-pages the following:
&dtml-page_url; to <dtml-var real_url>/&dtml.url_quote-id; &dtml.url_quote-sequence-key; to <dtml-var real_url>/&dtml.url_quote-sequence-key; &dtml-wiki_url; to <dtml-var real_url>
I did it for NooWiki:RecentChanges as far by now. See the modified code at
DeanGoodmanson, 2003/01/02 00:59 GMT (via web):
url_quote converts the string to URL friendly characters, just as html_quote does for HTML friendly chars...
Anyone know if there's an xml_quote ?
2003/01/03 07:01 GMT (via web):
I'd really like a version of th SearchPage? that worked in a NestedPage? (WikiInclusion?) This would essentially be a word search much like WikiBadges...
This would need to be formattable in no more than 1/3 hor/vert of a page. A similar example is AboutZwikiDiscussion, but extending the search to take an overridable text-box.
FlorianKonnertz, 2003/01/11 11:21 GMT (via web):
I want to provide a section in the header for SisterSite? links configurable via UserOptions?. My first start was with a dtml-var and cookie similar to the bookmarks, seperated with spaces. The user has to enter the RemoteWikiURL-s (later it would be great to get the URLs? from the RemoteWikiLink? pages) ie: zwiki.org ;
Now i need to split that string:
<dtml-let <dtml-var "ssl[0][0]"> <dtml-var "ssl[0][1]"> </dtml-let>
And then add the:
<a href="http://<dtml-var ... > blabla
to it. I'll post my progress later today. Do you see any problems or have better suggestions?
FlorianKonnertz, 2003/01/11 14:38 GMT (via web):
See AdditionalLinkPanel for a first solution.
FlorianKonnertz, 2003/01/11 15:00 GMT (via web):
How to display the raw code of a DtmlMethod? or DtmlDocument? by inclusion, ie the code of the current RecentChanges? on a RecentChangesCode? page?:
<dtml-var recentchangesdtml> -> <dtml-var "recentchangesdtml".raw> (WRONG!)
hmmm.... should be easy but i don't remember.
raw code :
<dtml-var "recentchanges.src()"> ???
This doesn't work, ehmm...:
<dtml-var "recentchangesdtml.src()"> (Object: recentchangesdtml.src()) (Info: recentchangesdtml) File , line 2, in f AttributeError: __call__
Ideas, please ?! - FloK,01-13
DeanGoodmanson, 2003/01/12 15:37 GMT (via web):
Splitting text
The [OldZwikiFAQPage]? has a good example of using split with a find to determine the index of the start of the searchd for text within a text block (ideally a page).
FlorianKonnertz, 2003/01/13 13:06 GMT (via web):
Thanks Dean, i saw there's similar code used in the LittleBlog? code in NooWiki:DebugWiki. - Very good.
FlorianKonnertz, 2003/01/14 11:26 GMT (via web):
How to wikilink an acquired DTMLMethod, ie. recentchangesdtml or useroptionsdtml? My approach:
<dtml-let <dtml-var "wikilink(dmethod)"> </dtml-let>
does not work. Does pageWithName only accepts WikiPage?-s objects or is it the syntax [i guess :-( ]? or both?
That's also not the right thing:
<dtml-let
Now the dtml is not executed anymore. Is there a method to do this manually
dtml(), similar to
wikilink()?
Todo: How to handle these expressions:
&dtml-page_url;
FlorianKonnertz, 2003/01/17 07:50 GMT (via web):
copied from IncludeOrTransclude: src() is a method of what object? What is the difference with document_src()?
FlorianKonnertz, 2003/01/23 15:18 GMT (via web):
Showing code of standardwikiheader
I'd like to show dynamicall the code of all the SystemPages?. It works with all pages but not with header and footer, why?:
<dtml-var "standard_wiki_header.document_src()" html_quote newline_to_br>
I get the following error:
(Object: standard_wiki_header.document_src()) (Info: standard_wiki_header) File , line 2, in f RuntimeError: function attributes not accessible in restricted mode
DeanGoodmanson, 2003/01/23 21:18 GMT (via web):
What's up?
At first I questioned to necesity for tihs page until I saw how clean to DTMLCookbook became. I've modified the description portions of the pages.
It seems that the OpenQuestions? on that page vs. this page is going to be interesting to maintain. I suggest putting it on it's own page ZWikiCookbookOpenQuestions? and including it.
p.s. OT: I really hate the obscure & hard to remember term "transclude", although technicaly it fits. I like SubPage? but think that's a term other wiki's use for page heirarchies. (like SubWiki's) What do you think of EmbeddedPage ??
FlorianKonnertz, 2003/01/24 22:00 GMT (via web):
Cooking progress
Dean: Yes, the pages are both much clearer now - one for discussion, questions and collaborating on solutions and one for reference. - An own Open-Questions page included sounds like a good idea, imo PageInclusion? is used too little (all the possibilites in ZWiki and dtml in general are hard to remember). I'd give it a try.
FlorianKonnertz, 2003/01/26 22:41 GMT (via web):
EmbeddedPage
Embedding dtml-containing pages in other pages by dtml: Sometimes it works, sometimes not. Further investigation needed. Example:
<dtml-in "aq_parent.objectItems(spec='ZWiki TestPage')"> <dtml-if sequence-end> <b><dtml-var sequence-number> pages </b> </dtml-if> </dtml-in>
cannot be included. But the following one can:
<dtml-in "debug.objectItems(spec='ZWiki TestPage')"><dtml-if sequence-end><dtml-var sequence-number> pages</dtml-if></dtml-in>
The
aq_parent object makes problems apparently. - Why?
Simon Michael, 2003/01/29 21:59 GMT (via mail):
EmbeddedPage
> The
aq_parent object makes problems apparently. - Why?
When you have DTML pages nested like that, aq_parent is probably the outer page not the folder. Zwiki's folder() method is a way to avoid that issue.
FlorianKonnertz, 2003/01/31 15:30 GMT (via web):
EmbeddedPage
Thanks,Simon. I did my first tests with folder() - seems to suit my needs.
FlorianKonnertz, 2003/01/31 15:35 GMT (via web):
dtml expr question
Why can i do:
<dtml-var "getPropertyType(x_sequence_key)">
but not:
<dtml-let </dtml-let>
I get:
File /usr/local/Zope-2.5.0-src/lib/python/DocumentTemplate/DT_In.py, line 695, in renderwob (Object: propertyItems) File /usr/local/Zope-2.5.0-src/lib/python/DocumentTemplate/DT_Let.py, line 76, in render (Object: ptype="getPropertyType(x_sequence_key)") KeyError
I looked for similar examples and it looks always like < dtml-let - How can this be done?
SimonMichael, 2003/01/31 18:37 GMT (via web):
Something's not right there. Are your double quotes getting quoted ? Set up this code snippet on TestPage if you like and I'll play with it.
FlorianKonnertz, 2003/01/31 19:43 GMT (via web):
dtml expr question
Don't know yet what it could be... :-( I started a PropertyEditFormTest? page. - And then....
But i need your help in the "external method imports ZwikiPage? to access general properties" issue. At the moment i use an external method:
def getPropertyMode(idin): """Returns the mode of a given property """ import Products.ZWiki.ZWikiPage
### here some code is missing
for prop in p: id=prop['id'] mode=prop['mode'] if (idin == id): res = prop['mode']
return res
if __name__ == '__main__': import sys print getPropertyMode(sys.argv[1])
It's not working fine yet. I don't know yet how to access the _properties attribute of the ZwikiPage? class. How is this be done?
FlorianKonnertz, 2003/01/31 19:52 GMT (via web):
dtml expr question
BTW: Currently i've put it hardcoded in the ExternalMethod.... - IMO it's better to add a function
getPropertyMode(id) in ZWikiPagePy? but as long NooWiki is on my friends server where i've no root permission (probably i move the next days, yep :-) i have to do it with an ExternalMethod. - Do you think i'm on the right path, Simon? - I hope so.
FlorianKonnertz, 2003/01/31 20:19 GMT (via web):
PropertyEditForm?
In the ZMI there already is a pretty property editform. Can't it be reused for our purpose? Even the
Add new property could be used. It is almost exactly what i (we) want, but i came to my mind but now... :-O
Where is this stuff located in the zope src?
FlorianKonnertz, 2003/01/31 20:25 GMT (via web):
PropertyEditForm?
Ok, i got it.::
Let's see if it can be used.
Discussion TestPage Auto-Creation --Ducker, 2003/02/19 10:50 GMT
I'm sure there are some better ways to do this, but I've upped the code for auto-creating discussion pages at DiscussionCreator
Discussion auto-creation --DeanGoodmanson, 2003/02/19 16:00 GMT
That's a great one, Ducker!
I'd suggest adding a "parameter" so that it can be put in a DTMLMethod and doc page authors can point the document a common discussion page. For instance, this link would be nice to add to all of the related recipe's, but most of those recipes can be discussed at this page.
If you feel like getting REALLLY fancy, the link would direct the person to that page with a variable for the
optional subject (the footer would have to be modified to accept a that parameter to fill in the form field.)
(Note: The "if value else set to default" seems to be a bit of a pattern that should be noted on this page soon.)
AutomaticContributionManagement? --FlorianKonnertz, 2003/02/20 00:28 GMT
with a variable for the optional subject - Good idea, Dean. - An example of your idea would be: If you have, let's say a topic page with ten subtopic pages and most of them have minor discussion traffic you might want to direct the contributions to these pages to the main topic's discussion page (with the page title as subject) or to a misc disc.page.
In a certain view it's kinda opposite of my Post this weblog entry also to the following wikipages approach (NooWiki:NooWikiWeblog). Sometimes it's useful to copy items to a few special topic pages and sometimes it's good to collect them on one page (or copies of it).
Another REALLY fancy :-) thing: Having a (user set) property for some important pages in one's wiki to post the entry also to a remote wiki's page (with same name or not). I do often ask the question: Should i write this idea first to my wiki or first here on zwikidotorg or both or...? - Also imagine an automatic added note "This comment is a copy from wiki so-and-so page xy" resp. "This comment was also posted to blahblahwiki:hereandnow". - RFC: Would you appreciate such a feature?
ZPT --DeanGoodmanson, 2003/02/20 01:37 GMT
FloK - Do you have any ZPT that sits in the text of a ZwikiPage? ?
I have only heard of ZPT in the context of header/footer support.
[ZPT]? --FlorianKonnertz, 2003/02/20 10:05 GMT
DeanG: No. I've done the header and footer up to now. What do you want to do? Anything special in mind? My first interest is of course the IsssueTracker? and the ZwikiWeblog? but i still wait for furhter comments/interest in creation of a new page type before just transforming these to ZPT.
ZPT --DeanGoodmanson, 2003/02/20 14:35 GMT
What do I want to do?
Make ZWiki more acceptable to the "dtml is bad" Zope crowd.
I'm interested in it as an alternative for the WebLogs? and Issue and Task tracker myself. (ZWikiRecordBook?)
ZPT version of IssueTracker --FlorianKonnertz, 2003/03/24 09:42 GMT
I'm coding a ZPT version of IssueTracker at the moment. Please tell me why i get an:
AttributeError: page_url\n</p>', 'error_value': <exceptions.AttributeError instance at 0xb52aea4>
on this line:
<a tal:
i got it --FlorianKonnertz, 2003/03/24 10:17 GMT
Ok, i got it! I was browsing the template directly instead of a wikipage embedding it via < dtml-var template > ... - Ooups!! ;-) I'm mortified. :-/
IssueTracker, ZPT, help needed --FlorianKonnertz, 2003/03/24 15:59 GMT
I need help with the loop over all the issues in IssueTracker, how to do it in ZPT. It seems as if one has to check for an request item in ZPT before asking for getattr, but i'm not sure.
IssueTracker, ZPT, help needed --SimonMichael, 2003/03/24 21:33 GMT
I'd guess something like:
<li tal: <span tal:</span>
etc. PageTemplate has some good docs links.
IssueTrackerZPT thanks --FlorianKonnertz, 2003/03/24 22:13 GMT
Thanks Simon, meanwhile i managed it. See IssueTrackerZPT. I kept the algorithm of the dmtl version (as far as possible). What do you think? BTW,When do you want to provide a ZPT version of ZWiki?
IssueTrackerZPT thanks --SimonMichael, 2003/03/25 01:08 GMT
I think it's nifty, and good to explore. It's weird to see my old crufty dtml code now in the pristine world of PageTemplates :). I find my wiki page version more useful though.
pathindex problem, Help needed urgently! --FlorianKonnertz, 2003/04/01 20:30 GMT
I need help with a ZCatalog problem: How to address the pathindex? (ZPT). It's for my IssueTrackerZPT version, i need to prevent querying the subwikis which a catalog normally does. Please see my mail in the zope list on 03-31: "Catalog: access index value in ZPT"
Why isn't it possible to do:
<span tal:</span>
(path is index and metadata)
PluginAPIDiscussion --DeanGoodmanson, 2003/04/02 00:41 GMT
Looking for enlightenment here, not so much direction.
I am in no position to start coding on that now, but I'd really like to know the background, thoughts and issues/gotcha's behind this potential development.
Is it possible to discuss development in a public arena without the expection of action? Although I've seen feature discussions turn out overwhelming and turned off projects, I'm willing to risk it, especially in this arena designed to capture knowlege. :-)
page creation / id question --FlorianKonnertz, 2003/04/18 08:20 GMT
I'd like to ask you why...
the creation of a new page with Id="ZWL20030418" (valid) is fine via ZPT:
<span tal: </span>
but fails via DTML:
<dtml-let <dtml-call "create(pageId,text='',REQUEST=REQUEST)"> </dtml-let> Bad Request: The id "ZWL20030418" is invalid--it is already in use.
(In ZWiki-0.17 of course)
What's going on? --FlorianKonnertz, 2003/04/18 09:58 GMT
These things are really strange: brings up the page creation form (page not existing) but in ZMI i can see the page. Then i call the page with the ZPT, i get the BadRequest? error, but then the page exists and be called via RecentChanges?. Then i delete it via the page management button and call the page with DTML and i'm directed to the "page-does-not-exist.-create-it?"-Form. - What's going on?
page creation / id question --FlorianKonnertz, 2003/04/18 10:25 GMT
New trial with new page name:
The ZPT page directs me to the URL in the parent wiki
So tried it again with everything in the main wiki: ZPT redirects to the parent folder, outside the wiki. Why?
page creation / id question --2003/04/20 05:22 GMT
Hi Simon, here's what i found out further by a careful examination of the
create page problem we debugged (dtml again). I did it on a new zope with only ZWiki installed and my Data.fs copied to it. - Both behave the same except for that the fresh install tells me on the NewIdTestPage? that creation succeeded, the old zope NOT. Both pages ARE created and can be seen in RC and ZMI as expected. So in fact there must be something wrong with my current zope. I don't know what to do now - I'll report again. Cheers,Flo
page creation / id question --2003/04/22 10:14 GMT
I have to know why pages are not listed on RecentChanges? even if they exist in ZMI. My testpage was created by
create so it has no size in ZMI - but other testpages also have no size and they're also listed in ZMI. I will look in the source now, but if anyone has an idea of this, please tell me. - TIA, Flo
page creation / id question --2003/04/22 10:35 GMT
This is a SubWiki issue. In a main wiki the ZMI and RecentChanges? are consistent.
dtml to list a page's children --DeanGoodmanson, 2003/05/13 13:55 GMT
A dtml snippet which returns a list of a pages children would be useful. It has been noted on PeopleIndex and GeneralDiscussion.
AllPages may be some help. Psuedocode:
childpages = [] foreach page in pages: if page.parents contains (this pages)id: childpages.add(page.title_or_id) #output foreach page in childpages print wikilink('['+page+']') + seperator
There's my 2 minute attempt to help...
dtml to list a page's children --simon, 2003/05/13 16:12 GMT
Thanks Dean. Here's what I use:
def offspring(self, REQUEST=None): """Return a presentation of all my offspring.""" def offspringAsList(self, REQUEST=None): """ Return all my offspring's page names as a flat list. def offspringIdsAsList(self, REQUEST=None): """ Return all my offspring's page ids as a flat list. def ancestorsAsList(self, REQUEST=None): """ Return the names of all my ancestor pages as a flat list, eldest first. If there are multiple lines of ancestry, return only the first.
These are in Parents.py.
DTML focus ? --simon, 2003/05/13 16:16 GMT
By the way.. I like the cookbook approach. Any interest in renaming these pages to connote DTML somehow ? DTMLCookbook, DTMLDiscussion?, ... ?
list a pages children. --DeanGoodmanson, 2003/05/13 18:30 GMT
The dtml statement was my oversight. I'm glad to see there's support for this stuff in zwiki code, and easily callable via dtml: <dtml-var "offspringAsList()">
I'm having trouble trying to get a particular format, would you mind showing me why the sequence-item doesn't work here? :
<dtml-in "offspringAsList()"> <dtml-var "wikilink('['+sequence-item+']')"> | </dtml-in>
DTML focus --DeanGoodmanson, 2003/05/13 18:34 GMT
The dtml statement was knee-jerk reaction. For me in-page=in dtml. I'm OK to spin off dtml items into those pages, especially now that ZwikiModifications are becoming more prevelant. :-) I'll refactor these pages soon with the names you suggested. I didn't write up how I added a field to the tracker, and perhaps seperating out the DTML stuff would make this kind of thing more inviting, and less annoying to the dtml !*@&s! crowds out there. Any thoughts regardng my last submission to PluginAPIDiscussion as a better way to seperate logic from content [than DTML]?
... --simon, 2003/05/13 19:08 GMT
I don't know that + notation. How about:
<dtml-in offspringAsList prefix=x> [<dtml-var x_sequence_item>] </dtml-in>
... --simon, 2003/05/13 19:14 GMT
Interesting.. a wikiname in brackets isn't getting linked right. Also those offspring methods return all children, grandchildren etc. so aren't the answer. I did this in dtml once, using the method you posted I think.
... --simon, 2003/05/13 19:17 GMT
No thoughts on PluginAPIDiscussion right now.
... --simon, 2003/05/13 19:20 GMT
Try this:
<dtml-let thispage=name> <dtml-in pages prefix=x> <dtml-if "thispage in x_sequence_item.parents"> <dtml-var "wikilink(x_sequence_item.name())"> </dtml-if> </dtml-in> </dtml-let>
... --simon, 2003/05/13 19:38 GMT
Oops, link the name, not the entire page text. So this works, but of course it's expensive in a large wiki. Parents.py stores parents and calculates children, so children are harder to work out. Using catalog would be one way to optimize. Here's another approach, I don't know if it makes a difference:
<dtml-let thispage=name> <dtml-in offspringAsList prefix=x> <dtml-if "thispage in pageWithName(x_sequence_item).parents"> <dtml-var "wikilink(x_sequence_item)"> </dtml-if> </dtml-in> </dtml-let>
How do I get last 3 discussions from all wiki pages ended with "discussion"? --2003/05/19 04:07 GMT
As title, which is similar to this site's FrontPage that get last three GeneralDiscussion comments.
How do I get last 3 discussions from all wiki pages ended with "discussion"? --SimonMichael, 2003/05/19 05:37 GMT
I'm not sure, but I'd like to have something similar for FrontPage (show last three comments made anywhere in the wiki). Getting the last three modified pages is easy enough, at least with a catalog. I don't think we can be sure whether the page was commented or edited last, without adding a special flag for this.
Zwiki in frames URL helper --DeanGoodmanson, 2003/07/30 04:22 GMT reply
I'm addicted to the FuzzyLinks? and end up browsing via URL/Address bar much more than I expected after using the wiki within a frame and losing that control.
Here's the snippet I put together to add a URL address bar to teh wiki header.
Form:
<form method="GET" action="redir" > <input name="new_url" size="40" value="<dtml-var URL>"><input type="submit" value="Go"> </form>
Supporting DTML method:
<dtml-if new_url> <dtml-call "RESPONSE.redirect(new_url)"> <dtml-else> <dtml-if HTTP_REFERER> <dtml-call "RESPONSE.redirect(HTTP_REFERER)"> <dtml-else> <dtml-call "RESPONSE.redirect(URL1)"> </dtml-if> </dtml-if>
DTML::
There's plenty of room for polish here, including url_quote, removing and re-adding domain name and protocol, etc.
I consider this a crude variation on the SearchPage?'s Jump functionality, which may be a better integratoin (a checkbox next to search form for something like google's "I'm feeling lucky" functionality, or a preceding left square bracket meaning "jump to page", or search . "Abc" = Search wiki for
abc, where "[ABC" = Search wiki for
abc page and jump ... not sure the implications of regex thinking at this moment, and there might be some "title only" vs. "title AND content" confusion.
How does one include the Sitemap on a page?
Including some different page like on DTMLCookbook works, but how do I include something like this:
<dtml-var "SELF/map(bare=1)">
if i "hard code" it, e.g. putting:
<dtml-var "FrontPage/map(bare=1)">
on Frontpage, it does not work (croaks on saving). Sure, putting a page include to "Self" on the page does croak, it creates a loop. But why does the /map go wrong?
sitemap on front page --DeanGoodmanson, Mon, 22 Sep 2003 11:53:05 -0700 reply
does something like this work?:
<dtml-var map>
how do i include the map on a page? --Simon Michael, Tue, 23 Sep 2003 00:20:58 -0700 reply
Thomas - what are you trying to do ? If you want to quote and display the dtml code, you must use a STX quoted paragraph (::) as Dean says. Or html-quote the angle brackets, or put a space after the left bracket.
sitemap on front page --ThomasSprinzing, Tue, 23 Sep 2003 02:17:46 -0700 reply
Dean:
< dtml-var map> works, but it includes the whole map page. So i get the page Header and logo on the result. Isnt`t there a way to get just the map? Simon: Sorry `bout that - sometimes the company proxy does strange things and delivers old unupdated pages to me. So editing is not at all straightforward...
sitemap on front page --Simon Michael, Wed, 24 Sep 2003 09:30:30 -0700 reply
Like on NavigationAids ? Yes. There's also dtml-var offspring and dtml-var subtopics (new), but these will only look under a single page (the current, or one you specify).
REQUEST, RESPONSE --DeanGoodmanson, Mon, 17 Nov 2003 11:11:38 -0800 reply
Remember: When adding a <dmtl-var REQUEST> and RESPONSE debug information to a page the redirect may no longer work.
Are there any forums about Zope and ZWiki? -- Wed, 29 Dec 2004 20:39:06 -0800 reply
Just wanting to know if Zope and ZWiki have a forum in which to join and interact with questions and answers using both. ''--JohnNaughton''
... --Bob McElrath?, Wed, 29 Dec 2004 20:46:40 -0800 reply
Yes, see AboutZwikiDiscussion | https://zwiki.org/FrontPage/DTMLCookbookDiscussion | CC-MAIN-2021-25 | refinedweb | 4,065 | 66.23 |
Storm 0.18
Milestone information
- Version:
- 0.18
- Released:
- 2010-10-25
- Registrant:
- Gary Poster
- Release registered:
- 2010-10-25
- Active:
- No. Drivers cannot target bugs and blueprints to this milestone.
Activities
- Assigned to you:
- No blueprints or bugs assigned to you.
- Assignees:
- 2 Free Ekanayaka, 1 Gavin Panella, 1 Gustavo Niemeyer, 1 James Henstridge, 1 Jamu Kakar, 1 Jeroen T. Vermeulen, 2 Thomas Herve
- Blueprints:
- No blueprints are targeted to this milestone.
- Bugs:
- 9 Fix Released
Download files for this release
Release notes
The Storm team is proud to announce Storm 0.18!
The new release includes a number of new features:
* Storm includes (optional) code to manage and migrate database schemas
* storm.zope.testing added testresources
(https:/
* TimeoutErrors include messages to describe why the Timeout was raised
This release includes official packages for all supported releases
of Ubuntu except 10.10. 10.10 packages will be added after problems with
Storm's release machinery are sorted out. The packages are available in the
Storm team's PPA:
https:/
You can find the release files at:
https:/
You can always get the latest source code from Launchpad:
Finally, you can join us in the #storm channel on irc.freenode.net
and on the Storm mailing list:
https:/
Read on for more...
Code to manage and migrate database schemas
-------
The new ``storm.schema`` package includes a generalized version of the code
used by the Landscape team for their schemas.
The ``Schema`` class can be used to ``create``, ``drop``, ``delete`` and
``upgrade`` database schemas. A ``Store`` may have a single schema. The
schema is defined by the series of SQL statements that should be used to
create, drop and clear the schema, respectively; and by a patch package used
to upgrade it.
A patch package is simply a Python package that contains files for each patch
level in the series. Each file must be named ``patch_N.py``, where ``N`` is
the numeric version of the patch in the series (using ascending natural
numbers). The patch files must define an ``apply`` callable taking a
``Store`` instance as its only argument. This will be called when the patch
gets applied.
Here's an example, where ``patch_package`` is a Python module
containing database patches used to upgrade the schema over time, and
``store`` is a Storm ``Store``:
>>> from storm.schema import Schema
>>> creates = ['CREATE TABLE person (id INTEGER, name TEXT)']
>>> drops = ['DROP TABLE person']
>>> deletes = ['DELETE FROM person']
>>> import patch_package
>>> schema = Schema(creates, drops, deletes, patch_package)
>>> schema.
While you can use the schema's ``create`` method separately, ``upgrade`` is
sufficient alone. It will create the schema if it does not exist, and
otherwise will run unapplyed patches to an existing schema. Note that this
approach therefore expects the "creates" SQL (that is, the second line of the
example above) to be maintained alongside patches--it should be *equivalent*
to running all patches.
storm.zope.testing added testresources support
-------
If you would like to use testresources
(https:/
storm.zope.
registered with ZStorm is now available. It can be used roughly like this::
from testresources import ResourcedTestCase
from storm.zope.testing import ZStormResourceM
from storm.schema import Schema
name = "test"
uri = "sqlite:"
schema = Schema(...)
manager = ZStormResourceM
class MyTest(
resources = [("zstorm", manager)]
def test_stuff(self):
store = self.zstorm.
Comparable expressions (such as Column and Alias) provide new
startswith(), endswith() and contains_string() methods. These
methods perform prefix, suffix and substring comparisons using LIKE.
Strings used with these methods are automatically escaped. | https://launchpad.net/storm/+milestone/0.18 | CC-MAIN-2016-44 | refinedweb | 573 | 64.61 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to reference a delivery order line to sales order line?
When we create a sales order, a delivery order is created automatically which reference to the sales order line. However, when we create a delivery order, there is no option to reference it back to the sales order line. How can this be done manually?
Hello,
After two about 20h struggle I could not find a nice way do to that.Still I have at least a temporary solution, that solves this problem.
I would really love to get feedback from you on this. Especially how to rewrite it, so that I don't use cr and manual trims. I know this is wrong, but I could get the result in other way.
Regards,
Pavel
class stock_pack_operation(osv.osv):
_inherit = "stock.pack.operation"
_columns = {
'x_sale_line_id': fields.many2one('sale.order.line', 'Custom Field: Reference to the sale.order.line', required=True)
}
def create(self, cr, uid, vals, context=None):
tmp = [vals['picking_id']]
cr.execute('SELECT origin FROM stock_picking WHERE stock_picking.id = %s', tmp)
origin = cr.fetchone()
origin = origin[0]
origin = int(origin[2:])
cr.execute('SELECT id FROM sale_order_line WHERE order_id = %s AND product_id=%s', (origin, vals['product_id']))
zsli = cr.fetchone()
zsli = zsli[0]
vals['x_sale_line_id'] = zsli
res_id = super(stock_pack_operation, self).create(cr, uid, vals, context=context)
return res_id
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
Hello, Did you manage to solve this issue? I have the same problem - in v7 I could refer to the SO Item, but in v8 this seems to be missing. Somehow I can't find a way to refer from stock.pack.operation to sale.order.line and I need that to generate proper delivery order report... In odoo-8.0/openerp/addons/sale/sale.py -> "def _prepare_order_line_procurement(self, cr, uid, order, line, group_id=False, context=None):" I see "'sale_line_id': line.id ", but there is no such field in the stock.pack.operation model. Why is that? I really don't understand why it worked in v7 and now the field is missing. Looking forward to your feedback! Regards, Pavel | https://www.odoo.com/forum/help-1/question/how-to-reference-a-delivery-order-line-to-sales-order-line-43537 | CC-MAIN-2017-43 | refinedweb | 394 | 52.36 |
pem 0.2.0
Parse and split PEM files painlessly.
pem: Easy PEM file parsing
pem is an MIT-licensed Python module for parsing and splitting of PEM files, i.e. Base64 encoded DER keys and certificates.
It runs on Python 2.6, 2.7, 3.3, and PyPy 2.0+, has no dependencies and does not attempt to interpret the certificate data in any way. pem is intended to ease the handling of PEM files in combination with PyOpenSSL and – by extension – Twisted.
It grew out of a personal need: servers handle chain certificates inconsistently. Some (like Apache) expect them in a separate file, while others (like nginx) expect them concatenated to the server certificate. Since I want my Python software to be universal and able to cope with both, pem was born.
The core API call is the function parse():
    import pem

    with open('cert.pem', 'rb') as f:
        certs = pem.parse(f.read())
The function returns a list of valid PEM objects found in the string supplied. Currently possible types are Certificate and RSAPrivateKey. Both can be transformed using str() into plain strings for other APIs. They don’t offer any other public API at the moment.
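Under the hood, this amounts to plain text splitting on the ``BEGIN``/``END`` markers without ever decoding the Base64 payload. A stdlib-only sketch of that idea (an illustration, not pem's actual implementation):

```python
import re

# A stdlib-only sketch of the splitting idea described above -- an
# illustration, not pem's actual implementation.  Each block is found
# by its BEGIN/END markers and classified by the marker name; the
# payload is never decoded or interpreted.
_PEM_RE = re.compile(
    r"-----BEGIN (RSA PRIVATE KEY|CERTIFICATE)-----\r?\n"
    r".+?"
    r"-----END \1-----\r?\n?",
    re.DOTALL,
)

def split_pem(text):
    """Return (type, full block) tuples for every PEM object in *text*."""
    return [(m.group(1), m.group(0)) for m in _PEM_RE.finditer(text)]

# Toy input; the payload lines are placeholders since only the markers matter.
data = (
    "-----BEGIN RSA PRIVATE KEY-----\nAAA\n-----END RSA PRIVATE KEY-----\n"
    "-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n"
)
print([kind for kind, _ in split_pem(data)])
# → ['RSA PRIVATE KEY', 'CERTIFICATE']
```

Because each returned block keeps its markers intact, handing the text straight to another API (as ``str()`` does for pem's objects) just works.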
Convenience
Since pem is mostly a convenience module, there are several helper functions.
Files
parse_file(file_name) reads the file file_name and parses its contents. So the following example is equivalent with the first one:
    import pem

    certs = pem.parse_file('cert.pem')
Twisted
A typical use case in Twisted with the APIs above would be:
    import pem

    from twisted.internet import ssl

    key = pem.parse_file('key.pem')[0]  # parse_file() returns a list
    cert, chain = pem.parse_file('cert_and_chain.pem')

    cert = ssl.PrivateCertificate.loadPEM(str(key) + str(cert))
    chainCert = ssl.Certificate.loadPEM(str(chain))

    ctxFactory = ssl.CertificateOptions(
        privateKey=cert.privateKey.original,
        certificate=cert.original,
        extraCertChain=[chainCert.original],
    )
Turns out, this is the major use case for me. Therefore it can be simplified to:
import pem ctxFactory = pem.certificateOptionsFromFiles( 'key.pem', 'cert_and_chain.pem', )
The first certificate found will be used as the server certificate, the rest is passed as the chain. You can pass as many PEM files as you like. Therefore you can distribute your key, certificate, and chain certificates over a arbitrary number of files. A ValueError is raised if more than one key, no key, or no certificate are found. Any further keyword arguments will be passed to CertificateOptions.
Ephemeral Diffie-Hellman support
Starting with version 14.0.0, Twisted will support ephemeral Diffie-Hellman ciphersuites; you can pass an instance of twisted.internet.ssl.DiffieHellmanParameters as the dhParameters keyword argument to CertificateOptions. Since pem just passes keyword arguments to CertificateOptions verbatim, that will just work.
However, pem is also forward compatible. Twisted 14.0.0 is not released yet, but pem lets you use the API described above anyway. You can just use pem.DiffieHellmanParameters: if your version of Twisted comes with that class, you just get the Twisted version; if it doesn’t, you get a version from pem.
Just pass instances of that class as dhParameters to certificateOptionsFromFiles, and pem will make it magically work:
import pem from twisted.python.filepath import FilePath path = FilePath("/path/to/the/dh/params") ctxFactory = pem.certificateOptionsFromFiles( 'key.pem', 'cert_and_chain.pem', dhParameters=pem.DiffieHellmanParameters.fromFile(path) )
Future
pem currently only supports the PyOpenSSL/Twisted combo because that’s what I’m using. I’d be more than happy to merge support for additional frameworks though!
History
0.2.0 (2014-03-13)
- Add forward-compatible support for DHE.
0.1.0 (2013-07-18)
- Initial release.
Credits
“pem” is written and maintained by Hynek Schlawack.
- Downloads (All Versions):
- 21 downloads in the last day
- 287 downloads in the last week
- 1067 downloads in the last month
- Author: Hynek Schlawack
- License: MIT
- Categories
- Development Status :: 5 - Production/Stable
-: hynek
- DOAP record: pem-0.2.0.xml | https://pypi.python.org/pypi/pem/0.2.0 | CC-MAIN-2015-48 | refinedweb | 648 | 50.94 |
Globalizing ASP.NET MVC Client Validation
One of my favorite features of ASP.NET MVC 2 is the support for client validation. I’ve covered a bit about validation in the following two posts:
- ASP.NET MVC 2 Custom Validation covers writing a custom client validator.
- Localizing ASP.NET MVC Validation covers localizing error messages.
However, one topic I haven’t covered is how validation works with globalization. A common example of this is when validating a number, the client validation should understand that users in the US enter periods as a decimal point, while users in Spain will use a comma.
For example, let’s assume I have a type with the
RangeAttribute
applied. In this case, I’m applying a range from 100 to 1000.
public class Product { [Range(100, 1000)] public int QuantityInStock { get; set; } public decimal Cost { get; set; } }
And in a strongly typed view, we have the following snippet.
<% Html.EnableClientValidation(); %> <% using (Html.BeginForm()) {%> <%: Html.LabelFor(model => model.QuantityInStock) %> <%: Html.TextBoxFor(model => model.QuantityInStock)%> <%: Html.ValidationMessageFor(model => model.QuantityInStock)%> <% } %>
Don’t forget to reference the necessary ASP.NET MVC scripts. I’ve done it in the master page.
<script src="/Scripts/MicrosoftAjax.debug.js"></script> <script src="/Scripts/MicrosoftMvcAjax.debug.js"></script> <script src="/Scripts/MicrosoftMvcValidation.debug.js"></script>
Now, when I visit the form, type in 1,000 into the text field, and hit the TAB key, I get the following behavior.
Note that there is no validation message because in the US, 1,000 == 1000 and is within the range. Now let’s see what happens when I type 1.000.
As we can see, that’s not within the range and we get an error message.
Fantastic! That’s exactly what I would expect, unless I was a Spaniard living in Spain (¡Hola mis amigos!).
In that case, I’d expect the opposite behavior. I’d expect 1,000 to be equivalent to 1 and thus not in the range, and I’d expect 1.000 to be 1000 and thus in the range, because in Spain (as in many European countries), the comma is the decimal separator.
Setting up Globalization for ASP.NET MVC 2
Well it turns out, we can make ASP.NET MVC support this. To demonstrate this, I’ll need to change my culture to es-ES. There are many blog posts that cover how to do this automatically based on the request culture. I’ll just set it in my Global.asax.cs file for demonstration purposes.
protected void Application_BeginRequest() { Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("es-ES"); }
The next step is to add a call to the
Ajax.GlobalizationScript helper
method in my Site.master.
<head runat="server"> <%: Ajax.GlobalizationScript() %> <script src="/Scripts/MicrosoftAjax.debug.js"> </script> <script src="/Scripts/MicrosoftMvcAjax.debug.js"> </script> <script src="/Scripts/MicrosoftMvcValidation.debug.js"> </script> </head>
What this will do is render a script tag pointing to a globalization script named according to the current locale and placed in scripts/globalization directory by convention. The idea is that you would place all the globalization scripts for each locale that you support in that directory. Here’s the output of that call.
<script src="~/Scripts/Globalization/es-ES.js"> </script>
As you can see, the script name is es-ES.js which matches the current locale that we set in Global.asax.cs. However, there’s something odd with that output. Do you see it? Notice that tilde in the src attribute? Uh oh! That there is a bona fide bug in ASP.NET MVC.
Not to worry though, there’s an easy workaround. Knowing how
discriminating our ASP.NET MVC developers are, we knew that people would
want to place these scripts in whatever directory they want. Thus we
added a global override via the
AjaxHelper.GlobalizationScriptPath
property.
Even better, these scripts are now available on the CDN as of this morning (thanks to Stephen and his team for getting this done!), so you can specify the CDN as the default location. Here’s what I have in my Global.asax.cs.
protected void Application_Start() { AjaxHelper.GlobalizationScriptPath = ""; AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); }
With that in place, everything now just works. Let’s try filling out the form again.
This time, 1,000 is not within the valid range because that’s equivalent to 1 in the es-ES locale.
Meanwhile, 1.000 is within the valid range as that’s equivalent to 1,000.
So what are these scripts?
They are simply a JavaScript serialization of all the info within a
CultureInfo object. So the information you can get on the server, you
can now get on the client with these scripts.
In Web Forms, these scripts are emitted automatically by serializing the culture at runtime. However this approach doesn’t work for ASP.NET MVC.
One reason is that the scripts themselves changed from ASP.NET 3.5 to ASP.NET 4. ASP.NET MVC is built against the ASP.NET 4 version of these scripts. But since MVC 2 runs on both ASP.NET 3.5 and ASP.NET 4, we couldn’t rely on the script manager to emit the scripts for us as that would break when running on ASP.NET 3.5 which would emit the older version of these scripts.
As usual, I have very simple sample you can download to see the feature in action.
37 responses | https://haacked.com/archive/2010/05/10/globalizing-mvc-validation.aspx/ | CC-MAIN-2020-29 | refinedweb | 903 | 60.61 |
j2me August 19, 2011 at 2:28 PM
Hi, how can add image in forms but using lick a button. does not using canvas in j2me for Symbian development ... View Questions/Answers
Please can I get the code for solution of producer consumer problem using semaphores? August 19, 2011 at 2:24 PM
Please can I get the code for solution of producer consumer problem using semaphores? ... View Questions/Answers
can interface solve this problem in java August 19, 2011 at 2:17 PM
I have a JDialog which displays the calendar [from 2010-2020], i created this in a different class. In this calendar each day from 1-31 is placed in corresponding cells of a 5X6 JTable. Now i need to get which day is clicked by the user from another class which calls this calander class. can inte... View Questions/Answers
four rectangle/image of the shape of the button.then Draw these on some x and y co-ordinated in j2me then how to draw ? August 19, 2011 at 12:32 PM
hi,? ... View Questions/Answers
image Processing August 19, 2011 at 12:05 PM
Please give Me a JPEG or GIf "LOSS LESS" Image Compression and Decompression Source Code Please Help Me I don't want links Kindly help me Compression ratio not matter....... ... View Questions/Answers
j2me August 19, 2011 at 12:01 PM
Hi, In my j2me application I have used canvas to display an image in fullscreen.In the image there are four points( rectangular areas ). Now I have to add events to these points. It looks like that those areas will be used as button. How can I select a area of an image and add an event to that a... View Questions/Answers
XSS attack August 19, 2011 at 11:58 AM
I. ... View Questions/Answers
GeneralBlock Memory Leak August 19, 2011 at 11:30 AM
Hi all, I'm testing my iPhone application using instrument test in XCode 4... but on every test it throws a GeneralBlock error. Can anyone explain what type of the "GeneralBlock" error is and when it occurs?? **generalblock-16 generalblock -24 generalblock-56 generalbl... View Questions/Answers
java in using brausing August 19, 2011 at 10:45 AM
sir, i am beginner java developer ,sir i am creat's the image viewer Jframe but not a created frame plz. halpe me give the image viewer Jframe code ... View Questions/Answers
image Processing August 19, 2011 at 10:28 AM
BCIF Image Compresssion Algorithm s alossless image Compression algorithm ,Pleas Help Me weather it can support only 24 bit bmp images? ... View Questions/Answers
Image Processing Java August 19, 2011 at 10:26 AM
Using This Code I Compressed A JPEG Image And the Original Size of the image is 257kb and The Compressed Image Size Is 27kb How Can I Decompress It Please Give Me The "SOURCE CODE" And Hee is my Source Code.... Please kindly Help Me? import java.awt.image.BufferedIma... View Questions/Answers
write a program in java. August 18, 2011 at 11:13 PM
arrange the natural n umber in 5x5 matrix as 21 22 23 24 25 20 7 8 9 10 19 6 1 2 11 18 5 4 3 12 17 16 15 14 13 i at centerd position and remaining arrange in anticlockwise direction. ... View Questions/Answers
asynchronous and synchronous collection August 18, 2011 at 5:04 PM
which are all interfaces or classes are synchronous and asynchronous in java? please send it in table or picture form. ...
returning doubles with 2 decimals JAVA August 18, 2011 at 4:46 PM
Hi all. I'm writing a program where users input the cost of an item (in the constructor), so for example, Item (string name, double cost, int qty) { ... } I have a method public double getCost() { ... } <... View Questions/Answers
Help me August 18, 2011 at 4:14 PM
Hi, LWUIT is working in eclipse j2me for Symbian OS? ... View Questions/Answers
message sending and receiving using UDP TCP in J2ME August 18, 2011 at 3:22 PM
I need the simple program for message sending and receiving using UDP TCP in J2ME. Could u pls? ... View Questions/Answers
j2me August 18, 2011 at 3:19 PM
hi, how to use LWUIT in emulator J2me for Symbian OS? and also how to install in emulator, did worked or not? ... View Questions/Answers
j2me August 18, 2011 at 3:04 PM
hi, i'm working emulator in j2me language, for Symbian OS (Nokia) development. then, what are the click events in this for developing the application.? Thank you. ... View Questions/Answers
j2me August 18, 2011 at 2:57 PM
hi, in j2me does any click events for developing Symbian.? and How to add click event for dynamically generated button in j2me on Symbian OS ... View Questions/Answers
J2ME August 18, 2011 at 2:53 PM
i am working in j2ME language on Symbian OS S60 (Nokia)application in this application i am adding buttons for designing in MIDlet application. so how to add button in dynamically. and how can place lancher icon also. ... View Questions/Answers
how to implements jdbc connections using awt August 18, 2011 at 2:17 PM
sir, My name is subbareddy vajrala.I want to implement small project on awt.so please give me your valuable information about how to implements jdbc connections in awt.please give me sample example awt with jdbc. Thanking you sir....
to display NEXT and previous option after 10 entries in java August 18, 2011 at 1:09 PM
As after jsp code we refer to java for connectivity and i want that directly only 10 entries will be open and next there will be pages. so what would i do for coding in java? ... View Questions/Answers
question August 18, 2011 at 12:51 PM
Dear Sir, How do i solve the following problems , i am using jdk1.6.0_22 and eclipse Helios , please help me ................ Access restriction: The type Player is not accessible due to restriction on required library /home/dieutek/jdk1.6.0_22/jre/lib/ext/jmf.jar Access r... View Questions/Answers
inherittance August 18, 2011 at 12:31 PM
Develop a class Borrower which has 5 important fileds, namely borrowerId, borrowerName, borrowerLastName, contactNumber and dateOfBirth. Your class must demonstrate good encapsulation where all fileds get manipulated through get and set methods. Save your file as Borrower.java Sample inpu... View Questions/Answers
check box condition August 18, 2011 at 12:21 PM
Hai, my application has two check box one for chart and another one for table.when We click chart check box download only chart but table also download.same problem in table slection..xsl coding was used in my application..please help me... ... View Questions/Answers
java August 18, 2011 at 12:00 PM
Write a program that print the following pattern: - 1 2 3 2 3 4 5 4 3 4 5 6 7 6 5 4 5 6 7 8 9 8 7 6 5 ... View Questions/Answers
merge the multilple jasperfiles in java August 18, 2011 at 10:05 AM
how to merge the multiple jasperfiles in java ... View Questions/Answers
help me plz:-Merge multiple jasper files into Single one August 18, 2011 at 9:40 AM
how to Merge multiple jasper files into Single one word doc ... View Questions/Answers
Merge multiple jasper file to one word Doc using java August 18, 2011 at 9:33 AM
how to Merge multiple jasper file to one word Doc using java ... View Questions/Answers
Task manager enable and disable thru java August 17, 2011 at 10:22 PM
I would like to know, how to enable and disable task manager using java. Kindly, please Let me know ... View Questions/Answers
Java Timer August 17, 2011 at 7:38 PM
Hai Sir, I want to know How to schedule a task on first date of every month in java by using Java timer.Please help me sir. thank you, Shyam ... View Questions/Answers
how to display data from jsp file into database August 17, 2011 at 4:58 PM
this is a jsp file . look at the line with code-> Statement st=con.createStatement(); in the below example. the error is "cannot convert from java.sql.Statement to com.mysql.jdbc.Statement please help me with it. thanks <%@ page language="java" contentType="text/html; charset=UTF-8" ... View Questions/Answers
hyperlink August 17, 2011 at 4:37 PM
I have a .xls file and i want the code to open this .xls file in read only mode when i click on a hyperlink,it should not not be editable,please send me the code ... View Questions/Answers
Hyperlink August 17, 2011 at 4:28 PM
The excel file is not opening in read only mode ,it is editable,actually i need to open it in read only mode by clicking on link.please help me ... View Questions/Answers
Can Login on Fedora, cannot on Windows- JSP/Servlet for login August 17, 2011 at 4:05 PM
I am using JSP and Servlet to perform login. When I host the website and test it works fine for clients on Fedora but on Windows (with IE, firefox, google chrome) clients cannot login to the website. How can I fix this? Please help.... ... View Questions/Answers
Displaying database values in pdf format August 17, 2011 at 3:20 PM
Hi All, I am developing a struts application.I am having one registration form when i am submitting the form the values are stored in database,the database name is registration. In another form i am having a button, by clicking that button the registration data... View Questions/Answers
Hyper link August 17, 2011 at 3:00 PM
how to open a document in read only mode by clicking on a hyperlink ... View Questions/Answers
how to display the database values in pdf format August 17, 2011 at 2:29 PM
in struts how to display the values in pdf format when clicking a button in jsp page ... View Questions/Answers
how to code-updating some details n some details r unchanged August 17, 2011 at 2:02 PM
i have created a page with empty text boxes,details are.... house no: streetname: cityname: pincode: contact : bloodgroup : my requirement is to update the details of user. and i had written the... View Questions/Answers
Table-chart selection August 17, 2011 at 1:43 PM
Hai, Our application has pdf download.The pdf file has chart and table..Now the problem is user choose only table option but table and chart download..i want table only..the coding was given below..please help me..... ... View Questions/Answers
Can someone help me with this? August 17, 2011 at 1:39 PM
I have this project and i dont know how to do it. Can someone help me? please? Write a java class named "PAMA" which define a main () method that performs simple arithmetic. (e.g. Add, Subtract, Multiply, Divide) Help me please! Thanks in advance! ... View Questions/Answers
question August 17, 2011 at 12:21 PM
I am doing a project using struts framework and spring jdbc .I have a issue that when data is inserted into database if we click the refresh button then the same data is again inserted in to the database.What is the solution of this problem? ... View Questions/Answers
J2ME August 17, 2011 at 12:04 PM
What is the source code for Mortgage Calculator. using text fields are home price,loan amount,down payments,down payment percent, Annual tax,annual Interest,interest rate,annual insurance,pay per monthly,terms in years,freq.payment then Commands are Interest rates,Rest,More these all are using ... View Questions/Answers
Help Me August 17, 2011 at 11:49 AM
What is the source code for Sample example for Mortgage Calculator in J2ME language for developing Symbian. ... View Questions/Answers
Java scroll pane-java GUI August 17, 2011 at 10:23 AM
Dear friends.. Very Good morning. I have a doubt in my gui application. Take ex:- My gui application has 1 Jscrollpane of height 600 and width 400. normally it is showing 200 height and 400 width.. JscrollPane consist of somany text field arranged... View Questions/Answers
event handling August 17, 2011 at 7:12 AM
diff btwn event handling in ASP.net and in HTML ... View Questions/Answers
java August 17, 2011 at 7:11 AM
program to send and receive datagram using datagram packet and datagram socket ... View Questions/Answers
Error here August 16, 2011 at 2:53 PM
<%@ taglib prefix="s" uri="/struts-tags" %> its giving me error as: org.apache.jasper.JasperException: File "/struts-tags" not found org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:50) org.apache.jasper.compiler.ErrorDispatcher.dispatch(... View Questions/Answers
J2ME August 16, 2011 at 1:57 PM
Hi, what is the source code for Mortgage Calculator in J2ME for Developing Symbian. ... View Questions/Answers
j2ME August 16, 2011 at 1:42 PM
give a sample example for using key listener in j2ME for developing Symbian. ... View Questions/Answers
j2me August 16, 2011 at 1:38 PM
how to use keylistener in j2m? ... View Questions/Answers
transpose August 16, 2011 at 1:03 PM
how to write transpose of a matrix a program.... ... View Questions/Answers
Help me plzz August 16, 2011 at 11:12 AM
Hello Roseindia.... I need ur help urgently... I am working on a struts but too beginner to work on it..... I need ur help I am making a registration form which contains user details and on submitting dat form...i shud redirect to some oder page with all details....for eg<... View Questions/Answers
j2me- how to subtract time s in j2me??? August 15, 2011 at 11:29 PM n... View Questions/Answers
array string August 15, 2011 at 6:56 PM
how to sort strings with out using any functions? ... View Questions/Answers
jdbc problem August 15, 2011 at 5:28 PM
hi my name is mohit...i am making a project in java swings....pls help me how to check that the username and password are correct...the user inputs the username and password and then i want to match the username and password with the already entered username and password in the database...if the ... View Questions/Answers
How to copy text from a gif image August 15, 2011 at 11:13 AM
I have some gif images that containing some important text. i want to have that text in my notepad file. i have some hundred's of gif images like that. so that i can't type the text in notepad. so, please tell me how can i copy that text to my notepad. i am using ubuntu as OS.<... View Questions/Answers
How to save excel sheet into mysql database using blob or clob August 15, 2011 at 8:40 AM, cr... View Questions/Answers
PLEASE HELP WITH MY JAVA August 14, 2011 at 10:54 PM
Hey my name is Gavin and im a student at school that takes IT. my teacher has gave me a problem and i can't figure it out please help!!!!!!!! it is a for-loop question: Display the first 5 multiples of 3 Add all 5 values Calculate the average of the 5 values Print the sum and av... View Questions/Answers
core java project by using databse August 14, 2011 at 8:31 PM
hello sir, i'm a b.tech final year student.... i wantto make a project on java with database... can u plzz suggest me how i can start for it.... means on which topic i can make project??? ... View Questions/Answers
struts tiles framework August 14, 2011 at 8:31 PM
how could i include tld files in my web application? ... View Questions/Answers
ojdbc12.jar August 14, 2011 at 1:07 PM
i need ojdbc12.jar for connect database to jsp where we able to download? ... View Questions/Answers
java August 14, 2011 at 8:58 AM
send me java interview questions? ... View Questions/Answers
gc() method August 14, 2011 at 12:33 AM
what is difference between java.lang.System class gc() method and java.lang.Runtime class gc() method ... View Questions/Answers
array August 13, 2011 at 10:59 PM
take a 2d array and display all its elements in a matrix fome using only one for loop and ple explain the program in below......... ... View Questions/Answers
transpose matrix August 13, 2011 at 10:55 PM
write a program in java to declare a square matrices 'A' or order n which is less than 20.allow in user to input only positive integers into the matrix and print the transpose of it. for this program u r given answer but if i entered 2 by 3 matrix it will not give answer ple check it once... ... View Questions/Answers
reply must August 13, 2011 at 2:33 PM
is it critical to do a software job based on games(java) i know core java & advanced java basics only please give me answer. ... View Questions/Answers
question August 13, 2011 at 12:01 PM
Sir, How to stream video on one computer which is playing on another PC in LAN using java + socket / RMI . if you have any idea about that please help me and give the source code ... View Questions/Answers
MS access August 13, 2011 at 12:04 AM
how do i access my Ms-Access file placed in the same jar file where my application code/class file r present??? Want to access it via Code. Can anyone help me ? Please give reply urgent... ... View Questions/Answers
How to Access MS Access in jar. August 13, 2011 at 12:02 AM
how do i access my Ms-Access file placed in the same jar file where my application code/class file r present??? Want to access it via Code or is their any alter-native?? Do i need any Driver to do this ... i m able to access a Ms-access via JDBC but cant find the file wen kept inside the same jar... View Questions/Answers
servlet problem August 12, 2011 at 9:52 PM
wheni m deploying an servlet application im getting trouble context [/filename] startup failed due to previous error in tomcat 6.0.. ... View Questions/Answers
Image Steganography August 12, 2011 at 8:51 PM
SOURCE CODE FOR IMAGE STEGANOGRAPHY? ... View Questions/Answers
Accessing non-static members through the main method in Java. August 12, 2011 at 8:40 PM...!!! ... View Questions/Answers
Inserting a value to an Enum field in Table August 12, 2011 at 7:32 PM
I'm writing a code that creates a user account and sends the result to the user table in a mysql database. For example, in the user table I have: username varchar(15) PRIMARY KEY, password varchar (10), is_Admin enum('Y','N'), In the Java co... View Questions/Answers
Finding duplicates using array in for loop August 12, 2011 at 4:50 PM
how to find the duplicates in array using for loop ... View Questions/Answers
how to create uiwebview programmatically August 12, 2011 at 4:13 PM
How to create uiwebview programmatically in iPhone application? ... View Questions/Answers
add plist file iphone sdk August 12, 2011 at 4:07 PM
how to add plist file in XCode? ... View Questions/Answers
add plist file iphone sdk August 12, 2011 at 4:07 PM
how to add plist file in XCode? ... View Questions/Answers
write data to plist August 12, 2011 at 4:00 PM
How to write data to plist file in XCode? ... View Questions/Answers
how to test iphone app on iphone without developer program August 12, 2011 at 3:33 PM
Is it possible to test the iPhone application on different device without the UDID Key? ... View Questions/Answers
question August 12, 2011 at 2:43 PM
Sir, Please send me Java Swing source code for video streaming , it's very urgent ... View Questions/Answers
difference between hashcode,reference in java August 12, 2011 at 2:35 PM
difference between hashcode,reference in java ... View Questions/Answers
Please can you help August 12, 2011 at 1:32 PM
I have a some code which uses a textarea comment box that will on post return the users text to the same page that they entered it on. However I wanted it to be able to check for a existing comment and create a new line rather than replacing it. Here is the code: <... View Questions/Answers
i want to copy files from one machine to other using java and the code will be running on third machine August 12, 2011 at 1:26 PM
i want to copy some files from one machine say 'A' to some other machine say 'B' by using the java program running on third machine say 'c'. So , can you help me on this that how can i do this using java . Thanks in advance. ... View Questions/Answers
Why is this code working August 12, 2011 at 5:44 AM
Looking at this piece of code: #include <stdio.h> #include <stdlib.h> #include <string.h> typedef struct _point{ int x; int y; } point; int main (void) { point *ptr; point obj; ptr = (point *)malloc(sizeof(char)); obj.x = 500; obj.y = 5... View Questions/Answers
use of package concepts August 12, 2011 at 1:27 AM
i m getting error when i use .* method to access all package files. when i use this through another method like packagename.classname, it works but in this method i will have to write all class names. i want to use packagename.* methhod. please solve the problem. i just start to learning java<... View Questions/Answers
how to read the values for text and csv files and store those values into database in multiple rows..means one value for one row August 11, 2011 at 7:53 PM
Hai, I need a program by using servlets.. the program is like this i upload a text and csv file.it was stored in perticular directory now i have to read the stored(.csv or .txt) file and get the values from that file and store them into database table in multiple rows(whic... View Questions/Answers
how to solve August 11, 2011 at 5:23 PM
log4j:WARN No appenders could be found for logger (org.apache.struts.util.PropertyMessageResources). log4j:WARN Please initialize the log4j system properly. ... View Questions/Answers
C Programming SubString Program August 11, 2011 at 4:13 PM
Sir I want to Check whether the Single Substring is present in the given 3 string. characters. eg if i entered First String- CPROGRAMMING Second String- CPROGRAM third String- PROGRAMMING if i entered to check PROGRAM is exists in given three strings then output will be TRUE. Plz Help Me... View Questions/Answers
PHP find online users August 11, 2011 at 4:06 PM
How to find the online users in PHP? ... View Questions/Answers
Detecting the file extension August 11, 2011 at 3:59 PM
How to detect the file extension in PHP? ... View Questions/Answers
How to open URL in iPhone SDK August 11, 2011 at 3:37 PM
In my iPhone application ..i wanted to open a url in the application. Is it possible? If yes how will i come back to my application? ... View Questions/Answers
property in javascript get set August 11, 2011 at 3:23 PM
How to create the get and set property in JavaScript? ... View Questions/Answers
anonymous class August 11, 2011 at 2:57 PM
what actually is an anonymous class and what is the use of it along with the simple example? if there is a video explaining it that will be fine ... View Questions/Answers
java compiler error August 11, 2011 at 12:54 PM
I am trying to compile a simple program which is as follows: public class A { public static void main(String args[]) { B b=new B(); } } Above program is in file A.java second program is in file B.java which is... View Questions/Answers
servlet redirect problem help needed August 11, 2011 at 12:52 PM
package; public class Ser exten... View Questions/Answers | http://www.roseindia.net/answers/questions/112 | CC-MAIN-2014-52 | refinedweb | 4,062 | 64 |
Connect to Sage US Data from AWS Glue Jobs via JDBC
Use the CData JDBC Driver hosted in Amazon S3 to connect to Sage US data from an AWS Glue job.
AWS Glue is Amazon's ETL service, which makes it easy to prepare data and load it for storage and analytics. Using the PySpark module together with AWS Glue, you can create jobs that process data over JDBC connections and load that data directly into AWS data stores. This article explains how to upload the CData JDBC Driver for Sage US to an Amazon S3 bucket, then create and run an AWS Glue job that extracts Sage US data and saves it to S3 as a CSV file.
Upload the CData JDBC Driver for Sage US to an Amazon S3 Bucket
In order to work with the CData JDBC Driver for Sage US in AWS Glue, you will need to store it (and any relevant license files) in a bucket in Amazon S3.
- Open the Amazon S3 Console.
- Select an existing bucket (or create a new one).
- Click Upload.
- Select the JAR file (cdata.jdbc.sage50us.jar) found in the lib directory in the installation location for the driver.
Configure the AWS Glue Job
- Navigate to ETL -> Jobs from the AWS Glue Console.
- Click Add Job to create a new Glue job.
- Fill in the Job properties:
- Name: Fill in a name for the job, for example: Sage50USGlueJob.
- IAM Role: Select (or create) an IAM role that has the AWSGlueServiceRole and AmazonS3FullAccess (because the JDBC Driver and destination are in an Amazon S3 bucket) permissions policies.
- Type: Select "Spark."
- This job runs: Select "A new script to be authored by you".
Populate the script properties:
- Script file name: A name for the script file, for example: GlueSage50USJDBC
- S3 path where the script is stored: Fill in or browse to an S3 bucket.
- Temporary directory: Fill in or browse to an S3 bucket.
- ETL language: Select "Python."
- Expand Security configuration, script libraries and job parameters (optional). For Dependent jars path, fill in or browse to the S3 bucket where you loaded the JAR file. Be sure to include the name of the JAR file itself in the path, i.e.: s3://mybucket/cdata.jdbc.sage50us.jar
- Click Next.Here you will have the option to add connection to other AWS endpoints, so if your Destination is Redshift, MySQL, etc, you can create and use connections to those data sources.
- Click "Save job and edit script" to create the job.
- In the editor that opens, write a python script for the job.You can use the sample script (see below) as an example.
サンプルGlue スクリプト
To connect to Sage US using the CData JDBC driver, you will need to create a JDBC URL, populating the necessary connection properties.Additionally, (unless you are using a Beta driver), you will need to set the RTK property in the JDBC URL.You can view the licensing file included in the installation for information on how to set this property..
ビルトイン接続文字列デザイナー_0<<
To host the JDBC driver in Amazon S3, you will need a license (full or trial) and a Runtime Key (RTK).For more information on obtaining this license (or a trial), contact our sales team.
Below is a sample script that uses the CData JDBC driver with the PySpark and AWSGlue modules to extract Sage US data and write it to an S3 bucket in CSV format.Make any changes to the script you need to suit your needs and save the job.
import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.dynamicframe import DynamicFrame from awsglue.job import Job args = getResolvedOptions(sys.argv, ['JOB_NAME']) sparkContext = SparkContext() glueContext = GlueContext(sparkContext) sparkSession = glueContext.spark_session ##Use the CData JDBC driver to read Sage US data from the Customer table into a DataFrame ##Note the populated JDBC URL and driver class name source_df = sparkSession.read.format("jdbc").option("url","jdbc:sage50us:RTK=5246...;ApplicationId=8dfafu4V4ODmh1fM0xx;CompanyName=Bellwether Garden Supply - Premium;").option("dbtable","Customer").option("driver","cdata.jdbc.sage50us.Sage50USDriver").load() glueJob = Job(glueContext) glueJob.init(args['JOB_NAME'], args) ##Convert DataFrames to AWS Glue's DynamicFrames Object dynamic_dframe = DynamicFrame.fromDF(source_df, glueContext, "dynamic_df") ##Write the DynamicFrame as a file in CSV format to a folder in an S3 bucket. ##It is possible to write to any Amazon data store (SQL Server, Redshift, etc) by using any previously defined connections. retDatasink4 = glueContext.write_dynamic_frame.from_options(frame = dynamic_dframe, connection_type = "s3", connection_options = {"path": "s3://mybucket/outfiles"}, format = "csv", transformation_ctx = "datasink4") glueJob.commit()
Glueジョブを実行する
With the script written, we are ready to run the Glue job.Click Run Job and wait for the extract/load to complete.You can view the status of the job from the Jobs page in the AWS Glue Console.Once the Job has succeeded, you will have a csv file in your S3 bucket with data from the Sage US Customer table.
Using the CData JDBC Driver for Sage US in AWS Glue, you can easily create ETL jobs for Sage US data, writing the data to an S3 bucket or loading it into any other AWS data store. | http://www.cdata.com/jp/kb/tech/sage-jdbc-aws-glue.rst | CC-MAIN-2019-47 | refinedweb | 817 | 56.55 |
This is just to create a bit of syntactict sugar for OO interfaces. In a a method that normally returns an ArrayRef of Objects, you can return an instance of Array::Delegate, which is still and array, just blessed. Then, the caller can use $results =...ALASKA/Array-Delegate-0.01 - 06 Dec 2009 11:11:35.1210 (19 reviews) - 03 Jul 2014 16:09:35 GMT - Search in distribution
GMT - Search in distribution
This module is an implementation of the XML protocol used by the .nz registry. It allows XML requests (and responses) to be constructed via an OO interface. Additionally, it allows SRS XML documents to be parsed, returning a set of objects. Note, thi...MUTANT/XML-SRS-0.09 - 14 Feb 2011 02:59:59 GMT - Search in distribution
"Class::Iter" defines the majority of iterator methods for iterator classes created by "Class::Visitor". "parent" returns the parent of this iterator, or "undef" if this is the root object. "is_iter" returns true indicating that this object is an ite...KMACLEOD/Class-Visitor-0.02 - 20 Nov 1997 23:51:31
The Connector is generic connection to a data set, typically configuration data in a hierarchical structure. Each connector object accepts the get(KEY) method, which, when given a key, returns the associated value from the connector's data source. Ty...MRSCOTTY/Connector-1.08 - 18 Jun 2014 08:11 - 30 Jun 2008 15:29:06 GMT - Search in distribution
A "Badger::Hub" object is a central repository of shared resources for a Badger application. The hub sits in the middle of an application and provides access to all the individual components and larger sub-systems that may be required. It automatical...ABW/Badger-0.09 - 08 Feb 2012 08:09:33 GMT - Search in distribution
METHODS new() args: [ -post_max=>, -disable_uploads=>, -auto_escape=> ] * "-post_max" is the ceiling on the size of POSTings, in bytes. The default for LibWeb::CGI is 100 Kilobytes. * "-disable_uploads", if non-zero, will disable file uploads complet...CKONG/LibWeb-0.02 - 19 Jul 2000 22:25:12 GMT - Search in distribution
Suppose you have a class "FooHandle", where... * FooHandle does not inherit from IO::Handle; that is, it performs filehandle-like I/O, but to something other than an underlying file descriptor. Good examples are IO::Scalar (for printing to a string) ...DSKOLL/IO-stringy-2.110 (1 review) - 14 Feb 2005 16:32:25 GMT - Search in distribution
This module implements a simple facade class, allowing you to create objects that delegate their methods to subroutines or other object or class methods. To create a delegate object, simply call the new() constructor passing a reference to a hash arr...ABW/Class-Facade-0.01 - 07 Feb 2002 14:24:45.23 - 06 Apr 2014 01:27:51 GMT - Search in distribution
This module provides a number of helper functions for smartmatching. Some are simple functions that directly match the left hand side, such as $foo ~~ positive $bar ~~ array Others are higher-order matchers that take one or more matchers as an argume...LEONT/Smart-Match-0.007 - 27 May 2013 15:21:37 (1 review) - 27 Jul 2002 06:41:49.49 - 09 Aug 2012 18:19:07 GMT - Search in distribution
This class, and other classes in its namespace, implement the Value Object Design Pattern. A value object encapsulates its value and adds semantic information. For example, an IPv4 address is not just a string of arbitrary characters. You can detect ...MARCEL/Class-Value-1.100840 - 25 Mar 2010 17:41:05 GMT - Search in distribution | https://metacpan.org/search?q=Array-Delegate | CC-MAIN-2014-23 | refinedweb | 592 | 55.03 |
Created 10 October 2007, last updated 14 November 2010
The source is available on bitbucket if you
prefer direct access to the code, including recent changes..
Hi. Thx for program. Your coloring algorithm looks great. Could you describe it ? ( or where in code I can find it ). Best regards
@Adam, thanks. There's no innovation in the coloring algorithm, I just used techniques I'd read about other places. Maybe the palette is what you are after, and that's all in palettes.py
I ran Aptus on my home computer and it works great. Now at work I am also getting the error about bitmap data:
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/aptus/gui/viewpanel.py", line 182, in on_paint
self.bitmap = self.draw_bitmap()
File "/usr/lib/python2.6/site-packages/aptus/gui/computepanel.py", line 165, in draw_bitmap
return self.bitmap_from_compute()
File "/usr/lib/python2.6/site-packages/aptus/gui/computepanel.py", line 155, in bitmap_from_compute
bitmap = wx.BitmapFromBuffer(pix.shape[1], pix.shape[0], pix)
File "/usr/lib/python2.6/site-packages/wx-2.8-gtk2-unicode/wx/_gdi.py", line 856, in BitmapFromBuffer
return _gdi_._BitmapFromBuffer(width, height, dataBuffer)
RuntimeError: Failed to gain raw access to bitmap data.
Looks like a wx problem, but googling that message is not helping...
I don't know what the error "RuntimeError: Failed to gain raw access to bitmap data" means. It's used for any error encountered while creating a raw bitmap, so it could be a low-memory situation.
This comment in the wrong place but anyway, saw your apl1 and apl2.png in /pix. I experiment with APL2 now and then, it's interesting.
Any more sample code please post. Am using APL2/PC DOS, with various tricks on Linux dosbox, have a 32 Meg workspace.
Python code here interesting too. Heard that BM passed away, too bad, amazing guy.
Hello:
I tried to install Aptus.exe file in Windows.
It refused saying python25 was required. But I have python26
as part of pythonxy 2.6.5.6 which works well in my Win 7 OS.
So can you make Aptus run in py2.6/2.7 etc?
Thank you
Anandaram
Hi. I'm trying to install Aptus. After :
adam@adam-laptop:~/Aptus-2.0$ python setup.py install
there are some problems :
...
building 'aptus.engine' extension
creating build/temp.linux-x86_64-2.6
creating build/temp.linux-x86_64-2.6/ext
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/include/python2.6 -c ext/engine.c -o build/temp.linux-x86_64-2.6/ext/engine.o -O3 -fno-strict-aliasing
ext/engine.c:5:20: error: Python.h: There is no such file or directory
It makes impossible tu run aptus :
adam@adam-laptop:~/Aptus-2.0$ python /home/adam/Aptus-2.0/scripts/aptusgui.py
Traceback (most recent call last):
File "/home/adam/Aptus-2.0/scripts/aptusgui.py", line 2, in
import aptus, aptus.gui, sys
ImportError: No module named aptus
adam@adam-laptop:~/Aptus-2.0$ python /home/adam/Aptus-2.0/scripts/aptuscmd.py
Traceback (most recent call last):
File "/home/adam/Aptus-2.0/scripts/aptuscmd.py", line 2, in
import aptus, aptus.cmdline, sys
ImportError: No module named aptus
What can I do ?
Regards
"Python.h: There is no such file or directory" -- you need to install the python-dev package with your OS package manager.
Thx. It works now. I have only use (on Ubuntu) :
sudo python setup.py install
Regards
This link is dead :
The requested URL /mandelbrot/coordinates.html was not found on this server.
Install works (W32). Result is great. Thank you for sharing sources.
Nice, worked out of the box. The only modification I can think of (and don't know how to implement off the bat), is that it would be nice if places where it's gray (because there's too many color changes happening to keep up, could be made black and antialiased).
Look at for a super illustration of how fractals can be formed by recursively folding space.
2007–2010,
Ned Batchelder | http://nedbatchelder.com/code/aptus/ | CC-MAIN-2015-06 | refinedweb | 704 | 63.25 |
»
I/O and Streams
Author
Accessing mp3 properties (128 bytes at the end) dirty hack...How to make this more efficient?
john price
Ranch Hand
Joined: Feb 24, 2011
Posts: 495
I like...
posted
Jul 07, 2011 21:48:38
0
I looked at the I/O API and came up with a dirty hack. I know that the last 128 bytes for mp3 files are the properties for these files. I tested my code (below) on some real life mp3 files (the 3 that came with my computer). It works fine, but I know this is just a cheap hack. Is there a better way to do this? I am doing it twice. There has to be a way to do it once...
Thanks,
cc11rocks
import java.io.*; public class AAAA_1RealLifeTest { public static void main(String[] args) { readInput(); } public static void readInput() { StringBuffer buffer = new StringBuffer(); StringBuffer sucker = new StringBuffer(); try { FileInputStream fis = new FileInputStream("AICX.mp3"); File file = new File("AICX.mp3"); InputStreamReader isr = new InputStreamReader(fis, "UTF8"); Reader in = new BufferedReader(isr); int ch; int characters_in = 0; while ((ch = in.read()) > -1) { characters_in ++; if (characters_in < 4) { sucker.append((char)ch); } if (characters_in == 4) { System.out.println(sucker); } } in.close(); FileInputStream fis_1 = new FileInputStream("AICX.mp3"); InputStreamReader isr_1 = new InputStreamReader(fis_1, "UTF8"); Reader out = new BufferedReader(isr_1); out.skip(characters_in - 128); int three_check = 0; if (sucker.toString().equals("ID3")) { while ((ch = out.read()) > -1) { buffer.append((char)ch); } } else { buffer.append("This is not an ID3 Compliant song"); } System.out.println(buffer.toString()); //return buffer.toString(); } catch (IOException e) { e.printStackTrace(); //return null; } } }
EDIT: Made the code just send off an error if it doesn't have ID3 in the beginning (which I believe is the format of the mp3 that I am doing).
)
Madhan Sundararajan Devaki
Ranch Hand
Joined: Mar 18, 2011
Posts: 312
I like...
posted
Jul 08, 2011 01:04:07
0
You may re-write as follows.
public static void readInput() { StringBuilder buffer = null, sucker = null; FileInputStream fis = null; InputStreamReader isr = null; BufferedReader in = null; int ch = 0, characters_in = 0; boolean found = false, present = false; buffer = new StringBuilder(); sucker = new StringBuilder(); try { fis = new FileInputStream("AICX.mp3"); isr = new InputStreamReader(fis, "UTF8"); in = new BufferedReader(isr); while ((ch = in.read()) > -1) { if( !found ) { characters_in++; if(characters_in < 4) { sucker.append((char)ch); } } if(characters_in == 4) { //System.out.println(sucker); found = true; in.skip(characters_in - 128); if (sucker.toString().equals("ID3")) { present = true; buffer.append((char)ch); } } } if( !present ) { buffer.append("This is not an ID3 Compliant song"); } System.out.println(buffer.toString()); } catch (IOException ioe) { ioe.printStackTrace(); } finally { try { in.close(); isr.close(); fis.close(); } catch(Exception e) { // DO NOTHING... } in = null; isr = null; fis = null; sucker = null; buffer = null; } }
S.D. MADHAN
Not many get the right opportunity !
I agree. Here's the link:
subject: Accessing mp3 properties (128 bytes at the end) dirty hack...How to make this more efficient?
Similar Threads
How can I get Unicode String of a String?
How a character save in 2 bytes in Java?
ASCII to EBCIDIC conversion error
FileInputStream - Replace Characters - FileInputStream
file, UTF8
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/544601/java-io/java/Accessing-mp-properties-bytes-dirty | CC-MAIN-2015-48 | refinedweb | 535 | 61.73 |
# How static code analysis helps in the GameDev industry

The gaming industry is constantly evolving and is developing faster than a speeding bullet. Along with the growth of the industry, the complexity of development also increases: the code base is getting larger and the number of bugs is growing as well. Therefore, modern game projects need to pay special attention to the code quality. Today we will cover one of the ways to make your code more decent, which is static analysis, as well as how PVS-Studio in practice helps in the game project development of various sizes.
"*The most important thing I have done as a programmer in recent years is to aggressively pursue static code analysis. Even more valuable than the hundreds of serious bugs I have prevented with it is the change in mindset about the way I view software reliability and code quality.*" – [John Carmack](https://www.gamasutra.com/view/news/128836/InDepth_Static_Code_Analysis.php)
We have been working with major game developers for many years and during this time we managed to do a lot of interesting and useful things for the gaming industry. It's not much of a surprise, given the [list of our clients](https://viva64.com/en/customers/) from the gaming industry. We actively support our clients: to integrate PVS-Studio into their own development process, fix errors found by the analyzer, and we even make special custom features.
In addition, we do a lot of independent development of the analyzer in the GameDev direction, as well as promote PVS-Studio, telling people about interesting errors that it has found in various video games.
Sure, we do have some interesting stories to tell. This article will cover several such cases.
PVS-Studio and Unity
--------------------

One of the ways we promote our product is by writing articles about checking open projects. Everyone benefits from these articles: a reader gets the chance to check out some unusual errors in a familiar project and learn something new. As for the PVS-Studio team, we get the opportunity to show the work done on real code, so that project developers can learn about errors and fix them in advance.
Our first major acquaintance with Unity took place in 2016, when the developers of this game engine opened the source code of several components, libraries, and demos in their official repository. No wonder, we could not pass by such an alluring case and wanted to write an article about checking the posted code.
Then we found out that the Unity3D code (at that time the engine was called like that) was of very high quality. But still we were able to find quite a lot of serious errors in it. There were enough of them to write an [article](https://www.viva64.com/en/b/0423/).
Two years later, another thing happened – Unity developers opened the code of the engine and the editor itself. And just like the previous time, we could not take no notice of that and checked the source code of the engine. And it was not for nothing — we also found a [bunch](https://www.viva64.com/en/b/0568/) of captivating flaws.
At the same time our intentions go far beyond just writing articles. We continue to work on PVS-Studio, and GameDev is one of the most significant areas for our development. Therefore, we want Unity game developers to be able to get the best possible analysis of their projects.
One of the steps to improve the quality of Unity projects analysis was writing annotations for methods defined in the Unity Scripting API.
Method annotation is a special mechanism used in PVS-Studio. It allows a user to provide the analyzer with all the necessary information about a particular method. It is written in special code by the analyzer developers themselves (i.e., by us).
This information can be of completely various kinds. For example: how the method can affect the parameters passed to it, whether it can allocate memory, and whether it returns a value that must be handled. Thus, annotation allows the analyzer to better understand the logic of methods, allowing it to detect new and more complex errors.
We have already written a huge number of different annotations (for example, for methods from the System namespace), and we were happy to add method annotations from the Unity Scripting API to them.
We started to extend the list of annotations with an assessment. How many methods are there in total? Which ones should be annotated first? There were a lot of methods in total, so we decided to start by annotating the most frequently used ones.
This is how we were looking for popular methods: first, we gathered a pool of projects from GitHub that use Unity features, and then we used a self-written utility (based on Roslyn) to calculate calls to the methods we were interested in. As a result, we got a list of classes whose methods were used most often:
* UnityEngine.Vector3
* UnityEngine.Mathf
* UnityEngine.Debug
* UnityEngine.GameObject
* UnityEngine.Material
* UnityEditor.EditorGUILayout
* UnityEngine.Component
* UnityEngine.Object
* UnityEngine.GUILayout
* UnityEngine.Quaternion
* ...
Next, it remained to annotate the methods of these classes. We created a test project and dug into the documentation to get as much information about those methods as possible. For example, we tried passing *null* as various arguments to see how the program would behave.
During such checks, we were discovering some interesting undocumented information from time to time. We even found a couple of noteworthy bugs in the engine. For example, when we are running the following code:
```
MeshRenderer renderer = cube.GetComponent();
Material m = renderer.material;
List outNames = null;
m.GetTexturePropertyNameIDs(outNames);
```
the Unity editor itself crashes (at least in version 2019.3.10f1). Of course, it is unlikely, that anyone will write such code. Still the fact that the Unity editor can be crashed by running such a script is curious.
So, we had the annotations written. After running the analysis, we immediately found new triggerings. For example, the analyzer detected a strange call to the *GetComponent* method:
```
void OnEnable()
{
GameObject uiManager = GameObject.Find("UIRoot");
if (uiManager)
{
uiManager.GetComponent();
}
}
```
**Analyzer warning:** [V3010](https://www.viva64.com/en/w/v3010/) The return value of function 'GetComponent' is required to be utilized. — ADDITIONAL IN CURRENT UIEditorWindow.cs 22
The *[GetComponent](https://docs.unity3d.com/ScriptReference/GameObject.GetComponent.html)* method implies the return of a specific value even due to its name. It is logical to assume that this value should be used in some way. Now thanks to the new annotation, the analyzer knows that such an "unattended" call to this method may indicate a logical error and warns about it.
This is not the only warning that appeared in the set of our test projects after adding new annotations. I will not cite the rest, so as not to make this article too large. The main thing is that now the development of Unity projects using PVS-Studio allows you to write much safer and cleaner code without bugs.
If you would like to read more about our work with annotations for Unity methods, here is the article: [How the PVS-Studio analyzer began to find even more errors in Unity projects](https://www.viva64.com/en/b/0744/).
Unreal Engine 4
---------------

When, back in 2014, the developers of Unreal Engine 4 opened the source code of the engine, we simply could not get past that project and also wrote an [article](https://www.viva64.com/en/b/0249/) about it. The engine developers liked the article and fixed the errors we found. But this was not enough for us, and we decided to try to sell the license for our analyzer to Epic Games.
Epic Games was interested in improving its engine with PVS-Studio, so we agreed on the following: we fix the Unreal Engine code on our own so that the analyzer does not issue any warnings, and guys from Epic Games buy our license and additionally reward us for the work done.
Why all warnings had to be fixed? The fact is that one can get the maximum benefit from static analysis by correcting errors right *when they appear*. When you check your project for the first time, you usually get several hundred (and sometimes thousands) warnings. Among all these analyzer triggerings, it is easy to lose warnings issued for newly written code.
At a first glance, this problem can be solved quite easily: you just need to sit down and go through the entire report, gradually correcting errors. However, although this method is more intuitive, it may take time. It is much more convenient and faster to use suppress files.
Suppress files are a [special feature of PVS-Studio](https://www.viva64.com/en/m/0032/) that allows you to hide analyzer warnings in a special file. However, hidden warnings will not appear in subsequent logs: you can view them separately.
After having many triggerings after the first check, you can add all detected warning to the suppress file in a couple of clicks, and you will get a clean log without a single entry after the next check.
Now that the old warnings are no longer included in the log, you can easily detect a new warning immediately when it appears. Here's the order of actions: write the code –> check it with the analyzer –> spot a new warning –> fix the error. This is how you will get the most out of using the analyzer.
At the same time, do not forget about the warnings in the suppress file: they can still contain warnings about major errors and vulnerabilities, just as before. Therefore, one should return to these warnings and reduce their number on a regular basis.
No doubts, this scenario is convenient, but developers from Epic Games wanted their code to be fixed straight away, so they passed the task to us.
And we got to work. After checking the project code, we found 1821 warnings of Level\_1 and Level\_2. Parsing such a large volume of warnings requires serious work, and to facilitate this whole process, we have set up continuous code analysis on our CI server.
It looked like this: every night on our server, the current version of Unreal Engine 4 was built, and immediately after the build, the analysis was automatically started. Thus, when our guys came to work in the morning, they always had a fresh report from the analyzer, which helped them to track the progress of eliminating warnings. In addition, this system allowed us to check the build stability at any time by running it on the server manually.
The whole process took us 17 working days. The schedule for fixing errors was as follows:

In fact, this schedule does not fully reflect our work. After we fixed all warnings, we waited another two days for them to accept our latest pull requests. All this time, the latest version of Unreal Engine was being checked automatically, which, in turn, continued to be updated with new code. So, what do you think happened? During those two days, PVS-Studio found four more errors in the code! One of them was crucial and could potentially lead to undefined behavior.
Of course, we also fixed those errors too. At that point developers of Unreal Engine had only one thing left: set up automatic analysis in their own place, just as we've done. From that moment on, they began to see warnings every day that were issued for the code they had just written. This allowed them to fix errors in the code right when they appeared – *at the earliest stages of development*.
You can read more about how we worked on the Unreal Engine code in the [official Unreal Engine blog](https://www.unrealengine.com/en-US/blog/how-pvs-studio-team-improved-unreal-engines-code) or [on our website](https://www.viva64.com/en/b/0330/).
Analysis of various games
-------------------------

Did I mention that we check various open projects and write articles about them? So, we now have a whole lot of similar articles about game projects! We wrote about games like [VVVVVV](https://www.viva64.com/en/b/0707/), [Space Engineers](https://www.viva64.com/en/b/0376/), [Command & Conque](https://www.viva64.com/en/b/0741/)r, [osu!](https://www.viva64.com/en/b/0704/) and even (a very early article) [Doom 3](https://www.viva64.com/en/b/0120/). We've also compiled the [top 10](https://www.viva64.com/en/b/0570/) of most interesting software bugs from the video game industry.
So, we checked probably most of the well-known open source engines. In addition to Unity and Unreal Engine 4, projects such as [Godot](https://www.viva64.com/en/b/0321/), [Bullet](https://www.viva64.com/en/b/0647/), [Amazon Lumberyard](https://www.viva64.com/en/b/0574/), [Cry Engine V](https://www.viva64.com/en/b/0495/) and many others have come under our sights.
The best part of all this is that many of the bugs we described were later fixed by the project developers themselves. It's nice to feel that the tool you are developing brings real, visible, and tangible benefits to the world.
You can view a list of all our articles related to video game development in one way or another on a [special page](https://www.viva64.com/en/tags/?q=gamedev) of our blog.
Conclusion
----------
At this point my article comes to an end. I wish you clean and correctly working code without bugs and errors!
Interested in the topic of static analysis? Want to check your project for errors? Try [PVS-Studio](https://www.viva64.com/en/pvs-studio-download/).
[](https://viva64.com/en/pvs-studio-download/?utm_source=habr&utm_medium=banner&utm_campaign=0778_GameDev) | https://habr.com/ru/post/530532/ | null | null | 2,371 | 55.74 |
29 July 2008 21:59 [Source: ICIS news]
HOUSTON (ICIS news)--Global demand for phosphate fertilizers will continue to increase despite prices doubling from last year’s levels, fertilizer producer Mosaic said on Tuesday.
CEO Jim Prokopanko, speaking a day after the company unveiled record earnings, said high prices were not leading to demand destruction for phosphate.
“We’re selling it as fast as we can produce it, and where prices go from here - with a strong market, you can draw your own conclusions,” Prokopanko said during an earnings conference call.
Diammonium phosphate was trading at $1,190.49/tonne (€750.00) FOB (free on board) NOLA (?xml:namespace>
While fertilizer demand remains firm, Prokopanko said the recent increases in sulphur prices were “a temporary phenomenon”.
“We’re confident we’re going to see much lower sulphur prices in the next couple quarters,” Prokopanko said.
( | http://www.icis.com/Articles/2008/07/29/9143986/phosphate-demand-will-continue-to-grow-mosaic.html | CC-MAIN-2014-15 | refinedweb | 145 | 60.55 |
This tutorial uses an old version of Apollo Client, and we’re working on updating it soon. For a more up to date introduction to the new constructor API, check out this getting started guide. Most of this tutorial will be identical and can be completed with the new API.
GraphQL is a new API-definition and query language that has the potential to become the new REST. It makes it easy for UI components to declaratively fetch data without having to worry about backend implementation details. Because it’s such a powerful abstraction, GraphQL can speed up app development and make code much easier to maintain.
However, despite the great advantages of using GraphQL, getting started with it can be tricky. That’s what this step-by-step tutorial series is for:
- Part 1 (this part): Setting up a simple client
- Part 2: Setting up a simple server
- Part 3: Writing mutations and keeping the client in sync
- Part 4: Optimistic UI and client side store updates
- Part 5: Input types and custom cache resolvers
- Part 6: Subscriptions on the server
- Part 7: GraphQL Subscriptions on the client
- Part 8: Pagination
This tutorial — the first in the series — is about getting started with GraphQL on the frontend. It only takes about 20–30 minutes, and by the end of it you’ll have a very simple React UI that loads its data with GraphQL and looks something like this:
Let’s get started!
1. Getting set up
Note: To do this tutorial you will need to have node, npm and git installed on your machine, and know a little bit about React.
We’re going to use `create-react-app` in this tutorial, so go ahead and install that:
> npm install -g create-react-app
We’ll also clone the tutorial repository from GitHub, which has some CSS and images in it that we’ll use later.
> git clone <tutorial repository URL>
> cd graphql-tutorial
Next, we create our React app with `create-react-app`.

> create-react-app client
> cd client
To make sure it’s working, let’s start our server:
> npm start
If it all worked, you should now see the following in your browser:
2. Writing the first component
Since we’re building an app with Apollo here, let’s change the logo and CSS by copying over `logo.svg` and `App.css` from `../resources`:

> cd src
> cp ../../resources/* .
To keep this initial tutorial short, we’ll only build a simple list view today. Let’s change a few things in `App.js`:
- Change “Welcome to React” to “Welcome to Apollo”. Apollo is the name of the GraphQL client we’re going to use throughout this tutorial series.
- Remove the “To get started ..” paragraph and replace it with a pure React component that renders an unordered list `<ul>` with two list items `<li>`, “Channel 1” and “Channel 2” (yes, you guessed it, we’re going to build a messaging app!). Let’s name our list component `ChannelsList`.
Now your `App.js` should look like this:
```javascript
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

const ChannelsList = () => (
  <ul>
    <li>Channel 1</li>
    <li>Channel 2</li>
  </ul>
);

class App extends Component {
  render() {
    return (
      <div className="App">
        <div className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <h2>Welcome to Apollo</h2>
        </div>
        <ChannelsList />
      </div>
    );
  }
}

export default App;
```
create-react-app sets up hot reloading for you, so as soon as you save the file, the browser window with your app should update to reflect the changes:
3. Writing your GraphQL schema
Now that we have a simple app running, it’s time to write GraphQL type definitions for it. The schema will specify what object types exist in our app, and what fields they have. In addition, it also specifies the allowed entry points into our API. We’ll do that in a file called
schema.js
export const typeDefs = `
type Channel {
  id: ID!    # "!" denotes a required field
  name: String
}

# This type specifies the entry points into our API. In this case
# there is only one - "channels" - which returns a list of channels.
type Query {
  channels: [Channel]    # "[]" means this is a list of channels
}
`;
With this schema we’ll be able to write a simple query to fetch the data for our
ChannelList component in the next section. This is what our query will look like:
query ChannelsListQuery {
  channels {
    id
    name
  }
}
4. Wiring your component together with the GraphQL query
Alright, now that we have our schema and query, we just need to hook up our component with Apollo Client! Let’s install Apollo Client and some helper packages that we’ll need to get GraphQL into our app:
> npm i -S react-apollo
react-apollo is a neat integration of Apollo Client with React that lets you decorate your components with a higher order component called
graphql to get your GraphQL data into the component with zero effort. React Apollo also comes with
ApolloClient, which is the core of Apollo that handles all the data fetching, caching and optimistic updates (we’ll get to those in another tutorial).
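As a rough illustration of the higher-order component idea (plain functions only — this is not Apollo’s actual implementation, and all names here are made up):

```javascript
// Toy sketch of a higher-order component: `withData(query)` returns a
// wrapper that injects a `data` prop into whatever component it decorates.
function withData(query) {
  return function wrap(component) {
    return function wrapped(props) {
      // A real implementation would fetch over the network; here we fake it.
      const data = { loading: false, result: "fetched for " + query };
      return component(Object.assign({}, props, { data }));
    };
  };
}

// A "component" is just a function of props in this sketch.
const ChannelsList = ({ data }) => "rendered with " + data.result;

const ChannelsListWithData = withData("ChannelsListQuery")(ChannelsList);
console.log(ChannelsListWithData({})); // → rendered with fetched for ChannelsListQuery
```

The real graphql() function works the same way shape-wise: it takes a query, returns a wrapper, and the wrapper returns a component that receives the extra data prop.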
Now with that, let’s add a few imports at the top of our
App.js and create an instance of Apollo Client:
import {
  ApolloClient,
  gql,
  graphql,
  ApolloProvider,
} from 'react-apollo';

const client = new ApolloClient();
Next, we decorate the original
ChannelsList with a GraphQL higher-order component that takes the query and passes the data to our component:
const channelsListQuery = gql`
  query ChannelsListQuery {
    channels {
      id
      name
    }
  }
`;

const ChannelsListWithData = graphql(channelsListQuery)(ChannelsList);
When wrapped with the
graphql HOC, our
ChannelsList component will receive a prop called
data, which will contain
channels when it’s available, or
error when there is an error. In addition, data also contains a loading property, which is true when Apollo Client is still waiting for data to be fetched.
We’ll modify our
ChannelsList component to make sure the user knows if the component is loading, or if there has been an error:
const ChannelsList = ({ data: { loading, error, channels } }) => {
  if (loading) {
    return <p>Loading ...</p>;
  }
  if (error) {
    return <p>{error.message}</p>;
  }
  return (
    <ul>
      { channels.map(ch => <li key={ch.id}>{ch.name}</li>) }
    </ul>
  );
};
Finally, we have to replace the
ChannelsList inside our App’s render function with
ChannelsListWithData. In order to make an instance of Apollo Client available to the component we just created, we also wrap our top-level app component with
ApolloProvider, which puts an instance of the client on the UI.
Your
App component should now look like this:
class App extends Component {
  render() {
    return (
      <ApolloProvider client={client}>
        <div className="App">
          <div className="App-header">
            <img src={logo} className="App-logo" alt="logo" />
            <h2>Welcome to Apollo</h2>
          </div>
          <ChannelsListWithData />
        </div>
      </ApolloProvider>
    );
  }
}
Okay, we’re almost done! If you try to run this now, you should see the following error:
What’s going on? Well, we wired up all our components correctly, but we haven’t written a server yet, so of course there is no data to fetch or display! If you don’t specify a URL for your GraphQL endpoint, Apollo Client will assume that it’s running on the same origin under
/graphql. To change that, we need to create a network interface with a custom URL.
However, because this tutorial isn’t about writing a server, we’ll use the fact that GraphQL is self-documenting to create mocks automatically from the type definitions we wrote earlier. To do that, we just need to stop the server, install a few additional packages, and restart it:
npm i -S graphql-tools apollo-test-utils graphql
We’ll use these packages to create a mock network interface for Apollo Client based on the schema we wrote earlier. Add the following imports and definitions towards the top of
App.js:
import { makeExecutableSchema, addMockFunctionsToSchema } from 'graphql-tools';
import { mockNetworkInterfaceWithSchema } from 'apollo-test-utils';
import { typeDefs } from './schema';

const schema = makeExecutableSchema({ typeDefs });
addMockFunctionsToSchema({ schema });

const mockNetworkInterface = mockNetworkInterfaceWithSchema({ schema });
Now all you have to do is pass the
mockNetworkInterface to the constructor of Apollo Client …
const client = new ApolloClient({
  networkInterface: mockNetworkInterface,
});
That’s it, you’re done! Your screen should now look like this:
Note: “Hello World” is just the default mock text used for strings. If you want to customize your mocks to be super-fancy, check out this post I wrote a while ago.
If something isn’t working, and you can’t figure out why, you can compare it to this file to see what you did differently. Alternatively, you can check out the
t1-end Git branch to inspect some working code.
Congratulations, you’ve officially finished the first part of the tutorial! It may not feel like much, but you’ve actually done a lot: You’ve written a GraphQL schema, generated mock data from it, and connected that to your React component with a GraphQL query. You now have the foundation on which we’re going to build a real messaging app throughout the remainder of this tutorial series. In part 2, we’ll write a simple server and hook it up to our app!
If you liked this tutorial and want to keep learning about Apollo and GraphQL, make sure to click the “Follow” button below, and follow us on Twitter at @apollographql and @helferjs.
How to get ext.grid column model from outside ext JS (jquery on click)
Need hide/show column of ext grid, depending on external factors, not related to grid store.
Now i try simple thing:
Code:
$(document).ready(function(){

    // =================== hide column/show column ========================
    function show_announce() {
        Ext.namespace('Ext.ux');
        Ext.namespace('Ext.ux.form');

        // $.log( Ext.grid.getColumnModel() );
        // $.log( grid.getColumnModel() );
        // $.log( Ext.ux.grid.getColumnModel() );
        $.log( grid.getColumnModel() );
    }
    // ===================

    $( "#test-test" ).click( function(){
        show_announce();
        return false;
    });
});
But every variant of call this column model i try:
Ext.grid.getColumnModel()
grid.getColumnModel()
Ext.ux.grid.getColumnModel()
grid.getColumnModel()
is incorrect.
But, i can reload store from this same function with store.reload();
Dear developers and EXT users!
Please help me to hide or show column from JS.
Or get column model of extjs grid is inided and is active!
Arsen
it was a global scope problem of the column model & grid vars. Both must be in glob. scope, ie without vars ...
So then start working
pls delete this ...
On clickin of leaf node of tree.Panel need to hide and show diff. grids?
hey guys help me out...........
I have two grids with diff. column names n diff. forms for them and I want to make the grids hide() and show() when I click on tree.Panel......leaf nodes.......
like I have two leaf nodes under Admin that is : bulletin_users for grid1
and announcements for grid2
So I want wen i click on bulletin_users the grid1 should be visible with its respective toolbar in the ContentPanel and the grid2 should be hidden with its respective toolbar.........and viceversa wen I click on announcements
tell me the code which I should put into the listeners of TreePanel.
Hello,
Ive used xtemplate to generate the html. Setting the id/class within the xtemplate that jquery will then access when the onclick is accessed.
Hey can U give me the code like how to use it actually M just a beginner and I searched for it but not getting how to apply in my code
Im not in a position to get it for you now, but check this link out. It is kinda along the same thingy ...
Basically, your just building an html table with the xtemplate, then you access the id(s)/classes using jquery ...
Hope this helps.
public A(String named) {
    name = named;
}

public String getName() {
    return name;
}
}
Here's the definition for B:

package javanut6.ch03.different;

import javanut6.ch03.A;

public class B extends A {
    public B(String named) {
        super(named);
    }

    @Override
    public String getName() {
        return "B: " + name;
    }
}
Java packages do not “nest,” so javanut6.ch03.different is just a different package than javanut6.ch03; it is not contained inside it or related to it in any way.
However, if we try to add this new method to B, we will get a compilation error, because instances of B do not have access to arbitrary instances of A:

public String examine(A a) {
    return "B sees: " + a.name;
}
If we change the method to this:

public String examine(B b) {
    return "B sees another B: " + b.name;
}
then the compiler is happy, because instances of the same exact type can always see
each other's protected fields. Of course, if B was in the same package as A then any
instance of B could read any protected field of any instance of A because protected
fields are visible to every class in the same package.
Hi everyone,
I've been super busy over the last couple of months and not had a chance to carry on with the jumping into c++ pdf. However, now I'm back and ready, and stuck, already!
Here is my program, I thought I'd refresh my memory on how loops work:
So it all works ok, entering any number other than 1, 2, 3 or 4 results in a prompt to "Please try again:".

Code:
#include <iostream>
#include <string>
using namespace std;
int main ()
{
int userSelection;
string file = "File";
string edit = "Edit";
string view = "View";
string navigate = "Navigate";
cout << "1 - " << file << "\n";
cout << "2 - " << edit << "\n";
cout << "3 - " << view << "\n";
cout << "4 - " << navigate << "\n\n";
cout << "Please select a number from the list: ";
cin >> userSelection;
while ( userSelection < 1 || userSelection > 4 )
{
cout << "Please try again: ";
cin >> userSelection;
}
if ( userSelection == 1 )
{
cout << "You chose " << file;
}
else if ( userSelection == 2 )
{
cout << "You chose " << edit;
}
else if ( userSelection == 3 )
{
cout << "You chose " << view;
}
else
{
cout << "You chose " << navigate;
}
return 0;
}
However, if entering anything other than a number, i.e. a letter, t, r, a, whatever, the cin fails and goes into an infinite loop.
I've been reading about this happening in other people's programs but haven't been able to apply any solution to my own. Looking for some kind of 'not a number' function or something. If the program only allows 1,2,3 or 4 in order to continue, I would have logically thought any other input would be bad, hence show the prompt. However, I don't understand how c++ works so my own logic probably goes out the window!
Any help is much appreciated, thanks.
Sam. | http://cboard.cprogramming.com/cplusplus-programming/153512-simple-program-stopping-invalid-character-infinite-loop-printable-thread.html | CC-MAIN-2015-11 | refinedweb | 303 | 62.01 |
Chan for Python, lovingly stolen from Go
Project description
Implements Go’s chan type in Python.
Install with pip install chan
Source at
Usage
You can put onto channels, and get from them
c = Chan()

# Thread 1
c.put("Hello")

# Thread 2
print "Heard: %s" % c.get()
Channels can be closed (usually by the sender). Iterating over a channel gives all values until the channel is closed
c = Chan()

# Thread 1
c.put("It's")
c.put("just")
c.put("contradiction")

# Thread 2
for thing in c:
    print "Heard:", thing
You can wait on multiple channels using chanselect. Pass it a list of input channels and another of output channels, and it will return when any of the channels is ready
def fan_in(outchan, input1, input2):
    while True:
        chan, value = chanselect([input1, input2], [])
        if chan == input1:
            outchan.put("From 1: " + str(value))
        else:
            outchan.put("From 2 " + str(value))
You can see more examples in the “examples” directory.
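For comparison, here is a rough standard-library-only approximation of the fan-in example above, using queue.Queue and one pumping thread per input instead of chanselect (this is my own sketch, not part of the chan API, and it uses Python 3 syntax):

```python
import queue
import threading

# Stdlib-only approximation of fan_in: one pumping thread per input
# "channel", with None used as a "channel closed" sentinel.
def fan_in(outq, input1, input2):
    def pump(tag, q):
        while True:
            value = q.get()
            if value is None:          # treat None as "closed"
                return
            outq.put("From %s: %s" % (tag, value))
    threads = [
        threading.Thread(target=pump, args=("1", input1)),
        threading.Thread(target=pump, args=("2", input2)),
    ]
    for t in threads:
        t.start()
    return threads

in1, in2, out = queue.Queue(), queue.Queue(), queue.Queue()
threads = fan_in(out, in1, in2)
in1.put("hello")
in2.put("world")
in1.put(None)
in2.put(None)
for t in threads:
    t.join()

results = sorted(out.get() for _ in range(2))
print(results)  # → ['From 1: hello', 'From 2: world']
```

The chan version is shorter because chanselect waits on several channels in a single thread, which is the main convenience the package adds over bare queues.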
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
chan-0.3.1.tar.gz (6.5 kB view hashes) | https://pypi.org/project/chan/0.3.1/ | CC-MAIN-2022-40 | refinedweb | 200 | 66.33 |
Nathan Hangen
- Member Since: January 15th, 2012
- Tampa, FL
- ignitiondeck.com
- @nhangen on WordPress.org
- Plugin Developer
Bio/Description
Committed [1704097] to Plugins Trac:
tag 1.2.26
3 months ago
Committed [1704096] to Plugins Trac:
add wc settings menu
3 months ago
Committed [1704095] to Plugins Trac:
import from trunk
3 months ago
Committed [1686372] to Plugins Trac:
tag 1.2.25
4 months ago
Committed [1686371] to Plugins Trac:
import changes from trunk. We’re going live with 1.2.25
4 months ago
Committed [1675613] to Plugins Trac:
supports 4.8
4 months ago
Committed [1671939] to Plugins Trac:
tag 1.2.24
4 months ago
Committed [1671935] to Plugins Trac:
dev tools js
4 months ago
Committed [1671932] to Plugins Trac:
import from trunk for 1.2.24.
4 months ago
Wrote a comment on the post SVN Syncing Issues Continued, on the site Make WordPress Plugins:
Thanks Mika, that makes sense. I'll use Dion's links to take a closer look. Appreciate…
5 months ago
Wrote a comment on the post SVN Syncing Issues Continued, on the site Make WordPress Plugins:
Curious why the repository isn't open source. Is it because of security, confidentiality, or some…
5 months ago
Committed [1649169] to Plugins Trac:
delete old wrapper folder
6 months ago
Posted a reply to A briliant idea but a totally non-intuitive result, on the site WordPress.org Forums:
I'm sorry you feel disappointed with your IgnitionDeck experience. To be honest, we are a…
7 months ago
Posted a reply to ignitiondeck enterprise, on the site WordPress.org Forums:
Thanks for posting such a thorough (and positive) review. The feedback is excellent. We'll take…
8 months ago
Released a new plugin, Beer Geek
9 months ago
Created a topic, [DISCLAIMER] – I am on the development team, on the site WordPress.org Forums:
We built this plugin in 2011 because at the time, ther…
1 year ago
WP Button Styles
Active Installs: 10+ | https://profiles.wordpress.org/nhangen/ | CC-MAIN-2017-43 | refinedweb | 332 | 63.49 |
NAME
strsep - extract token from string
SYNOPSIS
#include <string.h>
char *strsep(char **stringp, const char *delim);
strsep():
Since glibc 2.19:
_DEFAULT_SOURCE
Glibc 2.19 and earlier:
_BSD_SOURCE
DESCRIPTION
If *stringp is NULL, the strsep() function returns NULL and does nothing else. Otherwise, this function finds the first token in the string *stringp, that is delimited by one of the bytes in the string delim. This token is terminated by overwriting the delimiter with a null byte ('\0'), and *stringp is updated to point past the token. In case no delimiter was found, the token is taken to be the entire string *stringp, and *stringp is made NULL.
RETURN VALUE
The strsep() function returns a pointer to the token, that is, it returns the original value of *stringp.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

Interface   Attribute       Value
strsep()    Thread safety   MT-Safe
CONFORMING TO
4.4BSD.
NOTES
The strsep() function was introduced as a replacement for strtok(3), since the latter cannot handle empty fields. However, strtok(3) conforms to C89/C99 and hence is more portable.
BUGS
Be cautious when using this function. If you do use it, note that:
- This function modifies its first argument.
- This function cannot be used on constant strings.
- The identity of the delimiting character is lost.
SEE ALSO
index(3), memchr(3), rindex(3), strchr(3), string(3), strpbrk(3), strspn(3), strstr(3), strtok(3)
COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at | https://manpages.debian.org/testing/manpages-dev/strsep.3.en.html | CC-MAIN-2022-21 | refinedweb | 225 | 67.65 |
15 December 2010 15:07 [Source: ICIS news]
MOSCOW (ICIS)--Sibur's sales figures for 2010 are expected to rise by 38% compared with the year before to roubles (Rb) 223bn ($7.26bn, €5.43bn), the Russian petrochemical holding company said on Wednesday.
In the preliminary figures, Sibur also said it expected to invest Rb60bn in 2010 to upgrade its existing production units and build new facilities, up from Rb30bn last year.
The company intends to invest Rb70bn in 2011, Sibur CEO Dmitry Konov said.
Konov also announced Sibur planned to sign an agreement with
A $300m facility would be built in
In a separate development, Sibur announced on Wednesday that it had signed a memorandum of understanding with the US-based Fluor Corporation for the establishment of a joint venture based on NIPI Gazpererabotka, the Russian company's research and design institute for gas processing.
According to the memorandum, Fluor would have a 10% interest in the joint venture.
($1 = Rb30.70, €1 = Rb41)
Starting from this chapter?
Clone the application repo and check out the
creating-endpoints branch:
git clone git@github.com:auth0-blog/wab-ts-express-api.git \ express-ts-api \ --branch creating-endpoints
Make the project folder your current directory:
cd express-ts-api
Then, install the project dependencies:
npm i
Finally, create a
.env hidden file:
touch .env
Populate
.env with this:
PORT=7000
One of the requirements for this project is that only authorized users can write records to the store. To quickly and securely achieve that, you can use Auth0 to manage your application's user credentials.
Set Up an Auth0 API
First, you have to create a free Auth0 account if you don't have one yet.
After creating your account, head to the APIs section in the Auth0 Dashboard, and hit the Create API button.
Then, in the form that Auth0 shows:
Add a Name to your API. Something like Menu API, for example.
Set its Identifier to.
Leave the signing algorithm as
RS256as it's the best option from a security standpoint.
Identifiers are unique strings that help Auth0 differentiate between your different APIs. We recommend using URLs as they facilitate predictably creating unique identifiers; however, Auth0 never calls these URLs.
With these values in place, hit the Create button.
Now, click on the Quick Start tab. This page presents instructions on how to set up different APIs. From the code box, choose Node.js. Keep this window open as you'll be using the values from the code snippet soon.
Your API needs these configuration variables to identity itself with Auth0: an Audience and a Domain value. The best place to store these values is within the
.env file of your project.
Open
.env and add the following keys to it:
PORT=7000
AUTH0_DOMAIN=
AUTH0_AUDIENCE=
You can find the values needed for these keys within the code snippet of the Node.js quickstart.
The
AUTH0_DOMAIN is the value of the
issuer property without the forward slash at the end:
https://<Tenant Name>.auth0.com
The
AUTH0_AUDIENCE is the value of the
audience property.
Do not include the quotes as part of the
.envvariable value. Only include the string within the quotes.
Create Authentication Middleware
To protect an endpoint in Express, you rely on a middleware function that gets executed before the callback function of the controller that handles the request. There are two ways that you can accomplish this.
The first option is to "inject" an authorization middleware function in the controller as follows:
itemsRouter.post(
  "/",
  authorizationFunction,
  async (req: Request, res: Response) => {
    // Controller logic...
  }
);
Here,
authorizationFunction gets called before the route handler function of
itemsRouter.post. In turn, the business logic within
authorizationFunction can perform two tasks:
(a) invoke the next function in the middleware chain, the router handler function, if it can determine that the user has the authorization to access the resource or,
(b) close the request-response cycle by responding with a
401 Unauthorized message, which prevents your API from executing the route handler.
The approach of adding authorization middleware by controller gives you granular and low-level control of the authorization flow. However, it can be tedious to inject the authorization middleware function per controller if you have many of them.
As an alternative, you can separate the public controllers from the protected controllers using the authorization middleware as a boundary between groups. For example, within an Express router, you could do the following:
itemsRouter.get(...);

itemsRouter.use(authorizationFunction);

itemsRouter.post(...);
itemsRouter.put(...);
itemsRouter.delete(...);
As such, the
GET endpoint can be accessed by clients without presenting any "proof of authorization" — it is a public endpoint.
However, any other endpoint defined after your application mounts
authorizationFunction into
itemsRouter can only be accessed if
authorizationFunction can determine that the client making the endpoint request has the authorization to access it. For this API, Auth0 provides the proof of authorization mentioned in the form of a JSON Web Token (JWT) called an access token.
A JWT defines a compact and self-contained way to transmit information between parties as a JSON object securely. This information can be verified and trusted because it is digitally signed, which makes JWTs useful to perform authorization.
Once the user logs in using a client application, Auth0 provides the client with an access token that defines the resources that the client has permission to access or manipulate with that token. The access token defines information about what users can do in your API in the JSON object it encapsulates. As such, the client must include the access token with each subsequent request it makes to a protected API endpoint.
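As a side illustration (not part of the tutorial itself): the claims inside a JWT are just base64url-encoded JSON, so you can peek at them without verifying anything. This is useful for debugging only — unverified claims must never be trusted for authorization decisions, which is exactly why the middleware you build below checks the signature against Auth0's keys.

```javascript
// Decode a JWT's payload WITHOUT verifying its signature (debugging only!).
function decodeJwtPayload(token) {
  const parts = token.split(".");
  if (parts.length !== 3) throw new Error("malformed JWT");
  // base64url -> base64, then decode; Node tolerates missing '=' padding.
  const b64 = parts[1].replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

// Build a toy token to demonstrate (header.payload.signature):
const enc = (obj) =>
  Buffer.from(JSON.stringify(obj)).toString("base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

const toyToken = [
  enc({ alg: "RS256", typ: "JWT" }),
  enc({ sub: "user123", aud: "https://menu-api.example.com" }), // made-up claims
  "not-a-real-signature",
].join(".");

console.log(decodeJwtPayload(toyToken).sub); // → user123
```

The audience (`aud`) claim shown here is invented for the example; in your app it would match the Identifier you registered for the API above.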
You'll use the partition approach for this application as you need to protect all the endpoints that write data to the store.
Install authorization dependencies
To create your authorization middleware function, you need to install these two packages:
npm i express-jwt jwks-rsa
Here's what these packages do for you:
express-jwt: Validates the authorization level of HTTP requests using JWT tokens in your Node.js application.
jwks-rsa: A library to retrieve RSA signing keys from a JWKS (JSON Web Key Set) endpoint.
Since you are working in a TypeScript project, you also need the type definitions for these packages; however, only the
express-jwt package is available in the
@types npm namespace:
npm i -D @types/express-jwt
The helper function you need from the
jwks-rsapackage is simple and doesn't require strong typing.
Next, create a file to define your authorization middleware function:
touch src/middleware/authz.middleware.ts
Populate
src/middleware/authz.middleware.ts as follows:
import jwt from "express-jwt";
import jwksRsa from "jwks-rsa";
import * as dotenv from "dotenv";

dotenv.config();

export const checkJwt = jwt({
  secret: jwksRsa.expressJwtSecret({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 5,
    jwksUri: `${process.env.AUTH0_DOMAIN}/.well-known/jwks.json`
  }),

  // Validate the audience and the issuer.
  audience: process.env.AUTH0_AUDIENCE,
  issuer: `${process.env.AUTH0_DOMAIN}/`,
  algorithms: ["RS256"]
});
When you call the
checkJwt function, it invokes the
jwt function, which verifies that any JSON Web Token (JWT) present in the request payload to authorize the request is well-formed and valid. Auth0 determines the validity of the JWT. As such, you pass the
jwt function some variables to help it contact Auth0 and present it with all the JWT information it needs:
The
audienceand
issuerof the JWT, which are defined in your
.envfile and loaded into this module using
dotenv.config().
The
algorithmsused to sign the JWT.
The
secretused to sign the JWT.
To obtain the secret, you need to do some additional work: you use the
expressJwtSecret helper function from the
jwks-rsa library to query the JSON Web Key Set (JWKS) endpoint of your Auth0 tenant. This endpoint has a set of keys containing the public keys that your application can use to verify any JSON Web Token (JWT) issued by the authorization server and signed using the RS256 signing algorithm.
The
checkJwt function implicitly receives the request,
req, and response,
res, object from Express, as well as the
next() function, which it can use to invoke the next middleware function in the chain.
All that's left to do is to mount the
checkJwt middleware function before you mount your
itemsRouter write endpoints.
Open
src/items/items.router.ts and import
checkJwt under the
Required External Modules and Interfaces section:
/**
 * Required External Modules and Interfaces
 */

import express, { Request, Response } from "express";
import * as ItemService from "./items.service";
import { Item } from "./item.interface";
import { Items } from "./items.interface";
import { checkJwt } from "../middleware/authz.middleware";
Then, under the
Controller Definitions section, locate the definition of the
POST items/ endpoint, and right above it, add the following code to mount the authorization middleware,
itemsRouter.use(checkJwt):
/**
 * Controller Definitions
 */

// GET items/
itemsRouter.get(...);

// GET items/:id
itemsRouter.get(...);

// Mount authorization middleware
itemsRouter.use(checkJwt);

// POST items/
itemsRouter.post(...);

// PUT items/
itemsRouter.put(...);

// DELETE items/:id
itemsRouter.delete(...);
To test that your write endpoints are indeed protected, issue the following requesting in the terminal:
curl -X POST -H 'Content-Type: application/json' -d '{ "item": { "name": "Salad", "price": 4.99, "description": "Fresh", "image": "" } }' -i
The server replies with an
HTTP/1.1 401 Unauthorized response status, and the message
No authorization token was found, confirming that your write endpoints are protected. To access them, you need a JWT issued by Auth0. The fastest way to get that token is to use a client just like any of your users would.
Creating an Auth0 Client Application
You can use the WHATABYTE client application to create an Auth0 user, log in, and request protected data from your API. To configure this client, you need to create an Auth0 Single-Page Application in the Auth0 dashboard, which gives you the Auth0 Domain and Auth0 Client ID values needed to configure the demo client application to talk to the Auth0 authentication server and get access tokens for your logged-in users.
The process of creating an Auth0 client application is quite straightforward:
Open the Auth0 Applications section of the Auth0 Dashboard.
Click on the Create Application button.
Provide a Name value such as WAB Dashboard.
Choose Single Page Web Applications as the application type.
Click on the Create button.
On the application page that loads, click on the Settings tab. Use the Auth0 values present there to fill the missing values in Auth0 Demo Settings form of the WAB Dashboard client, namely Auth0 Domain and Auth0 Client ID.
Click the Save button below the form. The WAB Dashboard is a client to your Express server. To test this connection, click on the Menu tab and observe how it populates with the menu items defined in your API store.
Connecting a Client Application With Auth0
Click on the Settings tab of your Auth0 client application and update the following setting fields:
Allowed Callback URLs
Use the value of Auth0 Callback URL from the Auth0 Demo Settings form,.
After a user authenticates, Auth0 only calls back any of the URLs listed there. You can specify multiple valid URLs by comma-separating them (typically to handle different environments like QA or testing). Make sure to specify the protocol,
http:// or
https://, otherwise the callback may fail in some cases.
Allowed Web Origins
Use.
This field holds a comma-separated list of allowed origins for use with web message response mode, which makes it possible to log in using a pop-up, as you'll soon see in the next section.
Allowed Logout URLs
Use.
This field holds a set of URLs that Auth0 can redirect to after a user logs out of your application. The demo client has been configured to use the provided value for redirecting users.
With these values in place, you can scroll to the bottom of the "Settings" page and click on the Save Changes button.
Head back to the WAB Dashboard and click on the Sign In button. Since this may be the first user you are adding to Auth0, go ahead and click on the Sign Up tab in the pop-up that comes up. Then, provide an email and password to register your new user.
Once you sign in, the user interface of the WAB Dashboard changes:
The Sign In button becomes a Sign Out button
A user tab is now displayed below the Sign Out button.
Click on the user tab to see a custom page with your email as the title.
The WAB Dashboard caters to two types of users: regular users and users with a
menu-admin role. This role allows the user to create, update, and delete menu items in the WAB Dashboard.
In the next section, you create the
menu-admin role, associate permissions with it, and assign it to a new user that you create through the Auth0 Dashboard. This privileged user can unlock the admin features of the WAB Dashboard.
{
C++ Knowledge level: Beginner
Books read: 0
Book Currently reading: Beginning C++ Through Game Programming (Second edition)
}
^ Accidently purchased the second edition, not the third...but i'm sure there isn't a big difference
I'm trying to make a number guessing game and it works, but 1 bit of the code isn't working
i'm trying to get the players previous score and tell him if he/she beat his old score, by making an int called timesPlayed and preScore, i will only have this code run if the player has played twice, and there is an old score to display...but I can't get the preScore(old score) to display...I'm stumped, if I wasn't I probably wound't be here asking for help...
Can some one take a look at my code and tell me what I did wrong, and try and teach me so I don't make the same mistake again.
Also could you give me tips on inproving my code, thank you.
#include <iostream> #include <string> #include <cstdlib> #include <ctime> using namespace std; int main() { char playAgain = 'y'; while(playAgain == 'y') { srand(time(0)); int theNumber = rand() % 100 + 1; int tries = 0; int guess; int timesPlayed = 1; cout << "\tWelcome to Guess My Number\n\n"; do { cout << "Enter a guess: "; cin >> guess; tries++; if(guess > theNumber) cout << "\nToo High!\n\n"; if(guess < theNumber) cout << "\nToo low!\n\n"; }while (guess != theNumber); int score = 10000 / tries; cout << "\nThat's it! You got it in " << tries << " Guesses!\n\n"; cout << "Your Score: " << score; int preScore = score; ++timesPlayed; if(timesPlayed >= 2) { if(preScore > score) cout << "You Beat your old score of " << preScore << endl; if(preScore < score) cout << "Im sorry, but you did not beat your old score of " << preScore << endl; } cout <<"\n\nPlay Again?\n"; cout << "(y/n)\n\n"; cin >> playAgain; if(playAgain == 'y') cout << "\n\n\n\n\n\n\n\n\n\n\n\n"; } cout << "\n\nGoodbye, Play again some time."; }
This post has been edited by GunnerInc: 31 July 2012 - 06:22 PM
Reason for edit:: Removed font tag | http://www.dreamincode.net/forums/topic/287580-number-guessing-game-problem/ | CC-MAIN-2016-40 | refinedweb | 353 | 74.93 |
table of contents
NAME¶
getdirentries,
getdents — get directory
entries in a file system independent format
LIBRARY¶
Standard C Library (libc, -lc)
SYNOPSIS¶
#include
<sys/types.h>
#include <dirent.h>
ssize_t
getdirentries(int
fd, char *buf,
size_t nbytes,
off_t *basep);
ssize_t
getdents(int
fd, char *buf,
size_t nbytes);
DESCRIPTION¶.
IMPLEMENTATION NOTES¶
The d_off field is being used as a cookie to readdir for nfs servers. These cookies can be cached and allow to read directory entries at a specific offset on demand.
RETURN VALUES¶
If successful, the number of bytes actually transferred is returned. Otherwise, -1 is returned and the global variable errno is set to indicate the error.
ERRORS¶
The
getdirentries() system call will fail
if:
- [
EBADF]
- The fd argument is not a valid file descriptor open for reading.
- [
EFAULT]
- Either buf or non-NULL basep point.
- [
EINTEGRITY]
- Corrupted data was detected while reading from the file system.
SEE ALSO¶
HISTORY¶
The
getdirentries() system call first
appeared in 4.4BSD. The
getdents() system call first appeared in
FreeBSD 3.0. | https://manpages.debian.org/bullseye/freebsd-manpages/getdirentries.2freebsd.en.html | CC-MAIN-2022-27 | refinedweb | 170 | 57.77 |
North Shore
Children
& Families FREE!
The online and print forum promoting the development of children, families and the parents who care for them.
IN THIS ISSUE
Higher Education: What's A College Education For?
How Are We Doing?
What to Look For
Is College A Good Financial Investment?
What Parents Can Do Now
Summer Camps & Programs Showcase!
Community Calendar
Education Feature: Austin Preparatory School
Reader Contest! See page 2!
MAY 2012
Family & Friends
Celebrating All North Shore Moms!
by Suzanne Provencher, Publisher
Happy Mother's Day to all North Shore Moms! Whether you are Mom, Mommy, Mother, Mama, Ma, Mere, Maman, Madre, Mamma – or Nanna, Nan, Nana, Grammy, Grandma, Grandmother, Granny, Nonna, Nonni, Ya-ya, Memere, Abuela, Babushka – or Auntie, Guardian, Mentor, Teacher or Friend – here's wishing all Moms and caretakers throughout the North Shore a very Happy Mother's Day!
50: I’d also like to wish some of my oldest and dearest friends a very happy 50th birthday in May! This is the year of the BIG ONE for many of my friends…and me – so Happy Birthday to Marybeth & Donna on May 14 – and Happy Birthday to Tyla on May 17! Thanks, girls, for leading the way and
going there before me! I’ll catch up with you soon… too soon…how did we get here so fast?
We have another Nan contest to enter this month! Look for the contest "ad" on this page to see how you can enter to win a pair of tickets to see a musical at North Shore Music Theatre in Beverly! The deadline to enter is May 25, only one entry per person, please – and good luck to all who enter!

Just in Time for Summer!
Last chance for summer camps & programs! If you are a parent looking for a camp or summer program – see pages 12-16 in this issue – and register soon as enrollments are filling up fast! Or if you have a summer camp or program and if you still have enrollments to fill – our final camp showcase for this season will appear in our 2-month Summer issue! To appear in our Summer issue camp showcase, please see below for the deadlines for our next issue.

APARTMENT for RENT – Available June 1
2 bedrooms, 1 1/2 baths apartment located in Nahant – across the street from the ocean! New paint & flooring throughout. Parking, fireplace, washer & dryer in unit, fully applianced eat-in kitchen, many large closets. Owner occupied 2-family. $1,350/mo. for 1; $1,450/mo. for 2; $1,550/mo. for 3 + util. Located 11 miles North of Boston, convenient to NSCC/Lynn campus, Marian Court & Salem State. Near golf course, beaches, parks and bus line to commuter rail. Great community for biking, fishing, hiking and water sports! Please call 781.598.8025.

North Shore Children & Families invites you to Enter to Win at Bill Hanney's NORTH SHORE MUSIC THEATRE!
All prizes are awarded courtesy of North Shore Children & Families, and in partnership with select sponsors.
DEADLINE TO ENTER IS MAY 25! Please enter online at. Please – only one entry per person. Several winners will be selected.
Note to All Advertisers: Our next issue is our Summer issue – which covers 2 months – June AND July. Our Summer issue has a bonus printing so that we may restock our highest traffic distribution locations in early July (for our regular rates!). Our Summer issue also features our final Summer Camps & Programs Showcase for this season (see our May Showcase in this issue!) – so if you need to advertise in June and/or July – you’ll want to plan ahead to advertise in our Summer issue, as we do not have a separate July issue. To advertise in our 2-month Summer issue, please contact Suzanne by Wednesday, May 16, if you require our ad production assistance – or by noon, Friday, May 18, if you will be submitting a completed ad by May 22. Thanks for spending some time with us again – and Happy Mother’s Day! Until next month – Suzanne
Letter from the Publisher
Higher Education
by Suzanne Provencher, Publisher

"Education is our passport to the future, for tomorrow belongs to the people who prepare for it today." – Malcolm X

This month I am sitting in on this page for our editor, Michael F. Mascolo, PhD. As a college psychology professor at a local college on the North Shore, he is very busy with his own students as another school year winds down. The articles he shares in this issue are filled with lots of good information and tips for parents who are starting to think about college educations for their children. And since he is a college professor and parent himself, who better to share some inside scoop with all of you?

First up, he addresses what a college education is really for – which may surprise some of you and add new insight that will help you navigate this important and intricate process before you even know where your child might want to go – or where they should go. Next, he shares some facts and statistics that rate how institutions of higher learning are actually doing – with tips to improve your child's college experience wherever they decide to go. He also suggests the things that you and your child should be looking at and for in the educational experience, with some insider insight that will really give you some things to think about as you begin your search.

Finally, he asks the question: Is college a good financial investment? Considering the high costs associated with a college education, along with the worst employment outlook in years, how much you invest should be approached like any sound financial decision – with consideration to the ROI, the rate of return on your investment. A college education is one of the most expensive "purchases" that you will make, beyond your home. What is the earning potential in the field your child may pursue? Is the job outlook good in that field now and in the coming years? To spend $160K on a 4-year college education – only to be facing a job market that will pay you $30K per year to start, if you can even find a job – may make you think more about where and how much you should spend.

The bottom line: research, visit campuses early, explore your financial aid options and meet the application deadlines. Explore scholarship opportunities with your child's high school guidance department, as there are hundreds of local scholarships that many don't even know exist. So ask. Inquire with your employer and any organizations you or your children belong to, as many have scholarships available for children of employees and members. And apply for as many as you qualify for. Consider student loans. Touring a campus is not enough. Meet the professors in your child's desired area of study. Sit in on a few classes. Is it a boring lecture – or are the students invigorated and Continued on page 19
North Shore Children & Families
P.O. Box 150, Nahant, MA 01908-0150
781.584.4569
A publication of North Shore Ink, LLC © 2012. All rights reserved. Reproduction in full or in part without written permission of the publisher is prohibited.

Suzanne M. Provencher, Publisher/Co-Founder/Managing Partner – suzanne@northshorefamilies.com
Michael F. Mascolo, PhD, Editor/Co-Founder/Partner – michael@northshorefamilies.com
Designed by Group One Graphics
Printed by Seacoast Media Group
Please see our Calendar in this issue for our upcoming deadlines.
Where to Find Us North Shore Children & Families is available at over 425 locations throughout the North Shore! Our free, monthly parenting publication is available at North Shore libraries, schools, pediatric doctor & dentist offices, hospitals, pre-schools, children & family support services, retailers that cater to parents, children & thriving families,YMCAs, children’s activity & instruction centers (dance, gymnastics, music, children’s gyms) and more! You can find us from route 93 in Woburn – north to the Andovers & NH border – east to Newburyport & Salisbury – south to Gloucester & Cape Ann – west to Malden & Medford and everywhere in between.
We’ve got the North Shore covered! If you would like to be considered to host & distribute our free publication each month from your family-friendly, North Shore business location – or if you’re a reader who needs to find a location near you – please contact Suzanne: suzanne@northshorefamilies.com or 781.584.45.
Call today for your personal tour! 978-777-4699 ext. 12 • 487 Locust St., Danvers, MA
Higher Education
What's A College Education For?
by Michael F. Mascolo, PhD

Why go to college? The answer seems so obvious. We go to college in order to prepare for a career. We go to college so that we can get a job, to make money, to raise a family and so forth. That's certainly what many (if not most) of my students say. To be sure, college has practical value. Going to college to prepare for a career is an important reason for going to college. But it is not the only reason. Paradoxically, students who attend college for the express purpose of preparing for a career tend to perform more poorly than students who attend college for other reasons. They may also be even less likely to gain meaningful employment after college.

Why is this? Because career preparation alone does not prepare people for the workplace! Ask employers about what they are looking for in college graduates. Ask them what they think of current college graduates. Many will tell you that they feel that many college graduates lack basic skills that are necessary in the workplace. These include higher-order literacy skills; skills in oral and written communication; mathematical and quantitative literacy; creativity; adaptive problem solving; the capacity for critical analysis; and even basic background knowledge. These are not career-specific skills. The skills that employers want are those that are associated with a rigorous general education.
So, what’s a college education for? A good college education prepares a person to lead a good life. A good life is one that is informed, reflective and responsive. These are not simply a series of nice sounding words. A good college education strives for nothing less. Let’s unpack these ideas a bit. A good life. What does it mean to prepare a person for a good life? Our lives are processes that extend into the future. By
definition, a life is something that we cannot predict; we do not know what is to come. Nonetheless, each day we continuously confront questions about our immediate and long-range futures: Where should I go from here? How should I live? What will make my life a good one? How should I relate to others? Ultimately, these are questions about values. A good college education is one that prompts students to confront themselves – to articulate and clarify the values and beliefs that they will live their lives by. This means helping students to find out what is important and why. It means helping students act upon their articulated conceptions of what is important in life. Now how can a good college education help students build the resources that they need to live good lives? By informing; by prompting reflection; by teaching students how to be responsive to that which extends beyond their local concerns. Let’s explore each of these important functions of a good college education. A good college education informs the making of a good life. A good life is an informed life. I cannot make decisions about matters of importance without knowledge and the skills and resources to acquire knowledge. Life is big; it contains multitudes. We learn about what is important when master teachers guide us through great literature, moral philosophy and history. We learn about who we are by studying how we came to be. To do this, we study our national narratives and the histories of how our civilization evolved to become what it is now. However, while essential, it is not enough to study our own histories and cultures. We cannot learn about ourselves unless we also learn about who we are not. To do this, we must confront others – other cultures, other histories, other religions and other peoples. When we do this, we learn to imagine what it is like to walk around in someone else’s shoes. As a result, we learn to empathize with others and respect different traditions. 
We also learn about the limits of tolerance – about what it is in ourselves and in others that we cannot bear. This helps to make us better persons.
And the list goes on. We cannot make decisions about how to live without knowing about the vulnerabilities and resilience of the earth and the ways our bodies work. We cannot be responsible citizens unless we understand the political and economic systems in which we live, and about how they differ from other such systems around the globe. We cannot cast an intelligent vote unless we understand how our local lives fit into the global world. And we cannot begin to appreciate what is good unless we begin to develop our aesthetic sense – our sense of what is beautiful and how to fill our lives with beauty.

A good college education teaches students how to reflect on their lives. Although a good education informs, information without reflection is blind. A good college education teaches students to reflect upon what they have learned. Reflection is essential not only to understand what one has learned, but also to question and critically evaluate what one has learned throughout life. One cannot teach students how to reflect merely by providing them with information. A student can learn a great deal through lectures and independent reading. However, the traditional lecture format of teaching is not sufficient for deep learning to occur. Learning how to reflect on what one has learned requires active engagement in the learning process by both the student and the teacher. This is where our old friend Socrates comes in. Socratic teaching honors the open-ended process of questioning. A teacher who engages a student in Socratic dialogue is teaching a student how to reflect not only on what he or she has learned, but also on the student's own beliefs, assumptions and values. Here is an example of a Socratic dialogue conducted in an actual high school class*:

Facilitator: You're offered a $500 bike for $100. You know it's hot. What do you do?

(One boy in the group takes the bait.)

Continued on page 6
What's a College Education For?
Continued from page 5

Boy: "…and hopes. I feel like this: I can make it."

(One of the participants in the group discussion can't contain herself. She speaks directly to the boy.)

Girl: But you still bought the bike!

(All the kids laugh. The boy gets the point.)

In this example, the facilitator leads an open-ended discussion with a boy about the morality of purchasing a stolen bicycle. Instead of simply providing a lecture or explaining why it would be wrong to purchase a stolen bicycle, the facilitator questions the boy in an attempt to prompt the child to explore the logical implications of his own thinking. When students engage in this sort of dialogue, they are not only prompted to articulate their understanding of an issue, but also to consider changing their understandings as they come to see the inherent limitations and contradictions of their existing knowledge. Through this process, students learn to reflect on what they have heard, read or experienced both in and out of the classroom. This is the stuff of genuine and deep learning.

A good college education teaches responsivity. A good college education informs and fosters reflection. Through these means and others, students learn to be responsive. To be responsive is to be willing and able to respond to the practical and moral demands of a situation. A good college education teaches individuals to look beyond their individual selves – or at least to see that their own well-being is tied up with the well-being of others. A good college education teaches many forms of responsiveness, including:

• How to solve novel intellectual, practical and socio-moral problems
• How to interact and work with other people
• How to respond to conflicts between and among people
• How to respond to the social needs of one's community
• How to act as responsible citizens of one's nation and one's world

Responsivity is about forging relationships. To be responsive to another person, one must be able to identify the other's needs, problems and perspectives and act in accordance with one's sense of the other. However, responsivity is not simply a form of selflessness or altruism. When I am responsive to you, I give of myself, but I do not give myself away. Perhaps paradoxically, it is during our responsiveness to others that we come to feel our own vitality and sense of inner power.

Today's students are preparing to operate in a rapidly changing world. The technologies of today will be obsolete by tomorrow! It follows that our schools and colleges are preparing students for a world that does not yet exist (including careers that do not yet exist). How can we prepare students for a world that we cannot yet imagine? We must prepare students for the unpreparable. This requires not only learning basic skills and knowledge to draw upon, but also learning how to learn. It requires learning to adapt and adjust current knowledge and skills to an ever-changing world.

* Elkind, D., & Sweet, F. (1997). The Socratic approach to character education. Educational Leadership, 54, 56-59.

New Saturday Parent & Child Classes – Registration Open
Kindergarten Enrollment – Applications Available for Limited Spaces
NEW! Cape Ann Waldorf Summer Camp – from July 16 to August 10!
Higher Education
Higher Education: How Are We Doing?

How are colleges and universities doing in educating our young adults? A growing body of evidence suggests that the answer is, "Not so good." There are good reasons to believe that many, if not most, of our colleges and universities are having difficulty providing students with college-level academic skills – let alone preparing students to lead informed, reflective and responsive lives.

It is often said that American higher education provides the gold standard around the world. Students from all over the globe come to American universities for study. But this assertion is misleading. To be sure, post-graduate education in the United States continues to be exemplary. American research universities are more productive than ever in producing knowledge and in training young scientists and scholars. However, the same cannot be said for education at the undergraduate level. In the last several decades, a large number of books have sounded the alarm that not all is well in undergraduate education, and something must be done about it. Although it is easy to make assertions about the mediocre quality of undergraduate education, until recently, solid evidence to support such claims has been hard to come by. In their recent book, Academically Adrift: Limited Continued on page 8
Higher Education: How Are We Doing?
Continued from page 7
Learning on College Campuses, sociologists Richard Arum and Josipa Roksa have put forth some convincing evidence that the amount of learning that occurs in contemporary colleges and universities has declined to alarming levels.

Arum and Roksa reported findings of a long-term study assessing the amount of learning that occurs over the college years. The researchers studied over 2300 students from 24 four-year US colleges between 2005 and 2009. Students completed the Collegiate Learning Assessment (CLA) – a series of essay tasks that provide measures of critical thinking, analytical reasoning and written communication. They also completed a questionnaire about the types of activities in which students participated over the course of their college years. The authors suggest that the amount of learning demonstrated by students over the course of the college years was "disturbingly low". Their findings showed that 45% of the students in their sample showed no evidence of significant improvement in learning over the first two years of the study; 36% of students failed to demonstrate significant improvement over the four-year period of the study.
Arum and Roksa also reported evidence that suggests academic rigor has decreased in recent decades. Their study showed that in a typical semester, 32% of students did not take any courses that required more than 40 pages of reading per week. In addition, 50% did not take a course that required more than 20 pages of writing over the course of the semester. 25% of students took courses that required neither 40 pages of reading per week nor 20 pages of writing over the course of the semester. Over the course of their four-year college career, half of the students surveyed indicated that they had taken five or fewer classes requiring 20 pages of writing in a
semester; 20% reported taking five or fewer courses requiring 40 pages of weekly reading. These findings, if representative of most institutions of higher learning, suggest that many students can pass through a four-year college education without engaging in the types of rigorous learning activities that are essential for higher-level thinking and reasoning.

But the limited learning on college campuses is not a simple function of the academic rigor of college courses. It also has a lot to do with student culture on college campuses. Research suggests that students spend far less time on their academic work outside of class than they did 50 years ago. A time-honored rule of thumb is that college students should spend at least two hours in outside-of-class work (e.g., studying, completing projects, etc.) for every single hour spent in the classroom. So for a typical three-credit college course, students would be expected to spend at least six hours per week in study time. For a full 15-credit academic load, students would be expected to devote 30 hours of time to outside-of-class studying. However, between 1961 and 2003, the amount of time students spend in academic study fell from 24 hours per week in 1961 to just 14 hours per week.

What are students doing during the time that they are not studying? Studies show that on average students spend between 11 and 41 hours per week in leisure time or socializing with peers, 12 hours per week in paid work outside of college and 6 hours in co-curricular activities (e.g., internships, community service, etc.). One study showed that students spend on average 14 hours per week texting; 6.5 hours talking with friends on the telephone; 5 hours per week on social networking sites; and 11 hours per week watching videos (e.g., television, movies, internet videos, etc.).

In Academically Adrift, Arum and Roksa observed that the amount of time students spent studying was related to their academic performance. However, this was only true for students who studied alone. Increased study time did not result in higher academic performance for students who studied in groups. Arum and Roksa believe that there is more socializing than studying going on in student-led study groups.

Seeking "the college experience". It seems that many students tend to seek "the college experience" rather than "a college education". "The college experience" seems to be one in which college is as much (if not more) about social life than academic pursuits. Sadly, much of our everyday culture supports this type of thinking. Many colleges compete for students not by marketing their academic offerings, but by extolling the virtues of their extra-curricular activities and facilities. It is not unusual to hear people say that much of what is important about college life occurs outside of the classroom. While it is true that the social development that occurs outside of the classroom is an important part of college life, we should not regard social and academic life as equivalent in importance. Social life should be a distant second (or third, given the increasing necessity to work to support college tuition) to academics as an aspect of college life.

NOW OPEN!
Recreational Education Center
83 Pine St., Peabody 978.717.5062
The "REC" is a family resource center that specializes in supporting families with children, adolescents, young adults and those with special needs. All programs are based on the principles of Applied Behavior Analysis.
Programs Offered: • Open REC (indoor playground) • 13 Week Summer Day Camp • School Vacation Weeks • Weekends • After School • Social Skills and Play Groups • Date Nights & more!
Tues.-Fri. 11am-6pm; Sat. & Sun. 10am-6pm

JLC ADVOCACY
Special Education Consultant Advocate
Jody L. Crowther
6 Perkins Lane, Lynnfield, MA 01940
Tel: 781-334-4363
Free Phone Consultation!
Helping parents navigate the IEP process for children with special education needs.

Open your home to an international student and share your community for a few weeks this summer! Host an International Student (ages 14-17) and earn up to $2,400 this summer! • Students from Italy, France, and Spain are coming on July 5th for 3 weeks. • Students from China are coming on July 25th for 3 weeks. If you are able to provide a room, meals, transportation to a local point, and a caring environment, you have what it takes to host friendly students from abroad!
Educational Homestay Programs
Education Feature
Austin Preparatory School

The Two Most Lasting Gifts: Roots and Wings. This is the cornerstone of how Austin Prep educates its students in grades 6 through 12. Founded in the traditions of the Augustinian Friars, a religious order founded in the 13th Century, Austin Prep still uses some of the earliest philosophical ideas brought forward from Augustine of Hippo, the Order's Patron, who lived in the 4th century AD. One of the basic tenets of St. Augustine of Hippo was "to lay first a solid foundation": in order for a student to rise up and be educated, he or she must have a solid educational foundation. Austin begins working with students as they become adolescents, and as they begin to develop a fuller understanding of the importance of education. Augustine continued his thoughts on the education of young people by offering that young people needed to be fed and nurtured to help them grow and become individuals able to think critically and act justly in their grown-up lives: "If you have already taken on wings, let us nourish them. May these wings take to the heights to which you can fly." With these two basic ideas, roots and wings, Austin Prep has set the course for the education of young people since its founding 50 years ago in 1961.

Bringing Out the Best: Our educational programs are geared toward bringing out the best in college-bound young men and young women, starting in grade 6 and moving all the way through senior year and graduation day. We accomplish the task of nurturing our students through generous academic offerings made available in small classes, the average class size being 16 students. A student-teacher ratio of 10:1 is unequalled by its Catholic school peers, and places Austin Prep solidly in the highly competitive pack of this region's well-known private schools. The Austin Prep experience cultivates all facets of a student's burgeoning self – moral, spiritual, social, physical and intellectual.

Who We Are: Austin is a Catholic independent school in the Augustinian tradition. Our families have found that in partnering with Austin, the same values taught at home are reinforced and enhanced in our classes, daily activities, on the playing fields, in Chapel time and in our science labs. Austin reinforces the belief that it is not only a good thing to be smart, but it's cool to be smart in school, to be a good kid, to try new things and to explore opportunities as they present themselves. Our 6th through 12th grade continuum allows us to stay connected to our students throughout the entirety of their adolescent development. Whether students enter Austin in the Middle School or join us in their High School years, Austin friendships are life-lasting ones.
Understanding Ourselves and the Great Commandment: Headmaster Paul J. Moran summed it up best at a recent Open House Program: "We try to help all of our students understand and appreciate their gifts and those of their classmates and teachers. Using the academic and extra-curricular programs, we try to inculcate self-confidence, respect, inter-dependence and a sense of moral purpose. Our ultimate goal is to help young people learn how to carry themselves as talented, purposeful, morally grounded people in a complex world. Really, it's all about the relations among God, self and neighbor."

3-90's Each Day: Austin uses a simplified Block schedule, offering a core curriculum of 6 courses in a 6-day rotation. We offer three 90-minute classes daily, adding in time for a 40-minute Activity Period and a 25-minute lunch. Students can focus on 3 major classes and have optimal face-to-face time with their teachers. Small classes encourage students to engage with each other, to ask and answer questions, to offer ideas. With 90 minutes of teacher time, students know they will be challenged, and they come to class better prepared to face these daily challenges. They are more involved in their own education!

Rounding Out the Experience: At Austin Prep our students can lift their voices in our Chorus, perform on stage in the Meelia Theatre, our very own "black box" theatre, write an article for the Legend, our School newspaper, publish a poem in our literary magazine, volunteer at a soup kitchen or Headstart program in Lawrence or Lynn, match wits with peers in Academic Decathlon competition, learn how to make a delicious crème brûlée, take a hike up Mt. Monadnock, or send a rocket soaring above the football field. Students can bring a life-long love of a sport to one of our 17 inter-scholastic teams, or perhaps learn a new sport along the way. All of our programs, be they Middle or High School level, engage our students, meet them where they are, and encourage and develop skills and enhance abilities. We especially understand the importance of athletics in the lives of young people, and at Austin, every student, whether an accomplished athlete or interested beginner, is invited and welcomed to participate in our inclusive and championship athletic program.

What's Next: Equipped with a solid high school experience, college is all that much simpler! Our graduates continually tell us that their Austin studies thoroughly prepared them for college work, while the moral code and educational system grounded in value-based learning prepared them to be able to make good decisions for themselves. Austin imparts a maturity of thought and demeanor that ushers our students into responsible adulthood.
For More Information: To learn more about this amazing experience and to become a part of our vibrant and growing community of learners, contact the Admission Office, 781-944-4900, ext. 834, or email Katie LeBlanc, assistant director of admission, kleblanc@austinprepschool.org. Austin invites candidates for Middle School and High School to contact us now through the early summer as we operate on a rolling admission basis. If you are intending to apply during the traditional admission season in the Fall of 2012, look for Austin Prep representatives at various School Fairs in the region in September and October, and plan to visit our Open House in October. The information contained in this education feature was submitted by Austin Preparatory School, and published in partnership with North Shore Children & Families.
10 North Shore Children & Families
Higher Education
Going to College? What to Look for in An Educational Experience Don’t let your education get in the way of your learning. ~ Mark Twain Are you looking for a college for your son or daughter? What should you look for in a college education? How can you make a decision about what college or university is best for your child? The answer to this question, of course, has much to do with what you are looking for in a college education. Of course, for many if not most of us, the choice of what college or university our children will attend will have a lot to do with money. A college education costs money – lots of it. Families who are fortunate enough to have the means to support a child through college will have a great deal of choices at their disposal. Families with fewer resources often find that their choices are limited to what they can afford. If you are looking for bang for your buck, in general, the more competitive the college, the more likely your investment in college will yield financial results (see Is College A Good Financial Investment?, in this issue). Colleges are often rated in terms of their “competitiveness”. Competitiveness is generally defined in terms of admission rates – the percentage of applicants who gain admission into a college. So a college with a low admission rate is
regarded as “more competitive” than one with a high admission rate. Indeed, more competitive private colleges tend to have certain advantages:
• They tend to attract talented, higher-achieving students. The presence of such students tends to raise the level of instruction and interaction between and among students and professors; this tends to result in a higher caliber of education;
• More competitive private colleges tend to have smaller classrooms and higher quality learning facilities;
• Most important, such colleges tend to have high quality faculty – instructors who care about teaching and learning, and who are active and productive scholars in their fields.
However, competitive colleges have certain disadvantages as well. They are, of course, more difficult to gain admission to. They are also quite expensive. One solution to the problem of expense is to consider public universities. Each state has a flagship public university. Such universities (such as the
NOW ENROLLING!
University of Massachusetts at Amherst) tend to have nationally and internationally known faculty members. Because they are public universities, they tend to be much less expensive than private colleges. However, such universities tend to place a higher value on faculty research over undergraduate teaching. As a result, even though the faculty might be wonderful scholars, they may not be wonderful teachers. Graduate students frequently teach classes in large lecture halls. There is less one-on-one student-teacher interaction and a greater reliance on multiple-choice tests. This is not necessarily the best way to learn.
Regardless of whether one attends a private or public institution, it does not follow that a student will receive a good education merely because a college or university is regarded as competitive. What else should one look for when seeking a good college education?
A real core curriculum. A curriculum simply refers to the courses that students take at a college or university. Most colleges will market what they call their “core curriculum”. A core curriculum is designed to teach students foundational knowledge and skills that they will need throughout college and life. The problem is that the vast majority of “core curricula” are “cores in name only”. Most colleges adopt a system of “distribution requirements”. Students choose which courses they want to take from large lists of courses from various academic areas (for example sciences, humanities, social sciences, etc.). On the one hand, this sounds like a good thing. Students are allowed to make choices on the basis of what interests them. On the other hand, the distribution requirement system virtually destroys the idea of a “core” curriculum. The lists of courses from which students choose are generally quite large; the actual courses offered do not follow any particular pattern. No clear themes or organizing principles structure a student’s selection of courses. As a result, the education that students end up receiving is “catch as catch can”. Whenever possible, seek colleges that offer a core curriculum that is as organized and coherent as possible.
Interactive and caring teachers. The faculty and the students are the lifeblood of a quality college education. Seek colleges where your child is likely to encounter interactive teachers who are willing to guide students actively through their learning. Such teachers will avoid the errors of being either too “teacher-centered” (e.g., the traditional lecture and multiple-choice test model of learning) or too “student-centered” (e.g., allowing students more freedom of choice than they are able to handle). Attend the open house offered by the school. Interact with the teachers. Are they dynamic? Are they interesting? Or are they boring? Ask them how they teach. Do they simply lecture? Do they have students write papers and do projects? Do they provide guiding feedback to their students?
A serious teaching and learning community. So much of learning takes place outside of the classroom. As a result, it is important to get a sense of the culture that exists both in and outside of the classroom. Is this an institution where professors and students alike value learning? Are there academic talks that take place outside of class? Do faculty work with students on scholarly projects and research? What do students do when they are not in class? Is this a “party school”? Are there students and groups available that take learning and community service seriously? To answer these questions, it is essential to go beyond the marketing materials that schools will send you. Remember, a college is trying to sell you something – something with an extraordinarily high price tag. To make intelligent decisions about something as important as college, it is essential to look beyond the flashy exterior. Seek out professors and students in the programs to which your child is applying. Talk to them. Ask hard questions. Then decide.
For young men & women, grades 6-12, who seek a challenging, yet supportive learning environment, in a Catholic, faith based setting.
Summer Camps & Programs Showcase Series Part 3 of 4
Series continues in our Summer issue.
JUNIOR GOLF CAMP
One Willow Road, Nahant 781.581.0840
PGA Golf Pro David Nyman is offering a 3-day instructional golf clinic for boys & girls ages 7-15 this summer! The camp runs for 6 weeks, June - August, on Monday-Wednesday mornings from 9 - 11:30a.m. The $165 fee includes:
• 3 days of individualized instruction
• A professional playing lesson
• Kelley Greens T-Shirt
• Coupon for a free pizza from the Kelley Greens Clubhouse
Call 781.581.0840 or visit for more info. & to register!
SWING & SWIM®
Salem State University
HIGH SCHOOL TENNIS WEEK
Ask about our Lexington camps, too!
Camp Birch Hill your home away from home
Located In The Beautiful Lakes Region Of New Hampshire
Campers choose from 50 activities to create their own personalized schedule! TWO, FOUR and SIX WEEK SESSIONS AVAILABLE
Boys And Girls Ages 6-15
Many Activities to choose: • Land Sports • Tennis • Paintball • Dance
• Water Sports • Horseback Riding • Go Karts
• Zip Line • Adventure • Canoeing • Golf
• Fine Arts • Climbing • Waterski • and more!
Celebrating 20 years of friendship and memories of a lifetime summer@campbirchhill.com • (603) 859-4525
DON’T MISS OUR SUMMER SHOWCASE!
Ad Space Closes 5/18!* Final Showcase appears in our Summer issue!
North Shore Children & Families presents the 5th Annual
Summer Camps & Programs Showcase Series – 2012! CAMPS & SUMMER PROGRAMS!
Secure your summer! ✔ Boost your summer enrollments & reach parents throughout the North Shore! ✔ Over 50,000 local readers - moms & dads with children of all ages & interests! ✔ Showcases run on bannered pages! ✔ Participation includes complimentary online text listing & link!
LAST CHANCE!
The largest camp showcases in print on the North Shore! *DEADLINE FOR SUMMER (June/July) SHOWCASE ADS: If you require ad production assistance, secure your ad space & submit your ad materials by Wed., May 16. If you do not require ad production assistance, secure your ad space by noon, Fri., May 18 – then share your completed ad by Tues., May 22.
Special Showcase ad sizes and pricing are offered for this series. To learn more or to secure your space, please contact Suzanne: suzanne@northshorefamilies.com or 781.584.4569.
Tara Montessori School
Regular and Summer Sessions Enrolling Now!
SUMMER CAMP – Infants (3 mos.) through Age 6
Join Us This Summer! Tara Montessori Summer Camp provides a comforting place that focuses on love and trust. Founded in 1988 by Toni Dunleavy, Director.
For Infants (3 mos.+) & Toddlers: features singing, reading, giggling, cuddling & tummy time For Preschool & Kindergarten (through age 6): features mini-sports, soccer, t-ball, bike riding & sprinkler fun
2012 Summer Camp Dates: June 12 – August 16, Mon. – Thurs., 8:30a.m. – 12:15p.m., with extended day options available.
62 School Street, Manchester, MA
Call today to register! 978.526.8487

WARNING: YOUR CHILD COULD BECOME CRAZY ABOUT MATH
We are math specialists who have helped thousands of children worldwide not only learn math, but love math.
Whether your child is struggling to stay at grade level, has already fallen behind, or needs to be challenged, we will develop an individualized learning plan to ensure success.
Summer is a great time to catch up, get ahead and keep skills fresh. Flexible hours and programs available for all ability levels. Call or visit to learn about our convenient and affordable options. Your neighborhood center is
Mathnasium Dodge Street Crossing 4 Enon St. N. Beverly, MA 01915 northbeverly@mathnasium.com 978.922.2200
1ST - 12TH GRADES • SAT & ACT PREP • HOMEWORK HELP • SUMMER PROGRAMS
Limited Supply of Flex Passes Available to Pay for Your Camp Days – Buy One Today!
Anytime, Summertime Camps at The Little Gym. Our unique camps provide three hours of fun and activities in a non-competitive, nurturing environment. Each day, different creative themes keep your child on their toes as they take part in exciting imaginative journeys. Choose one day, a few days, or a few weeks. Now Enrolling for Summer Classes and Camps.
• The Little Gym is the home of Serious Fun! Kids have a blast playing with their friends (that’s the FUN part), while at the same time getting all of the benefits of 3-dimensional learning: Brain Boost, Citizen Kid and Get Moving!
• Parent/child classes for infants and toddlers up to 3 years of age
• Classes in Gymnastics, Sports Skills, Dance and more for children 3-12
• AWESOME birthdays and fun theme-based day camps too!
• Open all summer long…air conditioned, clean, safe and FUN!
Call Today! Danvers, MA • 978.777.7977 Woburn, MA • 781.933.3388

BROOKS SCHOOL
NORTH ANDOVER, MA
Ages 4-12 – Four Two-Week Sessions Red Cross Swim Lessons, Outdoor Adventures, Crafts
Grades 7-10 – Eight One-Week Sessions Adventure, Performing and Creative Arts, Field Trips
Grades 3-8 – Six One-Week Sessions Movie Making, Game Design, Robotics, Swimming
CALL FOR MORE INFORMATION Tel: 978-725-6253 – daycamp@brooksschool.org
LAST CHANCE for CAMPS & SUMMER PROGRAMS!
Fun & innovative keyboard instruction.
NOW ENROLLING for Summer Camp! 6 week programs offered in July & August.
Call for your FREE introductory lesson! June 23 – Open House Please call to register & for location. Private & group lessons are available year-round.
Serving the Amesbury & Newburyport Areas: Alia Mavroforos, 978.834.3104 aliamusic@mail.com
If you still have slots to fill – we have one FINAL camp showcase coming up in our 2-month Summer issue, which covers June AND July! Ad space must be reserved by 5/16 if you require ad production assistance – or by noon, 5/18 if you will be submitting a completed ad.
Contact Suzanne to BOOST your enrollments today!
781.584.4569 suzanne@northshorefamilies.com
Higher Education
Is College A Good Financial Investment? The cost of college is skyrocketing. In the northeast, the average cost of tuition, room and board at a private, four-year college is in excess of $40,000.00. That is a lot of money. Does college pay off financially for students? Are the financial benefits of college worth the investment? This question is especially significant in today’s troubled economy. Graduates of the class of 2011 have entered into one of the worst job markets in recent history. In 2011, only 21% of college graduates entered the workforce, as compared with 51% of college graduates in the pre-recession days of 2007. Now, before moving on, it is essential to understand that what we are talking about here is the financial return associated with investing in a college education. There are many reasons to attend college. One of the reasons for attending college is to prepare for a career. Another reason to go to college is to increase one’s earning potential in one’s career. While these are legitimate reasons for attending college, despite what many people might think, they are not the only reasons for attending college or even the best reasons for attending college. A good college education brings about benefits that extend far beyond career preparation and income maximization. As a result, in the case of a college education, the bottom line simply is not the bottom line. Having made this important point, we return to the narrow question of whether a college education produces financial returns. Despite the skyrocketing costs of college, the answer to this question remains “yes”. It’s an increasingly troubling “yes”, but a “yes” nonetheless. One way to think about the financial benefits of going to college is to think of college as an investment. People make investments because they believe that they will make money above and beyond the amount of their investment.
The money that people make beyond that spent on their initial investment is called the rate of return on investment (ROI). When we look at the rate of return of a college education, we find that not all colleges are created equal. In general, the more selective a college, the greater the rate of return on investment. That is, students who graduate from more highly selective colleges tend to earn higher rates of return (make more money beyond their investments) than those who graduate from less selective colleges.
This is shown in the graph that appears above. For non-profit private colleges, the rate of return is about 6% for noncompetitive colleges and 11% for highly selective colleges. So even the least competitive college yields a non-zero rate of return. The rate of return for public schools is somewhat higher than that for private schools, especially for noncompetitive schools. This may be because public schools are generally less expensive than private schools. How high should a rate of return on investment be to justify the cost of an education? After all, a family may spend $150,000 on a four-year education, but it costs money to borrow that money! A typical interest rate for an unsubsidized college loan is about 6.8%. If the rate of return for attending a given college is less than the interest rate of a college loan, it would be more desirable to attend a different college. This can become a problem for students who attend less competitive private colleges, whose rate of return on investment is comparable to the interest rate of a college loan. The simple fact of the matter is that we are living in a rapidly changing informational age. The jobs of today and tomorrow require a suite of skills that calls for students to go beyond a high school education. Lifetime income is highly related to level of education. The more education a person has, the higher a person’s lifetime income. That relationship is not likely to go away any time soon. However, there is some reason to be concerned about the capacity for students who attend relatively non-competitive institutions to reap meaningful financial rewards from their family’s investment in a college education.
Wish you could give the person who has everything something they don't have?
Personalized Poems & Prose by Suzanne The perfect gift to enhance any special occasion. Clever verses for your invitations and thank you notes. Speeches, toasts and roasts. Birthdays • Graduations • Showers Weddings • Anniversaries • Births • Retirements • Holidays All Special Occasions
Life Celebrations
specializing in poignant, personalized eulogies – available in prose and in verse. Celebrate your loved one's life and share their story. Your guests will leave with smiles, fond memories and lots to talk about.
781.584.4569
or suzanne@northshorefamilies.com Samples available.
Childhood Education – Guest Contributor
Partnering with Your Child’s School: What Parents Can Do
by Mari Matt, Branch Executive Director of Salem YMCA
While this issue focuses on higher education, a recent issue of North Shore Children & Families was devoted to the issue of how to improve the education of our young people both locally and nationally. Mari Matt, Branch Executive Director of the Salem YMCA, shares some additional suggestions about what parents can do now to support their children’s learning and to improve the quality of education on the North Shore.
A good, early foundation helps achieve lifelong learning success. Read to your young children (books, menus, appropriate magazine articles, street signs, food and product packaging, etc.). Let your child hear the tone of your voice as it changes depending on what you are reading. Let them see the letters with the sounds, and have them repeat the words back to you. All of these are important pre- and early reading skills.
Read to your older children. You need not stop reading to your children once they learn to read by themselves. In fact, learning to read alone is only the first step to learning to read! Even though children may know how to read words and sentences, learning to read for comprehension is something that continues to develop throughout childhood and even through adulthood. Reading aloud together not only teaches the value of reading and sharing ideas together, it also helps children to fine-tune their reading comprehension skills.
Share some quiet time with your children. Try to have some quiet time with your children every day, or as often as you can. Many of us miss the importance of simply being together in fostering children’s development. If spending calm time during dinner is difficult, how about at breakfast or right before bed – or even a few extra minutes together before heading to the bus or to the car. Check in with each other and laugh together. My kids tell me about their days and I tell them about mine.
Remind your child that education is important and a priority to you. Make sure that your children get a good night’s sleep and they get to school on time. Make sure that they complete their homework – and check their homework. If your child is struggling with homework, help them or find help. If your child completes homework at an after school program, check it quickly. Congratulate your child on its completion and re-affirm its importance. It is no shame to seek help if you need it. If you don’t have the time or the ability to help your children with
North Shore People Are Talking About Us!
North Shore Children & Families is available for free each month at over 425 family-frequented locations throughout the North Shore!
Attention Advertisers: Ask us about our …
2012 PUBLISHING SCHEDULE
Issue                 Ad Space Deadline    Ads Due
Summer (June/July)    Fri., May 18         Tues., May 22
August                Fri., July 20        Tues., July 24
September             Fri., Aug. 17        Tues., Aug. 21
To explore your advertising options or to secure your space, please contact Suzanne at 781.584.4569 or suzanne@northshorefamilies.com. To learn more, please visit.
We’ve been advertising for several years now – and our ads consistently get a great response. We know, because we track our marketing effectiveness with the different advertising/marketing mediums we use! We measure the amount of inquiries from each advertising source, and use that data to identify our cost per inquiry as well as our cost per new member. (When it comes to inquiries, both the quantity and quality matter!)
We are very pleased with our partnership with this local parenting publication. North Shore Children & Families is a professional and classy publication, and Suzanne is passionate about making sure advertisements are accurate, attractive and effective. We believe this publication is a great marketing source to present our message to our target customers, and we’re optimistic that with its excellent content it will continue to be an excellent resource for area parents and local businesses.
… “Try Us!” program for new advertisers … Annual advertising frequency programs … The Annual Planner for Schools program … The North Shore Party Planner program … Annual Summer Camps & Programs Showcase series … Service Directory Target your message to North Shore parents. We’ve got the North Shore covered!
We periodically fine tune our marketing plan, reducing investment in those publications that yield less value per dollar invested in them. Regarding North Shore Children & Families, we have increased our marketing there, because of its impact with our target demographic…that is…it gets results for our businesses! Alan Ruthazer, Owner The Little Gym, Danvers & Woburn
their homework, don’t feel guilty. This is a common challenge. Find someone else who can help. It is better to admit that you are too busy or don’t quite grasp their homework and that you need someone else to help your child rather than to let your child struggle. If someone is not available to help, or if you cannot afford a tutor or homework helper, contact your child’s teacher or the principal of your child’s school. You will find people who are more than happy to try to find ways to help your child learn. Be involved in your child’s education. Even if you cannot join the PTO or volunteer in the classroom, your child’s teacher wants to hear from you and wants to work together to help your child be successful and happy. Schools can also connect you to additional services your family may need – so don’t be afraid to ask for help when you need it. Don’t give up. There are amazing, talented, hard-working teachers and great things happening in our schools. Public and independent schools are a great resource and a wise investment.
Higher Education Continued from page 3
engaged? Meet with current students and ask lots of questions. There is so much more to explore than the large marketing package that you receive in the mail. You probably wouldn’t buy a house after seeing a photo and fancy flyer – so approach your child’s higher education search with as much (or more!) research and consideration as you would when buying a house. A higher education is a huge investment, but you can do many things to help that investment pay off for your children and to lessen the financial burden now and in years to come. We also have a guest contributor this month, who shares what parents can be doing now to help your younger children reach lifelong learning success. Parents must play an active role from the very beginning, or they must find others who can help support their child when they can’t. It’s about more than just getting the
kids to school on time each day and glancing to see that their homework is done. It’s about getting involved in your child’s education, classroom, school and school work and instilling the importance of education in your children. Learning takes place inside and outside of the classroom. In our Calendar, you will find many family-friendly events and things to see and do that are not only fun – many offer
a learning experience, too. Attend a recycling event and use this opportunity to teach your children about our environment. Visit the local zoo or museums – and check out their websites first as many have free days with no admission fees. Join a local parenting group with your young child or get involved in a local race or fundraiser that benefits local people in our own community. Teach your children well. Turn everyday moments into learning experiences. Encourage your children to engage in family and community responsibilities. It’s never too early to start. Today’s students are tomorrow’s leaders and visionaries. “Children must be taught how to think, not what to think.” – Margaret Mead
Summer Advertising Specials For New Display Advertisers:
Buy One - Get One 15% Off! Buy a display ad in our Summer issue at open rate –
$ave 15% off your August ad! Or - "Try Us!" in 3 consecutive issues –
and $ave 10% off all 3 display ads! Summer issue ad space deadline is Fri., May 18 (or by May 16 if you require our ad production assistance!); completed ads are due by Tues., May 22. Our Summer issue covers 2 months – June AND July. See page 2 to learn more! To secure your space and $ave – contact Suzanne by May 18: 781.584.4569 or suzanne@northshorefamilies.com.
To see our current issue, advertising rates, sizes & more, please visit us online at.
"Try Us!" – You'll LOVE Us!
UNIQUE GIFT IDEA/WORDS FOR SPECIAL OCCASIONS:
MAY IS THE MONTH FOR: Dating Your Mate, Foster Care, Barbeques, Bikes, Blood Pressure Awareness, Hamburgers, Photographs, Recommitments, Salads, Older Americans, Asian Pacific American Heritage, Asparagus, Asthma & Allergy Awareness, Better Hearing & Speech, Flowers, Eggs, Ducklings, Mental Health, Physical Fitness & Sports, Strawberries, Transportation
Week 1: Nurses’ Week, Postcard Week, Teacher Appreciation Week; Week 2: Pet Week, Police Week, Stuttering Awareness Week, Wildflower Week; Week 3: National Bike Week, National Police Week; Week 4: Emergency Medical Services Week, Backyard Games Week
APARTMENT for RENT: 2-bedroom, 1.5-bath apartment available June 1st in Nahant – just in time for summer fun and island living! See ad on page 2!
If you do not need ad production assistance Ad Space Closes Fri., May 18
The Bayside of Nahant
Please submit your listings directly through our website.
North Shore's best kept secret & the perfect location for:
781.584.4569
SEEKING HOST FAMILIES FOR SUMMER: Host an international student (ages 14-17) and earn up to $2,400 this summer! See ad on page 8 to learn more about Educational Homestay Programs from Education First! BIRTHDAY PARTIES & SPECIAL OCCASIONS:
New Advertiser Summer Specials! If you’d like to try advertising with us – see page 19 for 2 special introductory programs (with great discounts!) for new advertisers!
Indoor Playspace Available for Parent Groups at the Recreational Education Center, Pine St., Peabody. Available Tues.-Fri. 11am-6pm; for groups with kids ages 0-16. Book a day, and your group will enjoy our ball pit, climbing structures, crafts, games, puzzles & more; see ad on page 8.
JLC Advocacy is offering a free phone consultation for parents who need help with special education and IEPs. See ad on page 8.
FREE CLASSES: Call today to schedule a FREE introductory class at The Little Gym! Danvers: 978.777.7977; Woburn: 781.933.3388.
GET TICKETS NOW: North Shore Music Theatre, Beverly, presents musicals, concerts and kids’ shows! See ad on back cover – get tickets at today!
To advertise, please contact suzanne@northshorefamilies.com.
June AND July Calendar Listings Due By May 22
suzanne@ northshorefamilies.com
It’s really time to start registering for summer camps & programs! See pages 12-16 in this issue for lots of great summer camps & programs! Take advantage of early registration discounts now! Pick up our Summer issue to see more options! To advertise in our 2-month Summer issue [June/July] Showcase (final showcase for this season!), contact suzanne@northshorefamilies.com by May 16!
The North Shore Party Planner
Oceanfront Splendor... Magnificent Views... Elegant & Affordable
To secure your ad space:
SUMMER (June/July) ISSUE DEADLINES! If you need ad production assistance Ad Space Closes Wed., May 16
Personalized Poems & Prose by Suzanne – the perfect words to enhance any special occasion. Personalized poems as gifts, clever verses for invitations, speeches, toasts, roasts and poignant eulogies. See ad on page 17.
If you have or are looking for a party business – locations, entertainment, invitations, decorations, cakes, favors & more – please see our North Shore Party Planner on page 20! To appear in our 2-month Summer [June/July] issue NSPP, please contact Suzanne by May 16: suzanne@northshorefamilies.com or 781.584.4569.
Personalized Poems & Prose by Suzanne For Gifts A Personalized Poem Makes a Perfect Gift for Any Special Occasion
For Invitations
• Weddings, Showers • Birthdays, Sweet 16s • Bar/Bat Mitzvahs • Anniversaries • All Special Occasions • Wedding & Function Packages • Many Menus to Choose From
Speeches, Toasts & Roasts
781.592.3080
781.584.4569
One Range Road, Nahant
Clever, Custom Verses for Your Invitations & Thank You Notes
For Events
suzanne @northshorefamilies.com
Have an Awesome Birthday Bash at The Little Gym!
Birthday Party on Roller Skates! Roller World, Saugus 781.233.3255 Party Line
BOOST Your PARTY Business HERE! Secure your ad space by May 16 to appear here in our 2-month Summer issue!
Big Apple Circus – Dream Big! All new show – Grandma’s Farewell Tour! Tix start at $20, shows run through May 13 at Boston City Hall. WEDNESDAYS: Cape Ann Waldorf School presents Morning Glory Parent & Child Classes, meets every Wed., 12:30-2pm; $280/10 wk. session. For parents/caregivers with children ages 20 months – 3.5 years. Call to register: 978.927.1936. Select Wednesdays at PEM, Salem: PEM Pals, for caregivers w/children 5+; free with museum adm., 10:30am. Fun, interactive program with books, movement, music, art & hands-on activities.Visit for specific dates. THURSDAYS: Cape Ann Waldorf School presents Morning Glory for the Youngest Child Parent & Child Classes, meets every Thurs., 12:30-2pm; $180/10 wk. session. For parents/caregivers with infants ages 3-19 months. Call to register: 978.927.1936. FRIDAYS: Cape Ann Waldorf School presents Morning Glory for the Youngest Child Parent & Child Classes, meets every Fri., 9-10:30am: $180/10 wk. session. For parents/caregivers with infants ages 3-19 months. Call to register: 978.927.1936. Stargazing at the Gilliland Observatory, free, every Friday 8:3010pm, weather permitting; at Museum of Science, Boston. Call 617.589.0267, updated every Fri. at 5:30pm, with info. about that night’s observing session. SATURDAYS: Parent & Preschooler Playgroup, ages 2.5-5 years, meets most Saturdays, 910:30am, at Harborlight-Stoneridge Montessori School, Beverly. Free, but RSVP at 978.922.1008. See ad on page 7. New Parent & Child (20 mos.-3.5 yrs.) Morning Glory Classes at Cape Ann Waldorf School, Beverly. Features play, bread making, circle games, snack & conversation. Space is limited. Registration open; see ad on page 6. Call 978.927.1936 to register. Bring your bottles & cans to Stone Zoo, Stoneham! 10am-2:30pm, parking lot. Help the environment and a worthy cause – held the 2nd Saturday of each month through October. All proceeds benefit conservation efforts supported by Zoo New England. 
SUNDAYS: Global Gods: Multigenerational Religious Education – Sunday School for the Whole Family. Free, ages 6-100; May 20 & 27 at Northshore Unitarian Universalist Church, Danvers.
SATURDAYS & SUNDAYS at PEM: Family Tours & Gallery Explorations at PEM, Salem, 11:30am-noon. Free w/museum adm.;. Drop-in Art Activities, 1-3pm, free w/mus. adm. at PEM, Salem. MAY 1: May Day, Loyalty Day, Mother Goose Day, Worthy Wage Day MAY 2: Baby Day, Brothers’ & Sisters’ Day MAY 3: World Press Freedom Day Spring Antiques & Collectibles Auction at Lynn Museum & Historical Society, 6pm preview; 7pm. Free for adults; 590 Washington St., Lynn. Fabulous selection of vintage antique & collectible items; cash bar. 100% of auction proceeds benefit the Lynn Museum & Historical Society’s educational & outreach programs. MAY 4: Bird Day, Renewal Day, Space Day, Weather Observers’ Day MAY 5: Cinco de Mayo, Scrapbook Day New Parent & Child (20 mos.-3.5 yrs.) Morning Glory Classes at Cape Ann Waldorf School, Beverly. Features play, bread making, circle games, snack & conversation. Space is limited. Registration open; see ad on page 6. Call 978.927.1936 to register. 2nd Annual Strays in Need Fundraiser; $20/person. Tix on sale now at Danvers Animal Hospital, 367 Maple St., Danvers. To donate a silent auction item, raffle item or gift certificates or to be a sponsor, contact Amy Cyr, Hospital Manager, at dahcyr@aol.com. SalemRecycles & the Beautification Committee invite all to their annual, earth-friendly spring event on Salem Common! Morning neighborhood cleanup w/pizza for volunteers, recycling opportunities, environmental displays, live music & more. To volunteer for a neighborhood clean-up (8:30am-11:30am), meet team leaders on the Essex St. Pedestrian Mall. For a list of other locations or to organize a group to clean another location, call Ellen at 978.619.5676 for neighborhood clean-ups. From 10am-1pm, Green Programs on Salem Common features clothing & household Swap & Drop (10am-noon), recycle plastics, bags, Goodwill textile & small household items recycling. For info. on recycling, contact Julie at 978.619.5679. 
CourtYard Sale at Lynn Museum, 9am-1pm; free, all ages. 590 Washington St., Lynn. Search out treasures, raffles, refreshments; bring your own table & sell
North Shore Children & Families your items for $20/rental. Call Abby at 781.581.6200 to reserve your spot. Local Georgetown Mom & Author, Maggie van Galen, Book Reading & Signing – The Adventures of Keeno and Ernest – The Banana Tree, 12 noon, free for parents w/pre-K through early elementary school age kids. At Book Rack, 52 State St., Newburyport. Masters of Flight: Birds of Prey returns to Stone Zoo, Stoneham! Through Sept. 3; daily show times at 11am, 1pm, 3pm; free w/zoo admission. MAY 5 & 6: The North Shore Rock & Mineral Club invites all to the 49th Annual New England Gem & Mineral Show at Topsfield Fairgrounds. Fun for the whole family. Hours: May 5, 9am-5pm; May 6, 10am4pm. MAY 6:
21
MAY 12: Birth Mothers’ Day, International Nurses’ Day, Limerick Day, Kite Day New Parent & Child (20 mos.-3.5 yrs.) Morning Glory Classes at Cape Ann Waldorf School, Beverly. Features play, bread making, circle games, snack & conversation. Space is limited. Registration open; see ad on page 6. Call 978.927.1936 to register. Hit the Streets for Little Feet 5K Road Race & 1 Mile Fun Run, 8-11am, all ages; $20/person or $50/family. 36 Lincoln St., Manchester by the Sea. Proceeds support Manchester Memorial Elementary School Enrichment Programs. Community celebration follows the race; raffle drawings. For info. & to register, visit. MAY 13: Happy Mothers’ Day to All North Shore MOMS! Tulip Day
MAY 7:
Mother’s Day Festival at Peabody Essex Museum, Salem. Features film, animal presentation, presentations, art activity, story, art, brunch & more. For full schedule & pricing, visit.
Astronomy Day,Tourism Day
MAY 14:
MAY 8:
Dance Like A Chicken Day
Teachers’ Day, Iris Day, No Socks Day, World Red Cross Day
MAY 16:
National Nurses’ Day, International No Diet Day,Tourist Appreciation Day
MAY 9: School Nurses’ Day, Receptionists’ Day, Train Day MAY 10: Clean Up Your Room Day Nutrition & Wellness Speaker, Ranan Cohen, 6:30-8pm, at Sparhawk Theatre & Center for the Arts, 196 Main St., Amesbury. Tickets are $10; major credit cards accepted. For parents who want to learn how to successfully incorporate wellness & good nutrition into everyday life; Q&A follows. For tickets, email Norah at ntinti@sparhawkschool.com or call her at 978.388.5354. MAY 11: Family Child Care Providers’ Day,Twilight Zone Day MAY 11 & 12: Hunt’s 11th Annual Digital Demonstration & Sale at Hunt’s Photo & Video, 100 Main St., Melrose; 10am-8pm. Admission & seminars are free for teens & adults. Learn how to use & care for your camera, polish your picture-taking skills, see new models & save! Register online or by phone: or 781.662.8822.
If you need to advertise in our 2month Summer [June/July] issue, with bonus distribution for our regular rates, and if you need our ad production assistance, please confirm your ad size and submit your ad materials TODAY! You can see our regular display ad rates, sizes, available discounts & more at. Do you have a summer camp or program? Do you need to BOOST your enrollments? See page 14 for more info. on our 5th Annual Summer Camps & Programs Showcase Series – the largest in print on the North Shore! Showcase appears in this issue and continues in our 2-month Summer [June/July] issue – see above & next page for advertising deadlines. Contact suzanne@northshorefamilies.com for camp showcase ad rates & sizes. Boston Ballet presents the third annual Next Generation performance, 7pm, at The Boston Opera House. Featuring students from the PreProfessional Program and Boston Ballet II, with musical accompaniment from over 50 musicians with New England Conservatory Youth Philharmonic Orchestra. For tickets:. Join us on the Ballet Bus from the North Shore studio to The Boston Opera House – call 617.456.6380 for more info.
Continued on page 22
22 North Shore Children & Families Community Calendar Continued from page 21
AND JULY events directly through our website (see beg. of this Calendar for details).
MAY 16:
Buy A Musical Instrument Day
Wear Purple for Peace Day
MAY 23:
MAY 17:
Lucky Penny Day
Happy 50th Birthday,Tyla! MAY 18: Advertising Space Reservation DEADLINE at NOON for ADS in our 2month SUMMER [June/July, with bonus distribution] issue! To advertise, contact suzanne@northshorefamilies.com! If you need our ad production assistance, please confirm your ad size and submit your ad materials by May 16! You can see our regular display ad rates, sizes, available discounts & more at. Contact Suzanne for camp showcase ad rates & sizes. Museum Day, Bike to Work Day, No Dirty Dishes Day, Visit Your Relatives Day MAY 19: Armed Forces’ Day, Boys & Girls Clubs’ Day, Circus Day New Parent & Child (20 mos.-3.5 yrs.) Morning Glory Classes at Cape Ann Waldorf School, Beverly. Features play, bread making, circle games, snack & conversation. Space is limited. Registration open; see ad on page 6. Call 978.927.1936 to register. Step by Step performance by Boston Ballet School students at their North Shore Studio (Lynch/van Otterloo YMCA, Marblehead); free/all ages. Mommie’s Night Out, 6-10pm. Recreational Education Center in Peabody is hosting a Mommie’s Night Out in conjunction with the Peabody PTO for Moms to get together for laughs & relaxation; appetizers, beverages, reps. from Pampered Chef, Lia Sophia, Avon; $10/mom, proceeds benefit the Peabody PTO.
Children’s Garden Opening Day, The Trustees of Reservations, Long Hill, 572 Essex St., Beverly; 3:30-5pm. Members free; non-members $5/family. Help plant colorful annual flowers & organic vegetables in our magical Children’s Garden. Space is limited, please pre-register online at. MAY 25: National Missing Children’s Day,Tap Dance Day
Deadline to enter to win tickets to see a musical at North Shore Music Theatre! See page 2! MAY 27:
MAY 28: Memorial Day, Amnesty International Day MAY 29: Learn About Composting Day, Paper Clip Day Spring Concert at Cape Ann Waldorf School, Beverly; featuring CAW strings program. Free; 7pm, at new campus at Moraine Farm, 701 Cabot St., Rte. 97, Beverly. MAY 30: Water A Flower Day MAY 31: Save Your Hearing Day JUNE 8, 9 & 10:
Ukulele Jam, 3-4:30pm at Hamilton/Wenham Community House. $5/person, all ages, all levels & drop-ins welcome. Participants need the book The Daily Ukulele: 365 Songs for Better Living by Jim Beloff. MAY 21:
JUNE 9:
Memo Day, Waitservers’ Day
Tales of Mother Goose, student performances at 5:30 & 7pm, free at Boston Ballet School’s Marblehead Studio at the Lynch/van Otterloo YMCA, 40 Leggs Hill Rd., Marblehead.
Peace Day, Be A Millionaire Day, Pick Strawberries Day
MAY 22: Community Calendar listings’ DEADLINE at NOON for 2-month SUMMER issue! Please submit your listings for JUNE
APARTMENT FOR RENT
GEM & MINERAL SHOW
Just in Time for Summer! 2 bdrm. apartment available in Nahant – across from ocean! See ad on page 2!
North Shore Rock & Mineral Club’s 49th Annual N.E. Gem & Mineral Show Topsfield Fairgrounds – May 5 & 6
COACHING
GIFTS/SPECIAL OCCASIONS
Coaching for Couples, Parents Life Coaching See ad on page 19! DANCE INSTRUCTION Boston Ballet School/NS Studio Marblehead 781.456.6333
Happy Birthday, David!
Anything Goes, musical production by Sparhawk Spotlights, at The Sparhawk Theatre & Centre for the Arts, 196 Main St., Amesbury. Performances at 7:30pm on 6/8 & 9 and at 2pm on 6/10. Tickets are $10/advance, $15 at door; group discounts available, credit cards accepted. For groups & to purchase tickets, email Norah at ntinti@sparhawkschool.com or call her at 978.388.5354.
MAY 20:
Service Directory
EARLY EDUCATION Next Generation Children’s Centers Locations include Andover & Beverly 866.711.NGCC ENTERTAINMENT North Shore Music Theatre Beverly 978.232.7200 See ad on back cover! FAMILY FUN! Big Apple Circus presents Dream Big – through May 13 at Boston City Hall
Personalized Poems & Prose by Suzanne Speeches, eulogies, gifts, verses for invitations, etc. See ad on page 17! IN THIS ISSUE New advertiser specials – page 19 Reader contest – page 2 SCHOOLS Austin Preparatory School Reading 781.944.4900 Brookwood School Manchester 978.526.4500 Cape Ann Waldorf School Beverly 978.927.1936
FUN & FITNESS
Clark School Danvers 978.777.4699
The Little Gym Danvers and Woburn
Cohen Hillel Academy Marblehead 781.639.2880
Recreational Education Center Peabody 978.717.5062
Covenant Christian Academy West Peabody 978.535.7100
North Shore Children & Families
23
SCHOOLS
SPEECH-LANGUAGE THERAPY
Glen Urquhart School Beverly Farms 978.927.1064
Karen J. Cronin, MS CCC-SLP Middleton 978.239.5520
SUMMER CAMPS & PROGRAMS
SUMMER CAMPS & PROGRAMS
SUMMER CAMPS & PROGRAMS
Mathnasium North Beverly 978.922.2200
Summer Quest at Crane Ipswich 978.380.8360
North Shore Children’s Theatre Salem • 781.248.9458
Tara Montessori School Summer Camp Manchester • 978.526.8487
Phoenix Summer Adventures Salem • 978.741.0870
Waring School Summer Programs Beverly • 978.927.8793
Harborlight-Stoneridge Montessori School Beverly 978.922.1008 The Phoenix School Salem 978.741.0870 Plumfield Academy Danvers 978.304.0273 Shore Country Day School Beverly 978.927.1700 Sparhawk School Amesbury 978.388.5354 Tower School Marblehead 781.631.5800 Waring School Beverly 978.927.8793 SEEKING HOSTS Host an Int’l. Student & earn up to $2,400 this summer! See ad on page 8! SPECIAL EDUCATION JLC Advocacy Lynnfield 781.334.4363 See ad on page 8!
Boston Ballet School/NS Studio Marblehead 617.456.6333 Brooks School - Summer North Andover 978.725.6253 Brookwood - Summer Manchester 978.526.4500 Camp Birch Hill Lakes Region, NH 603.859.4525 Camp Quinebarge White Mountains, NH 603.253.6029 Cape Ann Waldorf Camp Beverly 978.927.8811 Glen Urquhart School Beverly Farms 978.927.1064 ext. 131
Shore Sports & Enrichment Camps Beverly • 978.927.1700 Summer’s Edge Tennis School at Salem State University & in Lexington 781.391.EDGE Summer at Tower Marblehead 781.631.5800 Summer Programs at North Shore Comm. College community.northshore.edu/sod
TUTORING A+ Reading Center Reading Tutor/Individual Lessons
Serving the North Shore 781.799.2598 mperkins@aplusreadingcenter.com Mathnasium The Math Learning Center North Beverly • 978.922.2200 See ad on page 15!
Please Support Our Advertisers, Who Sponsor this Publication for You & Your Family!
ATTN: SUMMER CAMPS!
Kelley Greens Jr. Golf Camp Nahant 781.581.0840
Boost your summer enrollments in our 5th Annual Summer Camps & Programs Showcase series!
Keys for Kids Serving the Amesbury & Newburyport Areas
Series continues in our 2-month Summer issue – space closes May 16!
The Little Gym Danvers & Woburn
GET YOUR SUMMER CAMP OR PROGRAM LISTED HERE! See page 14! | https://issuu.com/michaelfmascolo/docs/nscf_may_2012 | CC-MAIN-2017-04 | refinedweb | 12,807 | 65.32 |
Has anyone did any work with bitcoins?
Has anyone did any work with bitcoins?
You can load custom datasets using fetch_csv()
There are several relevant datasests on quandl, here's one:
With Tyler's tip, I was able to cook together an example.
You can clone this and make a trading strategy now that you have the price imported.
I tried combining this with the sample algo but had no luck: halp?
import pandas def rename_col(df): df = df.rename(columns={'Weighted Price': 'price'}) df = df.fillna(method='ffill') df = df[['price', 'sid']] log.info(' \n %s % df.head()') return df def initialize(context): fetch_csv('', date_column='Date', symbol='weighted_price', usecols=['Weighted Price'], post_func=rename_col, date_format='%Y-%m-%d' ) context.stock = sid(3766) context.max_notional = 1000000.1 context.min_notional = -1000000.0 def handle_data(context, data): if 'price' in data['weighted_price']: record(weighted_price=data['weighted_price'].price) vwap = data[context.stock].vwap(3) price = data[context.stock].price notional = context.portfolio.positions[context.stock].amount * price if price < vwap * 0.995 and notional > context.min_notional: order(context.stock,-100) elif price > vwap * 1.005 and notional < context.max_notional: order(context.stock,+100)
What was the error?
I agree, the error will be helpful.
Also, Kent, what are you trying to do? Currently the backtester only permits buying and selling US equities. Bitcoin prices can be used as a signal, but you can't (yet) model buying and selling bitcoin. Is there a stock you are trying to trade as Bitcoin value changes?
it was a runtime error; didn't know modeling buying/selling wasn't implemented: that answers my question :)
Dan, will buying and selling bitcoins be possible in the future?
Hello Cos,
Our livetrading model depends on connecting your Quantopian account to a broker. So far there isn't an obvious bitcoin broker to do livetrading with.
I expect our bitcoin modeling tools will get strong this summer, though.
Dan
ahahhaha without read all the thread i try and try to do what Kent Davis just try to do without success!!!
anyway you have to contact:
coinsetter
coinMKT
they are going to open in days....
i doesn't understand why having the price is not possible to backtest, let's the user choose the fees and the spread if is this the problem, if not what is?
anyway the problem will be solved soon because now 2 broker will be open in days (coinmkt in 5 days, coinsetter will also work in around a week)
Is there any hope of having the ability to backtest buying / selling of bitcoin in the future? Arthur, I'm not quite clear on your comment, are you saying that coinsetter and coinMKT would allow us to backtest buying / selling within Quantopian? Thanks | https://www.quantopian.com/posts/anyway-to-import-bitcoin-data | CC-MAIN-2015-22 | refinedweb | 456 | 59.7 |
Python Programming, news on the Voidspace Python Projects and all things techie.
Beauty in Simplicity
I'm converting over the contents of my website using docutils. There's hundreds of pages of badly marked up contents, of which I want to keep a fair proportion. This is from the days when I was Learning HTML.
Luckily using reST I can just copy and paste the text contents from the browser. Because reST is so simple, only the minimum of markup is needed. The next step is to use the html_parts function to generate the HTML, and insert it (along with doc title and breadcrumbs [1]) into a template.
I'm still learning about reST though, and I needed to do something that looked like a blockquote. To look right it needed to be indented - and reST uses indentation as part of it's syntax. I trawled through the docs trying to find out how to markup a passage to be displayed as a blockquote. Eventually I came across this gem :
A text block that is indented relative to the preceding text, without markup indicating it to be a literal block, is a block quote.
In other words - just leaving it alone caused it to be marked up correctly. Nice one.
The advantage of this approach is that restyling in the future will be much easier - I just modify the template, or the HTML generation tools, and run it over the document tree. I'm not sure yet if I'll autogenerate the indexes, I'll still need to write the descriptions even if I do.
Even including images and blocks with different CSS classes is easy in reST. There's very little I won't be able to do with it.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2005-04-08 17:24:11 | |
Categories: Python, Website
Get It While it's Hot
Get your luvverly fresh updates.... Yes it's that time of year again. Spring is in the air and things are changing over at the Voidspace Python Emporium [1].
Lots of updates, big and little, to various of the Python modules here. There'll be an announcement over at comp.lang.announce shortly, but here's a summary.
- downman.py - improved version available for download
- guestbook.py - quite a nice improvement here
- logintools - bugfix
- cgiutils.py - better functions for creating/sending emails
- upload.py - now set I/O mode to binary on windows (bugfix really)
- googleCacheServer.py - browse the internet through the google cache, no really !
Last but not least, there's a new plugin for Firedrop. It's called FireMail - and it allows you to send your blog posts as emails. I've also updated FireSpell so that you can configure the language (amongst other improvements). See the docs (and download them) over in the Firedrop thingy.
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2005-04-07 14:32:45 | |
Categories: Python, Projects
Aargh....
Not a good time for this to happen, the FTP server of my website is truncating files. This means my blog files are all cut short. Normally it wouldn't matter - except Daily Python URL have just featured two of my blog entries. About 600 visitors will have just decided my blog is rubbish !!
Website Woes
Unfortunately my host wasn't too gracious when I complained about the problem. He (they ? I'm actually not sure) claimed the problem was my end. They don't handle problems very well - but on the other hand I only paid them $49 for the year and I'm using almost 20 gig of bandwidth every month. They also installed Python 2.3.4 and various extensions for me. Finding a professional host that offers all that will cost me a minimum of 5 times this cost.
Luckily the problem might be resolving itself. A customer of TBS might be hiring me to build his website and a custom gallery application (which I'll be able to open source). I may well be able to piggy back on the hosting package I arrange for him.
Whilst We're On the Subject
There has been some discussion on the docutils mailing list about using reST to build websites. I'm getting close to needing to convert a lot of articles to the new design - so I'll have to build something like this. Watch this space :-)
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2005-04-06 21:57:31 | |
Back to the Days of Mosaic
The Problem
Our internet at work is heavily restricted. Very heavily - it's whitelist only, which means 1% of 1% of 1% of the web. We have a single computer enabled for unrestricted web access.
As well as a huge inconvenience, this is a great challenge. There are various possible solutions - all requiring access to a server 'on the outside'. As we are white list only this will take a little bit of stealth and time. In the meanwhile I've found an unusual, temporary, solution.
Now all google domains are allowed (which thankfully gives me access to comp.lang.python). Unfortunately, attempting to fetch pages from the google cache via the web interface usually fails. It accesses them by IP address - which IPCop blocks. Mercifully when I use the excellent pygoogle interface to the google web api it does return the cached page (if google has it). This gave me an idea.
Breaking Out in 50 Lines of Code
I've hacked up an implementation of a proxy server based on SimpleHTTPServer. Run it and set your browser proxy settings to localhost:8000. Any pages requested are then fetched from the google cache.
Retro Internet
Because the google API limits us to the number of fetches we can make (1000 per day) we restrict the pages we fetch to pages that google is likely to have cached. This means pages that are plain text or html - forget javascript, CSS, images, etc. This is for those in similar situations - or those who want a real retro internet experience. This is back to the days of mosaic. Not bad for many pages though. Here's through the proxy. Even better, when I checked PlanetPython.org - it was yesterday's version, blimey.
Here's the Code
If you exclude docstrings, comments, and docstrings, there's actually 45 lines of code following. Update: it's now slightly longer - possible 50 lines - as it now bypasses the google cache for anything in google domains, this allows you to do google searches !
# v0.1.3
# googleCacheServer.py
# A simple proxy server that fetches pages from the google cache.
# Released subject to the BSD License
# Please see
# For information about bugfixes, updates and support, please join the Pythonutils mailing list.
#
# Scripts maintained at
""".
Some useful suggestions and fixes from 'vegetax' on comp.lang.python
"""
import google
import BaseHTTPServer
import shutil
from StringIO import StringIO # cStringIO doesn't cope with unicode
import urlparse
__version__ = '0.1.0'
cached_types = ['txt', 'html', 'htm', 'shtml', 'shtm', 'cgi', 'pl', 'py'
'asp', 'php', 'xml']
# Any file extension that returns a text or html page will be cached
google.setLicense(google.getLicense())
googlemarker = '''<i>Google is not affiliated with the authors of this page nor responsible for its content.</i></font></center></td></tr></table></td></tr></table>\n<hr>\n'''
markerlen = len(googlemarker)
import urllib2
# uncomment the next three lines to over ride automatic fetching of proxy settings
# if you set localhost:8000 as proxy in IE urllib2 will pick up on it
# you can specify an alternate proxy by passing a dictionary to ProxyHandler
##proxy_support = urllib2.ProxyHandler({})
##opener = urllib2.build_opener(proxy_support)
##urllib2.install_opener(opener)
class googleCacheHandler(BaseHTTPServer.BaseHTTPRequestHandler):
server_version = "googleCache/" + __version__
cached_types = cached_types
googlemarker = googlemarker
markerlen = markerlen
txheaders = { 'User-agent' : 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)' }
def do_GET(self):
f = self.send_head()
if f:
self.copyfile(f, self.wfile)
f.close()
def send_head(self):
"""Only GET implemented for this.
This sends the response code and MIME headers.
Return value is a file object, or None.
"""
print 'Request :', self.path # traceback to sys.stdout
url_tuple = urlparse.urlparse(self.path)
url = url_tuple[2]
domain = url_tuple[1]
if domain.find('.google.') != -1: # bypass the cache for google domains
req = urllib2.Request(self.path, None, self.txheaders)
self.send_response(200)
self.send_header("Content-type", 'text/html')
self.end_headers()
return urllib2.urlopen(req)
dotloc = url.rfind('.') + 1
if dotloc and url[dotloc:] not in self.cached_types:
return None # not a cached type - don't even try
print 'Fetching :', self.path # traceback to sys.stdout
thepage = google.doGetCachedPage(self.path) # XXXX should we check for errors here ?
headerpos = thepage.find(self.googlemarker)
if headerpos != -1:
pos = self.markerlen + headerpos
thepage = thepage[pos:]
f = StringIO(thepage) # turn the page into a file like object
self.send_response(200)
self.send_header("Content-type", 'text/html')
self.send_header("Content-Length", str(len(thepage)))
self.end_headers()
return f
def copyfile(self, source, outputfile):
shutil.copyfileobj(source, outputfile)
def test(HandlerClass = googleCacheHandler,
ServerClass = BaseHTTPServer.HTTPServer):
BaseHTTPServer.test(HandlerClass, ServerClass)
if __name__ == '__main__':
test()
Shhhhh Don't Tell Anyone
Of course if my sysadmin sees this... it'll stop working :-)
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2005-04-05 11:39:16 | |
Categories: Python, Work, Hacking
You Have Outgoing Mail
What's that quote - any sufficiently advanced program evolves the ability to send email ? - well firedrop just became that advanced.
The new version with plugins is now released - and I've just finished FireMail. FireMail adds the ability to email an entry to a specified e-mail address - as HTML or text. This was something I really missed from Blogger.
In the new version Python source colouring is built in to Firedrop. Here's the email function I use (along with a function from the new Python Cookbook to construct the html email) :
Needs hostname, username, password etc.
Takes either a single to_email or a list, and a single from_email
Can either receive an HTML email (output by 'createhtmlmail') or will build message headers
Pass in html=False keyword for it to build headers.
"""
head = "To: %s\r\n" % ','.join(to_email)
if from_email is not None:
head = head + ('From: %s\r\n' % from_email)
if not html:
if subject is not None:
head = head + ("Subject: %s\r\n\r\n" % subject)
msg = head + msg
server = smtplib.SMTP(host, port)
if user:
server.login(user, password)
server.sendmail(from_email, to_email, msg)
server.quit()
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2005-04-04 10:59:47 | |
Categories: Python, Blog on Blogging, Projects
First Time Flasher
This weekend I flashed a BIOS for the first time. It was for Aidan who had killed his 20gigabyte hard drive. Bereft of the solitaire that she played on his computer, his gran bought him a nice new 120gig hard drive. Needless to say his six year old motherboard went into a sulk and refused to boot with it plugged in.
Now, flashing a BIOS is a nerve wracking process. Get it wrong and you can't use the computer at all. Thankfully gigabyte [1] (manufacturers of the motherboard) made it almost problem free. Their website is very easy to navigate and it was easy to find the right BIOS image, even for a piece of hardware that is pretty obsolete. They also had clear instructions on how to do the upgrade.
I say almost problem free..... when we ran the BIOS upgrade program from a DOS boot disk it warned us that the BIOS image we had didn't match the hardware. Luckily according to their instructions the next screen ought to give us an option to backup the existing BIOS - so I merrily hit continue... which proceeded to flash the BIOS anyway !! Thankfully it was the correct image and the new hard drive is successfully installed.
The Panda Replies
I had a very nice reply to my blog from Josh Yelon, who is the maintainer for Panda3D. He says that although Panda3D doesn't natively support open source modelling tools, there is work afoot on a blender exporter and a program called milkshape 3D might be able to produce DirectX "X" files which can be converted. It's not over good news, but if the blender support gets there then I might have another look at blender. Last time I looked the UI seemed so unintuitive that I threw my hands up in horror. 3D modelling needn't be over complex [2] - but blender clearly states that they're not interested in newbies. Oh well.
It seems that Josh saw my blog entry through getting some referrals from it. This is probably because this blog is now listed at Planet Python. If I was to stick with tradition, I ought to start making lots of off topic and irrelevant blogs ;-) I suppose I haven't done too badly by starting with the tales of my BIOS exploits.
Starting to Cook
On a quick flick through I wasn't overly impressed with the Python Cookbook. Some of the early recipes covered dealing with whitespace through the aString.strip() method - and such complexities. However now that I'm getting into it I'm finding it extremely useful. It's already solved a problem I had with binary uploads being terminated on windows (need to switch to binary upload mode first !), and a neat recipe on sending HTML emails. This is becoming the new Firemail extension for Firedrop - so I can email new posts to Void-Shockz like I could with blogger. Whilst we're on the Firedrop subject.... the new version (with plugins) is now available over at
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2005-04-03 15:09:57 | |
Categories: Computers, Python
Archives
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
Counter... | http://www.voidspace.org.uk/python/weblog/arch_d7_2005_04_02.shtml | CC-MAIN-2016-40 | refinedweb | 2,341 | 67.25 |
C++ Using lower_bound() and upper_bound() methods in Map in STL
Hello Everyone!
In this tutorial, we will learn about the working of the lower_bound() and the upper_bound() methods in a Map in STL in the C++ programming language.
To understand the basic functionality of the Map Container in STL, we will recommend you to visit STL Map Container, where we have explained this concept in detail from scratch.
The
lower_bound() method:
The
lower_bound() method returns an iterator pointing to the first element which has a value not less than the given value.
The
upper_bound() method:
The
upper_bound() method an iterator pointing to the first element which has a value greater than the given value.
For a better understanding of its implementation, refer to the well-commented C++ code given below.
Code:
#include <iostream> #include <bits/stdc++.h> using namespace std; int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to demonstrate the lower_bound() and upper_bound() in Map, in CPP ===== \n\n\n"; //Map declaration (Map with key and value both as integers) map; for (i = m.begin(); i != m.end(); i++) { cout << "( " << i->first << ", " << i->second << " ) "; } map<int, int>::iterator low, high; //lower_bound(x) returns the iterator to the first element that is greater than or equal to element with key x low = m.lower_bound(5); cout << "\n\nThe lower bound of 5 has key: " << low->first << " and value: " << low->second << ". "; low = m.lower_bound(6); cout << "\n\nThe lower bound of 6 has key: " << low->first << " and value: " << low->second << ". "; //upper_bound(x) returns the iterator to the first element that is greater than element with key x high = m.upper_bound(3); cout << "\n\nThe upper bound of 3 has key: " << high->first << " and value: " << high->second << ". "; high = m.upper_bound(4); cout << "\n\nThe upper bound of 4 has key: " << high->first << " and value: " << high->second << ". "; cout << "\n\n\n"; return 0; }
Output:
We hope that this post helped you develop a better understanding of the concept of the lower_bound() and the upper_bound() methods in the Map Container in STL and its implementation in CPP. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : ) | https://studytonight.com/cpp-programs/cpp-using-lower_bound-and-upper_bound-methods-in-map-in-stl | CC-MAIN-2021-04 | refinedweb | 365 | 59.23 |
Introduction to spring boot jwt
Spring boot jwt is the URL safe and compact means we can represent the claims by transferring them between two parties. The claim in spring boot jwt is encoded as the object which was used in the JWS (JSON web signature) payload or it was used in the plain text of the JWE (JSON web encryption) structure. After enabling the claim by using digitally signed or protected by integrity with MAC message authentication code and encrypted. We have used spring boot jwt in any application which contained private information, we have authenticating users without login info and cookies.
What is spring boot jwt?
- We have used spring boot jwt in the application where we require to validate the request without processing the credentials of client login for every single request.
- Spring boot jwt is representing a set of claims of JSON object which was encoding in JWS or JWE structure.
- This JSON object is nothing but a claim set of JWT. The spring boot jwt json object consisting the zero or more pairs.
Set Up Java Spring Boot JWT
- To set up the application by using jwt we need to set up a token, this token consists of the following three-part which was separated by the dots
- Signature
- Header
- Payload
- We can say that our JWT token looks like as below.
AAAAA.BBBBB.CCCCC
- The header consists of the two parts i.e. type of token and algorithm which was used in the application.
- The JWT token second part is the payload that contained the claims. The claims are nothing but the additional metadata and entity.
Using JWT with Spring Security
- As we know that JSON is less verbose as compare to XML, so after encoding JWT is smaller as compared to the token on SAML.
- Using JWT is very good to pass in environments like HTTP or HTML. Spring boot jwt uses the private or public key pair is in form of X.509 signing certificate.
- JWT parser is more common in the language of programming because jwt is directly mapped to the objects.
- To do the document object mapping we have used jwt. It will make them easier to work with SAML and assertion in JWT. The use of JWT is easy to process on the device of the user.
- Below is the benefits of JWT are as follows.
- More compact
- More common
- More secure
- Easier process
- To use JWT with spring security we need to follow the below steps are as follows.
- First, we need to create the authorization server of OAuth2. The OAuth stack offering the possibility to set up the server of authorization in the jwt application.
- After creating the authorization server next step is to create the resource server. We have to create the resource server by creating the application.yml file.
- After creating the authorization server next step is to add the claims of custom to the access token which was returned by the server of authorization. All the claim which was sent by the framework is all good.
- After adding custom claims to the token next step is to configure the authorization server. To add the authorization server we need to create the JSON file.
- After configuring the authorization server next step is to access the token by using the angular application of the client.
- After accessing the token from the client of the angular application next step is to access the claim from the resource server.
- After accessing the claim from the resource server next step is to load the key from a key store in java.
- After doing all the steps lastly, we have to do the maven configuration of the JWT application.
Spring boot jwt examples
Below example shows to set up a jwt application are as follows.
- Create a project template using a spring initializer and give the following name to the project metadata.
Group – com.example
Artifact name – spring-boot-jwt
Name – spring-boot- jwt
Description - Project of spring-boot- jwt
Package name - com.example.spring-boot- jwt
Packaging – Jar
Java – 11
Dependencies – spring web.
- After generating project extract files and open this project by using spring tool suite –
- After opening the project using the spring tool suite check the project and its files –
- Add the dependency
Code –
<dependency> -- Start of dependency tag.
<groupId>org.springframework.boot</groupId> -- Start and end of groupId tag.
<artifactId>spring-boot-starter-security</artifactId> -- Start and end of artifactId tag.
</dependency> -- End of dependency tag.
- Add simple controller
Code –
@RestController @RequestMapping ("hello")
public class Controller {
@GetMapping ("user")
public String helloUser() {
return "welcome to spring JWT";
}
@GetMapping ("admin")
public String helloAdmin() {
return "welcome to spring boot JWT";
} }
- Run application
- Check the application URL
Spring Security and JWT for performing
The below example shows spring security and jwt for performing are as follows.
- Add password to the user
Code –
spring.security.user.password = User@123
- Configure authentication manager and web security –
Code –
public class secConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure /* configure web manager */ (AuthenticationManagerBuilder auth) throws Exception {
}
@Override
protected void configure /*configure web security */ (HttpSecurity http) throws Exception {
}
}
- Add JWT response
Code –
public class jwtResponse implements Serializable {
private static final long serialVersionUID = -8091879091924046844L;
private final String jwttoken;
public jwtResponse(String jwttoken) {
this.jwttoken = jwttoken;
}
public String getToken() {
return this.jwttoken;
} }
- Add JWT request
Code
public class jwtRequest implements Serializable {
private static final long serialVersionUID = 5926468583005150707L;
private String username;
private String password;
public jwtRequest()
{
}
public jwtRequest(String username, String password) {
this.setUsername (username);
this.setPassword (password);
}
public String getUsername() {
return this.username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return this.password;
}
public void setPassword(String password) {
this.password = password;
}
}
- Run application –
Conclusion
Spring boot jwt is symmetrically signed by using the algorithm of HMAC. The SAML token is using the private or public key pair of JWT, XML signing, and digital signature of XML without introducing any security of obscure. We have used JWT in the scale of the internet.
Recommended Articles
This is a guide to spring boot jwt. Here we discuss What is spring boot jwt along with the example which shows to set up a jwt application. You may also have a look at the following articles to learn more – | https://www.educba.com/spring-boot-jwt/?source=leftnav | CC-MAIN-2022-40 | refinedweb | 1,041 | 54.52 |
finally display the fetched data in a
List, using
SwiftUI.
What is Combine?
At WWDC 2019 Apple introduced the new framework Combine, which is their take on reactive programming for Swift. The term “reactive programming” probably deserves its own blogpost, but for now, let’s keep it simple, and say that reactive programming is like an observer/observable pattern. You subscribe to, in this case, a publisher, and when the publisher gets the data you’re waiting for, it informs a subscriber, which is then able to act on that newly fetched data.
As Combine was released in 2019, the minimum iOS version is 13, so for Combine to run with your app, you must specify a minimum target of iOS 13+.
When to use Combine, and how does it work?
Reactive programming is mainly used to handle asynchronous datastreams inside your app, such as API calls (which is what this blogpost will cover). It can also be used if your UI has to wait for some action to happen, before being updated.
The way it works is described as a dataflow pipeline. There are three elements involved in this pipeline: publishers, operators and subscribers. Before diving into the actual elements, here is a simple overview of how they work:
The
subscriberasks the
publisherfor some data, which sends it through an
operatorbefore it ends up at the subscriber who requested it.
Publishers
Simply, the publisher provides the data and an error if necessary. The data will be delivered as an object defined by us, and we can also handle custom errors. There are 2 types of publishers:
Just: initialised from a value that only provides a single result (no error)
Future: initialised with a closure that eventually resolves to a single output value or a failure completion
Subjects are a special kind of publisher, that is used to send specific values to one or multiple subscribers at the same time. There are two types of built-in subjects with Combine;
CurrentValueSubject &
PassthroughSubject. They act similarly, the difference being currentValueSubject remembers and requires an initial state, where passthroughSubject does not.
Subscribers
The subscriber asks the publisher for data. It’s able to cancel the request if needed, which terminates a subscription and shuts down all the stream processing prior to any Completion sent by the publisher.
There are two types of subscribers build into Combine;
Assign and
Sink.
.assign assigns values to objects, like assigning a string to a labels text property directly.
.sink defines a closure, that accepts the value from the publisher when it’s read.
In this blogpost’s example, we will only use
.sink.
Operators
An operator is a middle ground between the
publisher and the
subscriber, and because of this, it acts as both. When a publisher talks to the operator, it’s acting as a subscriber, and when the subscriber talks to the operator, it’s acting as a publisher.
The operators are used to change the data inside the pipeline. That could be if we want to filter out nil-values, change a timestamp etc. before returning.
Operator is actually just a convenience name for higher order functions that you probably already know, such as
.map ,
.filter,
.reduce etc.
Let’s dig into some code 🧑💻
Alright, now that we have the basic knowledge (kinda) sorted, let’s dive a bit into the code that makes Combine useful as a reactive programming-framework.
In this example, we will use
The Movie Database API, which takes that we have an
api_key. So first of all, you need to go to their documentation and generate a token, which you will need to get actual data from their API: The Movie Database API documentation
Then create a new Xcode project, and make sure to pick
SwiftUI as our
User Interface.
💡 If you’re following the example, please be aware that you might be met with compiler errors, before completing all steps
Alright, so this leaves us with an all fresh project, with the SwiftUI basic setup.
For good sakes measure, you might want to change the filename to
MoviesView.
Afterwards, change the boilerplate code to look like this:
import SwiftUI struct MoviesView: View { // 1 @ObservedObject var viewModel = MovieViewModel() var body: some View { List(viewModel.movies) { movie in // 2 HStack { VStack(alignment: .leading) { Text(movie.title) // 3a .font(.headline) Text(movie.originalTitle) // 3b .font(.subheadline) } } } } } struct ContentView_Previews: PreviewProvider { static var previews: some View { MoviesView() } }
Explanation:
- We add the
@ObservedObjectproperty wrapper, to let our app know, what we need to observe for any changes in the viewModel property.
- We give our List the array of movies that we are going to fetch together with Combine. This will later be the part that automatically updates the list, when the data is added to the movies-array.
- a+b: We add the movie’s title and its original title to a Text-object.
💡 If you changed the name of the ContentView, you have to go to your
SceneDelegate-file and change the name there as well
Not much code needed to set up our super-simple listview - if we were to do this with UIKit and a UITableView, we would have had a lot more code. So let’s take a second to appreciate SwiftUI 👏.
Create the models for the data we are about to fetch:
MovieResponse and
Movie
Add the following to the two newly created files:
import Foundation struct MovieResponse: Codable { let movies: [Movie] enum CodingKeys: String, CodingKey { case movies = "results" } }
and for the Movie-model:
import Foundation struct Movie: Codable, Identifiable { var id = UUID() let movieId: Int let originalTitle: String let title: String enum CodingKeys: String, CodingKey { case movieId = "id" case originalTitle = "original_title" case title } }
Now that we got our models in place, let’s get to the fun part.
🕺 ..Finally! It’s time to fetch some data!
We are going to build a very general
APIClient, which we will call from our
MovieViewModel that we instantiated in the very top of our
MoviesView that we just did.
Create a new file named
MovieViewModel, and add the following to it:
import Foundation import Combine class MovieViewModel: ObservableObject { @Published var movies: [Movie] = [] // 1 var cancellationToken: AnyCancellable? // 2 init() { getMovies() // 3 } } extension MovieViewModel { // Subscriber implementation func getMovies() { cancellationToken = MovieDB.request(.trendingMoviesWeekly) // 4 .mapError({ (error) -> Error in // 5 print(error) return error }) .sink(receiveCompletion: { _ in }, // 6 receiveValue: { self.movies = $0.movies // 7 }) } }
Explanation:
- The
@Publishedproperty wrapper lets Swift know to keep an eye on any changes of this variable. If anything changes, the body in all views where this variable is used, will update.
- Subscriber implementations can use this type to provide a “cancellation token” that makes it possible for a caller to cancel a publisher. Be aware that your network calls won’t work if you’re not assigning your call to a variable of this type.
- We are fetching the data as soon as the ViewModel is created, since there’s no lifecycle in SwiftUI like we’re used to from UIKit.
- Here we start the actual request for “trending movies weekly”
- Here we can handle the errors, if any
- Here the actual subscriber is created. As mentioned earlier, the sink-subscriber comes with a closure, that lets us handle the received value when it’s ready from the publisher.
- We assign the received data to the movies-property - this will trigger the action mentioned in step 1
Alright, so far so good - we still need to make the actual API call. For this, we need a general
APIClient.
So, create a new file named
APIClient and add the following code:
import Foundation import Combine struct APIClient { struct Response<T> { // 1 let value: T let response: URLResponse } func run<T: Decodable>(_ request: URLRequest) -> AnyPublisher<Response<T>, Error> { // 2 return URLSession.shared .dataTaskPublisher(for: request) // 3 .tryMap { result -> Response<T> in let value = try JSONDecoder().decode(T.self, from: result.data) // 4 return Response(value: value, response: result.response) // 5 } .receive(on: DispatchQueue.main) // 6 .eraseToAnyPublisher() // 7 } }
Explanation:
- This is our generic response object. The value property will be the actual object, and the response property will be the URL response including status code etc.
- This is our only entry point for network requests, no matter if it’s
GET,
POSTor whatever - it’s all specified in the request parameter.
- Here we are “turning the
URLSessioninto a publisher”
- Decode the result to the generic type we defined in the
APIClient(in this case
MovieResponse)
- Our “homemade” Response object now contains the actual data + the URL response from which we can find status code etc.
- Return the result on the main thread
- We end with erasing the publisher’s type, since it can be very long and “complicated”, and then transform and return it as the return type we want
(AnyPublisher<Response<T>, Error>)
Now we have the
APIClient set up, and as you can see, it just takes an
URLRequest as parameter, so let’s make something that can build URLRequests for us, that matches our API.
Create a new file and name it
MovieDBAPI
Add the following code to the newly created file:
import Foundation import Combine // 1 enum MovieDB { static let apiClient = APIClient() static let baseUrl = URL(string: "")! } // 2 enum APIPath: String { case trendingMoviesWeekly = "trending/movie/week" } extension MovieDB { static func request(_ path: APIPath) -> AnyPublisher<MovieResponse, Error> { // 3 guard var components = URLComponents(url: baseUrl.appendingPathComponent(path.rawValue), resolvingAgainstBaseURL: true) else { fatalError("Couldn't create URLComponents") } components.queryItems = [URLQueryItem(name: "api_key", value: "your_api_key_here")] // 4 let request = URLRequest(url: components.url!) return apiClient.run(request) // 5 .map(\.value) // 6 .eraseToAnyPublisher() // 7 } }
Explanation:
- Set up the basics needed for making the request
- Set up the paths we want to be able to call from the API.
- Here we create the URL request. The request is a
GET-request by default, hence we don’t need to specify that.
- Add the api_key you created at
The Movie Databasehere!
- We run the newly created request through our API client
- Map is our operator, that lets us set the type of output we want.
\.valuein this case, is our generic type defined as return value of this method (
MoviesData), since the client returns a Response-object, which contains both a value and a response property, but for now, we’re only interested in the value.
- This call cleans up the return type from something like
Publishers.MapKeyPath<AnyPublisher<APIClient.Response<MoviesData>, Error>, T>to the expected type:
AnyPublisher<MoviesData, Error>
🎉 That’s it!
Run the project, and you should be presented with a list of weekly trending movies. Combine publishes the list as soon as it’s ready, all reactive and smoothly.
The way we built the
APIClient, allows us to easily add more paths to the
APIPath enum in a nice and clean way.
For simplicity, we only made a very basic and simple
GET request, but we could make the
func request(_ path: APIPath) function build any other kinds of requests for us as well, e.g a
POST-request. The
APIClient just takes a
URLRequest of any kind, and that’s what we feed it.
I hope you made it work and can see the advantages of using Combine for networking. It’s super powerful as soon as you get the grasp of it!
Article Photo by Unsplash | https://engineering.monstar-lab.com/2020/03/16/Combine-networking-with-a-hint-of-swiftUI | CC-MAIN-2020-45 | refinedweb | 1,880 | 59.43 |
This article aims to teach how to do Generalized LR parsing to parse highly ambiguous grammars. I've provided some experimental code which implements it. I wanted to avoid making the code complete and mature because that would involve adding a lot of complexity, and this way it's a bit easier to learn from. There's not a lot of information on GLR parsing online, so I'm entering this article and the associated code into the public domain. All I ask is you mention honey the codewitch (that's me!) in your project somewhere if you use it.
GLR stands for Generalized Left-to-right Rightmost derivation. The acronym is pretty clunky in its expanded form, but what it means essentially is that it processes ambiguous parses (generalized) from left to right, and bottom-to-top. Where did bottom-to-top come from? It's hinted at with Rightmost derivation.
Parsers that use the rightmost form build the tree from the leaves to the root, producing the derivation in reverse. That's a bit less intuitive than working from the root downward toward the leaves, but it's a much more powerful way to parse, and unlike top-to-bottom parsing, it can handle left recursion (and right recursion), whereas a left recursive grammar would cause a top-to-bottom parse to stack-overflow or otherwise run without halting, depending on the implementation details. Either way, that's not desirable. Parsing from the leaves and working toward the root avoids this problem as well as providing greater recognizing power. The tradeoff is that it's much harder to create translation grammars for it which can turn one language into another, and it's generally a bit more awkward to use due to the counterintuitiveness of building the tree upward.
A Generalized LR parser works by using any standard LR parsing algorithm underneath. An LR parser is a type of Pushdown Automaton, which means it uses a stack and a state machine to do its processing. An LR parser itself is deterministic but our GLR parser is not. We'll square that circle in a bit. First we need to outline the single and fundamental difference between a standard LR parse table and a GLR parse table. Don't worry if you don't understand the table yet. It's not important to understand the specifics yet.
A standard LR parser parsing table/state table looks like this:
The above was generated using this tool.
It looks basically like a spreadsheet. Look at the cream colored cells though. See how there are two values? This causes an error with an LR parse table because it cannot hold two values per cell. The GLR parse table cells are arrays so they can hold multiple values. The above table, despite those cream colored conflicting cells, is perfectly acceptable to a GLR parser.
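To put that difference in code terms, here's a rough sketch of the two table shapes - my own illustrative declarations, not the code from the actual project:

```csharp
using System.Collections.Generic;

enum LRActionType { Shift, Reduce, Goto, Accept }

struct LRAction
{
    public LRActionType Type;
    public int Target; // destination state for Shift/Goto, rule id for Reduce
}

class ParseTables
{
    // Standard LR: one action per (state, symbol) cell. A conflict -
    // two actions wanting the same cell - is an error at build time.
    public Dictionary<string, LRAction>[] Lr;

    // GLR: each cell holds a list of actions. A conflicted cell just
    // keeps every competing action, and the parser forks there.
    public Dictionary<string, List<LRAction>>[] Glr;
}
```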
Any time there is a conflict like this, it's due to some localized or global ambiguity in the grammar. In other words, the grammar may not be ambiguous overall, but if the parser is only looking ahead a limited amount, sometimes that's not enough to disambiguate. It can be said that the production in the grammar is locally ambiguous. Otherwise, if the grammar is just generally ambiguous, then it's a global ambiguity. Either way, this causes multiple values to be entered into the GLR table cell that handles that portion of the grammar.
No matter if it's local or global, an ambiguity causes the GLR parser to fork its stack (and any other context information) and try each possible parse when presented with a multi-valued cell. If there is one value in the cell, no forking occurs. If there are two values, the parser forks its stack once. If there are three values, the parser forks its stack twice, and so on.
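The fork itself can be as simple as cloning the parse stack (and whatever other per-parse context there is) once per extra entry in the cell. A minimal sketch, with names of my own invention:

```csharp
using System.Collections.Generic;
using System.Linq;

class ParseContext
{
    public Stack<int> States = new Stack<int>();
    // ...plus whatever else a fork must carry: a partial tree, errors, etc.

    public ParseContext Clone()
    {
        // Enumerating a Stack<T> yields top-first, so reverse the
        // sequence to preserve the original ordering in the copy.
        return new ParseContext {
            States = new Stack<int>(States.Reverse())
        };
    }
}

static class Forker
{
    // Keep the current context for the first action in the cell and
    // fork a copy of it for each additional action.
    public static IEnumerable<ParseContext> Fork(
        ParseContext ctx, int actionCount)
    {
        yield return ctx;
        for (var i = 1; i < actionCount; ++i)
            yield return ctx.Clone();
    }
}
```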
Since we're trying more than one path to parse, it can be said that our parser is non-deterministic. This impacts performance, and with a GLR parser performance is inversely proportional to how many forks are running. Additionally, each fork will result in a new parse tree. Note that this forking can be exponential in the case of highly ambiguous grammars. You pay the price for this ambiguity.
That's acceptable because GLR parsing was designed to parse natural/human language, which can be highly ambiguous, so the cost is baked in - to be expected in this case. GLR is, I believe, still the most efficient generalized parsing algorithm to date, since it only becomes non-deterministic when it needs to. As mentioned, rather than resolving to a single parse tree, a GLR parser will return every possible parse tree for an ambiguous parse - again, that's how the ambiguity is handled - whereas a standard LR parser will simply not accept an ambiguous grammar in the first place.
We'll begin by addressing a fundamental task - creating those parse tables like the one above from a given grammar. You can't generate these by hand for anything non-trivial so don't bother trying. If you need a tool to check your work, try this. It takes a grammar in Yacc format and can produce a parse table using a variety of LR algorithms.
You may need to read this to understand rules, grammars and symbols. While it was written to teach about LL(1) parsers, the same concepts such as rules, symbols (non-terminal and terminal) and FIRST/FOLLOWS sets are also employed for this style of parsing. It's highly recommended that you read it because it gives all of the above a thorough treatment. Since it's enough to be an article in its own right, I've omitted it here.
Let's kick this off with an excellent resource on generating these tables, provided by Stephen Jackson here. We'll be following along as we generate our table. I used this tutorial to teach myself how to generate these tables, and it says they are LALR(1) but I've been told this is actually an SLR(1) algorithm. I don't know enough about either to be certain either way. Fortunately, it really doesn't matter because GLR can use any LR algorithm underneath. The only stipulation is that the less conflicts that crop up in the table, the more efficient the GLR parser will be. Less powerful algorithms create more conflicts. Because of this, while you can use LR(0) for a GLR parser, it would significantly degrade the performance of it due to all the local ambiguity. The code I've provided doesn't implement LR(0) because there was no good reason to.
Let's begin with the grammar from the tutorial, part way down the page at the link:

0. S → N
1. N → V = E
2. N → E
3. E → V
4. V → x
5. V → * E
Generally, the first rule (0) / non-terminal S is created during the table generation process. We augment the grammar with it and point it to the actual start symbol, which in this case would have been N, and then follow that with an end of input token (not shown).
That being said, we're going to overlook that and simply use the grammar as is. Note that the above grammar has its terminals as either lowercase letters like "x", or literal values like "=". We usually want to assign descriptive names to these terminals (like "equals") instead but this is fine here.
We need a data structure that's a lot like a rule with one important addition - an index property that indicates a position within the rule. For examine, we'd represent rule 0 from above using two of these structures like this:
S → • N
S → N •
The bullet is a position within the rule. We'll each data structure an LR(0) Item, or LR0Item. The first one above indicates that N is the next symbol to be encountered, while the next one indicates that N was just encountered. We need to generate sets of these.
LR0Item
We start at the start rule, and create an LR0Item like the first one above, with the bullet before the N. Now since the bullet is before a non-terminal (N), we need to add each of the N rules, with a bullet at the beginning. Now our itemset (our set of LR0Items) looks like this:
S → • N
N → • V = E
N → • E
So far so good but we're not done yet as now we have two new LR0Items that have a bullet before a non-terminal (V and E, respectively) so we must repeat the above step for those two rules which means our itemset will now look like this:
S → • N
N → • V = E
N → • E
E → • V
V → • x
V → • * E
Now our itemset is complete since the bullets are only before terminals or non-terminals that we've already added to the itemset. Since this is our first itemset, we'll call it itemset 0 or i0.
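What we just did by hand is the classic closure computation: keep adding rules for any non-terminal that appears right of a bullet until nothing new shows up. A sketch of it, using plain (ruleIndex, dotIndex) tuples as items so it stands alone - the names are mine:

```csharp
using System.Collections.Generic;
using System.Linq;

static class LR0
{
    public static HashSet<(int Rule, int Dot)> Closure(
        (string Left, string[] Right)[] rules,
        IEnumerable<(int Rule, int Dot)> kernel)
    {
        var result = new HashSet<(int Rule, int Dot)>(kernel);
        bool done;
        do // repeat until a pass adds nothing new (a fixpoint)
        {
            done = true;
            foreach (var item in result.ToArray())
            {
                var right = rules[item.Rule].Right;
                if (item.Dot >= right.Length) continue;
                var next = right[item.Dot]; // symbol after the bullet
                // a non-terminal after the bullet pulls in all of its
                // rules, each with the bullet at the start
                for (var i = 0; i < rules.Length; ++i)
                    if (rules[i].Left == next && result.Add((i, 0)))
                        done = false;
            }
        } while (!done);
        return result;
    }
}
```

Seeding it with rule 0 and the bullet at position 0 reproduces the six items of i0 above.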
Now all we do to make the rest of the itemsets is apply each possible input** to the itemset and increment the cursor on the accepting rules. Don't worry, here's an example:
We start by giving it x. There's only one entry in the itemset that will accept it next and that is the second to last LR0Item - V → • x so we're going to create our next itemset i1 with the result of that move:
** We don't need to actually pass it each input. We only need to look at the next terminals already in the itemset. For i0 the next terminals are * and x. Those are the inputs we use.
V → x •
That is the single entry for i1. Now we have to go back to i0 and this time move on * which yields:
V → * • E
That is our single entry for itemset i2 so far but we're not done with it. Like before, a bullet is before a non-terminal E, so we need to add all the rules that start with E. There's only one in this case, E → • V which leaves us with this itemset so far:
V → * • E
E → • V
Note that when we're adding new items, the cursor is at the start. Once again, we have a bullet before a non-terminal in the rule E → • V so we have to add all the V rules, which leaves is with our final itemset for i2:
V → * • E
E → • V
V → • x
V → • * E
Look, we encountered V → • x and V → • * E - items from i0 - again! This can happen and it's supposed to. They were added because of E → • V. Now we're done since the cursor is only before terminals or non-terminals that have already been added.
Now we need to move on i0 again, this time on the non-terminal V. Moving on V yields the following itemset i3:
N → V • = E
E → V •
One additional difficulty of generating these itemsets is duplicates. Duplicate itemsets must be detected, and they must not be repeated.
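Putting the closure and move steps together, the whole collection of itemsets can be built with a worklist, and the duplicate problem is solved by only adding a resulting itemset when no set-equal one exists yet. A rough sketch (my own names; the closure step is passed in as a delegate so this stands alone):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Collection
{
    // Items are (ruleIndex, dotIndex) pairs; closure is the fixpoint
    // step walked through earlier in the article.
    public static List<HashSet<(int Rule, int Dot)>> Build(
        (string Left, string[] Right)[] rules,
        Func<IEnumerable<(int Rule, int Dot)>,
             HashSet<(int Rule, int Dot)>> closure)
    {
        var sets = new List<HashSet<(int Rule, int Dot)>>();
        sets.Add(closure(new[] { (Rule: 0, Dot: 0) })); // i0
        for (var i = 0; i < sets.Count; ++i) // the list grows as we go
        {
            // group this set's items by the symbol right of the bullet
            var moves = sets[i]
                .Where(it => it.Dot < rules[it.Rule].Right.Length)
                .GroupBy(it => rules[it.Rule].Right[it.Dot]);
            foreach (var move in moves)
            {
                // advance the bullet past the symbol, then close
                var next = closure(move.Select(it => (it.Rule, it.Dot + 1)));
                // dedup: reuse an existing itemset if one is set-equal;
                // this is also where you'd record the transition i -> j
                var j = sets.FindIndex(s => s.SetEquals(next));
                if (j < 0) sets.Add(next);
            }
        }
        return sets;
    }
}
```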
Anyway, here are the rest of the itemsets in case you get stuck:
i4:
S → N •
i5:
N → E •
i6:
V → * E •
i7:
E → V •
i8:
N → V = • E
E → • V
V → • x
V → • * E
i9:
N → V = E •
Generating these is kind of tricky, perhaps moreso than it seems and I haven't laid out all we need to do above yet. While you're building these itemsets, you'll need to create some kind of transition map between the itemsets. For example, on i0 we can transition to i1 on x and to i2 on *. We know this because we worked it out while we were creating our i0 itemset. Stephen Jackson's tutorial has it laid out as a separate step but for expediency we want to roll it into our steps above. It makes things both easier and more efficient. Remember to detect and collate duplicate sets.
Now, I'll let you in on something I've held back up until now: Each itemset above represents a state in a state machine, and the above transitions are transitions between the states. We have 10 states for the above grammar, i0 through i9. Here's how I store and present the above data, as a state machine:
i9
sealed class LRFA
{
public HashSet<LR0Item> ItemSet = new HashSet<LR0Item>();
public Dictionary<string, LRFA> Transitions = new Dictionary<string, LRFA>();
// find every state reachable from this state, including itself (all descendants and self)
public ICollection<LRFA> FillClosure(ICollection<LRFA> result = null)
{
if (null == result)
result = new HashSet<LRFA>();
if (result.Contains(this))
return result;
result.Add(this);
foreach(var trns in Transitions)
trns.Value.FillClosure(result);
return result;
}
}
Basically, this allows you to store the computed transitions along with the itemset. The dictionary maps symbols to LRFA states. The name is ugly but it stands for LR Finite Automata, and each LRFA instance represents a single state in a finite state machine. Now that we have it we're going to pull a fast one on our algorithm, and rather than running the state machine, we're going to walk through it and create a new grammar with the state transitions embedded in it. That's our next step.
The state numbers are different below than the demonstration above out of necessity. I wanted you to be able to follow the associated tutorial at the link, but the following was generated programmatically and I don't control the order the states get created in.
For this next step, we'll be walking through the state machine we just created following each transition. Start at i0. Here we're going from i0 to each transition, through the start symbol, to the end of the input as signified by $ in the below grammar, so first we write 0/S/$ as the rule's left hand side which signifies i0, the start symbol S and the special case end of input $ as there is no actual itemset for that. From there, we only have one transition to follow, on N which leads us to i1, so we write as the single right hand side 0/N/1 leaving us with:
0/S/$ → 0/N/1
Those are some ugly symbol names, but it doesn't matter, because the computer loves them, I swear. Truthfully, we need these for a couple of reasons. One, this will disambiguate transitions from rule to rule, because now we have new rules for each transition possibility, and two, we can use them later to get some lookahead into the table, which we'll get into.
Next we have to follow N, so we do that, and repeat. Notice how we've created two rules with the same left hand side here. That's because we have two transitions.
0/N/1 → 0/V/2 2/=/3 3/E/4
0/N/1 → 0/E/9
We only have to do this for LR0Items whose cursor is at index 0 - each such item yields one extended rule. Here's what we're after:
0/S/$ → 0/N/1
0/N/1 → 0/V/2 2/=/3 3/E/4
0/N/1 → 0/E/9
0/V/2 → 0/x/6
0/V/2 → 0/*/7 7/E/8
0/E/9 → 0/V/2
3/E/4 → 3/V/5
3/V/5 → 3/x/6
3/V/5 → 3/*/7 7/E/8
7/E/8 → 7/V/5
7/V/5 → 7/x/6
7/V/5 → 7/*/7 7/E/8
Finally, we can begin making our parse table. These are often sparse enough that using a dictionary is warranted, although a matrix of nested arrays works too as long as you have a little more memory.
An LR parser has four actions it can perform: shift, reduce, goto and accept. The first two are primary operations, which is why LR parsers are often called shift-reduce parsers.
The parse table creation isn't as simple as it is for LL(1) parsers, but we've come pretty far. Now if you're using a dictionary and string symbols you'll want a structure like Dictionary<string,(int RuleOrStateId, string Left, string[] Right)>[] for a straight LR parse table or Dictionary<string,ICollection<(int RuleOrStateId, string Left, string[] Right)>>[] for a GLR parse table.
That's right, an array of dictionaries with a tuple in them, or in GLR's case, an array of dictionaries with a collection of tuples in them.
Now the alternative, using integer symbol ids can be expressed as int[][][] for straight LR or int[][][][] for GLR. Despite the efficiency, so many nests are confusing and it's best to use this form for generated parsers. You can create these arrays from a dictionary based parse table anyway. Below we'll be using the dictionary form.
Initialize the parse table array with one element for each state. Above we had 10 states, so our declaration would look like new Dictionary<string,(int RuleOrStateId, string Left, string[] Right)>[10].
Next compute the closure of our state machine we built earlier as a List<LRFA>. You can use the FillClosure() method by passing in a list. You can't use a HashSet<LRFA> here because we'll need a definite order and indexed access.
Create a list of itemsets (List<ICollection<LR0Item>>). For each LRFA in the closure, take its ItemSet and stash it in the aforementioned list.
Now for each LRFA in the closure, treat its index in the closure as an index into your parse table's dictionaries array. The first thing you do is create the dictionary for that array entry.
Then go through each of the transitions in that LRFA. For any symbol, use that as a dictionary key. This will be a "shift" operation in the case of a terminal and a "goto" operation if it's a non-terminal. To create the tuple for a shift or goto operation, set RuleOrStateId to the index of the transition's destination state. You can find this index by using IndexOf() over the closure list.
Now while you're on that LRFA in the loop, scan its itemset looking for an LR0Item whose Index is at the very end and whose associated rule has the start symbol on the left hand side. If you find one, add an entry to the parse table in that state's dictionary, with a key of the end symbol ($) and a value of a tuple with RuleOrStateId of -1 and both Left and Right set to null. This indicates an "accept" action.
Now we need to fill in the reductions. This can be sort of complicated. Remember our extended grammar? It's time to use it. First, we take the FOLLOW sets for that grammar. This is non-trivial and lengthy to describe so I'm punting the details of doing it to this article I linked you to early on. The project contains a FillFollows() function that will compute the follows set for you.
Now that you have them, you'll need to map each extended rule in the grammar to the FOLLOW set for its left hand side. I use a dictionary for this, like Dictionary<CfgRule,ICollection<string>>, albeit with a catch. We need to merge some of these rules. I do this by creating a class implementing IEqualityComparer<CfgRule> and then writing a custom comparison which does two things - it grabs the final id from the rule's rightmost symbol and strips the rules of their annotations so they are original rules again. For example, 0/N/1 → 0/V/2 2/=/3 3/E/4 would cause us to stash the number 4 at the end, and then strip the rule down to N → V = E, now reflecting the original rule in the non-extended grammar. If two of these unextended rules are the same and they share the same stashed number, then they can be merged into one, so we return true from the Equals() function. Pass this comparer to the constructor of the above dictionary, so it will now do the work for us. As we add items, just make sure you don't fail if there's already an item present. At most, you'll merge their FOLLOW sets instead of adding the new entry, but I'm not sure this is necessary, as I think they might always have the same FOLLOW set. My code plays it safe and attempts to merge existing entries' FOLLOW sets.
Note: In the case of epsilon/nil rules like A →, you'll need to handle them slightly differently below.
Now for each entry in your above map dictionary, you'll need to do the following:
Take the rule and its final number** - the final number is the index into your array of dictionaries that makes up your parse table (remember that thing still?). For each entry in the FOLLOW items in your map entry's Value, add a tuple to the parse table. To create the tuple for this, simply strip the rule to its unextended form, find the index of the rule within the grammar, and then use that, along with the left and right hand side of the rule, in the corresponding spots in the tuple. For example, if the rule at index 1 is N → V = E then the tuple would be (RuleOrStateId: 1, "N", new string[] { "V", "=", "E" }). Now, check to see if the dictionary already has an entry for this FOLLOW symbol. If it does, and it's the same, no problem. That may happen, and it's fine. If it does and it's not the same, this is a conflict. If the existing tuple indicates a shift, this is a shift-reduce conflict. This isn't a parse killer. Sometimes these are unavoidable, such as the dangling else in C family languages. Usually, the shift will take precedence. If the existing tuple indicates a reduce action, this is a reduce-reduce conflict. This will stop a standard LR parser dead in its tracks. With a GLR parser, both alternatives will be parsed. Anyway, if this is a GLR table and a tuple is already present, just add another tuple. If it's a regular LR parser and the existing entry was a shift, just pick an action (usually the shift) to take priority, and ideally issue a warning about the conflict to the user. If the previous tuple is a reduce, issue an error. Neither the warning nor the error applies with GLR.
** For epsilon/nil rules, you'll need to take the final number from the left hand side of the rule.
We did it! That's a lot of work, but now we have a usable parse table. For production code, you may want to have some sort of progress indicator, because this operation can take significant time for real world grammars.
No matter the algorithm, be it LALR(1), SLR(1) or full LR(1), the parse tables are the same in overall structure. The GLR parse table contains one important difference in that the cells are arrays. Because of this, our parse code is the same regardless of the algorithm (excepting GLR). And since GLR builds on these same principles, we will cover standard LR parsing here.
Parsing with LR is somewhat convoluted but overall simple once you get past some of the twists involved.
You'll need the parse table, a stack and a tokenizer. The tokenizer will give you tokens with symbols, values, and usually line/position info. If the tokenizer reports only symbol ids, you'll need to do a lookup to get the symbol, so you'll need at least an array of strings as well.
The parse table directs the parser's use of the stack and input using the following directives: on a shift, push the indicated state and advance the input; on a reduce, pop one state per right-hand-side symbol, report the rule, then push the state given by the goto entry for the rule's left-hand side; on an accept, the parse succeeded. If no entry is found, this is a syntax error.
Let's revisit Stephen Jackson's great work:
Let's walk through the first steps in parsing x = * x. The first step is to initialize the stack and read the first token, basically pushing 0 onto the stack.
The first token we found in state 0 was an x, which indicates that we must shift to state 1. 1 is placed on the stack and we advance to the next token. That leads us to the following:
When we reduce, we report the rule for the reduction. Rule 4, V → x, has 1 token (x) on the right-hand side. Pop the stack once, leaving it with 0. In state 0, V (the left-hand side of rule 4) has a goto value of 3, so we push a 3 on the stack. This step gives our table a new row:
Here's the whole process:
By itself, that doesn't look very useful but we can easily build trees with that information. Here, Stephen Jackson explains the results:
V(x) = V(* E(V(x))) V(x) = E(V( * E(V(x)))) N(V(x) = E(V( * E(V(x))))) S(N(V(x) = E(V( * E(V(x))))))).
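The shift/reduce/goto/accept loop just walked through can be sketched against a tiny hand-built table. To keep the table small enough to write by hand, this uses a different hypothetical grammar (S → ( S ) | x) rather than the running example:

```csharp
using System;
using System.Collections.Generic;

// A minimal shift-reduce driver. Cell encoding: "s2" = shift to state 2,
// "r1" = reduce by rule 1, "g4" = goto state 4, "a" = accept; a missing
// cell is a syntax error. The grammar and table are hand-built for the
// sketch, not generated by the article's code.
class LrLoopSketch
{
    // rule index → (left-hand symbol, right-hand length)
    static readonly (string Left, int Len)[] Rules =
        { ("S'", 1), ("S", 3), ("S", 1) }; // S' → S, S → ( S ), S → x

    static readonly Dictionary<string, string>[] Table =
    {
        new Dictionary<string, string> { ["("]="s2", ["x"]="s3", ["S"]="g1" }, // 0
        new Dictionary<string, string> { ["$"]="a" },                          // 1
        new Dictionary<string, string> { ["("]="s2", ["x"]="s3", ["S"]="g4" }, // 2
        new Dictionary<string, string> { [")"]="r2", ["$"]="r2" },             // 3
        new Dictionary<string, string> { [")"]="s5" },                         // 4
        new Dictionary<string, string> { [")"]="r1", ["$"]="r1" },             // 5
    };

    public static List<string> Parse(IEnumerable<string> tokens)
    {
        var input = new Queue<string>(tokens);
        var stack = new Stack<int>();
        stack.Push(0);
        var log = new List<string>();
        while (true)
        {
            var tok = input.Peek();
            if (!Table[stack.Peek()].TryGetValue(tok, out var act))
                throw new Exception("syntax error at " + tok);
            if (act == "a") { log.Add("accept"); return log; }
            if (act[0] == 's')
            {   // shift: push the destination state, consume the token
                stack.Push(int.Parse(act.Substring(1)));
                input.Dequeue();
            }
            else
            {   // reduce: pop |right-hand side| states, then goto on the left
                var rule = Rules[int.Parse(act.Substring(1))];
                for (int i = 0; i < rule.Len; i++) stack.Pop();
                log.Add("reduce " + rule.Left);
                var gt = Table[stack.Peek()][rule.Left]; // "gN"
                stack.Push(int.Parse(gt.Substring(1)));
            }
        }
    }

    static void Main()
    {
        // parsing "((x))" reports three reductions, then accepts
        foreach (var step in Parse(new[] { "(", "(", "x", ")", ")", "$" }))
            Console.WriteLine(step);
    }
}
```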
Naturally, parsing with GLR is a bit more complicated than the above, but it is at least based on the same principles. With GLR, it's best to make an overarching parser class, and then delegate the actual parsing to a worker class. This way, you can run several parses concurrently by spawning multiple workers, which is what we need. Each worker manages its own stack, an input cursor and any other state.
The complication is not really in running the workers, like one might think, but rather in managing the input token stream. The issue is that each worker might be in a slightly different position than the next, even if we run them in lockstep, so what we must do is create a sliding window to manage the input. The window should expand as necessary (when the workers get further apart from each other in the input) and shrink when possible (such as when a worker is removed or the workers move closer together). I use my LookAheadEnumerator<Token> class to handle much of the bookkeeping on the actual window. The rest is making sure the window slides properly - once all the workers have stepped/advanced, count the times each moved. Take the minimum of all of those and advance the primary cursor by that much. Finally, update each worker's input position to subtract that minimum value from its position. I've noticed many other GLR offerings require the entire input to be loaded into memory (basically as a string) before they will parse. If we had done that, this wouldn't be so difficult, but the flexibility is worth the tradeoff. This way, you can parse directly from a huge file or from a network stream without worrying.
The only issue for huge documents is you may want to use the pull parser directly rather than generating a tree, which could take an enormous amount of memory. The pull parser can be a little tricky to use, because while it works a bit like XmlReader, it also reads concurrently from different parses, meaning you'll have to check the TreeId every time after you Read() to see which "tree" you're currently working on.
Anyway, in implementing this, we use the parse table exactly like before except when we encounter multiple tuples in the same cell. When we find more than one we must "fork" a new worker for each additional tuple which gets its own copy of the stack and its own independent input cursor (via LookAheadEnumerator<Token>) - whenever the main parser is called to move a step it chooses a worker to delegate to based on a simple round-robin scheduling scheme which keeps all the workers moving in lockstep. Removal of a worker happens once the worker has reached the end of the input or optionally when it has encountered too many errors in the parse tree, with "too many" being set by the user.
There's a wrinkle in generating the parse trees that's related to the fact that we parse multiple trees at the same time. When the parser forks a worker (as indicated by the presence of a new TreeId that hasn't been seen yet) we must, similarly to the parsing itself, clone the tree stack to accommodate it. Basically, we might be happily parsing along with worker TreeId of 1, having already built a partial tree, when all of a sudden we fork and start seeing TreeId 2. When that happens, we must clone the tree stack from #1 and then continue, adding to each tree stack as appropriate. Finally, each tree id that ends up accepting winds up being a parse tree we return.
I have not provided you a parser generator, but simply an experimental parser and the facilities to define a grammar and produce a parser from it. A parser generator would work the same way except that the arrays we initialize the parser with would have been produced from generated code. That makes things just a little easier.
All of the directly relevant code is under namespace C. The other namespaces provide supporting code that isn't directly related to what we've done so far above. You'll see things like LexContext.cs in the project, which exposes a text cursor as a LexContext class under namespace LC. We use that to help parse our CFG files, which we'll get to very soon, but it isn't related to the table building or parsing we were exploring above, as CFG documents are trivial and parsed with a hand rolled parser. CFG document here means Context Free Grammar document. This is a bit of a misnomer with GLR, since GLR can technically parse from a contextful grammar as well, but it still has all the properties of a regular CFG, so the name is fine. In fact it's still preferable, as all of the mathematical properties that apply to CFGs apply here too.
Open the solution and navigate to the project "scratch" in your IDE. This is our playground for twiddling with the parser.
Let's try it now. Create a new CFG document. Since I like JSON for examples, let's define one for JSON which we'll name json.cfg:
(This document is provided for you with the project to save you some typing.)
Great, but now what? First, we need to load this into a CfgDocument class:
var cfg = CfgDocument.ReadFrom(@"..\..\json.cfg");
cfg.RebuildCache(); // not necessary, but recommended
The second line isn't necessary, but it's strongly recommended, especially when dealing with large grammars. Any time you change the grammar, you'll need to rebuild the cache. Since we don't intend to change it - just to load it and generate tables from it - we can cache it now.
There's also a Parse() function and a ReadFromUrl() function. ToString() performs the corresponding conversion back into the above format (except longhand, without using |), while an overload of it can take "y" as a parameter, which causes it to generate the grammar in Yacc format for use with tools like the JISON visualizer I've linked to prior.
A CfgDocument is made up of Rules as represented by CfgRule. In the above scenario, we loaded these rules from a file, but you can add, remove and modify them yourself. The rule has a left hand and right hand side which are composed of symbols. Meanwhile, each symbol is represented by a string, with the reserved terminal symbols #EOS and #ERROR being automatically produced by CfgDocument. Use the document to query for terminals, non-terminals and rules. You can also use it to create FIRST, FOLLOW, and PREDICT sets. All of these are accessed using the corresponding FillXXXX() functions. If you remember, we used the FOLLOW sets to make our LR and GLR parse tables above. This is where we got them from, except our CFG was an extended grammar with rules like 0/S/$ → 0/N/1, as seen before.
This is all well and good, but what about what we're actually after? - parse tables! Remember that big long explanation on how to generate them? I've provided all of that in CfgDocument.LR.cs which provides TryToLR1ParseTable() and TryToGlrParseTable(). The former function doesn't generate true LR(1) tables because those would be huge. Instead, it takes a parameter which tells us what kind of LR(1) family of table we want. Currently the only option is Lalr1 for LALR(1), but that suits us just fine. TryToGlrParseTable() will give us the GLR table we need.
Each of these functions returns a list of CfgMessage objects which can be used to report any warnings or errors that crop up. There won't be any for GLR table creation, since even conflicting cells (ambiguous grammars) are valid, but I've provided it just the same for consistency. Enumerate through the messages and report them, or simply pass them to CfgException.ThrowIfErrors() to throw if any of the messages were ErrorLevel.Error. The out value gives us our CfgLR1ParseTable or our CfgGlrParseTable, respectively. Now that we have those, we have almost enough information to create a parser.
One thing we'll need in order to parse is some sort of implementation of IEnumerable<Token>. I've provided two of them with the project: one for JSON called JsonTokenizer, and one for the Test14.cfg grammar, which is ambiguous - for trying the GLR parsing. That tokenizer is called Test14Tokenizer. The other way to create tokens is to simply make a List<Token> instance and manually fill it with tokens you create. You can pass that as your "tokenizer". The tokenizers I've provided were generated by Rolex, although they had to be modified to take an external Token definition. Eventually, I'll add an external token feature to Rolex, but it's not important, as our scenario here is somewhat contrived. In the real world, we'd be generating the code for our parsers, and those won't suffer the same problem because all the generated code can share the same Token declaration.
On to the simpler things. One is the symbol table, which we can get by simply calling FillSymbols() and converting that to a string[]. The other is the error sentinels (int[]), which will require some explanation. In the case of an error, the parsers will endeavor to keep on parsing so that all errors can be found in a single pass. This is easier said than done. We use a simple technique called "panic mode" error recovery, which is a form of local error recovery. This requires that we define a safe ending point for when we encounter an error. For languages like C, this can mean ;/semi and }/rbrace, which is very reasonable. For JSON, we'll use ]/rbracket and }/rbrace. When an error happens, we gather input tokens until we find one of those, and then pop the stack until we find a valid state for a sentinel or run out of states to pop. Ideally, we'd only use this method for a standard LR parser, and use a form of global error recovery for the GLR parser. However, the latter is non-trivial and not implemented here, even though it would result in better error handling. Our errorSentinels array will contain the indices into the symbol table for these terminals.
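The panic mode recovery described above boils down to two loops: skip input until a sentinel, then pop states until one can act on it. A standalone sketch, with all names hypothetical and independent of the project's GlrWorker:

```csharp
using System;
using System.Collections.Generic;

class PanicSketch
{
    // On error: 1) discard tokens until a sentinel (or end of input),
    // 2) pop states until one has an action for that sentinel.
    // "Has an action" here just means the table row contains the symbol.
    public static bool Recover(Stack<int> stack, Queue<string> input,
                               Dictionary<string, string>[] table,
                               HashSet<string> sentinels)
    {
        while (input.Count > 0 && !sentinels.Contains(input.Peek()))
            input.Dequeue();
        if (input.Count == 0) return false;
        var sym = input.Peek();
        while (stack.Count > 0 && !table[stack.Peek()].ContainsKey(sym))
            stack.Pop();
        return stack.Count > 0; // false means recovery failed
    }

    static void Main()
    {
        var table = new Dictionary<string, string>[]
        {
            new Dictionary<string, string> { ["id"] = "s1" }, // state 0
            new Dictionary<string, string> { [";"] = "s2" },  // state 1 can act on ';'
            new Dictionary<string, string>(),                 // state 2 cannot
        };
        var stack = new Stack<int>(new[] { 0, 1, 2 });        // 2 is on top
        var input = new Queue<string>(new[] { "???", "!!", ";", "id" });
        bool ok = Recover(stack, input, table, new HashSet<string> { ";" });
        Console.WriteLine(ok + " " + stack.Peek() + " " + input.Peek());
        // prints "True 1 ;" - the junk was skipped and state 2 was popped
    }
}
```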
Finally, we can create a parser. Here's the whole mess from beginning to end:
var cfg = CfgDocument.ReadFrom(@"..\..\json.cfg");
cfg.RebuildCache();
// write our grammar out in YACC format
Console.WriteLine(cfg.ToString("y"));
// create a GLR parse table. We can use a standard LR parse table here.
// We're simply demoing the GLR parser
CfgGlrParseTable pt;
var msgs = cfg.TryToGlrParseTable(out pt);
// there shouldn't be any messages for GLR, but this is how we process them if there were
foreach(var msg in msgs)
Console.Error.WriteLine(msg);
CfgException.ThrowIfErrors(msgs);
// create the symbol table to map symbols to indices which we treat as ids.
var syms = new List<string>();
cfg.FillSymbols(syms);
// our parsers don't use the parse table directly.
// they use nested arrays that represent it for efficiency.
// we convert it using ToArray() passing it our symbol table
var pta= pt.ToArray(syms);
// now create our error sentinels. If this is an empty array only #EOS will be considered
var errorSentinels = new int[] { syms.IndexOf("rbracket"), syms.IndexOf("rbrace") };
// let's get our input
string input;
using (var sr = new StreamReader(@"..\..\data2.json"))
input = sr.ReadToEnd();
// now make a tokenizer with it
var tokenizer = new JsonTokenizer(input);
var parser = new GlrTableParser(pta, syms.ToArray(), errorSentinels,tokenizer);
// uncomment the following to display the raw pull parser output
/*
while(parser.Read())
{
Console.WriteLine("#{0}\t{1}: {2} - {3}",
parser.TreeId, parser.NodeType, parser.Symbol, parser.Value);
}
// reset the parser and tokenizer
tokenizer = new JsonTokenizer(input);
parser = new GlrTableParser(pta, syms.ToArray(), errorSentinels,tokenizer);
*/
Console.WriteLine();
Console.WriteLine("Parsing...");
Console.WriteLine();
// now for each tree returned, dump it to the console
// there will only be one unless the grammar is ambiguous
foreach(var pn in parser.ParseReductions())
Console.WriteLine(pn);
To try with different grammars, simply switch out the grammar filename, the error sentinels, the tokenizer and the input. Test14.cfg is a small ambiguous grammar that can be used to test. I recommend it with an input string like bzc.
Anyway, for the JSON grammar and the associated input, we get:
%token lbrace rbrace comma string colon lbracket rbracket number null true false
%%;
Parsing...
+- json
+- object
+- lbrace {
+- fields
| +- field
| +- string "backdrop_path"
| +- colon :
| +- value
| +- string "/lgTB0XOd4UFixecZgwWrsR69AxY.jpg"
+- rbrace }
Meanwhile, doing it for Test14 as suggested above yields two parse trees:
%token a c d b z
%%
S : a A c
| a B d
| b A d
| b B c;
A : z;
B : z;
Parsing...
+- A
+- b b
+- A
| +- z z
+- #ERROR c
+- S
+- b b
+- B
| +- A
| +- z z
+- c c
The first one was no good, as it tried the wrong reduce - or rather, that reduce didn't pan out this time, but it theoretically may have with the right grammar and a different input string. You'll note the error recovery is still poor, and this is still a work in progress. I'm experimenting with different techniques to improve it so that it will continue without clearing the stack in so many cases. In any case, our second tree resulted in a valid parse. With some grammars, multiple trees may be valid and error free due to ambiguity. For some inputs, every tree might have errors, but the errors might be different in each one. Figuring out which tree to pick is not the GLR parser's job. It depends heavily on what you intend to do with it. With C# for example, you might get many different trees due to the ambiguity of the language, but applying type information can narrow the trees down to the single valid tree the code represents.
GlrTableParser delegates to GlrWorker, which implements an LR(1) parser in its own right. We spawn one of these workers for each path through the parse. Only the first one is created by GlrTableParser itself; the workers create themselves after that during the parse. At each fork, we create a new one for each alternate path, which can yield exponential GlrWorker creation in highly ambiguous grammars, so make your grammars tight to get the best performance. We know we've encountered a fork when there are multiple entries in the parse table cell at our current state and position. Here, we see the lookup in the parse table in GlrWorker.Read(). Note how it spawns more workers for every tuple after the first one.
public bool Read()
{
if(0!=_errorTokens.Count)
{
var tok = _errorTokens.Dequeue();
tok.SymbolId = _errorId;
CurrentToken = tok;
return true;
}
if (_continuation)
_continuation = false;
else
{
switch (NodeType)
{
case LRNodeType.Shift:
_ReadNextToken();
break;
case LRNodeType.Initial:
_stack.Push(0);
_ReadNextToken();
NodeType = LRNodeType.Error;
break;
case LRNodeType.EndDocument:
return false;
case LRNodeType.Accept:
NodeType = LRNodeType.EndDocument;
_stack.Clear();
return true;
}
}
if (0 < _stack.Count)
{
var entry = _parseTable[_stack.Peek()];
if (_errorId == CurrentToken.SymbolId)
{
_tupleIndex = 0;
_Panic();
return true;
}
var tbl = entry[CurrentToken.SymbolId];
if(null==tbl)
{
_tupleIndex = 0;
_Panic();
return true;
}
int[] trns = tbl[_tupleIndex];
// only create more if we're on the first index
// that way we won't create spurious workers
if (0 == _tupleIndex)
{
for (var i = 1; i < tbl.Length; ++i)
{
_workers.Add(new GlrWorker(_Outer, this, i));
}
}
if (null == trns)
{
_Panic();
_tupleIndex = 0;
return true;
}
if (1 == trns.Length)
{
if (-1 != trns[0]) // shift
{
NodeType = LRNodeType.Shift;
_stack.Push(trns[0]);
_tupleIndex = 0;
return true;
}
else
{ // accept
// panic if CurrentToken is not $ (end of input)
if (_eosId != CurrentToken.SymbolId)
{
_Panic();
_tupleIndex = 0;
return true;
}
NodeType = LRNodeType.Accept;
_stack.Clear();
_tupleIndex = 0;
return true;
}
}
else // reduce
{
RuleDefinition = new int[trns.Length - 1];
for (var i = 1; i < trns.Length; i++)
RuleDefinition[i - 1] = trns[i];
for (int i = 2; i < trns.Length; ++i)
_stack.Pop();
// There is a new number at the top of the stack.
// This number is our temporary state. Get the symbol
// from the left-hand side of the rule #. Treat it as
// the next input token in the GOTO table (and place
// the matching state at the top of the set stack).
// - Stephen Jackson,
var state = _stack.Peek();
var e = _parseTable[state];
if (null == e)
{
_Panic();
_tupleIndex = 0;
return true;
}
_stack.Push(_parseTable[state][trns[1]][0][0]);
NodeType = LRNodeType.Reduce;
_tupleIndex = 0;
return true;
}
}
else
{
// if we already encountered an error
// return EndDocument in this case, since the
// stack is empty there's nothing to do
NodeType = LRNodeType.EndDocument;
_tupleIndex = 0;
return true;
}
}
Refer to the tutorial on running a parse given earlier. See how we initially grab the tuple given by _tupleIndex and then reset it to zero? That's because we only want to take the alternate path once. The worker only takes an alternate path the first time it is Read(), and after that it spawns additional workers for each of the alternate paths it encounters, wherein they revert to the first path after the initial Read() as well, and spawn more workers for any alternates they encounter, and so on. Yes, again, this can yield exponential workers. It's the nature of the algorithm. Walking each possible path requires exponential numbers of visits for each choice that can be made.
Also note how we report the rule definition used during a reduction. This is critical so that the user of the parser can match terminals back to the rules they came from. It's simply stored as int[] where index zero is the left hand side's symbol id, and the remainder are the ids for the right hand symbols.
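Matching terminals back to rules is exactly what lets a consumer build trees from the event stream. Here's a sketch of the idea - not the project's ParseReductions() itself: shifts push leaves onto a node stack, and each reduce pops one child per right-hand symbol:

```csharp
using System;
using System.Collections.Generic;

// Sketch: turn shift/reduce events into a parse tree. All names here
// are hypothetical.
class TreeSketch
{
    class Node
    {
        public string Symbol;
        public List<Node> Children = new List<Node>();
        public override string ToString() =>
            Children.Count == 0 ? Symbol
                : Symbol + "(" + string.Join(" ", Children) + ")";
    }

    // reduce: pop one child per right-hand symbol, push the new parent
    static void Reduce(Stack<Node> stack, string left, int rhsLen)
    {
        var node = new Node { Symbol = left };
        for (int i = 0; i < rhsLen; i++)
            node.Children.Insert(0, stack.Pop()); // children come off reversed
        stack.Push(node);
    }

    public static string Demo()
    {
        var stack = new Stack<Node>();
        stack.Push(new Node { Symbol = "x" }); // shift "x": terminal leaf
        Reduce(stack, "V", 1);                 // reduce V → x
        Reduce(stack, "E", 1);                 // reduce E → V
        return stack.Peek().ToString();
    }

    static void Main() => Console.WriteLine(Demo()); // E(V(x))
}
```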
Another issue is in creating a new worker from an existing worker. We must copy its stack and other state, and give it an independent input cursor (via LookAheadEnumerator<Token>) plus a tuple index that tells it which path to take on the initial read. On the initial read, we skip the first part of the routine, which is what _continuation is for - we're restarting the parse from where we left off. Here's the constructor for the worker that takes an existing worker:
public GlrWorker(GlrTableParser outer,GlrWorker worker,int tupleIndex)
{
_Outer = outer;
_parseTable = worker._parseTable;
_errorId = worker._errorId;
_eosId = worker._eosId;
_errorSentinels = worker._errorSentinels;
ErrorTokens = new Queue<Token>(worker.ErrorTokens);
_tokenEnum = worker._tokenEnum;
var l = new List<int>(worker._stack);
l.Reverse();
_stack = new Stack<int>(l) ;
Index = worker.Index;
_tupleIndex = tupleIndex;
NodeType = worker.NodeType;
Id = outer.NextWorkerId;
CurrentToken = worker.CurrentToken;
unchecked { ++outer.NextWorkerId; }
_continuation = true;
_workers = worker._workers;
}
Here, we create our new worker, using a somewhat awkward but necessary way to clone the stack - I really should use my own stack implementation to solve this, but I haven't for this code. We assign a new id to the worker, indicate that it's a continuation, copy our parse table and error sentinel references, and create a queue to hold errors we need to report. It's not shown here, but _Outer is actually a property that wraps _outer, itself a WeakReference<GlrTableParser>. This is to avoid circular references, which create strain on the garbage collector. The parser will always live at least as long as its workers, so a weak reference is inconsequential for us, but not for the GC. The Index property serves as an offset into the current input. We need this for the sliding window technique mentioned earlier. We simply take the index from the current worker, since we're in the same logical position. _tupleIndex once again tells us which path to take on this next fork.
That covers the meat of our worker class. Let's cover the GlrTableParser that delegates to it. Mainly, we're concerned with the Read() method which does much of the work:
public bool Read()
{
if (0 == _workers.Count)
return false;
_workerIndex = (_workerIndex + 1) % _workers.Count;
_worker = _workers[_workerIndex];
while(!_worker.Read())
{
_workers.RemoveAt(_workerIndex);
if (_workerIndex == _workers.Count)
_workerIndex = 0;
if (0 == _workers.Count)
return false;
_worker = _workers[_workerIndex];
}
var min = int.MaxValue;
for(int ic=_workers.Count,i=0;i<ic;++i)
{
var w = _workers[i];
if(0<i && w.ErrorCount>_maxErrorCount)
{
_workers.RemoveAt(i);
--i;
--ic;
}
if(min>w.Index)
{
min=w.Index;
}
if (0 == min)
break;
}
var j = min;
while(j>0)
{
_tokenEnum.MoveNext();
--j;
}
for (int ic = _workers.Count, i = 0; i < ic; ++i)
{
var w = _workers[i];
w.Index -= min;
}
return true;
}
Here if we don't have any workers, we're done. Then we increment the _workerIndex, round-robin like and use that to choose the current worker we delegate to among all the current workers. For each worker, starting at the _workerIndex, we try to Read() from the next worker, and if it returns false then we remove it from the _workers and try the next worker. We continue this process until we find a worker that read or we run out of workers. If we're out of workers we return false.
_workerIndex
_workers
Now, after a successful Read(), we check all the workers for how much they've advanced along the current primary cursor by checking their Index. We do this for all the workers because some may not have been moved forward during the last Read() call. Anyway, the minimum advance is what we want to slide our window by, so we increment the primary cursor by that many. Next, we fix all of the workers Indexes by subtracting that minimum value from them. Finally, we return true to indicate a successful read.
One more thing of interest about the parser is the ParseReductions() method which will return trees from the reader. The trees of course, are much easier to work with then the raw reports from the pull parser. Here's the method:
ParseReductions()
public ParseNode[] ParseReductions(bool trim = false, bool transform = true)
{
var map = new Dictionary<int, Stack<ParseNode>>();
var oldId = 0;
while (Read())
{
Stack<ParseNode> rs;
// if this a new TreeId we haven't seen
if (!map.TryGetValue(TreeId,out rs))
{
// if it's not the first id
if (0 != oldId)
{
// clone the stack
var l = new List<ParseNode>(map[oldId]);
l.Reverse();
rs = new Stack<ParseNode>(l);
}
else // otherwise create a new stack
rs = new Stack<ParseNode>();
// add the tree id to the map
map.Add(TreeId, rs);
}
ParseNode p;
switch (NodeType)
{
case LRNodeType.Shift:
p = new ParseNode();
p.SetLocation(Line, Column, Position);
p.Symbol = Symbol;
p.SymbolId = SymbolId;
p.Value = Value;
rs.Push(p);
break;
case LRNodeType.Reduce:
if (!trim || 2 != RuleDefinition.Length)
{
p = new ParseNode();
p.Symbol = Symbol;
p.SymbolId = SymbolId;
for (var i = 1; RuleDefinition.Length > i; i++)
{
var pc = rs.Pop();
_AddChildren(pc, transform, p.Children);
if ("#ERROR" == pc.Symbol)
break;
}
rs.Push(p);
}
break;
case LRNodeType.Accept:
break;
case LRNodeType.Error:
p = new ParseNode();
p.SetLocation(Line, Column, Position);
p.Symbol = Symbol;
p.SymbolId = _errorId;
p.Value = Value;
rs.Push(p);
break;
}
oldId = TreeId;
}
var result = new List<ParseNode>(map.Count);
foreach (var rs in map.Values)
{
if (0 != rs.Count)
{
var n = rs.Pop();
while ("#ERROR" != n.Symbol && 0 < rs.Count)
_AddChildren(rs.Pop(), transform, n.Children);
result.Add(n);
}
}
return result.ToArray();
}
This is just an iterative way of interpreting the parse results as Stephen Jackson covered above, except his was recursive and was only dealing with one parse tree. It would be very difficult, if not impossible to implement this recursively given that different tree information can be returned in any order.
Stay tuned for GLoRy, a parser generator that takes this code to the next level to generate parsers for virtually anything parseable.. | https://www.codeproject.com/Articles/5259825/GLR-Parsing-in-Csharp-How-to-Use-The-Most-Powerful | CC-MAIN-2022-27 | refinedweb | 8,286 | 63.49 |
.
First of all, all the numbers menthioned must be integer, right?
Also should it be p^2 = x^2 + y^2.
Indeed integers, I should should have been more precise :) p, n, x and y are all elements of N.
We are quite sure it's p = x^2 + y^2;
for p = 13, n = 3, x = 2 and y = 3.
Whats the problem? I think Euler is a fairly respectable mathematician(:-)). The brtitannica article on this says that Gauss gave the first complete proof, so you might want to try looking him up. (The brittanica article:)
With monday.com’s project management tool, you can see what everyone on your team is working in a single glance. Its intuitive dashboards are customizable, so you can create systems that work for you.
I found a lot of euler's theorems and proofs on the net, but not this one. Has it been prooven yet?
Thanks for your support so far. The Brittanica article looks good, we are looking in to it tonight.
Let me revise the method, if a condition is true for 1 then assume
that it true for any number n and then prove that it is true for (n+1)
Then the condition is true for all numbers 1 to n.
I have coded this program which demonstrates that for every prime
number satisfying p=4*n+1 has a valid pair of x and y such that
p=x^2+y^2
#include <iostream.h>
#include <iomanip.h>
#include <math.h>
#include <conio.h>
void main(void)
{
unsigned int p,n,c,sroot,x,y;
char at_least_one_factor=0;
float p_xx;
for(n=0;n<=16250;n++)
{
p=4*n+1;
sroot=sqrt(p);
at_least_one_factor=0;
for(c=3;c<=sroot;c+=2)
{
if((p % c)==0)
{
at_least_one_factor=1;
break;
}
}
if(at_least_one_factor==0)
{
cout << "n=" << n << setw(8) << "p=" << p;
for(x=1;x<=sroot;x++)
{
p_xx=sqrt(p-x*x);
if((unsigned int)p_xx==p_xx)
{
y=p_xx;
cout << " x=" << setw(3) << x
<< " y=" << setw(3) << y << '\n';
break;
}
}
}
}
cout << "\nEnd.";
getch();
}
It reminds me of the prolbem where a mathmatician, a physicist and a programmer are all asked to prove that all odd numbers are prime.
The mathmatician says, "1 is prime, 3 is prime, 5, is prime, 7 is prime, by induction all odds are prime."
The Physicist says that is incorrect and resorts to experimentation He tests 1 and finds it is prime, then finds that 3 is prime, 5 is prime, 7 is prime, 9 is...an error in measument..11 is prime, so all odd numbers are prime."
The programmer says that is incorrect and writes a program to prove it. he runs the program and it prints out
"1 is prime"
"1 is prime"
"1 is prime"
"1 is prime"
"1 is prime"
"1 is prime"
"1 is prime"
....
That is a very little contribution in solving the problem
Although this is the C++ I did ask the (mathimatical) proof or a good link to it. I can see no decent area for this kind of questions (maybe in Pascal or The Lounge)...
is there any typos on this formula.....
i guess it's
p = 4^n + 1
but p=4^n=1 is not correct for n=5, 6 , or 7
which formula are you trying to prove.......
p = x^2 + y^2
not every prime number can be represented by x^2 + y^2 .........
Oh BTW, if p is a prime and there is an n such that p = 4*n+3 then there are no (x,y) such that p = x^2 + y^2
To show the Rule in notational form, I shall use the following terms:
~ means: is approximately equal to
^ means: raised to the power of
<= means: less than or equal to
--------------------------
The Rule states that the numbers 2^n times pq, and 2^n times r, are Amicable Numbers if the three Integers (i.e. p, q, and r) are such that
p ~ 2^m times (2^(n-m) + 1) - 1
q ~ 2^m times (2^(n-m) + 1) - 1
r ~ 2^(n+m) times (2^(n-m) + 1)^2 - 1
are all Prime numbers for some Positive Integer 'm' satisfying 1 <= m <= n - 1.
However, there are exotic Amicable Numbers which do not satisfy Euler's Rule, so it is a 'Sufficient' but not 'Necessary' condition for amicability.
--------------------------
It was the last part of your statement and the last part of his Rule which led me to believe that this is what you're talking about.
There was no proof accompanying it, but I shall keep researching further.
rbr, couldn't find this particular 'problem' Can you elaborate?
"Every Prime of the form '6n + 1' can be written in the form 'x^2 + 3y^2"
I have found an ingenious proof of this, but time does not permit me to provide it here.
The original statement came from Pierre de Fermat, himself, in a letter he wrote to a friend named, Bernard Frenicle de Bessy in 1640, in which he (Fremat) stated, that for any Prime 'p' and any whole number 'N', 'p' divides (N^p - n) [later proven by Euler]. That same year, Fermat wrote to another friend by the name of Marin Mersenne, stating that a Prime of the form '4n + 1' can be expressed in exactly one way as the sum of two squares (e.g. 13 = 4 + 9) and hence is the hypotenuse of exactly one right angle triangle with sides of integral length. This is what Euler later picked up on (after disproving it), refined it and issued his (Euler) own statement that "Every Prime of the form '6n + 1' can be written in the form 'x^2 + 3y^2".
Tomorrow I'll be meeting with a fellow colleague of mine at one of the local universities to hear what he may have discovered. But from initial inquiries, no one seem to have ever come across Euler's proof of this particular statement, and if it turns out to be anything like Fermat's Last Theorem, I recall seeing a documentary of a Princeton Math professor, who spent seven long and arduous years proving it, having retreated and abstained from all but the essential social events. In the end, his voice began cracking up, his eyes became filled with liquid, and his demeanor succumbed to emotion as he related the epiphanic moment the solution manifested itself to him. (What can I say! For some people, fame hits them hard.)
I've been waiting for the proof quite some time. I am really curious how Euler actually prove this theorem.
Is there really a proof for this theorem, I doubt it since we are still competing on the biggest prime number right now.
As such, is the proof only work for those prime til the biggest prime number known to the world. How could it be a proof for all prime numbers if we are still looking for the biggest prime number right now in this century(new).
Is this question out of the scope for C++ section. Should we have another section dealed with MATH?
But still it's a good question after all and I am curious abt it.
nietod,
could you at least tell us abt your ingenius proof. Just the idea of how you prove it if you really dont have time to show us.
That was a "Fermat" spoof. In his notes he proposed his famous theorem and said he had an elegant and simple proof, but could not include it at the time. He never wrote the proof down and for nearly 3 centuries (is that right?) no one ever could proove it. As a result his theorem has failed to be prooved many times. Once it appeared as graffetti on a New York City Subway station but the mathematician's train came before he was able to complete the proof so once again the world missed an opportunity for an answer. It was finally prooved and _published_ about 10 years ago. The proof required 100s of pages and extremely complex mathematics not know in Fermet's time. There is no doubt that this is not the proof Fermet had in mind. It is quite possible that Fermat was mistaken, that his proof did not work, and so he never wrote it down and unfortunately never made mention of this fact.
Let 'n' = 1 || 6n + 1 = 7
Let x=2; y=1 || 2^2 + 3(1)^2 = 7
Let 'n' = 2 || 6n + 1 = 13
Let x=1; y=2 || 1^2 + 3(2)^2 = 13
Let 'n' = 3 || 6n + 1 = 19
Let x=4; y=1 || 4^2 + 3(1)^2 = 19
....
Let 'n' = 10 || 6n + 1 = 61
Let x=7; y=2 || 7^2 + 3(2)^2 = 61
....
Let 'n' = 101 || 6n + 1 = 607
Let x=10; y=13 || 10^2 + 3(13)^2 = 607
It holds up! 7, 13, 19, 61, 607 are all Primes.
--------------------------
There's no such thing as the 'biggest' prime number, because there is no such thing as the 'biggest' number. Any number you can conceive, you can always add 1 to it, which is what Euler has demonstrated as '6n + 1'. 'n' can be any number, ... ANY NUMBER; multiply it by 6, ... and then add 1 to it.
I'll be meeting with my colleague later today, and I'm sure he'll have something interesting to tell.
>> Theorem, is how well it holds up,
O course, technically that is not proof. Of course if it failed to hold up in any example, that would be proof that it is incorrrect.
I did meet with my colleague and after a little over an hour of discussion, we don't have a solution as yet, BUT some rather interesting deduction arose from the exchange. (To be honest, I was happy when the meeting was over, because my head was beginning to ache.)
Anyway, what we have agreed on is that the best approach to solving this theorem is to use the Legendre Symbol which states that:
(m/n) = (m|n)
--------------------------
let ~ denotes congruency such that
1) (m/n) ~ 0 (i.e. zero) if m|n
2) (m/n) ~ 1 if 'n' is a quadratic residue modulo 'm'
3) (m/n) ~ -1 if 'n' is a quadratic nonresidue modulo 'm'
--------------------------
NOTE: If 'm' is an ODD Prime, then the Jacobi Symbol reduces to the Legendre Symbol. Happily, we did not get into applying the Jacobi Symbol.
Also, needful of saying, is that the Legendre Symbol obeys
(ab|p) = (a|p)(b|p)
such that
--------------------------
let +/- means plus or minus
--------------------------
1) (3/p) = 1 if p ~ +/- 1(mod 12)
2) (3/p) = -1 if p ~ +/- 5(mod 12)
What this all means, is that in applying the Legendre Symbol we used the fact that (-3/p) = 1, in saying there is a solution to x^2 = -3(mod p).
We have another meeting arranged for tomorrow.
--------------------------
For those who don't understand congruency (or might be a little forgetful about it), here is a little brushing up.
If 'b - c' is integrally divisible by 'a', then 'b' and 'c' are said to be congruent with modulus 'a', and is written 'b ~ c(mod a)'. IOW, the minus sign gets replaced by the congruency sign followed by the modulus value denoted within parentheses, or
a|b - c
--------------------------
Finally, I did not mention it earlier, but here it is: To prove Euler's statement, 'n' must be greater than zero.
p prime is sum of two quadrates =>
= = 4*n + 1.
Proof: assume p = x^2 + y^2 for p a prime greater than 2. Then p is odd. Then it results that x is even and y is odd, OR , x is odd and y is even.
(If x and y are even numbers, then x^2 and y^2 are even numbers, and their sum will be even ! If x and y are odd, then x^2 and y^2 are both odd, and thier sum will be even !)
Conclusion: there are k and l elements of Natural Numnbers, for which is true: x = 2*k + 1 and y = 2*l or
x = 2*k and y = 2*l + 1
Assume x = 2*k + 1 and y = 2*l. then: p = x^2 + y^2 = (2*k + 1) ^2 + (2*l)^2 = = 4*k^2 + 4*k + 1 + 4*l^2 =
= 4( k^2 + k + l^2) + 1
This is a number with shape 4n + 1.
The other way around is more difficult:
p prime, p = 4n + 1 => There are x,y such that p = x^2 + y^2.
Assume P prime , p = 4n +1 . Then Legendre symbols say: (-1/p)= 1. This means there is an x such that x^2 = -1 (mod p). Therefore there is an x such that x^2 + 1^2 = 0 (mod p).
This means p is a diviser of x^2 + 1^2, but it does not say that it is exactly the same as p.
If you can prove there is a z such that z^2 ( x^2 + 1) = 0 (mod p) and
(x*z mod p)^2 + z^2 = p then you have solved this matter.
Jack.
--------------------------
The Legendre Symbol does not say that (-1/p) = 1. It says (-3/p) = 1, which gives us the quadratic nonresidue modulo 'm' of 'n'. Look at it again. I'm 100% sure of that!
Your approach on first reading, sounds plausible. I would have to look at it more deeply to determine if it holds up under scrutiny.
BTW, I'm NOT doing this for the points, OK! I'm doing it purely for the enjoyment and as a homage to Euler as a sort of expression of gratitude for all I have gained from him. I say this because EE has that feature wherein a questioner can convert a person's comment as an answer (even if the person responding did not submit an answer). With due respect to you, I do not want any points.
(If, by chance, you should have a question on Riemannian Geometry, I'd be happy to give it a shot. He is another to whom I feel I owe an expression of gratitude for what I have learnt from his works.)
So, Try, you always state you don't want points I'm going to nag you. Besides, your comments seem to be the right ones.
Oh, and I will get back to you if I ever have any questions on Riemann Geometry. I just wish I could understand that.
Thanx all of you, it has been an enjoyable thread.
Frans.
My colleague (at the university) and I, are still battling with this theorem, meeting at least for an hour every week, to examine and evaluate what progress we might have made, and what additional information we may have discovered during the interim. I will say this, our efforts are not altogether confined to mathematics for we (at least I am) conducting research into Euler's "Omnia Opera" to see if some mention of this theorem from him to a friend (by way of correspondence) might shed a time period when (perhaps) another of his peers might have done collateral work. It was not unusual (I have discovered) for one person to have introduced a theorem (based on extrapolation) without any proof to the statement, many times leaving someone else to either prove or disprove it. Fermat was especially famous for this practice, and I would not be surprised if after many more hours we may not find a proof to this particular theorem, leaving us (my friend and I, in this 21st century) to produce the proof. Such thought has not escaped us.
I shall keep you updated with any revelation. ITM, the exercise is both fun and (this part is true) ... nurturing. | https://www.experts-exchange.com/questions/10282463/Euler-Prime-p-p-4n-1-p-x-2-y-2.html | CC-MAIN-2018-09 | refinedweb | 2,658 | 77.67 |
- What Is a Race Condition?
- Time-of-Check, Time-of-Use
- Secure File Access
- Temporary Files
- File Locking
- Other Race Conditions
- Conclusion
"O let not Time deceive you,
You cannot conquer Time.
In the burrows of the Nightmare
Where Justice naked is,
Time watches from the shadow
And coughs when you would kiss."
W. H. AUDEN
"As I Walked Out One Evening"
Race conditions are among the most common classes of bugs found in deployed software. They are only possible in environments in which there are multiple threads or processes occurring at once that may potentially interact (or some other form of asynchronous processing, such as with UNIX signals). People who have experience with multithread programming have almost certainly had to deal with race conditions, regardless of whether they know the term. Race conditions are a horrible problem because a program that seems to work fine may still harbor them. They are very hard to detect, especially if you're not looking for them. They are often difficult to fix, even when you are aware of their existence. Race conditions are one of the few places where a seemingly deterministic program can behave in a seriously nondeterministic way. In a world where multithreading, multiprocessing, and distributed computing are becoming more and more prevalent, race conditions will continue to become a bigger and bigger problem.
Most of the time, race conditions present robustness problems. However, there are plenty of times when race conditions have security implications. In this chapter we explore race conditions and their security ramifications. It turns out that file system accesses are subject to security-related race conditions far more often than people tend to suspect.
What Is a Race Condition?
Let's say that Alice and Bob work at the same company. Through e-mail, they decide to meet for lunch, agreeing to meet in the lobby at noon. However, they do not agree on whether they mean the lobby for their office or the building lobby several floors below. At 12:15, Alice is standing in the company lobby by the elevators, waiting for Bob. Then it occurs to her that Bob may be waiting for her in the building lobby, on the first floor. Her strategy for finding Bob is to take the elevators down to the first floor, and check to see if Bob is there.
If Bob is there, all is well. If he isn't, can Alice conclude that Bob is either late or has stood her up? No. Bob could have been sitting in the lobby, waiting for Alice. At some point, it could have occurred to him that Alice may be waiting upstairs, at which point he took an elevator up to check. If Alice and Bob were both on an elevator at the same time, unless it is the same elevator, they will pass each other during their ride.
When Bob and Alice each assume that the other one is in the other place and is staying put and both take the elevator, they have been bitten by a race condition. A race condition occurs when an assumption needs to hold true for a period of time, but actually may not. Whether it does is a matter of exact timing. In every race condition there is a window of vulnerability. That is, there is a period of time when violating the assumption leads to incorrect behavior. In the case of Alice and Bob, the window of vulnerability is approximately twice the length of an elevator ride. Alice can step on the elevator up until the point where Bob's elevator is about to arrive and still miss him. Bob can step on to the elevator up until the point that Alice's elevator is about to arrive. We could imagine the door to Alice's elevator opening just as Bob's door shuts. When the assumption is broken, leading to unexpected behavior, then the race condition has been exploited.
When it comes to computer programs, windows of vulnerability can be large, but often they are small. For example, consider the following Java servlet:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class Counter extends HttpServlet {
    int count = 0;

    public void doGet(HttpServletRequest in, HttpServletResponse out)
            throws ServletException, IOException {
        out.setContentType("text/plain");
        PrintWriter p = out.getWriter();
        count++;
        p.println(count + " hits so far!");
    }
}
This tiny piece of code may look straightforward and correct to most people, but it has a race condition in it, because Java servlets are multithreaded. The programmer has implicitly assumed that the variable count is the same when printed as it is after the previous line of code sets its value. This isn't necessarily the case. Let's say that Alice and Bob both hit this servlet at nearly the same time. Alice is first; the variable count becomes 1. Bob causes count to be changed to 2, before println in Alice's thread runs. The result is that Alice and Bob both see 2, when Alice should have seen 1. In this example, the window of vulnerability isn't very large. It is, at most, a fraction of a second.
Even if we move the increment of the counter into the expression in which we print, there is no guarantee that it solves our problem. That is, the following change isn't going to fix the problem:
p.println(++count + " hits so far!");
The reason is that the call to println takes time, as does the evaluation of the argument. The amount of time may seem really small, maybe a few dozen instructions. However, this isn't always the case. In a multithread system, threads usually run for a fraction of a second, then wait for a short time while other threads get the chance to run. It could be the case that a thread increments the counter, and then must wait to evaluate the argument and run println. While that thread waits, some other thread may also increment the counter.
It is true that the window of vulnerability is very small. In practice, this means the bug may show up infrequently, if ever. If our servlet isn't receiving several hits per second, then it is likely never to be a problem. This alludes to one of the reasons why race conditions can be so frustrating: When they manifest themselves, reproducing the problem can be almost impossible. Race conditions tend not to show up in highly controlled test environments. If you don't have any clue where to begin looking for a problem, you may never find it. The same sorts of issues hold true even when the window of opportunity is bigger.
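To make the interleaving concrete, here is a minimal, self-contained harness — our own sketch, not code from the chapter, with illustrative names — that drives an unsynchronized counter from two threads. Because count++ is a separate read, increment, and write, updates from the two threads can overwrite one another; how many increments are lost varies from run to run.

```java
// RaceDemo: our own illustrative harness (not from the chapter).
public class RaceDemo {
    static int count = 0;   // shared state, deliberately unsynchronized

    static int run(int perThread) {
        count = 0;
        // Each count++ is really three steps: read count, add 1, write back.
        Runnable worker = () -> {
            for (int i = 0; i < perThread; i++) {
                count++;
            }
        };
        Thread a = new Thread(worker);
        Thread b = new Thread(worker);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        // With no locking, increments from the two threads can interleave
        // and overwrite each other, so the result is often below 2 * perThread.
        return count;
    }

    public static void main(String[] args) {
        System.out.println("expected " + 200_000 + ", got " + run(100_000));
    }
}
```

On a busy multiprocessor the printed total is usually below 200,000, but — as noted above — the bug can also fail to show up at all in a controlled test run, which is exactly what makes it so hard to reproduce.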
In real-world examples, an attacker with control over machine resources can increase the odds of exploiting a race condition by slowing down the machine. Another factor is that race conditions with security implications generally only need to be exploited once. That is, an attacker can automate code that repeatedly tries to exploit the race condition, and just wait for it to succeed. If the odds are one in a million that the attacker will be able to exploit the race condition, then it may not take too long to do so with an automated tool.
In general, the way to fix a race condition is to reduce the window of vulnerability to zero by making sure that all assumptions hold for however long they need to hold. The main strategy for doing this is to make the relevant code atomic with respect to relevant data. By atomic, we mean that all the relevant code executes as if the operation were a single unit, with nothing else able to occur while the operation is executing. What's happening with race conditions is that a programmer assumes (usually implicitly) that certain operations happen atomically, when in reality they do not. When we must make that assumption, we need to find a way to make the operation atomic. When we don't have to make the assumption, we can code the algorithm differently.
To make an operation atomic, we usually use locking primitives, especially in multithread applications. For example, one way to fix our Java servlet would be to use the object lock on the servlet by using the synchronized keyword. The synchronized keyword prevents multiple threads from running code in the same object that is governed by the synchronized keyword. For example, if we have ten synchronized methods in a Java class, only one thread can be running any of those methods at any given time. The JVM implementation is responsible for enforcing the semantics of the synchronized keyword.
Here's a fixed version of the counter servlet:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class Counter extends HttpServlet {
    int count = 0;

    public synchronized void doGet(HttpServletRequest in, HttpServletResponse out)
            throws ServletException, IOException {
        out.setContentType("text/plain");
        PrintWriter p = out.getWriter();
        count++;
        p.println(count + " hits so far!");
    }
}
The problem with this solution is that it can have a significant impact on efficiency. In this particular case, we have made it so that only one thread can run our servlet at a time, because doGet is the entry point. If the servlet is incredibly popular, or if the servlet takes a long time to run, this solution won't work very well. People will have to wait to get their chance inside the servlet, potentially for a long time. The solution is to keep the code we need to be atomic (often called a critical section) as small as possible [Silberschatz, 1999]. In Java, we can apply the synchronized keyword to blocks of code. For example, the following is a much better solution to our servlet problem:
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class Counter extends HttpServlet {
    int count = 0;

    public void doGet(HttpServletRequest in, HttpServletResponse out)
            throws ServletException, IOException {
        int my_count;
        out.setContentType("text/plain");
        PrintWriter p = out.getWriter();
        synchronized (this) {
            my_count = ++count;
        }
        p.println(my_count + " hits so far!");
    }
}
We could just put the call to println inside the synchronized block, and avoid the use of a temporary variable. However, println is a method call, which is somewhat expensive in and of itself. There's no need for it to be in the block, so we may as well remove it, to make our critical section finish as quickly as possible.
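The same keep-the-critical-section-small pattern can be lifted out of the servlet into a plain class — this is our own sketch with hypothetical names, assuming nothing beyond the core library. Only the increment runs under the object lock; the message is built afterward from the value captured inside the lock:

```java
// HitCounter: our own sketch of the keep-the-critical-section-small
// pattern; the class and method names are illustrative, not from the text.
public class HitCounter {
    private int count = 0;

    public String hit() {
        int myCount;
        synchronized (this) {       // critical section: only the increment
            myCount = ++count;
        }
        // Slower work (string formatting, I/O) happens outside the lock,
        // so other threads are not held up by it.
        return myCount + " hits so far!";
    }

    public synchronized int total() {
        return count;
    }
}
```

Because each caller captures its own myCount inside the lock, two threads can never observe the same value, yet neither blocks the other during the comparatively slow string formatting.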
As we have seen, race conditions may be possible whenever two or more operations occur and one of the latter operations depends on the first. In the interval between events, an attacker may be able to force something to happen, changing the behavior of the system in ways not anticipated by the developer. Making this all work as an attacker requires a security-critical context, and explicit attention to timing and knowledge of the assumptions a developer may have made.
The term race condition implies a race going on between the attacker and the developer. In fact, the attacker must "race" to invalidate assumptions about the system that the programmer may have made in the interval between operations. A successful attack involves a quick-and-dirty change to the situation in a way that has not been anticipated. | http://www.informit.com/articles/article.aspx?p=23947&seqNum=4 | CC-MAIN-2017-09 | refinedweb | 1,870 | 62.98 |
This document contains the following sections:
1.0 Foreign functions introduction
The description of Foreign Types is not in this document. It can be found in ftype.htm.
The foreign-function interface allows one to link compiled foreign code dynamically into a running Lisp. Foreign code is defined to be code not written in Lisp. For example, code written in C or Fortran is foreign code. The foreign-function interface allows users to load compiled code written in a foreign programming language into a running Lisp, execute it from within Lisp, call Lisp functions from within the foreign code, return to Lisp and pass data back and forth between Lisp and the foreign code.
This mechanism is very powerful, as programs need not be recoded into Lisp to use them. Another advantage arises during program development. For example, a large graphics library can be linked into Lisp and all the functions will be accessible interactively. This enables rapid prototyping of systems that use the library functions, since the powerful Lisp debugging and development environment is now available.
We use the word link because all foreign code should be in a shared object (typically .so or .sl or .dylib on Unix) or dynamic library (.dll on Windows) file which is mapped into a running Lisp process. The function that causes this linking is load, which has been extended to accept and do the right thing with .so/.sl/.dylib/.dll files. Because load is used, we sometimes speak of foreign code being loaded into Lisp. Please understand that foreign code is not truly made part of the image. See Using the load function in loading.htm for details of the Allegro CL implementation to load.
SWIG is a software development tool that reads C/C++ header files and generates the wrapper code needed to make C and C++ code accessible from other languages. See. An interface for Allegro CL has been added to SWIG. See for specific information on the interface to Allegro CL and information on downloading the software. (The SWIG software is not included with the distribution because it is regularly updated. Users should always get the latest update.)
The cbind facility provides tools for automatically generating Lisp code for calling foreign functions using information obtained by scanning C header files. This facility is only available for Solaris and Windows. Look at cbind-intro.htm. That file contains pointers to more documentation.
In this chapter, we discuss C or FORTRAN routines and the Lisp functions that call them. These often have the same names. In order to distinguish them, names ending with () are foreign routines and names without () are Lisp functions. Thus we might say:
The foreign function bar() is loaded into Lisp. We use def-foreign-call to define the Lisp function bar which calls bar().
The differences are not that significant since all platforms use some form of dynamic linking of shared objects. However, the internal mechanisms are different and the differences may sometimes be important. The type of loading is identified by a feature on the *features* list. The following table lists the relevant features:
The following appendices describe how to create files suitable for loading on various platforms.
The foreign-function interface in Lisp is in the package foreign-functions, nicknamed ff. Users must either use the qualifier ff: on these symbols or evaluate

(use-package :ff)

before using the interface.
The code for the foreign-function interface may not be contained in the basic Allegro CL image. It is loaded only when needed. Executing certain of the interface functions will cause the correct module to be loaded (def-foreign-call for instance), but we recommend that you ensure that the code is loaded by evaluating the following form before using the foreign-functions interface:
(require :foreign)
This will cause the foreign.fasl module to be loaded from the Lisp library. The form should be included in any source file using functions in the interface. (It is not an error to call require when a module is already loaded.)
Note that the foreign-function interface was designed for the C and Fortran compilers on the system at the time of the release of this version of Allegro CL. New versions of the C or Fortran compilers from the hardware manufacturers may, for purposes of using the foreign-function interface, be incompatible with the version current when the interface was written. In that case, it is possible that already written and compiled Lisp code may cease to work, and that, for a time, the interface may fail altogether. We will maintain the foreign-function interface, and make it compatible with each new release of the system compilers. We cannot guarantee, however, that already compiled code will continue to work in the presence of changes in the C or Fortran compilers.
Foreign code in a .so/.sl/.dylib/.dll file is loaded into Lisp with load (or the top-level command :ld). load is, of course, also used to load Lisp source and compiled (fasl) files. Since load may just be presented with a filename as a single argument, it should be able to determine based on that argument alone whether Lisp code or a foreign library is being loaded. Sometimes other arguments are provided. In that case, those arguments may indicate the type of file being loaded.
As described below, load will
consider the file type to determine whether the file is a foreign
file. load also has a
non-standard keyword argument foreign which, when
true, tells the system the file is a foreign file. (When
foreign is
nil, the type
determines whether the file will be treated as a foreign file or not.)
See Using the load
function in loading.htm for details of
the Allegro CL implementation to load.
Note that the Operating System must know where to find foreign files, either those needed to resolve externals in a file or the file itself if it is specified to load with no directory information.
The specifics on the various Operating Systems differ, but the principle is usually the same: some single environment variable or group of variables is set by the user to tell the OS where to look for library files. On most UNIX and UNIX-like (i.e. LINUX and Mac OS X) operating systems the variable is LD_LIBRARY_PATH. On HP-UX, it is SHLIB_PATH. Note that once Lisp has started, changes to the value of environment variables will likely not be seen by the running Lisp.
On Windows, the following locations are searched:
An additional keyword argument is provided to load, unreferenced-library-names. If specified, it should be a list of entry points (strings). The system will check if these entry points exist and signal an error if they do not.
Here is how load works.
If the argument to load is a stream, load from it (as a Lisp or fasl file) and return. (You should not open a stream to a foreign library or shared object file and pass that stream to load.)
If the foreign keyword argument is nil or unspecified, then the file type is considered. If the searched pathname has a type listed in *load-foreign-types* (the test is case insensitive on Windows), foreign load processing is done, as described below.
If the searched pathname has a type not listed in *load-foreign-types*, the file is assumed to be a Lisp source or compiled file and it is loaded as a Lisp file.
Foreign file processing behavior can differ slightly, depending on your version of Lisp.
Versions of Lisp with
:dlfcn on
*features* (e.g., SunOS 5.x and later):
If :unreferenced-lib-names was given, then make sure all entry points are defined and return.
If the search for the file fails, then signal an error, since dlopen() will not find the pathname.
Otherwise, call dlopen() to map the searched pathname into the address space and return.
Versions of Lisp with
:dlwin on
*features* (e.g. Windows):
If the search for the file fails, then signal an error, since GetModuleHandle() will not find the pathname.
Otherwise, call GetModuleHandle() to map the searched pathname into the address space and return.
Versions of Lisp with
:dlhp on
*features* (e.g., HP-UX 11.0):
If :unreferenced-lib-names was given, then make sure all entry points are defined and return.
If the search for the file fails and there is a directory or host component to this pathname, then signal an error, since shl_load() will not find the pathname.
Otherwise, call shl_load() to map the searched pathname into the address space and return.
Versions of Lisp with
:dlmac on
*features* (e.g., Mac OS X machines):
If :unreferenced-lib-names was given, then make sure all entry points are defined and return.
If the search for the file fails, then signal an error, since NSLoadModule will not find the pathname.
Otherwise, call NSLoadModule to map the searched pathname into the address space and return.
Suppose one library file contains a reference to a function defined in another file. For example, suppose we are on a Solaris machine and t_double.so contains a call to foo() which is defined in foo.so. When we try to load t_double.so, we get an unsatisfied external error (on Solaris; see below for behavior on other platforms).
Below, we point out that loading foo.so (before or after the attempt to load t_double.so) will not resolve the unsatisfied external. The point, which we emphasize here, is any unsatisfied external must be resolved when the so/sl/.dylib/dll file is created (e.g. by using -l[libname] arguments passed to ld on Solaris). Even though the error is signaled when Lisp attempts to load the .so file, the problem cannot be resolved within Lisp. A new .so file, created with the correct arguments passed to ld, must be created and that new .so file must be loaded into Lisp.
This problem is not as bad as it might be. Many standard shared libraries are linked automatically and calls to routines in them will be resolved when the .so file is loaded into Lisp. The important point, to state it again, is if you do get an unsatisfied external error, you must recreate the .so file and then load the modified .so file into Lisp. The example below shows how a .so file might be recreated.
Here is the behavior in this case on various platforms. On the following platforms, the load will fail with an error message that lists an unresolved external symbol:
On the following platforms, the load will fail without an adequate explanation. Unresolved externals is always a possibility for failure.
Each file must be complete in itself: all necessary routines must either be defined in the file or must be in a library specified when the file is created. On Windows, Dec Unix, and AIX, by default, the linker prevents building a shared library with unresolved external symbols.
Consider the example on a Solaris machine: t_double() (defined in t_double.so) calls foo() (defined in foo.so). Even if you have already loaded foo.so, trying to load t_double.so will fail, as follows:
USER(23): :ld foo.so
; Foreign loading /net/rubix/usr/tech/dm/acl/foo.so.
Restart actions (select using :continue):
 0: retry the load of t_double.so
[1] USER(25):
There are two solutions to this problem. You can combine all necessary files (t_double.so and foo.so in our case) into a single file. On Solaris, this could be done with the following command starting with the .o files used to make the .so files:
% ld -G -o combine.so foo.o t_double.o
Alternatively, you can specify the file holding the needed routines (foo.so in our example) as a library for the file needing them (t_double.so in our example). On Solaris again, a command like:
% ld -G -o t_double.so t_double.o -R /net/rubix/usr/dm/acl -lfoo
Suppose you load a library file which defines bar() and then use def-foreign-call to define bar, which calls bar(). Then you load another .so file which defines bar(). What happens?
Lisp will automatically modify bar so that it calls the bar() defined in the newly loaded file (and it prints a warning that it is doing so).
Let us make this clear with an example on a Solaris machine. Consider the two files bar1.c and bar2.c:
/* bar1.c */
void bar()
{
    printf("This is BAR 11111 in bar1.c!\n");
    fflush(stdout);
}

/* bar2.c */
void bar()
{
    printf("This is BAR 22222 in bar2.c!\n");
    fflush(stdout);
}
bar1.so defines bar() and bar2.so also defines bar(). We first load bar1.so and use def-foreign-call to define bar, and then load bar2.so.
USER(37): :ld bar1.so
; Foreign loading /net/rubix/usr/tech/dm/acl/bar1.so.
USER(38): (ff:def-foreign-call bar nil :returning :void :strings-convert nil)
BAR
USER(39): (bar)
This is BAR 11111 in bar1.c!
NIL
USER(40): :ld bar2.so
; Foreign loading /net/rubix/usr/tech/dm/acl/bar2.so.
Warning: definition of "bar" moved from /net/rubix/usr/tech/dm/acl/bar1.so
         to /net/rubix/usr/tech/dm/acl/bar2.so.
USER(41): (bar)
This is BAR 22222 in bar2.c!
NIL
USER(43):
Since one library file cannot depend on another unless that dependency was specified when it was built, there is never a problem with duplicate entry points. It is important, however, to be sure that you know which foreign routine is being called.
Failure to do so may cause Lisp to fail unrecoverably.
Suppose on Solaris you have bar() defined in bar1.so and you load bar1.so into Lisp. You also define the function bar with def-foreign-call. Then, you decide to modify bar1.c and bar1.so. Once the new bar1.so has been created, you must load it into Lisp before calling bar. Otherwise, Lisp may fail.
The problem is that the function bar knows it should look for the definition of bar() in bar1.so and it knows where to look in that file. If bar1.so is modified in any way, the place where Lisp will look for the definition of bar() is likely wrong. Lisp, however, does not know that and will look at that location anyway, taking whatever it finds to be valid code. The result can be disastrous, since typically Lisp hangs unrecoverably. In that case, Lisp may have to be killed from a shell and restarted.
In earlier versions of Allegro CL on Unix, it was possible and relatively easy to build an executable image that included foreign code. Because of changes in the way images are built and the fact that the executable and image files are different (which is a new feature on Unix platforms), it is no longer easy to build foreign code into an image. The only way is to write a main() which does the necessary linking. See main.htm.
There are two models of multiprocessing used by Allegro CL (as
described in multiprocessing.htm): the
:os-threads model and the non
:os-threads model. (:os-threads
appears on the
*features* list of implementations that use
it).
In the non :os-threads model, foreign code is not interruptible and Lisp code never runs until the foreign code returns control to Lisp, typically by completing but sometimes by calling back to Lisp. In the :os-threads model, Lisp code in one thread may run while foreign code in another thread is waiting or has been interrupted. This means that certain implicit assumptions of foreign code may no longer hold. Among them is the assumption that Lisp objects do not move while foreign code runs: if another thread triggers a garbage collection, a Lisp object whose address was passed to the foreign code may move, making the pointer in the foreign code invalid. We recommend storing all values used by both Lisp and foreign code in foreign (not garbage-collected) space or dynamically allocated on the stack. It is also possible to block Lisp code from running until foreign code completes.
See the release-heap keyword argument to def-foreign-call. It prevents or permits other threads from running along with the foreign code.
Note that in all implementations of Allegro CL it is possible for a foreign function called from Lisp to explicitly start computation in additional threads, as supported by the OS (by having foreign code make the appropriate system calls to start the threads). One could imagine any of these new threads using the foreign function interface to invoke a Lisp function defined as foreign-callable (see defun-foreign-callable). In an Allegro CL with the non :os-threads model of multiprocessing, doing this is almost certain to have disastrous consequences. It is wholly unsupported, but there is no protection in the foreign function interface to prevent it from happening. The only legitimate calls to a foreign-callable function will occur in the Lisp's own thread of control, as call-backs from foreign code that was itself called from Lisp.
When creating foreign code to load dynamically into Lisp, it is sometimes necessary to refer to variables within Allegro CL's runtime such as nilval or UnboundValue, or else to call a function directly from C such as lisp_call_address(). In cases like these it is necessary to link in the Allegro CL shared-library (either .dll, .so, or .sl, depending on the architecture) with the shared-library that is being built. The actual name of the Allegro CL shared-library is available using the function get-shared-library-name and can be added to the link line to resolve symbols within it that are referenced in the foreign code. The situation on Mac OS X is different. See Section 1.9.1 Linking to Allegro CL shared library on Mac OS X.
It is also possible to delay linking the library until runtime. The advantage is that the library location can easily be determined at runtime, while at shared-library creation time finding the Allegro CL shared-library in a robust fashion (without worrying about LD_LIBRARY_PATH or equivalent and without hardwiring the location) can be difficult. See Section 1.9.2 Delaying linking the Allegro CL shared-library until runtime for details.
For example, on a Sparc,
% cat foo.c
#include "lisp.h"

LispVal
get_nil()
{
    return nilval;
}
% cc -c -K pic -DAcl32Bit -I[ipath] foo.c
[ipath] is the location of lisp.h (usually
[Allegro directory]/misc/). Note that on platforms that
support 64 bit Lisps (HP's and HP Alphas, Sparcs, Mac OS X, etc.), the
flag
-DAcl64Bit must be used instead of
-DAcl32Bit when compiling for the 64-bit Lisp.
Now, before linking, we need to find out where the Allegro CL shared-library is, so in the Lisp (for example, your own name and path might be different):
user(4): (get-shared-library-name)
"libacl80.so"
user(5): (translate-logical-pathname "sys:")
#p"/usr/acl80/"
user(6):
Then back in the shell and build the shared library (this example is from a Solaris machine):
ld -G -o foo.so foo.o /usr/acl80/libacl80.so
Note that other libraries, including system libraries, might need to be linked in order to complete the ld. Then, back in the Lisp, the load can be done:
user(6): :ld ./foo.so
; Foreign loading ./foo.so
user(7): (ff:def-foreign-call get_nil () :returning :lisp)
get_nil
user(8): (get_nil)
nil
user(9):
Allegro CL does the NSLoadModule of libacl8*.dylib (the Allegro CL shared library) with the options NSLINKMODULE_OPTION_BINDNOW | NSLINKMODULE_OPTION_RETURN_ON_ERROR. The first of these options is just like the RTLD_NOW option of dlopen, and the second causes a zero return-value from a failed call to NSLinkModule instead of calling some error handlers (whose default action would be to kill the program). Missing from this set of options is the NSLINKMODULE_OPTION_PRIVATE, the lack of which means that all symbols that are exported from the module are made global and thus accessible in the running program. Unfortunately this means that there can be no multiply defined externals in either the program or the files it loads.
The file /usr/lib/bundle1.o that gets linked into a bundle file has an entry point called "dyld_stub_binding_helper". Whenever any unresolved symbol is used in the loaded bundle file, this helper calls on the dynamic loader (dyld) to find the symbol in the running image. Thus the entry points are looked up in a truly dynamic manner, and linking against the Allegro CL library is unnecessary.
But it is also impossible to link against the Allegro CL shared-library, because being a bundle type object, it is self-contained, and the operating system doesn't allow relinking against this kind of shared library.
Thus the shared-libraries that can be dynamically loaded into a program can be thought of as separate entities which communicate via the dynamic loader.
When your foreign function needs to call into Lisp (for example, to call lisp_call_address or lisp_value), it may be complicated to link the shared-library built from your foreign code against the Allegro CL shared-library (aclxxx.dll or libaclxx.{sl,so,dylib}) when that shared-library is created. It is possible instead to pass the Allegro CL shared-library handle to the other shared-library so that it does not have to explicitly link to entry-points within the Allegro CL shared-library. As a result, you need not specify the Allegro CL shared-library location when your shared-library is created.
The runtime information is provided by the functions get-shared-library-handle and get-executable-handle.
Consider the following example, from a Solaris. A shared-library is
created to call lisp_call_address without referring to that
function in the link phase (lisp_call_address' address is
looked up at runtime). Instead, information about the location is
determined at runtime and passed to the foreign functions using the
init_mylib function (see the def-foreign-call form in the comment within the C code below).
#ifdef HP_CC
extern "C" {
#if !defined(Acl64Bit)
#define hp32 1
#endif
#endif

#ifdef WINDOWS
# define Dllexport _declspec(dllexport)
#else
# define Dllexport
#endif

#define CALLNAME "lisp_call_address"

#ifdef WINDOWS
void *
find_ff_symbol(void* handle, char *name)
{
    return GetProcAddress(handle, name);
}
#endif

#if defined(__APPLE__) && defined(__ppc__)
#include <mach-o/dyld.h>
#undef CALLNAME
#define CALLNAME "_lisp_call_address"
void *
find_ff_symbol(void *handle, char *name)
{
    NSSymbol sym;
    sym = NSLookupSymbolInModule(handle, name);
    if (sym != NULL) {
        return NSAddressOfSymbol(sym);
    }
    return NULL;
}
#endif

#if defined(hp32)
#include <dl.h>
void *
find_ff_symbol(void *handle, char *symbol)
{
    int value;
    shl_t handlecopy = (shl_t) handle;
    if (shl_findsym(&handlecopy, symbol, 0, &value) == 0) {
        return (void *)value;
    } else {
        return (void *)0;
    }
}
#endif

#if !defined(WINDOWS) && !defined(__APPLE__) && !defined(hp32)
#include <dlfcn.h>
void *
find_ff_symbol(void *handle, char *name)
{
    return dlsym(handle, name);
}
#endif

void *(*get_lisp_call_address)(int);

int Dllexport
init_mylib(void *handle)
{
    get_lisp_call_address =
        (void *(*)(int))find_ff_symbol(handle, CALLNAME);
    if (get_lisp_call_address) {
        return 1;
    }
    return 0;
}

/*
 * In lisp, initialize by defining and calling init_mylib as a foreign
 * function:
 *
 *   (ff:def-foreign-call init_mylib ((handle :foreign-address))
 *      :returning :int)
 *   (init_mylib (excl:get-shared-library-handle))
 *
 * After init, usage of get_lisp_call_address() in inline C code is
 * the same as lisp_call_address():
 *
 *   ...
 *   int (*lispfunc)(...) = (int(*)())get_lisp_call_address(index);
 *   lispfunc(...);
 *   ...
 *
 * where ... denotes arguments and their types in proper C convention.
 */

/* body of mylib code goes here */

#ifdef HP_CC
};
#endif
When you are running multiprocessing on a platform using the
:os-threads model (see
multiprocessing.htm), you have to worry about
foreign functions in different threads modifying values in the Lisp
heap in a thread unsafe way. (This is not a problem when running on a
platform that does not use the
:os-threads model
because no lightweight Lisp process will run while foreign code is
being executed.)
Therefore, on platforms using the
:os-threads
model, in order for any thread in a process to execute lisp code, it
must have exclusive control of the data and control structures that
define the lisp execution environment. These resources are
collectively called "the Heap" and access to them is controlled by OS
synchronization primitives. Initially, the single thread that
initializes the lisp environment has possession of the Heap. If
multiple threads are running lisp code, Allegro arranges that they
will each have access to the heap at appropriate times. A thread that
runs lisp code for a long time may be preempted at various points so
that control of the heap can be given to another thread. A thread that
makes a call to a foreign function has the option of "releasing the
heap" for the duration of the call. This allows another thread to take
control of the heap while the first thread performs an action that may
require significantly more real time than cpu time. A foreign call
that is cpu-bound, however, would be better off not releasing the
heap, especially if the call took a small amount of processor time
and/or occurred frequently. See the discussion of the
release-heap argument to def-foreign-call for information on
how to indicate the heap can be released in a foreign call.
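As an illustration (all names here are hypothetical, not part of the Allegro CL runtime), a foreign function whose time is spent blocked rather than computing is the natural candidate for releasing the heap:

```c
#include <unistd.h>

/* Hypothetical foreign function: it blocks (simulated here with
 * sleep), consuming real time but almost no cpu time.  While it
 * waits, other Lisp threads could usefully run, so on the Lisp side
 * it might be declared with :release-heap :always, e.g.
 *
 *   (ff:def-foreign-call slow_poll ((seconds :unsigned-int))
 *      :returning :int :release-heap :always)
 */
int slow_poll(unsigned int seconds)
{
    sleep(seconds);   /* stands in for a blocking system call */
    return 0;         /* 0 indicates success in this sketch */
}
```

A cpu-bound function, by contrast, would typically be declared with :release-heap :never, since releasing and reacquiring the heap adds overhead without letting other threads make better use of the processor.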
All symbols exported in version 4.3 are deprecated and preserved only for compatibility. There is a compatibility module for 4.3.x style foreign functions code. The module is called ffcompat and can be loaded with
(require :ffcompat)
The remainder of this document describes the foreign function interface in Allegro CL. Except as noted, the interface is the same on all platforms.
The description of foreign types is in the document ftype.htm.
The following table shows the functions used in the foreign function interface. The Notes in the following table provide brief and by no means complete descriptions of the object being described. Please follow the link to the description page for the complete description including warnings and subtleties.
Locations of foreign objects (usually objects stored outside the Lisp) are typically specified by addresses. A true address is a machine integer. Lisp integers are not machine integers (because certain bits have special meaning in fixnums and bignums have a header), but a true machine integer can be extracted easily enough so long as the system knows to do it. Foreign functions defined to Lisp with def-foreign-call may take foreign type objects or foreign-addresses (integers, vectors, or foreign-pointer objects) as arguments.
A foreign type is either one of the built-in types (like
:int) or one of a system of foreign structures
defined with def-foreign-type.
A foreign-pointer is an instance of the class
foreign-pointer (see
also make-foreign-pointer) that has a
special slot (with accessor foreign-pointer-address) intended
to point to raw data.
A foreign address is either a Lisp integer, a Lisp vector, or a foreign-pointer instance.
Note that foreign-types and foreign-pointers have nothing particular to do with each other. (A foreign-pointer is really a way to identify an integer as a foreign pointer -- rather than any old value -- to allow better argument and type checking.)
The second required argument to def-foreign-call is the argument
list for the foreign function. The elements of that list are lists of
argument names (symbols) followed by the foreign type to which it
should be converted (and perhaps followed by additional values). A
symbol alone is the same as the list
(symbol
:int).
A Lisp value passed as an argument to a foreign function will be converted appropriately according to the specification in the argument list.
Thus, when a foreign type is specified as an argument to a foreign call, the value
passed will be converted from Lisp to C by specific rules. For example, a
:int type expects a Lisp integer and converts it to a machine
integer. A
:struct is adjusted to point to its first
element, etc.
When the argument is specified
:foreign-address,
the system will accept as a value (1) a Lisp integer (converted to a
machine-integer), (2) a Lisp vector (adjusted to point to its first
element), or (3) a foreign-pointer (converted by extracting the
foreign-pointer-address from the
object).
The macro def-foreign-call associates a foreign function with a Lisp symbol, allowing it to be called as a Lisp function would be called.
The problem in making such an association is that Lisp types and foreign types are usually different, and so as part of the definition of a foreign function, the system must be told how Lisp objects passed as arguments should be converted to foreign types and how a returned value should be converted to a Lisp type. Further information can be provided as well: should the call be a candidate for inlining? When the multiprocessing model is :os-threads, should other processes be able to modify the Lisp heap while the foreign function is running? And so on.
The definition of def-foreign-call is on its own description page, found by following the link. We discuss the syntax of a call next and then make some other comments and provide examples.
def-foreign-call name-and-options arglist &key kwopt ... MACRO
This macro is used to describe a function written in a foreign language (usually C). The action of this macro is to define a Lisp function which when called will pass control to the foreign function, making appropriate data transformations on arguments to and the return value from the foreign function.
name-and-options -> name-symbol ;; default conversion to external name
                 -> (lisp-name-symbol external-name)
external-name -> [convert-function] [external-name-string]
                 ;; default convert function is convert-foreign-name
arglist -> ()       ;; Implies default argument processing
                    ;; (like old defforeign :arguments t spec)
        -> (:void)  ;; Explicitly looking for no arguments
        -> (arg ...)
arg -> name
    -> (name complex-type-spec)
       ;; If name is nil, a dummy name will be supplied.
    -> ...
       ;; ... (an ellipsis) must be the string "..." or a symbol in
       ;; any package (e.g. user::...).  If an ellipsis is used, it
       ;; represents any number of arguments, untyped, regardless of
       ;; what comes before it.  If ... is used it must only be used
       ;; once, as the last argument specification.  Any arguments
       ;; prior to the ellipsis are treated as they are without the
       ;; ellipsis, and any arguments from the ellipsis onward are
       ;; treated as if the def-foreign-call had been specified with
       ;; no prototyping; i.e. the number and type of the arguments
       ;; from the ellipsis and beyond are ignored.
type-spec -> foreign-type
          -> (complex-type-spec)
complex-type-spec -> foreign-type [lisp-type [user-conversion]]
user-conversion -> symbol ;; Not yet documented
lisp-type -> any Lisp type
             ;; This constrains the runtime arg to this Lisp type.
             ;; When trusting declarations we assume the Lisp type
             ;; and optimize accordingly.  If declarations are
             ;; appropriate the compiler may generate checks to
             ;; validate the lisp type.
kwopt -> :RETURNING type-spec
         ;; Note that if the foreign type mentioned in the :RETURNING
         ;; clause is expressed as a list (i.e. begins with a
         ;; parenthesis) then the second form of type-spec --
         ;; (complex-type-spec) -- must be used.  Otherwise, the
         ;; first component of the foreign type is treated as the
         ;; first component of a complex-type-spec and a spurious
         ;; error is signaled.  For example, if the type is
         ;; (* Foreigntype), that is a pointer to a struct of type
         ;; Foreigntype, then the correct specification is
         ;; `:returning ((* Foreigntype))'
      -> :CONVENTION { :C | :STDCALL | :FORTRAN }
         ;; On NT, C (cdecl) and stdcall are now equivalent.
         ;; :FASTCALL is not supported.
      -> :ARG-CHECKING { NIL | T }
      -> :CALL-DIRECT { NIL | T }
      -> :METHOD-INDEX index
         ;; Ordinal index of C++ member method; vtbl is first argument.
      -> :CALLBACK { NIL | T } ;; Callback currently forced to T.
      -> :RELEASE-HEAP { :NEVER | :ALWAYS | :WHEN-OK }
      -> :PASS-STRUCTS-BY-VALUE { NIL | T }
         ;; If nil, structures are passed by reference even if they
         ;; are not so specified with a *.
      -> :ERROR-VALUE { NIL | :ERRNO | :OS-SPECIFIC }
         ;; When non-NIL, return error value as second return value
         ;; from foreign call.
The
:returning keyword argument specifies what the
foreign function will return when it completes. This value, suitably
modified to a Lisp value, will be returned by the Lisp function
associated with the foreign function.
The value specified for :returning may be one of the following: (1) a foreign type (defined by def-foreign-type). (2) a list of
a foreign type and a Lisp type (and an optional third element which is
not used but may be in a later release); example:
(:double
single-float). (3)
((* :char)), or
((* :char)
string
), etc.; this causes
native-to-string to be
called automatically after the return. (4) the value
:lisp, meaning a Lisp object will be returned and
no conversion should be done. This is a dangerous option since if the
returned value is not actually a Lisp object, a gc failure may
occur. (5) The value
:foreign-address, which will
be interpreted as an unsigned integer and converted to a positive Lisp
integer. (6) The value
:void, meaning nothing will
be returned and the Lisp function should return
nil. (There are some additional deprecated options:
see the description of def-foreign-call.)
The default for
:returning is the foreign type
:int, which is a value of type (1) in the list
above. The integer value is converted upon return to Lisp type integer
(a fixnum with a possible overflow to a bignum). This default value is
not appropriate when the foreign function is returning a long, an
unsigned long, or a pointer of some sort. However, the fact that it is
not appropriate is masked on 32-bit architectures by the fact that, as
it happens, an
:int is effectively equivalent to
those values.
On 64-bit architectures, however, this is not true so
:int cannot serve for a long, an unsigned long, or
a pointer of some sort. Therefore, when returning a pointer, the type
should be
:unsigned-long and when returning another
integer value, the type should be
:int or
:long or
:unsigned-long, as
appropriate.
Here are some examples of def-foreign-call usage:
(def-foreign-call add2 (x y))
Call a function, probably named "add2" in C. With no types specified, the arguments and the return value default to :int.
(def-foreign-call t_double ((x :double) (y :double single-float) (z :int fixnum)) :returning :double)
Call a function, probably named "t_double" in C. The first argument may be any Lisp real (it is converted to a C double), the second must be a Lisp single-float, and the third must be a fixnum; the C double result is returned as a Lisp double-float. When the default name conversion is not what you want, you can supply a conversion function that returns the correct string (so (dash-to-underscore 't-float) returns "t_float") or you can simply specify the correct string. Note that any such conversion function, like dash-to-underscore here, must already be defined when the def-foreign-call form is evaluated.
(def-foreign-call c_array ((str (* (* :char)) (simple-array simple-string (*))) (n :int fixnum)) :returning :char)
Call a function whose C name is probably c_array, whose "str" argument is an array of strings, properly converted (by copying from the Lisp parts). The second arg "n" is a Lisp fixnum shifted to make a C int. The C function returns a char which is made into a Lisp character.
The macro def-foreign-variable associates a foreign variable with a Lisp symbol, allowing the variable to be examined and set from Lisp.
def-foreign-variable name-and-options &key kwopt ... MACRO
This macro is used to describe a variable defined in a foreign language (usually C). It defines a Lisp symbol whose value tracks the foreign variable: evaluating the symbol returns the variable's current value, and setq of the symbol sets the variable.
name-and-options -> name-symbol ;; default conversion to external name
                 -> (lisp-name-symbol external-name)
external-name -> [convert-function] [external-name-string]
kwopt -> :type access-type
      -> :CONVENTION { :C | :FORTRAN }
Here are some examples of def-foreign-variable usage:
user(1): (ff:def-foreign-variable sigblockdebug)
sigblockdebug
user(2): sigblockdebug
0
user(3): (setq sigblockdebug 1)
1
user(4): (ff:def-foreign-variable (print-lso-relocation "print_lso_relocation"))
print-lso-relocation
user(5): print-lso-relocation
0
user(6): (setq print-lso-relocation 1)
1
user(7):
In these examples, variables from the Allegro CL internal runtime are set up to allow Lisp to access the variables in a lispy way. Since these variables are not documented, it would take a C debugger to examine and verify that the values of the C variables had indeed been changed.
WARNING: Do not attempt to use this macro on the C "errno" variable, nor on any other variable that might be bound on a per-thread basis; doing so might cause (incorrect) information to be returned for the wrong thread.
Arguments to function calls can be passed in two ways, by value and by address. When an argument is passed by value, a copy of the value is placed somewhere (typically on the stack) where the function can access it. When an argument is passed by address, a pointer to its actual location is given to the function. Arguments in C are usually (but not always) passed by value, while arguments in Fortran are normally passed by address.
A function that receives an argument called by value can modify that value. There are two problems with such modifications:
We deal with each situation in turn. The solution is the same in both cases: pass the address of an array. When an array is passed, modification of its contents by the foreign code is expected behavior and is generally what is intended and desired.
Users therefore should be warned that in many cases (arrays being the main exception) when Lisp code calls a foreign function that modifies one of the arguments passed by address, the Lisp value of that argument will be unchanged. The reason is that Lisp represents objects differently from C or Fortran and so often cannot pass the actual address of the Lisp object to the foreign code, since the foreign code would not correctly interpret the value pointed to. Instead, Lisp makes a copy of the Lisp object, changing the representation appropriately, and passes the address of the copy. Although this copied value is modified by the foreign code, Lisp ignores the copied value after the function returns, looking only at the unmodified Lisp object.
To repeat the above warning: Fortran functions do not always affect the value of a Lisp object when this is passed to a Fortran function. The following example illustrates this behavior. Say we have the Fortran file fnames.f containing the function itimes2():
      function itimes2(x)
      integer x
      x = 2*x
      itimes2 = x
      return
      end
This function appears to double the (C or Fortran) integer value stored in the location specified by x. We compile and load the file into Lisp and then run the function. Here are the results:
USER(21): (load "fnames.so")
t
USER(22): (ff:def-foreign-call itimes2 ((x :int fixnum))
            :convention :fortran :returning :void)
;; send in a fixnum, but pass by address
ITIMES2
USER(23): (setq x 19)
19
USER(24): (itimes2 x)
38        ;; gives 38 as expected
USER(25): x
19        ;; but x is unchanged.
The problem is that a Lisp fixnum is not the same as a Fortran (or C) integer, and thus the foreign-function interface must convert it. It copies the Lisp fixnum, converts it to a Fortran integer, and then passes the address of the converted copy to the Fortran function. When passing a fixnum array, though, there is no longer an automatic conversion of the array element-type. Instead of fixnum, an element-type with representation compatible with fortran should be used. The expected Fortran behavior can be achieved by passing an array as follows
USER(30): (ff:def-foreign-call itimes2
              ((x :int (simple-array #-64bit '(signed-byte 32)
                                     #+64bit '(signed-byte 64) (1))))
            :convention :fortran :returning :void)
ITIMES2
USER(31): (setq x (make-array 1
                    :element-type #-64bit '(signed-byte 32)
                                  #+64bit '(signed-byte 64)
                    :initial-element 19))
#(19)
USER(32): (itimes2 x)
38        ;; gives 38 as expected
USER(33): x  ;; as does x
#(38)
Modification by address can again cause trouble when, for example, you pass a floating-point value to a FORTRAN routine that modifies the value. Consider the following FORTRAN subroutine:
      subroutine dtest(x)
      double precision x
      x = x + 1.0d0
      return
      end
It increases the value of its argument by 1.0d0. We compile this function, load it into Lisp, and define it as a foreign function with def-foreign-call:
(ff:def-foreign-call dtest ((x :double double-float)) :convention :fortran :returning :void)
Now we run it, though the behavior you see may be different from what is reported here. In some cases the constant value is changed (as shown). In other cases, Lisp simply hangs unrecoverably. In any case, things are too broken to continue (computation which believes 0.0d0 equals 1.0d0 is unlikely to be useful).
USER(3): (dtest 0.0d0)
;; Lisp may hang at this point rather than
;; returning control to the Listener.
nil
USER(4): 0.0d0
1.0d0
Whoa! What happened to 0.0d0? Well, here is what happened. 0.0d0 is a Lisp object of type double-float. As such, it has a type code and a pointer to its actual value, 0.0d0. When dtest is called with 0.0d0 as an argument, the location of the value is passed. FORTRAN then modifies that location. The trouble is that Lisp still thinks that location contains 0.0d0 and so when Lisp reads 0.0d0, it gets the value in the location and finds 1.0d0 instead.
In fact, Lisp reuses floating-point values whenever it can. Since there is no way within Lisp to modify a value, there should be no problem with using the same value over and over and doing so will save time and space.
It could be argued that Lisp should protect itself by making a copy of the floating-point value and passing that to FORTRAN. However, that would put a cost on many foreign calls where protection was not necessary. Instead, there is a simple workaround for users who wish to call by address code that modifies floats: specify the float as a length 1 vector and pass the vector:
(ff:def-foreign-call dtest ((x (:array :double)))
  :convention :fortran :returning :void)

USER(8): (setq x1 (make-array 1 :element-type 'double-float
                                :initial-element 0.0d0))
#(0.0d0)
USER(9): (dtest x1)
nil
USER(10): x1
#(1.0d0)
USER(11): 0.0d0
0.0d0
Fortran cannot distinguish between the address of the start of an array and the address of a single value, so passing the address of the double-float array x1 is correct and works as we want.
Another difficulty arising out of differing Lisp and non-Lisp representations of values is illustrated by the example itimes2() above. The argument passed to the foreign function was a fixnum, not an integer. Integers can be bignums or fixnums. C or Fortran integers may be larger than all possible Lisp fixnums and smaller than most but not all bignums. If a fixnum is passed to foreign code, it is always correctly represented, but a bignum can be represented only if it is small enough. The foreign-function interface will truncate any bignum that does not fit into the foreign integer representation without warning. Users can avoid this by not using integer as a value for arglist, the second required argument to def-foreign-call, and thus not passing bignums, except when the argument value was generated by foreign code. The return value from foreign code (the value of the returning keyword argument) defaults to type :int, and since some foreign integers are too big to be fixnums, they may be bignums. But, since they came from foreign code, they will be correctly represented as foreign integers when passed back to foreign code. Only in the case where you are sure the value can be represented as a machine integer do we recommend integer as a value for arglist (more precisely, as an element in the list which is the value).
An example illustrates the use of arrays. Say there is a compiled C shared object file myreverse.so:
int myreverse(n, x)
    double *x;   /* pointer to array of doubles */
    int n;       /* array length */
{
    int i;
    double d;
    for (i = 0; i < n/2; i++) {
        d = x[i];
        x[i] = x[n-1-i];
        x[n-1-i] = d;
    }
    return n;
}
In Lisp you might define (after loading myreverse.so) this function as follows:
USER(40): (ff:def-foreign-call myreverse
              ((i :int fixnum) (x (:array :double))))
MYREVERSE
USER(41): (setq x (make-array 3 :element-type 'double-float
                    :initial-contents '(1.0d0 2.0d0 3.0d0)))
#(1.0d0 2.0d0 3.0d0)
USER(42): (myreverse (length x) x)
3
USER(43): x
#(3.0d0 2.0d0 1.0d0)
Allegro Common Lisp provides a few functions to help convert strings from Lisp to C and back again. In this section, we discuss the issues and provide some examples.
Starting in release 6.0, specifying ((* :char)), or ((* :char) string), etc. as the value of the returning argument to def-foreign-call causes native-to-string to be called automatically after the return. This is a safe change as far as compatibility goes, because pre-6.0 versions would error on such a specification. The alternative specification described here still works, and should be used if any external-format other than :default is desired. Thus in the example just below, you can specify '((* :char)) instead of :int as the value of :returning and you will see the string rather than the address of the string returned. Of course, you then cannot apply (and do not need to apply) foreign-strlen to the result.
Note that `C strings' are conceptual only. A `C string' is really not a type, but a usage of a character pointer and its storage. A Lisp string is an actual type and can be distinguished from other pointers. There are other uses for a C character pointer besides strings, but strings are most often passed in interface functions. By default, the def-foreign-call interface expands a ((* :char)) to ((* :char) string), which causes a Lisp string to be either created (on return) or checked for (when passed as an argument). If such checking or creation is not desired (as would be the case where the Lisp value would be just an integer or an array of a different type), specify the actual type of the value of the argument, e.g.:
(ff:def-foreign-call foo ((x (* :char) integer))
  :returning ((* :char) integer))
;; or
(ff:def-foreign-call foo
  ((x (* :char) (simple-array (unsigned-byte 8) (*)))))
If you have an address of a C string, you can pass it to native-to-string and a Lisp string with the same contents will be returned. (You can specify a string to receive the C string as a keyword argument to native-to-string, as described in the full description of that function).
The function foreign-strlen takes an address of a C string and returns its length.
;; This example calls a foreign function, GETMESSAGE, that
;; returns a pointer to a string.
;; The C language file getmessage.c contains:

#include <stdio.h>

char *mess = "this is a test";

char *getmessage()
{
    return mess;
}

;; Compile the C language file and load it into Lisp.
;; GETMESSAGE returns an integer pointer to the string.
USER(12): (ff:def-foreign-call getmessage (:void) :returning :int)
GETMESSAGE
USER(13): (getmessage)
8790592
;; EXCL:NATIVE-TO-STRING converts the integer returned by getmessage
;; to a string.
USER(13): (excl:native-to-string (getmessage))
"this is a test"
;; Use FOREIGN-STRLEN to find the length
;; of the string returned by GETMESSAGE.
USER(14): (ff:foreign-strlen (getmessage))
14
With the def-foreign-call foreign function definer, one can specify string as one of the arguments. That will cause Lisp strings to automatically be converted to C style char* arrays at function call time.
To copy a Lisp string to a C-style char * string outside of def-foreign-call, use the function string-to-native.
Our example passes a string to C, which calculates its length and returns it.
;; UNIX example
;;
;; This example calls a foreign function, PUTMESSAGE, that
;; takes a pointer to a string as an argument. Note that this
;; is a simple example. PUTMESSAGE could be def-foreign-call'ed
;; to take a SIMPLE-ARRAY as an argument.
;;
;; The C language file put-message.c contains:

#include <stdio.h>

/* putmessage expects a pointer to a string as an argument. */
/* putmessage prints that string to stdout. */
putmessage(s)
    char *s;
{
    puts(s);
    fflush(stdout);
}

;; Compile the C language file and load into Lisp.
;; PUTMESSAGE takes a pointer to a string (an integer) as
;; an argument.
USER(20): (ff:def-foreign-call putmessage (integer) :returning :void)
PUTMESSAGE
;; Create a string in lisp.
USER(21): (setf lisp-message "This is a message from lisp")
"This is a message from lisp"
;; Run PUTMESSAGE with a lisp string as an argument.
USER(22): (putmessage (excl:string-to-native lisp-message))
This is a message from lisp
NIL
Here is the example for Windows:
;; Windows example
;;
;; Allegro CL makes available to C programs the function
;; aclprintf() which operates just like printf and the result
;; is printed to the Allegro Common Lisp Console.
;; To show the console window, right click on the
;; Lisp icon on the system tray and choose Show Console.

#|  --------- the file put-message.c

extern void aclprintf(char *, ...);

void _declspec(dllexport) putmessage(char *str)
{
    aclprintf("message from lisp: '%s'\n", str);
}
|#

;; Compile the C language file and load into Lisp.
;; PUTMESSAGE takes a pointer to a string as an argument.
user(2): (ff:def-foreign-call (putmessage "putmessage")
             ((str (* :char)))
           :returning :void)
putmessage
user(3): (putmessage "a lisp string")
nil
;; On the console window you see printed:
message from lisp: 'a lisp string'
You may want to pass strings from Lisp to C and from C to Lisp. Passing strings from Lisp to C is pretty easy. A Lisp string will be converted correctly. Passing an array of strings is more complex. Examples of both are shown in Passing strings from Lisp to C and Special case: passing an array of strings from Lisp to C.
Lisp will correctly convert Lisp strings when passing them to C and therefore passing a string from Lisp to C is quite easy. Consider the following example.
We define a C function stringl() which takes a string as an argument and returns its length.
;; C code for UNIX:

#include <stdio.h>
#include <string.h>

int stringl(char *s)
{
    return strlen(s);
}

;; C code for Windows:

#ifdef _WIN32
#define DllExport __declspec(dllexport)
#else
#define DllExport
#endif

#include <stdio.h>
#include <string.h>

DllExport int stringl(char *s)
{
    return strlen(s);
}
We compile this function and load the resulting .so/.sl/.dll/.dylib file into Lisp. We then call def-foreign-call as follows and then call the C function:
user(50): (ff:def-foreign-call stringl ((string (* :char)))
            :strings-convert t :returning :int)
stringl
user(51): (stringl "hello")
5
user(52):
Passing an array of strings from Lisp to C is somewhat more complex, as the next example shows. A common usage in C is typified by the following program fragment:
#define null 0

char *z[] = {"string1", "string2", null};
...
handle_strings(z);
...
handle_strings(argv)
    char **argv;
{
    while (*argv != null) {
        handler_for_string(*argv);
        argv = argv + 1;
    }
}
Similar usage is also common with the array size included:
char *z[] = {"string1", "string2", "string3"};
...
handle_strings(3, z);
...
handle_strings(argc, argv)
    char **argv;
    int argc;
{
    ...
}
The variable argv is an array with each element pointing to a C string in both cases. (Note, however, that in the first case a NULL pointer terminates the array.) One may like to call handle_strings() from Lisp (after doing a def-foreign-call) by something like the following:
(handle_strings (make-array 3 :initial-contents '("string1" "string2" 0)))
or perhaps
(handle_strings 3 (make-array 3 :initial-contents '("string1" "string2" "string3")))
depending on the definition of handle_strings() above. However, the foreign-function interface does not normally convert the individual elements of a Lisp array.
One can convert an array of Lisp strings to a foreign object acceptable as a C char** argument by using a function such as that below. Note that as written it does fresh allocations on each call, so a user may wish to tailor it as desired. In particular, a call to string-to-native returns a value which must be passed to aclfree in order to be reclaimed.
;; Take a lisp vector of lisp strings, and return an equivalent ;; foreign array of C strings. Useful for C functions expecting ;; 'char **' arguments. ;; (defun lisp-string-array-to-c-string-array (a) (let ((r (ff:allocate-fobject (list ':array '(* :char) (length a))))) (dotimes (i (length a)) (setf (ff:fslot-value-typed '(:array (* :char)) nil r i) (string-to-native (aref a i)))) r))
The array-of-strings (argv) argument to the foreign function can then be declared as follows:
(ff:def-foreign-call handle_strings ((argc :int) (argv (* (* :char)))))
The foreign function can then be called as follows:
(handle_strings 3 (lisp-string-array-to-c-string-array (make-array 3 :initial-contents '("one" "two" "three"))))
Note that before the current Allegro CL deftype facilities were available, the foreign-function definers def-foreign-call and its (now obsolete) predecessor defforeign were designed to handle arrays of strings specially by use of the argument type (simple-array simple-string (*)). This argument-type usage is no longer recommended, but it is documented here to describe backward compatibility.
While this is not implemented as a distinct data type in Allegro CL, the foreign-function interface will recognize this declaration and convert the array appropriately for C. This is a slow function call as the interface must allocate space to do the conversion. To get the desired behavior (e.g. for the second of the above two possibilities for handle_strings()) you should use:
(ff:def-foreign-call handle_strings ((integer :int) (string (:array (* :char)) (simple-array simple-string))))
Note that if you do not declare arguments - e.g. if you use:
(ff:def-foreign-call handle_strings nil)
The array will not be converted correctly on the call to handle_strings(). Note that this is not typical; the interface normally converts arguments according to their Lisp data type whether or not they are declared.
If you do make this declaration and pass in an arbitrary Lisp array, all bets are off. Only 0 and array elements of type simple-string are guaranteed to be correctly converted.
Structures can be passed and returned by value through the foreign-function interface. (This is a new feature in 8.2 which was not documented in the original 8.2 release.) Usually, programmers pass structures by reference, using * notation to indicate that what is being passed is a pointer to the struct. But in a few cases, such as with graphics objects in the Cocoa interface, structs are passed by value. This means that the slots of the structs are spread out over the argument list as if they were individual arguments, and then received in the called function without needing a change in syntax; the fields of the struct can be specified with dot-notation, e.g.:
typedef struct {
    float x;
    float y;
} Point;

Point *jog_by_ref(Point *pointp, float incx, float incy)
{
    pointp->x += incx;
    pointp->y += incy;
    return pointp;
}

Point jog_by_val(Point point, float incx, float incy)
{
    point.x += incx;
    point.y += incy;
    return point;
}
In the above example, jog_by_ref passes and returns a Point struct by reference, and jog_by_val passes and returns a Point struct by value. The major difference between the two is that when jog_by_ref returns, the point that was passed in has been modified, whereas the point passed to jog_by_val has not been changed.
Allowing passing structs by value changes previous behavior in a minor but incompatible fashion: previously, when def-foreign-call detected the passing of a structure, it automatically and silently converted it to pass-by-reference. Now it warns, though it still performs the conversion. The old behavior can be reinstated globally (using the variable ff:*pass-structs-by-value*) or on a per-call basis, using the pass-structs-by-value keyword argument to def-foreign-call.
Associated with passing structures by value are the variable ff:*pass-structs-by-value* and the pass-structs-by-value keyword argument to ff:def-foreign-call. Follow the links for more information. The new argument to ff:def-foreign-call is at the end of the table of arguments.
Here is an example:
;; The struct Point and the functions jog_by_val and
;; jog_by_ref are defined above. Here are some possible
;; definitions in Lisp:

(ff:def-foreign-type Point (:struct (x :float) (y :float)))

;; Legacy pass-by-reference (warning issued by default):
(ff:def-foreign-call jog_by_ref
    ((point Point) (incx :float) (incy :float))
  :returning Point)

;; Legacy pass-by-reference (warning explicitly suppressed):
(ff:def-foreign-call jog_by_ref
    ((point Point) (incx :float) (incy :float))
  :returning Point
  :pass-structs-by-value nil)

;; Proper pass-by-reference (no warning, no change needed):
(ff:def-foreign-call jog_by_ref
    ((point (* Point)) (incx :float) (incy :float))
  :returning ((* Point))
  :pass-structs-by-value nil)

;; New pass-by-value (no warning issued):
(ff:def-foreign-call jog_by_val
    ((point Point) (incx :float) (incy :float))
  :returning Point
  :pass-structs-by-value t)
Allegro CL's choice between the two conventions is thus controlled globally by the value of ff:*pass-structs-by-value* or, alternatively, by the value of the pass-structs-by-value argument to def-foreign-call.
The Lisp image catches all signals from the operating system. In particular, when an asynchronous interrupt (e.g. SIGINT on Unix) occurs, the signal handler in Lisp sets a flag and then returns. This flag is checked inside Lisp functions only when it is safe to do so. If you are executing in foreign code, any signals that are received will not be processed until some point after you return to Lisp. So if your C code gets into an infinite loop, you won't be able to interrupt it cleanly (you will be able to interrupt -- see startup.htm). This also implies that a foreign function is not interruptible by the Lisp scheduler in implementations that use the non :os-threads model of multiprocessing, see multiprocessing.htm.
If you need to be able to catch signals in foreign code, you must establish your own signal handlers when you enter the foreign code and restore the Lisp signal handlers before you return to Lisp. Perhaps the easiest way to do this is to `wrap' your foreign code in a function that takes care of these tasks. This wrapper function calls your real function, and it returns the value returned by the real function.
Here is an example of such a function, which catches interrupts (SIGINTs). This example uses the signal() function for simplicity. (Some versions of Unix supply other, more advanced, functions.)
#include <setjmp.h>

jmp_buf catch;
int (*old_handler)();

int wrapper(arg1, ..., argn)
    ...
{
    auto int return_value;
    extern int new_handler(), real_function();

    if ((return_value = setjmp(catch)) != 0)
        return return_value;
    old_handler = signal(SIGINT, new_handler);
    return_value = real_function(arg1, ..., argn);
    signal(SIGINT, old_handler);
    return return_value;
}

int new_handler()
{
    signal(SIGINT, old_handler);
    longjmp(catch, -1);
}
The wrapper function first calls setjmp() to establish a C stack-frame environment for a subsequent longjmp() to return to. (This is the C equivalent of Lisp catch and throw.) The setjmp() function returns zero when the catch is established; it returns the value of the second argument to longjmp() otherwise. If the setjmp() was the target of a longjmp() (from within the interrupt handler), we return the value returned by the longjmp() (here -1 to signal an abnormal return).
The wrapper then installs the new SIGINT handler, saving the address of the old one. Once the interrupt handler is established, the real function is called and we save the return value (here an integer). Next we restore the Lisp signal handler, then return the value returned by the real function to Lisp.
If an interrupt occurs while this C code is executing, new_handler() gains control. It restores the Lisp signal handler, and then jumps to the established catch: control returns to the point of the setjmp(), which this time returns -1. Foreign code execution is interrupted, and we cleanly return to Lisp.
Note that the Lisp callback functions lisp_value() and lisp_call_address() (described in Section 9.1 Accessing Lisp values from C: lisp_value() and Section 9.2 Calling Lisp functions from C: lisp_call_address() and lisp_call()) should never be called from a foreign signal handler. If your signal handler does call lisp_value() or lisp_call_address(), failures may occur should the Lisp need to garbage collect while in the signal handler. The reason for this failure is that the system stack may not have been set up to indicate a call from Lisp to C. Also, if the signal is delivered while the garbage collector is running, then the entire Lisp heap may be inconsistent and accesses to the Lisp heap may result in attempting to follow pointers to nonexistent data.
Input and output from foreign code may require special consideration to obtain reasonable behavior.
Because foreign output operations will be interspersed with Lisp output operations, it is necessary to flush output in foreign code before returning to Lisp if it is desirable to maintain synchronous behavior. For example, if a C function writes information to the standard output, it may not be displayed contemporaneously unless fflush() is used before returning to Lisp.
When performing input and output from Fortran, it may be necessary to set up the Fortran I/O environment before performing any operations. For 4.n BSD Unix systems, using the standard AT&T Fortran compiler or one of its derivatives, the following steps are necessary to perform I/O successfully.
Suppose you want to call the following simple subroutine from Allegro CL:
      subroutine fiotest
      write(6, '("this is some fortran output.")')
      call flush(6)
      return
      end
Because the above program contains an input/output statement, certain subroutines from the Fortran I/O library must be loaded. (Note that the call to subroutine flush() is implementation-dependent.) These subroutines initialize the I/O units by `preconnecting' unit 5 to standard input, unit 6 to standard output, and unit 0 to standard error. If this initialization is not done, various errors can occur. For example, some versions of the Fortran library routines will execute an abort() if I/O has not been initialized, causing Lisp to dump core. Other versions merely ignore requests for output. Some versions will create disk files named fort.[n] where [n] is the Fortran unit number. The routine f_init() of the Fortran I/O library will perform the proper initialization. This initialization need only be done once for every Lisp session.
Assuming the compiled version of the above program is contained in the file fiotest.so, on a UNIX machine, here is a transcript of how to load the program into Allegro CL:
;; After loading the Fortran subroutine and the F77 and I77
;; libraries into the Lisp, we use FF:DEF-FOREIGN-CALL to create
;; a Lisp function F_INIT that points to the function _f_init()
;; in the Fortran library:
USER(61): (ff:def-foreign-call f_init nil :returning :void)
F_INIT
;; We define our Fortran test subroutine to Lisp.
USER(62): (ff:def-foreign-call fiotest nil
            :convention :fortran :returning :void)
FIOTEST
;; We initialize the Fortran I/O system, . . .
USER(63): (f_init)
nil
;; . . . then call our Fortran subroutine.
USER(64): (fiotest)
this is some fortran output.
nil
This section describes the Allegro CL facility that permits C functions to call Lisp functions and access Lisp values. The C functions must have been loaded into Lisp, and must have been called from Lisp.
Because some Lisp objects may move in memory when a garbage collection occurs, calling out to Lisp must be used with great care on the part of the C programmer. As an example, if an array is passed to a C function which calls out to a Lisp function and a garbage collection occurs, then after the C function returns, the pointer to the array will point to nothing; the array data will have moved somewhere else. So if a C function accesses a Lisp value and calls out to Lisp, then it is recommended that the Lisp value be registered and accessed as described next.
Another alternative is to store the value in a location where they are not moved. Allegro CL supports static arrays which are guaranteed never to move once they have been allocated (they are allocated in foreign rather than Lisp space). Static arrays are discussed in gc.htm. However, static arrays are not completely general. This section covers cases where you are interested in accessing objects other than arrays or in types of arrays not supported by static arrays.
To give a particular example, let us say:
The problem only occurs if a garbage collection happens during the call to the Lisp function. What you should do is to make sure that any Lisp value you return to Lisp or work with within a C function is retrieved only after there is no possibility of calling out to a Lisp function where a garbage collection may occur. To fix the example above so it is safe, you should add another step after step 4:
4a. Retrieve the registered Lisp value again.
Other scenarios can be played out. For example, if C code changes array data using an invalid array pointer, Lisp will never see the changes, and Lisp's data space may be corrupted by the indirection through a bad pointer.
For purposes of allowing call-backs from foreign code, Allegro CL maintains two tables of Lisp objects: one is the function table and the other is the value table. The Lisp program can register functions or values by requesting that they be stored in the respective table. The size of these tables will grow dynamically as needed (in contrast to earlier releases where the function table could not be increased in size). The use of these tables is explained in the following two sections.
Before accessing a Lisp value from C, it should be registered first. When a Lisp value is registered, an index is returned as a `handle' on the Lisp object. A C function is provided that will return a pointer to the Lisp object given its index. This is preferable to passing addresses of Lisp objects to C functions, since Lisp objects will generally move with each garbage collection. If a garbage collection is triggered from C code - by calling back from C to Lisp - the addresses of Lisp objects originally passed to the C function from Lisp may become invalid. And since one will have lost one's only handle on these Lisp objects, their addresses, there will be no way to find their new addresses. If instead one were to pass the registration indices of these Lisp objects, one could readily find their new addresses using these indices following a call-back to Lisp.
Note that passing the addresses of Lisp objects to C functions is not recommended only in those cases where a garbage-collection may be triggered by the C code. Passing values of Lisp objects (converted to C values by virtue of declarations made with def-foreign-call) is not discouraged. However, some Lisp data types are passed to C as pointers (for example, double-float data). Such data types should be registered and passed to C by their indices if the C code might cause a garbage collection.
The function register-lisp-value registers Lisp values in the value table. The function unregister-lisp-value clears the registration.
Once a value is registered, the C program can obtain the value from the value table with the C function:
long lisp_value(index)
    int index;
where index is the index of the registered value in the value table in Lisp. This C function will always return the current value at index even after a garbage collection has occurred. The result value from lisp_value() will be a C pointer to a Lisp object. Macros are provided to help C analyze the Lisp object and convert it to something meaningful. These macros are found in the C include file lisp.h, usually distributed in the home/misc/ directory with Allegro CL.
Note that when one passes Lisp values to foreign functions that have been declared using def-foreign-call, most Lisp data types are converted to corresponding C data types automatically. When one obtains Lisp values by calling lisp_value(), the conversion must be performed explicitly by the foreign code.
The Lisp function lisp-value may be useful for debugging code. It simulates the C function lisp_value(), but it may be called from within Lisp at any time.
A Lisp foreign-callable must satisfy two requirements: (1) it must be defined using the special macro defun-foreign-callable, and (2) it must be registered via the function register-foreign-callable.
Establishing a callback from foreign code into lisp is complicated for two reasons: (1) Lisp functions observe different calling conventions than C functions, and (2) Lisp functions (and their code vectors) will generally move with each garbage collection. The defun-foreign-callable macro provides a convenient mechanism for associating formal Lisp arguments with the C arguments. Then, when a Lisp function is registered, a small `wrapper' is created that can be called from C and which will set up the arguments correctly for the Lisp function. This wrapper calls the Lisp function with a single argument, a descriptor that points to the arguments stacked by C.
register-foreign-callable returns a function pointer to the C wrapper thus generated. This function pointer is valid across both garbage collections and calls to dumplisp. Further, an index is associated with this function pointer and is also returned as a `handle' to it. A C function is provided that looks up the index and returns the function pointer associated with it.
The function register-foreign-callable registers Lisp functions in the function table. The function unregister-foreign-callable clears the entry from the table.
If the returned function address is not used directly, the C program can get a pointer to the registered Lisp function by using the C function lisp_call_address(). The form is
void *lisp_call_address(int index)
where index is the index of the registered function in the function table.
A C program can call a Lisp function using the syntax (*f)(arg1, arg2, ..., argn), where f is the function pointer returned by lisp_call_address(). It is important to realize the `address' of a Lisp function returned by lisp_call_address() is not the same as the address of the Lisp function object associated with the Lisp symbol. It is not possible to call a Lisp function directly from C; one must always use the `wrapper' provided by register-foreign-callable.
See the various appendices (Appendix A Foreign Functions on Windows, Appendix B Building shared libraries on Solaris, Appendix C Building shared libraries on AIX, Appendix D Building shared libraries on Linux, Appendix E Building shared libraries on FreeBSD, Appendix F Building shared libraries on Mac OS X) for information on creating Shared Object/Library/DLL etc. files suitable for including compiled foreign code in Lisp and also see Section 1.9 Creating Shared Objects that refer to Allegro CL Functionality.
For example, say we have loaded the compiled C file:
void c_calls_lisp(fun, index)
    long (*fun)();
{
    void (*lisp_call)(void), *lisp_call_address();

    (*fun)();                               /* direct call to lisp function */
    lisp_call = lisp_call_address(index);
    (*lisp_call)();                         /* call to lisp function using index */
}
and had the following session in Lisp:
USER(70): (setq called 0)
0
USER(71): (defun-foreign-callable lisp-talks ()
            (format t "This is lisp called for ~
                       the ~:r time.~%"
                    (setq called (1+ called))))
lisp-talks
USER(72): (progn (multiple-value-setq (ptr index prev-ptr)
                   (register-foreign-callable 'lisp-talks))
                 (list ptr index prev-ptr))
(1404302 0 nil)   ;; ptr is 1404302, index 0 in
                  ;; function table, previous function none
USER(73): (ff:def-foreign-call c_calls_lisp (integer fixnum) :returning :void)
C_CALLS_LISP
USER(74): (c_calls_lisp ptr index)
This is Lisp called for the first time.
This is Lisp called for the second time.
Note: the address returned by register-foreign-callable will be valid even after a gc or a dumplisp, so you can use that rather than lisp_call_address() if you wish. lisp_call_address() is very efficient; its main advantage is that addresses can be bignums, and storing and working with bignums can be less efficient than working with indexes, which are always fixnums.
The C representation and the Lisp representation of data types are not necessarily the same. When a C function calls a Lisp function, the Lisp function needs to have its arguments declared so that it `knows' what the C arguments were and how to convert them. This declaration scheme is implemented in the macro defun-foreign-callable. For this reason it is important to properly prototype the function pointer you will be using to call the Lisp callback. C applies default argument types when calling through an unprototyped (old-style, empty-arglist) declaration, so unexpected results or errors will likely occur if you do not properly prototype the function pointer.
We give an example: say that we define the following C function, compile it and load it into Lisp:
void add(x, y, index)
    int x, y, index;
{
    void (*lisp_call)(int, int), *lisp_call_address();

    lisp_call = lisp_call_address(index);
    (*lisp_call)(x, y);
}
Then the following Lisp session could take place:
USER(80): (ff:def-foreign-call add (integer integer fixnum) :returning :void)
ADD
USER(81): (defun-foreign-callable add-two-c-args ((x :signed-long)
                                                  (y :signed-long))
            (setq xy (+ x y)))   ;; set a global variable
                                 ;; to the sum of x and y
ADD-TWO-C-ARGS
USER(82): (setq index (cadr (multiple-value-list
                              (register-foreign-callable 'add-two-c-args))))
1
USER(83): (add 4 5 index)   ;; call to the foreign function
nil
USER(84): xy                ;; test the value of the global variable xy
9
Note that in the example above, we get exactly the same result by omitting the type declarations for the function add-two-c-args - i.e. we could have defined it as:
(defun-c-callable add-two-c-args (x y) (setq xy (+ x y)))
since the default is to assume the arguments are signed-longs.
A foreign-callable (defined via defun-foreign-callable or the deprecated defun-c-callable) is meant to be called by a foreign function. The arguments are converted explicitly by the foreign-callable from foreign argument types to Lisp types, and the return value is possibly converted from Lisp to an appropriate foreign type, if specified. However, there may be times when such code needs to be debugged, and the user wishes to call the foreign-callable directly from Lisp for such purposes.
The function called lisp-call, which existed in some earlier releases, has been removed, and instead, the user can simply call the foreign-callable directly from Lisp. The foreign-callable now uses a dual-entry-point technique, which allows the Lisp call to be caught and the arguments pre-converted before making the "real" call to the body of the foreign-callable.
It is important to understand the pre-conversion technique used in a foreign-callable Lisp call; it depends not on the declared arguments in the foreign-callable, but in the actual arguments passed to the foreign-callable from Lisp. Lisp calls to foreign callables convert all arguments in the following way:
A Lisp call to a foreign-callable will never reconvert its return value, because it always returns a Lisp value (for foreign-callables that have been registered to convert their return values, it is actually the callback mechanism that performs the conversion, and not the foreign-callable itself).
Note that these conversions are not as extensive as the general conversions done in foreign calls; this mechanism is intended as only a quick debug tool. For more extensive callback testing, define an actual foreign function with explicit argument types which calls back to Lisp.
This section contains notes on foreign functions for the UNIX Allegro CL programmer porting to Windows. We recommend using Microsoft Visual C++ (MSVC++).
The steps to using foreign functions on Windows are:
Please note that you cannot create a new .dll with the same name as a .dll file already loaded into Lisp (or loaded into another program). If you have a .dll file loaded into lisp, use unload-foreign-library to unload it before recreating it and loading it again.
To access C++ functions from Lisp you must ensure C++ name mangling
does not occur. The easiest way to do this is to use a file extension
of .c instead of .cpp. If you must use the
.cpp file extension, then use the
extern
"C" linkage specification, like this:
extern "C" int foo (int a) { return (a + 1); }
Import the symbols you need from other libraries by specifying the
.lib file to the linker. There are two important entry points
in acl[version].dll ([version] changes
with each release of Allegro) which users of the foreign function
interface might need:
lisp_call_address and
lisp_value. To use these entry points from your
.dll, you must import the symbols using the linker using
acl[version].lib provided in the Allegro directory. For
example, to compile the example in
Section 9.2 Calling Lisp functions from C: lisp_call_address() and lisp_call() above,
you would need to specify acl[version].lib on your
cl command line, like this (assuming the function
c_calls_lisp is in the file foo.c):
cl -D_MT -MD -LD -Fefoo.dll foo.c [Allegro CL directory]\acl[version].lib user32.lib gdi32.lib kernel32.lib comctl32.lib comdlg32.lib winmm.lib advapi32.lib msvcrt.lib
After evaluating
(load "foo.dll") in Lisp, the
rest of the session in
Section 9.2 Calling Lisp functions from C: lisp_call_address() and lisp_call() is the same.
The general issue of cross linking is discussed in Section 1.9 Creating Shared Objects that refer to Allegro CL Functionality above.
Export the symbols you want to be visible from Lisp by using a linker
.def file, or by using the
_declspec(dllexport) declaration:
extern "C" _declspec(dllexport) int foo (int a) { return (a + 1); }
Lastly, compile and link your C code into a .dll:
use the -D_MT compiler option to compile your C code, to insure the compilation will produce multi-threaded safe C code,
use the -MD linker option to link your object files, to insure you link with the multi-threaded safe C runtime libraries,
use the -LD linker option to produce a dll instead of an exe, and
Open Watcom open source project ()
FORTRAN needs the following steps to work with Allegro CL. Watcom
allows for several calling sequences, but only one style is compatible
with the argument and return value passing that Allegro CL uses to be
compatible with Windows C++. To achieve this, the
/sc option must be used to specify stack-based
argument passing, and the following pragma must be used (starting in
column 1):
*$pragma aux floatfunc value [8087]
for each floatfunc in the source that returns a float value. Our experience is that these pragmas may cause warning messages on other architectures, but will otherwise work without causing errors.
Also, a file with a .lnk extension should be created, which specifies the link options; advanced Fortran users may want to specify link options on the command line, however. For our example, we will describe a file called bar.lnk:
system nt_dll initinstance terminstance
import calltolisp 'call_lisp.dll'
export floatfunc
export funca
export funcb
file bar
Note that exports are necessary for each function that will be accessed from the Lisp side. Also, it is possible to call-back into Lisp from Fortran, though it must be done indirectly through a C function in a specified .dll file.
Once the source files are set up, the following commands can be run:
wfc386 /bd /ms /sc /fpi87 bar.f
wlink @bar
wlib bar +bar.dll
As on UNIX, foreign functions are defined to Lisp using def-foreign-call. The
older (and now obsolete) defforeign interface works. When
a foreign function defined with defforeign is called directly, it
acts (with respect to threads) as if it was declared with the
:release-heap :never
option. (release-heap is a keyword argument to
def-foreign-call but not to
defforeign.) Use the
:call-direct argument to def-foreign-call, as well as
other necessary arguments, to allow direct C calling to be compiled
inline by the Lisp compiler.
Windows usually expects callbacks to be declared
:stdcall. You should check your Windows
documentation carefully to verify the required calling
convention. From the Lisp side, you will need to use a declaration
to defun-foreign-callable:
(declare (:convention {:c | :stdcall | :method}) (:unwind value))
The (:unwind value) declaration says that throwing past this boundary is to be performed by returning value (not evaluated) to the C caller, after arranging that when control eventually returns to the Lisp-that-called-C-before-this-callback, the throw will be continued from that point. This effect does not require any special action on the part of the Lisp-to-C calling code, except that it had to have been built with a C link (no leaf calls).
Absence of an :unwind declaration is equivalent to having (declare (:unwind 0)).
Either
:c or
:stdcall can now be
used interchangeably as the value of the
convention on def-foreign-call definitions.
The stack will be properly updated, regardless of the style the
foreign function expects to be called with. Callbacks however, must be
declared with the proper convention.
C:\TMP>type foo.c
#ifdef _WIN32
#define DllExport __declspec(dllexport)
#else
#define DllExport
#endif

DllExport int
test_fun(int foo)
{
    return foo + 101;
}
Compile foo.c into foo.dll:
C:\TMP>cl -D_MT -MD -nologo -LD -Zi -W3 -Fefoo.dll foo.c user32.lib gdi32.lib kernel32.lib comctl32.lib comdlg32.lib winmm.lib advapi32.lib msvcrt.lib
foo.c
   Creating library foo.lib and object foo.exp
Then, in Lisp:
user(1): (load "foo.dll")
; Foreign loading foo.dll.
t
user(2): (ff:def-foreign-call (test "test_fun") (a))
test
user(3): (test 10)
111
user(4):
or, using the old ff:defforeign interface:
user(1): (load "foo.dll")
; Foreign loading foo.dll.
t
user(2): (ff:defforeign 'test
                        :arguments '(integer)
                        :entry-point "test_fun"
                        :return-type :integer)
t
user(3): (test 10)
111
user(4):
On Solaris, you must produce .so files which are loadable into Allegro CL. The -K pic flag is optional (only on Solaris), and creates slightly different modes of sharing when used. As an example:
% cc -c -K pic foo.c
% ld -G -o foo.so foo.o
Fortran is similar:
% f77 -c bar.f
% ld -G -o bar.so bar.o
It is often useful to use the -z defs option to find which references are not yet resolved. If an undefined symbol is detected that should be present in the Lisp (such as lisp_call_address) then that shared-library must be included in the command line. For example, for a call to lisp_call_address() in libacl80.so:
% ld -G -o foo.so foo.o libacl80.so
The general issue of cross linking is discussed in Section 1.9 Creating Shared Objects that refer to Allegro CL Functionality above.
lisp.h is an include file that describes the
format of Allegro CL Lisp objects. Because the Sparc has two ports,
32-bit and 64-bit, one of the flags
-DAcl32Bit or
-DAcl64Bit must be passed into the C compiler if
lisp.h is used.
Further, for the 64-bit version, you must specify
-xarch=v9 on the cc line for sparc 64-bit
and
-xarch=amd64 for amd64/x86-64/em64t. You must
also link with 64-bit libraries, which is usually done with
-L/usr/lib/sparcv9.
You must use the make_shared script supplied in the bin directory of the Allegro CL distribution to produce .so files which are loadable into Allegro CL.
Because make_shared itself calls other things in bin/, [Allegro directory]/bin must be in your path, that is it must be in the list that is the value of the PATH environment variable.
% cc -c -D_BSD -D_NO_PROTO -D_NONSTD_TYPES -D_MBI=void foo.c
% bin/make_shared -o foo.so foo.o

and for Fortran:

% xlf -c bar.f
% bin/make_shared -o bar.so bar.o -lm
lisp.h is an include file that describes the
format of Allegro CL Lisp objects. In general, when there are both
32-bit and 64-bit Lisps, one of the flags
-DAcl32Bit or
-DAcl64Bit should
be passed into the C compiler if lisp.h is
used. Use
-DAcl32Bit for the 32-bit Lisp and
-DAcl64Bit for the 64-bit.
See also the discussion of cross linking in Section 1.9 Creating Shared Objects that refer to Allegro CL Functionality above.
The instructions for building a shared library for a 64-bit Lisp are
similar to those given just above, but the additional argument
-q64 should be added to the cc or xlf
commands when producing shared libraries for the 64-bit version (the
make_shared calls are unchanged):
% cc -q64 -c -D_BSD -D_NO_PROTO -D_NONSTD_TYPES -D_MBI=void foo.c
% bin/make_shared -o foo.so foo.o

and for Fortran:

% xlf -q64 -c bar.f
% bin/make_shared -o bar.so bar.o -lm
lisp.h is an include file that describes the
format of Allegro CL Lisp objects. Because there are two AIX ports,
32-bit and 64-bit, one of the flags
-DAcl32Bit or
-DAcl64Bit must be passed into the C compiler if
lisp.h is used. Use
-DAcl64Bit
for the 64-bit Lisp.
Again, see also the discussion of cross linking in Section 1.9 Creating Shared Objects that refer to Allegro CL Functionality above.
On Linux (either on machines with Intel or AMD processors or machines with PowerPC processors), you must produce .so files which are loadable into Allegro CL. Compile with the -fPIC flag. As an example:
% cc -c -fPIC foo.c -o foo.o
% ld -shared -o foo.so foo.o
Fortran is similar:
% f77 -c bar.f
% ld -shared -o bar.so bar.o
The general issue of cross linking is discussed in Section 1.9 Creating Shared Objects that refer to Allegro CL Functionality above.
Further, for the 64-bit version, you must specify
-m64 on the cc line. Otherwise, follow the
32-bit instructions.
On FreeBSD, you must produce .so files which are loadable into Allegro CL. Compile with the -fPIC -DPIC flags. As an example:
% cc -c -fPIC -DPIC foo.c -o foo.o
% ld -Bshareable -Bdynamic -o foo.so foo.o
Fortran is similar:
% f77 -c bar.f
% ld -Bshareable -Bdynamic -o bar.so bar.o
The general issue of cross linking is discussed in Section 1.9 Creating Shared Objects that refer to Allegro CL Functionality above.
On Mac OS X, you must produce a specific type of .dylib file which can be loaded into Allegro CL. These are called bundle files on Mac OS X, and are fully packaged shared libraries. Unfortunately, they may not be reused as input to ld() once they have been created, in contrast with shared-objects on other architectures. However, they may contain undefined symbols, which may be resolved lazily when the shared-objects are loaded. These may include symbols in libacl*.dylib such as lisp_value and lisp_call_address, without having to link against the library. Compile with the -dynamic flag. As an example:
% cc -dynamic -c foo.c -o foo.o
% ld -bundle /usr/lib/bundle1.o -flat_namespace -undefined suppress -o foo.dylib foo.o
Fortran interfacing is not supported at this time.
The general issue of cross linking is discussed in Section 1.9.1 Linking to Allegro CL shared library on Mac OS X.
lisp.h is an include file that describes the
format of Allegro CL Lisp objects. Because the Mac OS X has two ports,
32-bit and 64-bit, one of the flags
-DAcl32Bit or
-DAcl64Bit must be passed into the C compiler if
lisp.h is used.
Further, for the 64-bit version, you must specify
-arch ppc64 on the cc line.
Copyright (c) 1998-2012, Franz Inc. Oakland, CA., USA. All rights reserved.
This page has had moderate revisions compared to the 8.1 page.
Created 2010.1.21. | http://franz.com/support/documentation/8.2/doc/foreign-functions.htm | CC-MAIN-2015-18 | refinedweb | 14,492 | 54.63 |
Has it ever occurred to you that a custom font does not look as expected in your app? like some parts of it are cut off? Look at this example, it supposed to be “Blog” but all you see is “Bl”:
Or this one in Farsi (and Arabic) which expected to be “کریم” but the last two characters are cut off completely:
The code to create it is pretty simple. I have used a third party library, FontBlaster, to load custom fonts which is available on github.
label = UILabel(frame: CGRect.zero)
let font = UIFont(name: "BleedingCowboys", size: 60)!
label.font = font   // assign the custom font to the label
// We are in debug mode, right?
label.backgroundColor = UIColor.yellow
label.frame.size = CGSize.zero
label.text = "Blog"
let size = label.sizeThatFits(CGSize.init(
    width: CGFloat.greatestFiniteMagnitude,
    height: CGFloat.greatestFiniteMagnitude))
label.frame.size = size
label.center = self.view.center
self.view.addSubview(label)
It seems sizeThatFits(_:) cannot determine the size correctly for all fonts. To fix this, I found an extension to UIBezierPath which returns a CGPath for an attributed string; you can find it here. This is how you can get the path:
let line = CAShapeLayer()
line.path = UIBezierPath(forMultilineAttributedString: mutabbleAttributedString,
                         maxWidth: CGFloat.greatestFiniteMagnitude).cgPath
line.bounds = (line.path?.boundingBox)!
// We gonna need it later
let sizeFromPath = CGSize(width: (line.path?.boundingBoxOfPath.width)!,
                          height: (line.path?.boundingBoxOfPath.height)!)
UIBezierPath(forMultilineAttributedString:, maxWidth:) comes from that extension I mentioned above. Now we can determine the actual size of the label frame, let’s see it in action:
It’s still not exactly what we want, the size seems to be correct but the left inset is not. To solve this last problem, let’s create a custom
UILabel class which can set custom inset while drawing the label:
import Foundation
import UIKit

class CustomLabel: UILabel {
    var textInsets = UIEdgeInsets.zero {
        didSet { invalidateIntrinsicContentSize() }
    }

    override func textRect(forBounds bounds: CGRect,
                           limitedToNumberOfLines numberOfLines: Int) -> CGRect {
        let insetRect = bounds.inset(by: textInsets)
        let textRect = super.textRect(forBounds: insetRect,
                                      limitedToNumberOfLines: numberOfLines)
        let invertedInsets = UIEdgeInsets(top: -textInsets.top,
                                          left: -textInsets.left,
                                          bottom: -textInsets.bottom,
                                          right: -textInsets.right)
        return textRect.inset(by: invertedInsets)
    }

    override func drawText(in rect: CGRect) {
        super.drawText(in: rect.inset(by: textInsets))
    }
}
How many points should we add to the left inset? The difference between the actual width and the width from sizeThatFits. First we need to replace the line in which we declared the label: instead of UILabel we need to use CustomLabel. Then:
label.textInsets = UIEdgeInsets(top: 0, left: sizeFromPath.width - size.width, bottom: 0, right: 0)
Let’s see the final result:
Nice, yeah? The thing is, you might not need the inset for all troublesome fonts; check it yourself.
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How Do I Change The OpenERP Page Title? [Closed]
The question has been closed by
Hi Folks,
I am attempting to change the page titles that get displayed by OpenERP, i.e. the wording that gets displayed next to the favicon in internet browser tabs. In standard website development, this title is controlled by the <title></title> tags.
So far, I have only found one place where I can change this:
/web/addons/web/controllers/main.py line 541:

        %(css)s
        %(js)s
        <script type="text/javascript">
            $(function() {
                var s = new openerp.init(%(modules)s);
                %(init)s
            });
        </script>
    </head>
    <body>
        <!--[if lte IE 8]>
        <script src="//ajax.googleapis.com/ajax/libs/chrome-frame/1/CFInstall.min.js"></script>
        <script>CFInstall.check({mode: "overlay"});</script>
        <![endif]-->
    </body>
</html>
"""
Changing the title here from OpenERP to whatever I want works for the login screen, but once I access a proper page within the system it reverts to "[Page Title] - OpenERP".
I would like to know where the function is that sets this page title so I can change it to suit our needs. Does anyone know where this can be found?
Many thanks!
-Alex
There is one more file where you need to change title.
Chorme.js at this particular line you need to change.
document.title = title + sep + 'OpenERP';
Change OpenERP to anything you want.
Awesome, thank you very much Sudhir, that worked perfectly!
Most welcome!!!
I made the change, and noticed that it did not apply to the system. Then I re-started the system, and all the changes were applied. Thanks for info :).
It seems to me a bad idea hacking into core code.
I have a not hardcoded solution: It replace title both in html template and in javascript (via overwritting get_title_part function of WebClient object)
A somewhat cleaner way is to overrider the set_title function inside your module in your own js file, and not to change the chrome.js:
openerp.my_module = function(instance) {
    instance.web.WebClient.include({
        set_title: function(title) {
            title = _.str.clean(title);
            var sep = _.isEmpty(title) ? '' : ' - ';
            document.title = title + sep + 'My Brand or What Have I';
        },
    });
};
Pay attention to the 'openerp.my_module' part: my_module should be the exact name of your module.
Same holds to changing main.py. Better add a Python file to your module with the following code inside:
from openerp.addons.web.controllers import main
main.html_template = main.html_template.replace('<title>OpenERP</title>', '<title>My Brand or What Have I</title>')
This solution should be the accepted one. It gracefully allows you to do custom modifications without having to hack core code. | https://www.odoo.com/forum/help-1/question/how-do-i-change-the-openerp-page-title-19988 | CC-MAIN-2016-50 | refinedweb | 464 | 67.15 |
Containers are a small, fast, and easy-to-set-up way to deploy and run software across different computing environments. By holding an application’s complete runtime environment, including libraries, binaries, and configuration files, platform and infrastructure are abstracted, allowing the application to run more or less anywhere. Containers are available from all the major cloud providers as well as in on-premises data centers and hybrid clouds. Plus, they can save companies a lot of money.
Using containers, developers can create “microservices,” which are essentially small, reusable components of an application. Because they are reusable, microservices can save developers time, and they are deployable across different platforms.
It’s no surprise, then, that container adoption is high. Unfortunately, security is still learning how they work and how best to lock them down. Around 80 percent of organizations with more than 500 employees now use containers, according to a recent McAfee survey of 1,500 global IT professionals. Only 66 percent have a security strategy for the containers. In fact, containers are now tied with mobile devices as the biggest security challenge for organizations, according to a March survey of 1,200 IT decision makers by CyberEdge.
There are multiple reasons why security is a challenge in the container universe. One is the speed at which containers are deployed. Another is that containers typically require applications to be broken into smaller services, resulting in increased data traffic and complex access control rules. Finally, containers often run in cloud-based environments, such as Amazon, with new kinds of security controls.
The ecosystem of container security tools is not yet mature, according to Ali Golshan, cofounder and CTO at StackRox, a Mountain View-based cloud security vendor. "It's like the early days of virtual machines and cloud," he says. "Organizations need to build proprietary tools and infrastructure to make it work, and it needs a lot of resources to implement. There are not a lot of ready-made solutions out there, and not enough solutions to cover all the use cases."
The life of a container is poorly managed and short
The traditional software development process — build, test, deploy — quickly becomes irrelevant in the age of containers. In fact, developers often grab ready-to-use images from public repositories and throw them up into the cloud.
"There's some implicit level of trust there that may or may not be warranted," says Robert Huber, chief security and strategy officer at Eastwind Networks. A container image is a convenient packaging of ready-to-go code, but providers might not have the time or interest in monitoring for security issues or publishing release notes, he says.
"Ideally, you have a process to check the versioning, but I haven't seen any organization that does that," Huber says. "Companies should continuously check that the latest versions of the containers are the ones that are being used, and that all the code is patched and up to date. But right now, it comes down to the developer, and a manual check. I do believe that organizations will move to some process that's more automated, but right now there's a gap. It's fire and forget it. You pull a container, run it, and you're done."
It's not much better when developers build their own containers. The speed of development means that there's no time for quality assurance or security testing. By the time someone notices that the containers are there, they've done their job and are gone.
"The lifecycle might be over by the time the security team can go in," says Bo Lane, head of solution architecture at Kudelski Security. "That's the challenge, and it requires a different mindset for security."
Security awareness needs to be built in early in the development process, he says, and automated as much as possible. For example, if developers are downloading an image from an external source, it needs to be scanned for vulnerabilities, unpatched code, and other potential issues before the container goes live. "And once that container goes live, how do they maintain and monitor the state of its security for something that's potentially very short lived, and interacts with other components?" he asks.
Take for example, Skyhigh Networks. The cloud security vendor has its own cloud services offerings, so it is dealing with all these challenges, says Sekhar Sarukkai, co-founder of Skyhigh Networks and VP of engineering for McAfee Cloud, which acquired Skyhigh earlier this year.
"We are deploying the latest architecture stacks, we have microservices," he says. "In fact, we can deploy into production multiple times a day. Traditionally, you'd have security testing or penetration testing — that doesn't work in a DevOps environment."
Enterprises have to find ways to automate a lot of these functions, he says. That means being able to identify all the containers that are being deployed, make sure all their elements are safe, that they're being deployed into a secure environment with application controls or application whitelisting, and then follow up with continuous monitoring.
McAfee now has a product that does just that, announced in April at the RSA conference — the McAfee Cloud Workload Security platform. "It secures Docker containers and workloads in those containers in both public and private cloud environments," says Sarukkai. That includes AWS, Azure and VMWare. "It's the first, I think, cloud workload solution that can quarantine infected workloads and containers," he says.
The product can also reduce configuration risks, by checking for, say, unnecessary administrator privileges, or unmet encryption requirements — or even AWS buckets that are set to be publicly readable. "It also increases the speed at which you can remediate," he says. "It can improve it by as much as 90 percent, from the studies that we've done with our customers."
Almost all of the container security issues he's seen so far, he says, are because they weren't configured correctly. "I think that's where the biggest risk lies," he says.
A massive web of services
Configuration management and patch management are difficult to do, and easy for attackers to exploit, but they are solvable issues. A more daunting challenge is that of the complexity created by breaking an application into a large number of smaller, interconnected services.
With traditional, monolithic applications, there's one service and just a couple of ports. "You know exactly where the bad guys are going to try and get in," says Antony Edwards, CTO at Eggplant.
That makes it easier to secure, he says. "However with microservices, you have lots of services and often many ports, so that means there are many more doors to secure. Plus, each door has less information about what’s going on, so it’s harder to identify if someone is a bad guy."
That puts the burden on ensuring that the security of the individual services is as tight as can be, he says, with principles such as least privilege, tight access controls, isolation, and auditing. "All this stuff has been around since the 1970s; we now just need to do it," Edwards says.
That's easier said than done. "Organizations are breaking their monoliths into smaller and smaller chunks, and the data flows get so much more complex within the application that it gets hard to tell what every microservice does," says Manish Gupta, co-founder and CEO at ShiftLeft.
If there's a hard-coded access credential in the mix, or an authentication token that's being leaked, the entire system becomes vulnerable. "This is a really big issue, and people don't recognize how big of a problem this is," says Gupta.
The problem is only getting bigger, he added, as more critical systems are moved to a software-as-a-service delivery model. "That means you are concentrating a lot of your data in your apps — Equifax is a great example; Uber is a great example," he says. "Now this very sensitive, important data is flowing between microservices, and few people have good visibility into it."
Leaky containers create vulnerabilities
There's another potential security challenge with containers. They run in a shared environment, which is particularly worrisome in public clouds, where customers don't know who their neighbors are. In fact, vulnerabilities in Docker and Kubernetes container management systems have been discovered over the past couple of years.
Companies running containers in a public cloud are starting to recognize this issue. "With most of the customers I speak with, they ask directly about what are the tools available to isolate the host from container escape and isolate containers from each other," says Kirsten Newcomer, senior principal product manager for Red Hat's container platform, OpenShift.
More than 70 percent of respondents run their containers on Linux, according to Portworx's 2017 container adoption survey. Features that administrators can use to make sure that containers stay isolated include making use of Linux namespaces and using Security Enhanced Linux for an additional layer of mandatory access controls, says Newcomer. "And then there's something called Linux Capabilities, which allows you to limit the different kinds of privileges within a Linux system that a process has access to."
These may be familiar concepts to Linux security experts, but they might be new to teams deploying containers — or to organizations that recently moved over from Windows. At least companies running their own container environments, whether on public or private clouds, have full control over these security settings. When they're using off-the-shelf containers, they have to trust the cloud provider to get the underlying security infrastructure right.
So far, none of the vulnerabilities that allow processes to escape containers have resulted in a major public breach. The fact that the space is dominated by just a handful of platforms — Docker and Kubernetes being the big names here — means that a single vulnerability can have very broad impact if attackers exploit it quickly, so it pays to be prepared.
This story, "Why securing containers and microservices is a challenge" was originally published by CSO. | https://www.itworld.com/article/3268922/why-securing-containers-and-microservices-is-a-challenge.html | CC-MAIN-2020-16 | refinedweb | 1,680 | 51.58 |
Hi. Another very noobie question from me but here goes. I'm trying to work through the Raspberry Pi education manual and having a problem I can't work out with the skiing game.
So, I've been using python 2.7 because it used pygame. If I try and run in 3.2 it says "ImportError: No module named pygame"
But when I run in 2.7, I get the error: "AttributeError: 'skiWorld' object has no attribute 'keyEvent'"
In the code, I think this is the root of the problem but I'm not sure why (the last line has no line breaks):
def keyEvent (self, event):
# event should be key event but we only move if the key is pressed down
# but not released up
self.keydir = (0 if event.type == pygame.KEYUP else -1 if event.key == pygame.K_LEFT else +1 if event.key == pygame.K_RIGHT else 0)
And this is the section including the line called in the error message (last line is the one called):
# check external events (key presses, for instance)
for event in pygame.event.get():
if event.type == pygame.QUIT:
world.running = False
elif (hasattr(event, 'key')):
world.keyEvent(event)
Grateful for any suggestions!
Thanks | https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=31474 | CC-MAIN-2020-34 | refinedweb | 203 | 83.76 |
I have got a difficult network to construct and I am not sure how to do it right.
I have got an input part and another part consisting of sequentially added blocks of the same type.
I draw a picture of it that you can understand it easily:
- How to I declare the output of one layer as the input of another one in the constructor of my network?
- Where should I add my parameters as class members to be updated correctly?
Here is a minimum working example:
import torch from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F import torch.utils.data as data_utils import torch.optim as optim import torch.nn.init as init import torch.autograd as AG # Define Custom Layer class CLayer(AG.Function): def __init__(self, tmp_): super(CLayer, self).__init__() self.tmp = tmp_ def forward(self): output = self.tmp + torch.ones(self.tmp.size()) return output # Define Block class Block(nn.Module): def __init__(self, A_, x_, y_): super(Block, self).__init__() self.A = A_ self.x = x_ self.y = y_ self.customLayer = CLayer(self.x) def forward(self): tmp = self.x - torch.transpose(self.A)*self.y output = CLayer(tmp) return output # Define Network class Net(nn.Module): def __init__(self, A_, x0_): super(Net, self).__init__() self.x0 = x0_ self.A = A_ self.fc1 = nn.Linear(50, 50) self.fc1.weight.data = self.A self.block0 = Block(self.A, x0, ??) self.blockN = nn.ModuleList([Block(self.A, ??, ??) for i in range(2)]) def forward(self): y = self.fc1(x) x = self.block0(self.A, self.x0, y) for i, l in enumerate(self.ihts): x = self.blockN(self.A, x, y) return x A_ = torch.randn(m,n) x0_ = torch.zeros(1,n) model = Net(A_, x0_) | https://discuss.pytorch.org/t/difficult-network-construction/4533 | CC-MAIN-2022-21 | refinedweb | 302 | 56.11 |
If I deploy more than one webservice I get "Multiple contextRick Reumann May 14, 2007 4:43 PM
I can deploy a simple webservice and everything works fine. If II create another webservice in a similar manner and deploy it, I end up with a "Multiple context root not supported" error.
This is very frustrating since I see nothing about this issue in the users guide.
Here was an example of an initial web service I was trying to deploy (which works fine, until I try to deploy another webservice using the same approach):
@Local
@WebService
@SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.WRAPPED)
public interface Echo {
@WebMethod String testEcho(String s);
}
@Stateless
@WebService(endpointInterface="com.foobar.edsf.example.ejb.webservices.Echo")
public class EchoBean {
public String testEcho(String s) {
return s;
}
}
If I deploy a similar webservice like the above, I'll get the Multiple context root not supported error. Am I doing something unorthodox or is this a bug? My EJB3 book and the examples that come with the jboss patch here: show examples like I'm doing above.
There is something related to this in jira but I can't seem to get the concept of @WebContext working correctly (I also don't see why I should even need to use a non-standard JBoss annotation to do something that should be standard.)
WITHOUT adding a @WebContext I'll end up with a wsdl location like:
If I try to declare a @WebContext I won't get the multiple context root error, but it changes the wsdl path to a path that it can't find. For example if I give it a path:
@WebContext(contextRoot="/EDSF-tests", secureWSDLAccess=false)
I end up with a wsdl URL:
Which doesn't point to the wsdl anymore.
Can someone give me some pointers about what I'm doing wrong? There must be a reason others have not run into this as well since I don't think I'm creating my webservices in a unique way.
1. Re: If I deploy more than one webservice I getRick Reumann May 14, 2007 5:12 PM (in response to Rick Reumann)
I guess this is a bug since if I use jbossws-1.2.0.SP1 I do not get this error. The jira bug on this is still open so I guess I'll just use 1.2.0 until it is resolved.
If this isn't a bug, though, someone please let me know:)
Thanks.
2. Re: If I deploy more than one webservice I getPeter Johnson May 14, 2007 9:15 PM (in response to Rick Reumann)
If I remember correctly, the context root comes from the jar file name. Are both of your jar files named the same?
It would appear that the standards define how the default context root is defined, but always leave it up to the implementation to specify alternate context root (hence the context-root entry in the jboss-web.xml file used in war files).
Of course, I could always be wrong.
3. Re: If I deploy more than one webservice I getRick Reumann May 14, 2007 10:05 PM (in response to Rick Reumann)
The two webservices are in the same package and packaged in the same jar. It seems odd that things work fine in 1.2.0.SP1 but not 1.2.1.GA.
4. Re: If I deploy more than one webservice I getGG May 15, 2007 5:14 AM (in response to Rick Reumann)
Our jar is named "beans.jar" and adding
@WebContext(contextRoot="/beans")
to all of the classes seems to work.
I also can't understand what caused them to change the behavior in 1.2.1GA.
Does anybody know how to specify the context root in an XML file?
5. Re: If I deploy more than one webservice I getRick Reumann May 15, 2007 10:12 AM (in response to Rick Reumann)
Also, my webservices jar is locate din an ear file if that matters at all.
Secondly, I'm not sure why I would need to use @WebContext in order to get this to work in 1.2.1.GA. All the standard examples I see do not mentioning using @WebContext - which if I understand correctly is not even part of the EJB3 Spec but is a JBoss specific annotation.
If you check out the webservice example for jboss provided after downloading the examples here
(unzip and then in the docs/tutorials there is a webservice example),
You'll see there example does not use the @WebContext notation.
6. Re: If I deploy more than one webservice I getHeiko Braun May 16, 2007 7:55 AM (in response to Rick Reumann)
Thanks for the valuable discussion. I will try to explain what changes between 1.2.0 and 1.2.1 are causing the problems you encounter.
1.) For EJB3 deployments we need to create a web app for HTTP invocations (obviously)
2.) EJB's don't contain web context information, so we derive it automagically.
3.) Until 1.2.0 the context name was derived from the ear/jar name.
4.) This changed with 1.2.1 to an algorithm that derives it from the bean class name
So what's happening when you deploy a EJB3 jar that contains multiple beans?
The default algorithm derives different context names for each bean in this deployment, which in turn we cannot use to setup the HTTP endpoint and thus throw an exception.
This also explains why the following did work:
@WebContext(contextRoot="/beans")
Unfortunately this is left out in the specs and thus has been changed many times.
Until we a have a definite solution i suggest you refer to the @WebContext annotation, even though it's not the most elegant solution.
--
Heiko
7. Re: If I deploy more than one webservice I getRick Reumann May 16, 2007 9:27 AM (in response to Rick Reumann)
Thanks Heiko for the information. My only question now (and pardon if this is a very 'newbie' question) but what would I set the web context to? In other words, do I need to make sure the various webapp wars that might be in the ear all have web.xml definitions that point to the ejbs so that I can create a context for the ejbs through the web.xml? If so, this seems like a lot of extra work.
I guess I'm just confused how I know what @WebContext to use for the ejbs? I could have several wars that all have their own web context. I would think the ejb context would be independent of those contexts, but just giving it an arbitrary context doesn't help in regard to having the wsdl being located.
If someone could provide a simple example of what to set this @WebContext to I'd appreciate. I'm sure I'm missing something simple.
8. Re: If I deploy more than one webservice I getHeiko Braun May 18, 2007 8:45 AM (in response to Rick Reumann)
Regular war's are not touched by this probelm. Just make sure all EJB's within a jar point to the same web context thriugh something like:
@WebContext(contextRoot="/myEJBServices")
9. Re: If I deploy more than one webservice I getRick Reumann May 18, 2007 10:29 AM (in response to Rick Reumann)
Thanks Heiko. That worked great. Not sure what I was doing wrong earlier when I had tried that (using @WebContext) and wasn't able to find the wsdl. It's working fine now, so I'll chalk my mistakes up to typical stupid human error:)
10. Re: If I deploy more than one webservice I getGG May 20, 2007 9:22 AM (in response to Rick Reumann)
Heiko,
Is there an XML syntax that can override the WebContext or we must use the annotation?
Thanks,
Genady
11. Re: If I deploy more than one webservice I getArjan van Bentem May 22, 2007 10:19 AM (in response to Rick Reumann)
Rick, can you confirm that your question in JIRA JBWS-1622 is solved (for you, but maybe not for the reporter of that issue) by Heiko's replies?
And for the archives: @WebContext is org.jboss.ws.annotation.WebContext as found in jbossws-core.jar
Arjan.
12. Re: If I deploy more than one webservice I getArjan van Bentem May 24, 2007 6:07 AM (in response to Rick Reumann)
Even when deploying only a single web service, this error may also be caused by some left-overs after refactoring without properly cleaning up.
I changed the name of my service using Eclipse, which left the old compiled classes in various exploded archive folders. Those were then still copied to, and loaded by, JBoss. This resulted in multiple services (same implementation with different class names) within the same application, and thus yielding the "org.jboss.ws.WSException: Multiple context root not supported" error.
My stupid mistake, of course...
13. Re: If I deploy more than one webservice I getBill Pfeiffer Jul 29, 2007 12:21 PM (in response to Rick Reumann)
Can anyone provide the EJB 2.1 equivalent to:
@WebContext(contextRoot="/beans")
I've seen in the FAQ this bit of text:
Use the explicit context root from webservices/context-root
Can be set in jboss.xml and is only relevant for EJB endpoints.
Can anyone clarify which file the webservices/context-root is referring to and the exact syntax?
Any help would be appreciated.
Thanks,
Bill Pfeiffer
14. Re: If I deploy more than one webservice I getRandy Roles Sep 4, 2007 10:24 AM (in response to Rick Reumann)
Sorry I am so dense, but where IS WebContext, I would like to try and
use it to solve an issue I am having with multiple Context in deploying multiple Web Services, and cannot find it in any Web Services JBOSS jar | https://developer.jboss.org/thread/102013 | CC-MAIN-2018-39 | refinedweb | 1,653 | 69.01 |
After I learned to draw straight lines with ActionScript 3.0 it’s time to go further and to see what else can I accomplish using ActionScript 3.0 Drawing API.
There are several built-in functions for drawing simple shapes like drawCircle(), drawEllipse(), drawRect() and drawRoundRect() for rectangles. Since almost every complex shape is some combination of simple ones these functions are solid ground to start with.
One interesting thing about fills is that overlapping shapes change the way fill is displayed. If you have two overlapping circles their intersection is without fill. Take a look at this code:
import flash.display.*;
var d:Sprite = new Sprite();
d.graphics.lineStyle(1, 0x000000);
d.graphics.beginFill(0xff0000, 1);
d.graphics.drawCircle(200, 200, 100);
d.graphics.drawCircle(300, 200, 100);
d.graphics.endFill();
addChild(d);
Result is shown on image below. It seems that any area within even number of shapes is without fill and area within odd number of shapes has its fill intact.
We have to pay attention about this to avoid unwanted things, but this also can be used to create some interesting effects. Next code uses for loop to draw several overlapping circles:
import flash.display.*;
var d:Sprite = new Sprite();
var i:uint;
d.graphics.lineStyle(1, 0x000000);
d.graphics.beginFill(0xff0000, 1);
for(i= 0; i<10; i++) {
d.graphics.drawCircle(200, 200, (i*5)+50);
d.graphics.drawCircle(300, 200, (i*5)+50);
}
d.graphics.endFill();
addChild(d);
Here is the output:
As I mentioned in previous post about drawing, best practice to draw lines and shapes with ActionScript 3.0 is to use rulers or even better plain paper with Flash coordinate system. This advice is valuable even more for drawing curves because you have two extra parameters for control point, its X and Y coordinates. Here is just one example for now, more of them probably in separate post when I have more time.
import flash.display.*;
var d:Sprite = new Sprite();
d.graphics.lineStyle(2, 0x000000);
d.graphics.beginFill(0xff0000, 1);
d.graphics.moveTo(100, 100);
d.graphics.lineTo(200, 100);
d.graphics.curveTo(250, 150, 200, 200);
d.graphics.lineTo(100, 200);
d.graphics.curveTo(-50, 150, 100, 100);
d.graphics.moveTo(200, 100);
d.graphics.curveTo(150, 150, 200, 200);
d.graphics.endFill();
addChild(d);
*_*
4 comments:
The fills work that way only if you draw multiple overlapping paths within the same begin/end fill block. If you say beginFill, draw, endFill, beginFill, draw, endFill, you won't see the intersection. But as you say, you can use the behavior to your advantage.
Also, the Flash 10 drawing API additions give you a LOT more control over this type of behavior. Check out the GraphicsPathWinding class.
Keith, thanks for your comment I appreciate it. For now, Astro is still something I'm only planning to learn, basics first :) thanks
Hi Flanture,
I have a problem, please see the below steps
i)Draw two circles with filled colors
ii) Draw a line between two cicles with different color.
Here i am able to get the line between two circles, but the problem is the line is overrided by the circle graphics within the part.
my question is can i draw a line over the circle.
hi Venkat, maybe something like this:
import flash.display.*;
var d:Sprite = new Sprite();
d.graphics.lineStyle(1, 0x000000);
d.graphics.beginFill(0xff0000, 1);
d.graphics.drawCircle(150, 200, 50);
d.graphics.drawCircle(350, 200, 50);
d.graphics.endFill();
// change line style
d.graphics.lineStyle(2, 0x000000);
d.graphics.moveTo(150, 200);
d.graphics.lineTo(350, 200);
addChild(d); | http://flanture.blogspot.com/2009/09/curves-and-fills-with-as3-drawing-api.html | CC-MAIN-2013-20 | refinedweb | 606 | 62.24 |
"Params::Util" provides a basic set of importable functions that makes checking parameters a hell of a lot easier While they can be (and are) used in other contexts, the main point behind this module is that the functions both Do What You Mean, and D...ADAMK/Params-Util-1.07 (3 reviews) - 11 Mar 2012 00:40:52 GMT - Search in distribution
PSHANGOV/MooseX-Params-0.010 - 03 Feb 2012 17:39:22 GMT - Search in distribution
CGI::Expand subclass with Rails like tokenization for parameters passed during DataTables server-side processing....OLIVER/App-Netdisco-2.031010 - 25 Feb 2015 22:12:31BD::Gofer - A stateless-proxy driver for communicating with a remote DBI
- DBI::Changes - List of significant changes to the DBI
The "Bundle::" namespace has long served as the CPAN's "expansion" mechanism for module installation. A "Bundle::" module contains no code in itself, but serves as a way to specify an entire collection of modules/version pairs to be installed. The Pr...ADAMK/Task-1.04 (2 reviews) - 10 Jul 2008 08:07:15 GMT - Search in distribution
Longer description of the function... DESCRIPTION Gimp-Perl is a module for writing plug-ins, extensions, standalone scripts, and file-handlers for the GNU Image Manipulation Program (GIMP). It can be used to automate repetitive tasks, achieve a prec...ETJ/Gimp-2.31 (1 review) - 30 Jun 2014 02:28:07 GMT - Search in distribution
TIMB/WebAPI-DBIC-0.003002 - 09 Jan 2015 18:05:57
Fennec ties together several testing related modules and enhances their functionality in ways you don't get loading them individually. Fennec makes testing easier, and more useful. SYNOPSYS There are 2 ways to use Fennec. You can use Fennec directly,...EXODIST/Fennec-2.017 - 23 Apr 2014 17:09:09 GMT - Search in distribution
- Fennec::Manual::CustomFennec - Customizing Fennec for you project.
Sprocket uses a single session for each object/component created to increase speed and reduce the memory footprint of your apps. Sprocket is used in the Perl version of Cometd <> NOTES Sprocket is fully compatable with other POE Com...XANTUS/Sprocket-0.07 - 07 Oct 2007 12:53:11
- DBD::Sys::Table - abstract base class of tables used in DBD::Sys
- DBD::Sys::CompositeTable - Table implementation to compose different sources into one table
- perl5200delta - what is new for perl v5.20.0
- perl5140delta - what is new for perl v5.14.0
- perl5100delta - what is new for perl 5.10.0
perlbloat is a front end for Devel::Leak::Module. It takes a series of module names and will "require" each in turn, tracking the underlying modules, packages, and namespaces created. SUPPORT Bugs should be always be reported via the CPAN bug tracker...ADAMK/Devel-Leak-Module-0.02 - 03 Feb 2012 09:17:31 GMT - Search in distribution
Provides access to the GML definitions specified in XML. The details about GML structures can differ, and therefore you should be explicit which versions you understand and produce. If you need the <b>most recent</b> version of GML, then you get invo...MARKOV/Geo-GML-0.16 - 05 Jan 2014 18:34:16 GMT - Search in distribution
A module to aid in the genesis of Perl modules that represent OWL entities in OWL ontologies. Upgrading from a version prior to Version 0.97 For those of you upgrading from a version prior to version 0.97, you will need to regenerate your modules for...EKAWAS/OWL2Perl-1.00 - 12 May 2011 23:52:42 GMT - Search in distribution | https://metacpan.org/search?q=Params-Util | CC-MAIN-2015-11 | refinedweb | 587 | 55.13 |
I finally have a new laptop and it's about to be a new fiscal year, so I am starting off my new GIS project with a clean slate--as in I'm kicking all my kludged together maps and bloated geodatabase with dozens and dozens of orphaned items to the curb. I have been putting off using Arcade, but since I also plan on publishing more content to our portal, I want to start using it to make life easier for those online maps.
My current dilemma is that I can't figure out how to manipulate a label string the way I need to. I have a field (Office Name) that includes the City and State. To make labelling easier, I want to remove the last three characters in that field, which is a space and then the two character state abbreviation. With Python, I can do it thusly:
def FindLabel ( [OFC_NM] 😞
L = [OFC_NM]
L = L.title()
L = L[:-3]
return L
How the heck do I this (properly) with Arcade?
arcade formatting arcade language string.format
Thanks for the reply, Joe. Unfortunately those functions don't do what I need to be done, at least not as they are written/documented in the help file. I actually played around with that prior to my post (should've mentioned that in my original post.)
Although using the Left function would give me the characters to the left of the string, which is what I need, there is no consistent length to the office names.
Python deals with this type of situation by allowing you to use negative numbers so using something like
L = "New Orleans, LA"
L[:-3]
gives you "New Orleans" when the string in the field is "New Orleans, LA". I tried using the same logic with Arcade
Right('New Orleans, LA', -3)
But that does not work and the labels don't actually render.
Hi Chris Lope
Have a look at this expression based on the suggestion by jborgion
var txt = 'New Orleans, LA';
return Left(txt, Count(txt)-4);
This will return "New Orleans" (without the comma since it took off 4 characters) | https://community.esri.com/t5/arcgis-pro-questions/how-do-i-remove-x-characters-from-the-right-using/m-p/180773/highlight/true | CC-MAIN-2021-39 | refinedweb | 359 | 65.15 |
Java Map between pairs and values
I need to create a map which will cache results of a third party lookup service. The request is made up of two objects for example, time and month. The map needs to map between (time, month) and a result.
My initial idea is to make an object to wrap time and month into effectively a tuple object, so the cache is a map between this object and the result.
Is there a better way of doing this, without needing to wrap the request into the tuple object each time we need to use the cache?
Many thanks.
Answers
My initial idea is to make an object to wrap time and month into effectively a tuple object
That's the right idea. Override hashCode() and equals(Object) of your tuple to make it work with HashMap<TimeMonthTuple>, or compareTo(TimeMonthTuple) to make it work with TreeMap<TimeMonthTuple>
Is there a better way of doing this?
This is the most straightforward way, unless you have a class that can replace TimeMonthTuple with something that makes sense. For example, time and date could be combined into a Date object.
In certain cases you could make a key based on a primitive wrapper. For example, if time is expressed as the number of minutes since midnight and month is a number between 1 and 12, inclusive, you could wrap both values into an Integer, and use it as a key:
Integer makeTimeMonthKey(int time, int month) { return (time * 12) + (month - 1); }
You should take a look at Guava Tables if you want to avoid creating a wrapper object.
I'll do it this way too, but be careful when defining a Key of an HashMap: it should be immutable, since otherwise changing it could compromise the mapping and hashing, and should implement the two requirements of hash key: hashCode and equals methods. Something like this:
final class YourWrapper { private final Integer month; private final Integer time; public YourWrapper(Integer month, Integer time) { this.month = month; this.time = time; } public Integer getMonth() { return month; } public Integer getTime() { return time; } @Override public int hashCode() { return month.hashCode() ^ time.hashCode(); } @Override public boolean equals(Object obj) { return (obj instanceof YourWrapper) && ((YourWrapper) obj).month.equals(month) && ((YourWrapper) obj).time.equals(time); } }
You can create a map of maps:
Map<MonthType, Map<TimeType, Value>> map;
so you'd call:
Value value = map.get(month).get(time);
to retrieve a value (provided you have previously added a value for month).
It's not particularly nice to use directly, however, since you'd need lots of containsKey/null checks. You could wrap it up into a convenience class:
class MapOfMaps { final Map<MonthType, Map<TimeType, Value>> map = new HashMap<>(); void put(MonthType month, TimeType time, Value value) { Map<TimeType, Value> timeMap; if (map.containsKey(month)) { timeMap = map.get(month); } else { timeMap = new HashMap<>(); map.put(month, timeMap); } timeMap.put(time, value); } Value get(MonthType month, TimeType time) { if (!map.containsKey(month)) { return null; } return map.get(month).get(time); } }
If you don't want to use an Map of Maps (not very nice, but it works) you still have some choices... for example storing month and time in a String, using something like DateFormat ds = new SimpleDateFormat(MM HH:mm:s), then using to convert a Calendar Object filled with your values
Calendar cal = Calendar.getInstance(); cal.set(Calendar.MONTH, yourmonth); cal.set(Calendar.HOUR_OF_DAY, yourhours); cal.set(Calendar.MINUTE, yourminutes); cal.set(Calendar.SECOND, yoursecs); String val_to_store=ds.format(cal.getTime());
Or, maybe, you could store the calendar object.
Excellent question!
First of all, the answer of @dasblinkenlight is correct, let's say, most of the time. Constructing a single object key is the most straight forward and obvious solution. It is easy and clear to understand and quite efficient. If this cache is not the hotspot of your application, no second thought needed.
However, there are alternatives, which may yield better efficiency.
Conceptually there are two possibilities:
- Construct a single key object for the compound keys. That's a common pattern and quite typical if you use compound keys for database access
- Do a two, or multi level hierarchical lookup, e.g.: store.get(month).get(time)
For the hierarchical lookup, no additional object allocation is needed, however, you trade it in for a second hash table access. To be most memory efficient, it is important to put the key with the smallest value space first.
If this is a very central place of your application, the even better approach is to put the first lookup stage, the twelve months, in an array and initialize it on startup:
Cache<Time, Value>[] month2ValueCache = new Cache<Time, Value>[12]; { for (int i = 0; i < 12; i++) { month2ValueCache[i] = new Cache<Time, Value>(...); } } Value get(int month, Time, time) { return month2ValueCache[month].get(time); }
I did a comparison benchmark for formatting dates with the DateFromatter. This seams similar to your use case. This has actually three key components: date, format and locale. You can find it here:
My result was, that there is actually not much runtime difference between allocating an object for the compound keys, or the three level cache lookup without object allocation for the key. However, the used benchmark framework does not take into account the garbage collection correctly. I will do a more thorough evaluation after I switched to another benchmark framework.
Need Your Help
How to change the item's width and height when we use StaggeredGridLayoutManager
android android-recyclerviewI know how to use StaggeredGridLayoutManager and RecyclerView to draw a staggered grid, like this.pic For staggered grid . But in this case all the item has the same width. I want change the width ...
iPhone CALayer Stacking Order
iphone core-animation core-graphics calayer drawrectI'm using CALayers to draw to a UITableViewCell. I'm trying to figure out how layers are ordered with the content of the UITableViewCell. For instance: | http://www.brokencontrollers.com/faq/34025539.shtml | CC-MAIN-2019-26 | refinedweb | 991 | 55.13 |
- The Anatomy of a FieldTemplate.
- Your First FieldTemplate.
- An Advanced FieldTemplate.
- A Second Advanced FieldTemplate.
- An Advanced FieldTemplate with a GridView.
- An Advanced FieldTemplate with a DetailsView.
- An Advanced FieldTemplate with a GridView/DetailsView Project.
The idea for this FieldTemplate came from Nigel Basel's post on the Dynamic Data forum, where he said he needed to have a GridView embedded in another data control (e.g. a FormView) so he could use the AjaxToolkit Tab control. So here it is, with some explanation.
Files Required for Project
In this project we are going to convert a PageTemplate into a FieldTemplate, so you will need to copy List.aspx and its code-behind List.aspx.cs into your project's FieldTemplates folder. Once copied, rename the files to GridView_Edit.ascx and GridView_Edit.ascx.cs and change the class name; see Listings 1 & 2.
<%@ Page Language="C#" MasterPageFile="~/Site.master" CodeFile="List.aspx.cs" Inherits="List" %>

to

<%@ Control Language="C#" CodeFile="GridView_Edit.ascx.cs" Inherits="GridView_EditField" %>
Listing 1 – Changing the Page directive to a Control directive in the GridView_Edit.ascx file
public partial class List : System.Web.UI.Page {
    ...
}

to

public partial class GridView_EditField : FieldTemplateUserControl {
    ...
}

Listing 2 – Changing the class name of the GridView_Edit.ascx.cs code behind file
Now, in the GridView_Edit.ascx file, trim out all the page-specific code:
i.e. remove the following tags (and their closing tags where applicable)
<asp:Content
<asp:UpdatePanel
<ContentTemplate>
<%@ Register src="~/DynamicData/Content/FilterUserControl.ascx" tagname="DynamicFilter" tagprefix="asp" %>
<asp:DynamicDataManager
Also remove the Columns tags and everything in them from the GridView, and then add the following properties to the GridView:
AutoGenerateColumns="true" AutoGenerateDeleteButton="true" AutoGenerateEditButton="true"
and you should end up with something like Listing 3.
<%@ Control Language="C#" CodeFile="GridView_Edit.ascx.cs" Inherits="GridView_EditField" %>

<asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server" />

<asp:ValidationSummary ID="ValidationSummary1" runat="server" />
<asp:DynamicValidator ID="GridViewValidator" runat="server"
    ControlToValidate="GridView1" Display="None" />

<asp:GridView ID="GridView1" runat="server" DataSourceID="GridDataSource"
    AutoGenerateColumns="true"
    AutoGenerateDeleteButton="true"
    AutoGenerateEditButton="true">
</asp:GridView>

<asp:LinqDataSource ID="GridDataSource" runat="server">
</asp:LinqDataSource>
Listing 3 – the finished GridView_Edit.ascx file
Now we'll trim down GridView_Edit.ascx.cs. The first things to remove are the following methods and event handlers:
protected void Page_Load(object sender, EventArgs e)
protected void OnFilterSelectedIndexChanged(object sender, EventArgs e)
This will leave us with just the Page_Init to fill in; see Listing 4.
protected void Page_Init(object sender, EventArgs e)
{
    var metaChildColumn = Column as MetaChildrenColumn;

    var attribute = Column.Attributes.OfType<ShowColumnsAttribute>().SingleOrDefault();
    if (attribute != null)
    {
        if (!attribute.EnableDelete)
            EnableDelete = false;
        if (!attribute.EnableUpdate)
            EnableUpdate = false;
        if (attribute.DisplayColumns.Length > 0)
            DisplayColumns = attribute.DisplayColumns;
    }

    // test metaChildColumn for null before dereferencing it, otherwise a
    // NullReferenceException would mask the intended error below
    var metaForeignKeyColumn = (metaChildColumn != null)
        ? metaChildColumn.ColumnInOtherTable as MetaForeignKeyColumn
        : null;

    if (metaChildColumn != null && metaForeignKeyColumn != null)
    {
        GridDataSource.ContextTypeName = metaChildColumn.ChildTable.DataContextType.Name;
        GridDataSource.TableName = metaChildColumn.ChildTable.Name;

        // enable update, delete and insert
        GridDataSource.EnableDelete = EnableDelete;
        GridDataSource.EnableInsert = EnableInsert;
        GridDataSource.EnableUpdate = EnableUpdate;
        GridView1.AutoGenerateDeleteButton = EnableDelete;
        GridView1.AutoGenerateEditButton = EnableUpdate;

        // get an instance of the MetaTable
        table = GridDataSource.GetTable();

        // generate the columns, as we can't rely on
        // the DynamicDataManager to do it for us
        GridView1.ColumnsGenerator = new FieldTemplateRowGenerator(table, DisplayColumns);

        // setup the GridView's DataKeys
        String[] keys = new String[metaChildColumn.ChildTable.PrimaryKeyColumns.Count];
        int i = 0;
        foreach (var keyColumn in metaChildColumn.ChildTable.PrimaryKeyColumns)
        {
            keys[i] = keyColumn.Name;
            i++;
        }
        GridView1.DataKeyNames = keys;

        GridDataSource.AutoGenerateWhereClause = true;
    }
    else
    {
        // throw an error if used on a column other than a MetaChildrenColumn
        throw new InvalidOperationException(
            "The GridView FieldTemplate can only be used with MetaChildrenColumns");
    }
}
Listing 4 – the Page_Init ***UPDATED 2008/09/24***
protected override void OnDataBinding(EventArgs e) { base.OnDataBinding(e); var metaChildrenColumn = Column as MetaChildrenColumn; var metaForeignKeyColumn = metaChildrenColumn.ColumnInOtherTable as MetaForeignKeyColumn; // get the association attributes associated with MetaChildrenColumns var association = metaChildrenColumn.Attributes. OfType<System.Data.Linq.Mapping.AssociationAttribute>().FirstOrDefault(); if (metaForeignKeyColumn != null && association != null) { // get keys ThisKey and OtherKey into Pairs var keys = new Dictionary<String, String>(); var seperator = new char[] { ',' }; var thisKeys = association.ThisKey.Split(seperator); var otherKeys = association.OtherKey.Split(seperator); for (int i = 0; i < thisKeys.Length; i++) { keys.Add(otherKeys[i], thisKeys[i]); } // setup the where clause // support composite foreign keys foreach (String fkName in metaForeignKeyColumn.ForeignKeyNames) { // get the current pk column var fkColumn = metaChildrenColumn.ChildTable.GetColumn(fkName); // setup parameter var param = new Parameter(); param.Name = fkColumn.Name; param.Type = fkColumn.TypeCode; // get the PK value for this FK column using the fk pk pairs param.DefaultValue = Request.QueryString[keys[fkName]]; // add the where clause GridDataSource.WhereParameters.Add(param); } } // doing the work of this above because we can't // set the DynamicDataManager table or where values //DynamicDataManager1.RegisterControl(GridView1, false); }
Listing 4a – OnDataBinding event handler ***ADDED 2008/09/24***
And now we will need to implement the GridViewColumnGenerator to fill in the rows as the DynamicDataManager would have done.
public class GridViewColumnGenerator : IAutoFieldGenerator { protected MetaTable _table; public GridViewColumnGenerator(MetaTable table) { _table = table; } public ICollection GenerateFields(Control control) { List<DynamicField> oFields = new List<DynamicField>(); foreach (var column in _table.Columns) { // carry on the loop at the next column // if scaffold table is set to false or DenyRead if (!column.Scaffold) continue; DynamicField field = new DynamicField(); field.DataField = column.Name; oFields.Add(field); } return oFields; } }Listing 5 – the GridViewColumnGenerator class
In Listing 5 we have the GridViewColumnGenerator class which you can just tag onto the end the GridView_Edit.ascx.cs file as it is only used here.
[AttributeUsage(AttributeTargets.Property, AllowMultiple = false)] public class GridViewTemplateAttribute : Attribute { public String ForeignKeyColumn { get; private set; } public GridViewTemplateAttribute(string foreignKeyColumn) { ForeignKeyColumn = foreignKeyColumn; } } Listing 6 – GridViewTemplateAttribute for the above ***UPDATE 2008/08/08 *** :D
Figure 1 GridView_Edit FieldTemplate in action
[UIHint("GridView")] public object Order_Details { get; set; }
Listing 7 – Metadata
Until next time
27 comments:
Thanks for all the great information. The grid view sample helps expose even more of the capabilities of Dynamic Data and your work with the security model is right where a lot of people are thinking.
Keep up the excellent work!
Thanks Angry Tech Guy glad you like it, where are you from?
From Maine, USA.
Hello,
That is just excellent but ..
is this possible to do the same thing using Entities instead of LINQ to SQL ?
Yes it is possible I will get around to doing a post on that when we get to Beta 2 of VS2010/.Net 4.0
Steve :D
No offence, but the above is almost impossible to read, what with all the edits, deletes, etc..
if you go to the final article in the series you will find the source code
Steve :D
P.S. the article is old and had many changes and I wanted readers to see my errors.?
Any chance of post LINQ to SQL?
Cheers
"is this possible to do the same thing using Entities instead of LINQ to SQL ?
1 June 2009 16:58
Steve said...
Yes it is possible I will get around to doing a post on that when we get to Beta 2 of VS2010/.Net 4.0"
I'm sure it should but I havent tried as all my projects have been Linq to SQL.
Steve :D
P.S. with the advent of VS2010 I think all my projects will go EF :)
Where is ShowColumnsAttribute? I'm using 2010 RC, is that class no longer part of the dynamic data libraries?
Hi there, ShowColumnsAttribute was one of my custom attributes see my latest blog post for stuff on VS2010 RC and newer.
Steve :D
Instead of:
[code]
GridDataSource.ContextTypeName = metaChildColumn.ChildTable.DataContextType.Name;
[/code]
you should use:
[code]
GridDataSource.ContextTypeName = metaChildColumn.ChildTable.DataContextType.FullName;
[/code]
And in C# 4 instead of :
[code]
if (!attribute.EnableDelete)
EnableDelete = false;
if (!attribute.EnableUpdate)
EnableUpdate = false;
[/code]
you should use:
[code]
EnableDelete = attribute.EnableDelete;
EnableUpdate = attribute.EnableUpdate;
[/code]
Hi paulovila, this article was for DD1 in 2008 and is now out of date but thanks for the update I will try and fix it sometime.
Steve
Hi Steve,
Is there updated version of of this? I am having issues porting this to VS2010 EF.
Thanks,
San
PS: Love ur work, I have learned a lot Thanks
I will be doing an article for the child grid for VS2010 and EF soon
Steve.
Hi,
Thanks for all the publishing on DynamicData. Appreciate it.
When is the new child gridview article coming? Waiting for it!
I still have at least one problem with this approach, the ValidationException(s) are not caught somehow (for db errors) and the "yellow screen of death" as you intimately call it, is still disturbing me!.
If only I could tweak it...
A bit frustrating!
Hi Lener, I am working on a new version of the ChildrenList as I've now called it :) for .Net 3.5 and .Net 4 and Linq to SQL and Entity Framework. I will do a new article when work schedule allows..
Steve :)
Hello sir,
Hi Faizal, I do have somthing like that but I have made it puclic yet sorry, I hope to get it on NuGet when I get the time but stuck doing paid work at the moment.
Got to eat :D
Steve
Hi,:
"The method 'Skip' is only supported for sorted input in LINQ to Entities. The method 'OrderBy' must be called before the method 'Skip'."
Some other resources say this has to do with using the wrong type of dynamic data project. Your example is in LINQ to SQL. How can I modify your code to work with LINQ to Entities (or am I missing something and that is not the problem)?
Thanks,
Jonathan
Hi Jonathan, yes I already did that just not published it I can send you the code if you direct e-mail me
Steve
Hi,
Great post. Any idea if it's possible to insert one or more new "order_details" in the gridview before sending the whole bunch into the database via the FormView's "update" button?
Thanks,
red.
Steve
Hi,
Thanks for this great work. Do you have already published somewhere the version with Linq to Entity Framework?
Thanks,
Marco
I have but you will need to e-mail me if you want just the bare minimum sample else you can look on my open source project here
Steve | http://csharpbits.notaclue.net/2008/08/dynamic-data-and-field-templates.html?showComment=1353602692916 | CC-MAIN-2019-35 | refinedweb | 1,625 | 50.43 |
0
Kind of stuck here.
I'm porting over a section of code from Java to C#, and I'm having an issue with method permissions constructors.
Early in my code, i have this class..
public class TCard { int suit; int rank; }
and later i've got this method, which throws a bunch of errors.
Identifiers highlighted with the double asterixes throw the "is innaccessible due to its protection level" error.
I've tried rebuilding my solution, and this hasn't worked.
public void loadDeck(TCard[] deck) { int count; for (count = 1; count <= 52; count++) { deck[count] = new TCard(); deck[count].**suit** = 0; deck[count].**rank** = 0; } String lineFromFile; string currentFile = ""; currentFile = new AQAReadTextFile2014(); currentFile.openTextFile("deck.txt"); count = 1; lineFromFile = currentFile.readLine(); while (lineFromFile != null) { deck[count].**suit** = Integer.parseInt(lineFromFile); lineFromFile = currentFile.readLine(); deck[count].**rank** = Integer.parseInt(lineFromFile); lineFromFile = currentFile.readLine(); count = count + 1; } currentFile.closeFile(); }
There's probably some kind of blatant error here.
This is my first time porting from C# to Java, so forgive me if I've gotten myself confused between the two.
Edited 2 Years Ago by humorousone | https://www.daniweb.com/programming/software-development/threads/487353/class-is-inaccessible-due-to-its-protection-level | CC-MAIN-2016-50 | refinedweb | 186 | 53.58 |
If I Don't Actually Like My Users?
Posted Apr 4, 2008 10:15 UTC (Fri) by dgm (subscriber, #49227)
I miss a case: "You will get it wrong but it will appear to work anyway".
Posted Apr 4, 2008 14:02 UTC (Fri) by nix (subscriber, #2304)
'It tries to intelligently DWIM and does the right thing nearly all the
time, except when you really need it to, when it gets confused and does
exactly the worst possible thing'.
Intelligence (-> irregular, DWIM behaviour) is useful, especially in user
interfaces, but *you must be able to turn it off*.
Posted Apr 10, 2008 4:43 UTC (Thu) by NRArnot (subscriber, #3033)
Or more honestly
"It tries to rather simplemindedly DWIM and does the right thing often enough to keep a dimwit
happy. Indeed, you, the dimwit programmer, will be very happy indeed, because no other
interface to the system is provided, and the hotshots who might wish to show you up will be no
more able to use it reliably than you are"
(Remind you of a certain proprietary operating system at all?)
Posted Apr 10, 2008 17:37 UTC (Thu) by nix (subscriber, #2304)
It reminds me of entirely too much of my own code before I realised the
problems with it. (The other problematic thing: thoughtless information
hiding. Yes, reducing coupling is good, but if you have an internal
parameter affecting the behaviour of the system, *export it* somehow, if
need be by way of a separate wrapping shared library with different
interface guarantees, so you can change the implementation and eliminate
or change those parameters, breaking the interface of that wrapping
library, without breaking the interface of the 'real' library. Why? So
that testsuites, not necessarily just those you write, can peek at enough
of the library's internal state that they can guarantee that they've
exercised all its corners.)
Posted Apr 11, 2008 0:18 UTC (Fri) by nix (subscriber, #2304)
Good grief. I'm sorry about perpetrating that horrific run-on sentence. It
just sort of... metastasized without my realising it.
Posted Apr 4, 2008 10:27 UTC (Fri) by lokpest (subscriber, #45764)
If you're thinking about throwing real end users in the blender, get help!
...they are heavy...
Posted Apr 4, 2008 11:02 UTC (Fri) by JoeBuck (subscriber, #2330)
Posted Apr 4, 2008 14:06 UTC (Fri) by nix (subscriber, #2304)
Hm, yes. I was going to say that tmpnam() et al are impossible to get
right, but they're not: you can run them in constrained environments in
which you know you won't get attacked. You don't *need* to be attacked for
gets() to shoot you in the head.
(Why oh why were gets(), puts() and the other pre-stdio functions not
quietly retired when stdio was invented? At least gets() is rarely used in
free software, although probably not as rarely as seekdir()/telldir(),
which I've never even heard of anyone using.)
Posted Apr 4, 2008 14:38 UTC (Fri) by xbobx (guest, #51363)
#include <stdio.h>
int main(void) {
printf("hi\n");
return 0;
}
sub $0x8,%rsp
mov $0x4005f8,%edi
callq 400430 <puts@plt>
xor %eax,%eax
add $0x8,%rsp
retq
Posted Apr 4, 2008 14:49 UTC (Fri) by nix (subscriber, #2304)
Well, yeah, but as the analogue of gets(), entirely redundant to fputs(),
both should have gone if either do, and gets() certainly should have gone.
But it didn't.
Posted Apr 4, 2008 21:33 UTC (Fri) by njs (subscriber, #40338)
Any use at all of gets() does cause a linker warning, at least.
Posted May 19, 2008 2:32 UTC (Mon) by TBBle (guest, #52146)
> At least gets() is rarely used in
> free software, although probably not as rarely as seekdir()/telldir(),
> which I've never even heard of anyone using.
Samba uses it...
Mind you, I wouldn't have known Samba was using it either (and in fact it took me a little
while to wrap my head around why) before I saw that article.
Posted May 19, 2008 13:55 UTC (Mon) by nix (subscriber, #2304)
So a really quite substantial bug (affecting perhaps 3% of all calls to
this function in nontrivial directories) persisted for *a quarter of a
century* before anyone noticed it.
I suspect that seekdir()/telldir() has exactly one user: Samba. Given how
horrible it makes filesystem implementations, and the closeness of Samba
implementors to the kernel, I'm not sure that it's worth preserving this
function for that one user (which is privileged in any case so the usual
oops-it-might-use-up-too-much-memory arguments against a naive
entirely-in-VFS implementation do not apply).
Votes to make seekdir()/telldir() root-only, anyone?
Pedantry
Posted Apr 5, 2008 3:46 UTC (Sat) by tialaramex (subscriber, #21167)
It's not /impossible/ to get right although I agree that it provides no functionality you
would otherwise want that's not better implemented elsewhere in a modern POSIX API.
It's not impossible for the same reason that the non-thread safe functions aren't impossible
to get right, and the temporary file functions that aren't protected against races aren't
impossible to use safely. So long as you don't need that safety, they're fine. Similarly
gets() is fine so long as you don't need any protection against over-long strings.
e.g. suppose you have a socket, over which you receive instructions from a device driver in
the kernel, in this case gets() can be appropriate because the inverted privilege separation
means that crashing you with a buffer overflow achieves nothing - as a userspace process you
actually have less privileges than the kernel driver calling you. So long as there's an agreed
rule about buffer size (e.g. each instruction is the name of a file, so max pathname plus a
newline) and a mechanism ensuring that you are connected to that kernel driver as intended,
then gets() adds no new danger to your code.
Still, this is such a corner case, and there are so many better ways to approach this problem,
that the compiler/ linker is justified in complaining if you use this function in new code.
I've never found a sensible use for gets() in my own code, but I have used various other
"non-safe" functions, like tmpnam in contexts where the newer "safer" function didn't do what
I needed, and I judged that the safety issue was something I could cope with after reading
about it. I think I had to roll my own mkdtemp() some years ago for example, using "unsafe"
tmpnam and lots of careful reading of a treatise on race conditions in the filesystem.
A good example of an API that's _really_ impossible to use is one that Raymond Chen rants
about regularly, a Windows API call which purports to tell you whether a pointer is "valid" ie
whether you'd take a page fault if you tried to access it. This is simultaneously useless (it
doesn't do anything a userspace programmer should be trying to do) and dangerous (it will
itself inadvertently page in RAM if you call it on the edge of the stack for example) and
unreliable (the page may appear or vanish before control returns to your program with the
result). But having been foolishly offered to programmers in the past, Windows continues to
provide it for compatibility.
Posted Apr 5, 2008 4:05 UTC (Sat) by tialaramex (subscriber, #21167)
Ah, I got part of that last example wrong. It's dangerous because it will disable the stack
extension, ie it doesn't cause a new page to be mapped at the edge of the stack, but rather
overrides the runtime's automatic mapping of that page, returns to you a result that it isn't
yet mapped, but doesn't restore the stack extension behaviour. So suddenly, and without
explanation, your stack can't grow.
For a POSIX example that's less dangerous but just as unreliable/ useless, try access(2) which
attempts to guess whether you'd be allowed to do something, but can't assure you whether you
will or will not be able to do it if you actually try. It's not even useful for checks like
those in ssh which wants to see if you obeyed the rules for your own safety, since ssh needs
to check access permissions for /other users/.
Posted Apr 4, 2008 15:42 UTC (Fri) by im14u2c (subscriber, #5246)
Whoo hoo! I see fputs made the list. It's in the family of functions I complained about last week.
Some years ago, I wrote a bunch of code that treated the VT-100 display as a "frame buffer." You'd doodle on an array representing the screen, and then you'd call an update function to refresh the display. Nothing too earth shattering. My intention was to get the C code working, and then whittle it down into a tiny graphic hack as an IOCCC entry.
Well, the IOCCC entry never happened, but I found myself instead using the code for more and more things. I threw all sorts of stuff into the stew. The resulting code was a nightmare, particularly on the topic of coordinates. I had no fewer than 3 separate conventions:
Oy. Thankfully I can say that was almost half a lifetime ago for me.
Posted Apr 4, 2008 19:32 UTC (Fri) by pr1268 (subscriber, #24648)
Oy. Thankfully I can say that was almost half a lifetime ago for me.
Darn... That happened just yesterday for me (it was with the blasted parameters to a function call).
In that same program, I experienced the below bundle of joy (and what is it with these stupid size_t types?!) Code:
size_t moonWalk(const vector<MyThingy>& stuff)
{
size_t i = h_list.size();
while(i >= 0) {
/* do something with */ stuff[--i];
}
return i;
}
Oh, so subtle! It was a long afternoon of troubleshooting before I figured out why Mr. Fault (first name: Segmentation) kept interrupting my otherwise blissful coding session. (Note to self: Be sure to add brown paper bags to shopping list.)
Problem and solution
Posted Apr 5, 2008 5:01 UTC (Sat) by man_ls (subscriber, #15091)
Posted Apr 5, 2008 6:08 UTC (Sat) by emk (subscriber, #1128)
It took me a second, too, because it's using a strange (incorrect) looping idiom. Here, size_t is an unsigned type, so it can never be negative. If i == 0, and you write --i, it wraps around to a huge positive value. To fix it, try:
while (i > 0) {
    /* do something with */ stuff[--i];
}
Thanks to the unsigned nature of size_t, it's surprisingly hard to loop backwards over STL vectors without tripping over this bug. You could choose to use reverse iterators, which are clunky but safer:
vector<MyThingy>::const_reverse_iterator iter = stuff.rbegin();
for (; iter != stuff.rend(); ++iter) {
/* do something with */ *iter;
}
Also, I have no idea why the original function returns i. Was there a break somewhere in the original loop? Without one, i would always equal 0 (assuming the loop terminated successfully).
Posted Apr 6, 2008 6:55 UTC (Sun) by olecom (guest, #42886)
> while (i > 0) { stuff[--i];
when using predecrement, use
i=max+1; do { --i; } while(i);
Posted Apr 6, 2008 7:20 UTC (Sun) by olecom (guest, #42886)
> > while (i > 0) { stuff[--i];
> when using predecrement, use
>
> i=max+1; do { --i; } while(i);
Also beware on input!
max -- is maximum linear address
i -- geek counter, counts downto zero including;
thus, number of loops is (max + 1)
better is to have *all* counts downto zero,
i.e. zero address access and
i = max /* number of loops is (max + 1) */
do {
/* use */ stuff[i];
} while (--i);
Posted Apr 6, 2008 9:20 UTC (Sun) by man_ls (subscriber, #15091)
What if the array is empty? A do/while body runs at least once; while (i>0) {--i;} doesn't.
Posted Apr 6, 2008 23:11 UTC (Sun) by olecom (guest, #42886)
> What if the array is empty?
Same thing -- check your input. `if' for check, `while' for loop/iterations.
Mixing the two isn't a good thing, worst optimization ever.
Posted Apr 7, 2008 0:17 UTC (Mon) by man_ls (subscriber, #15091)
By the way, with Java your best optimization is to write while (i != 0) instead of while (i > 0), because otherwise the JVM will perform arithmetic comparisons instead of logical. (Yes, it is pitiful.) I believe they have fixed it now, but it would be nice to know for sure.
just correct loops (Problem and solution)
Posted Apr 7, 2008 2:28 UTC (Mon) by olecom (guest, #42886)
--
sed 'sed && sh + olecom = love' << ''
-o--=O`C
#oo'L O
<___=E M
Posted Apr 10, 2008 9:13 UTC (Thu) by DennisJ (subscriber, #14700)
Supposing a signed type had been used for the counter, and nothing changed to the loop
condition, the last pass through the loop would have done something to stuff[-1] which is also
unlikely to be what the author wanted.
So isn't 'while(i>0)' the correct solution following an incorrect analysis?
Posted Apr 5, 2008 9:51 UTC (Sat) by nix (subscriber, #2304)
-Wall gives you a nice helpful warning about this case.
(I use size_t (and where appropriate ssize_t) religiously for things like
array indexes, so I hardly ever see this problem. It only turns up when you
have to interface with code that was written by people who don't
understand that the index to arrays in C is *not* a fricking int or a
long, it's a size_t... this is starting to matter now that int and long
are different sizes again on some platforms.)
Posted Apr 5, 2008 10:36 UTC (Sat) by im14u2c (subscriber, #5246)
[Link]
Erm... how often do you have 2 billion elements in a single array? It's only when you get
larger than that in a *single array* that int vs. size_t matters for an *array index*.
Using more than 2 gigs of memory isn't unheard of, and is in fact quite common. But, using
that much memory in a *single array* seems rather suspect. Actually, no, it seems rather
outrageous, unless your screen display is a 1200 dpi bitmap on a 24" screen or something.
Posted Apr 5, 2008 11:43 UTC (Sat) by nix (subscriber, #2304)
It's not common. I'm just a correctness fiend. :)
Posted Apr 5, 2008 16:13 UTC (Sat) by im14u2c (subscriber, #5246)
Of course, if your size_t results in bugs like the one above (downcounters that just don't
quit), I'd argue it hurts correctness.
I don't recall anything in the C standard that suggests size_t is actually an appropriate type
for array indexes. It *is* an appropriate type to pass to malloc, but that's what pops out of
sizeof(type). It says nothing about the type of the index that you'll use on the resulting
array.
Posted Apr 5, 2008 18:29 UTC (Sat) by nix (subscriber, #2304)
The Standard implies it, and the implication is fairly obvious as these
things go. Let's follow through the logic.
size_t is the upper limit on the size of any object in C, and arrays (like
other derived types) are themselves objects (they are not functions nor
incomplete types, the other classes of type).
The smallest addressable object type in C is 'char', which by definition
occupies one byte; thus, the largest possible array is an array of char of
size (size_t)-1.
Thus, the largest possible array index is by definition always the same as
the largest possible allocated object, i.e., contained exactly within
size_t.
Use another type and it will eventually hurt you. (If your algorithms rely
on decrementing index counters below zero, I'd say they are themselves
risky and should be rethought, because if you use that index, you'll be
indexing an array before its start, which if it goes off the start of an
allocated object invokes undefined behaviour.)
(As further evidence, the Standard contains a --- non-normative ---
example of using sizeof to determine the length of an array, which
implies that the length of an array is a size_t, so its index probably is
too...)
This concludes today's ludicrous pedantry. Don't make the mistake of
thinking that any of this stuff is actually important. :)
Posted Apr 5, 2008 19:25 UTC (Sat) by im14u2c (subscriber, #5246)
I guess you could be even more pedantic and put 'U' suffixes on your array bounds too: int array[3U][5U]; ;-)
As for down-counting loops: The counter going negative is a red herring in terms of correct array accesses. What do the following two loops have in common?
for (i = 0; i < N; i++)
do_something(array[i]);
for (i = N-1; i >= 0; i--)
do_something(array[i]);
Answer? Both leave 'i' pointing one element past the end of the array. The only difference is which end.
I personally find negative array subscripting useful. The following is legitimate C code:
/* Take a histogram of signed 8-bit values */
int histogram[256];
int *hist_mid = histogram + 128;
signed char *data;
/* ... */
for (i = 0; i < N; i++)
hist_mid[data[i]]++;
And as far as the standard goes, at least this example from the C0x standard uses int to define array bounds (in the context of the new "Variable Length Array" feature being added to C).
*shrug*
You're right, though, it doesn't matter a whole lot. Just don't take my signed integer indices away, and I'll let you keep your unsigned ones. :-)
Posted Apr 6, 2008 8:28 UTC (Sun) by jbh (subscriber, #494)
The largest single array I've worked with lately was about 500 million 64-bit doubles (the
values array of a CRS matrix).
So I don't think 2 billion elements seems outrageous.
But I wouldn't run that on a 32-bit cpu, for obvious reasons.
Posted Apr 7, 2008 0:50 UTC (Mon) by gdt (subscriber, #6284)
Q: Erm... how often do you have 2 billion elements in a single array?
A: When it's being exploited.
brought to you by the letters R, G and B...
Posted Apr 16, 2008 15:49 UTC (Wed) by roelofs (subscriber, #2599)
unsigned char *pix = (unsigned char *)malloc(32768*32768*3*sizeof(unsigned char));
Not even a tiny bit far-fetched.
Greg
Posted Apr 16, 2008 15:55 UTC (Wed) by im14u2c (subscriber, #5246)
Hmm... GIMP at least tiles that and uses a higher order tile structure. Are there really
programs that actually allocate that in a single 3GB malloc()?
Posted Apr 16, 2008 16:19 UTC (Wed) by roelofs (subscriber, #2599)
// int, long, or other signed type: w, h, npixels, bufsize
npixels = w * h;
bufsize = 3 * npixels;
if (w <= 0 || h <= 0 || npixels/w != h || bufsize/3 != npixels) {
FAIL();
}
buf = malloc(bufsize*sizeof(whatever_buf_is_made_of));
But realistically, you're right--you absolutely don't want to use a program that tries to allocate the whole thing simultaneously, regardless of whether it's in one piece or many. And you may want to avoid certain image formats for the same reason--tiled TIFF, for example, is well suited to very large images, but many (most?) other image formats are not. PNG, for all its simplicity (well, relative to TIFF, anyway), basically requires you to decode at least two full rows simultaneously, and rows can be up to 16 GB each (2^31 - 1 pixels wide * 8 bytes deep for 64-bit RGBA). Of course, 2G x 2G x 64-bit images are still in the realm of fantasy, AFAIK...
Correction and comment replies
Posted Apr 5, 2008 14:42 UTC (Sat) by pr1268 (subscriber, #24648)
s/h_list/stuff/ on line 3 of my code.
In reply to man_ls, as others have pointed out, a size_t is an unsigned integer type. Integer underflow occurred here with the pre-decrement --i array index.
In reply to emk, I did indeed change over to using a reverse iterator for this code, but I originally had some motivation not to do so, though the reason now escapes me. And to think this was just two days ago - goes to show just how fast thoughts and ideas flee through my mind when writing software!
Thanks for the replies - it's not just anywhere where I can share such embarrassing C++ code and still stimulate a nice discussion.
Posted Apr 8, 2008 8:16 UTC (Tue) by aya (guest, #19767)
Ah, but you forget you're using an STL class.
size_t moonWalk(const vector<MyThingy>& stuff)
{
for (vector<MyThingy>::const_reverse_iterator i= stuff.rbegin();
i != stuff.rend();
++i)
{
/* do something with *i */
}
/* I don't get what you're returning, though. */
}
Posted Apr 5, 2008 9:09 UTC (Sat) by Wummel (subscriber, #7591)
JavaScript's parseInt() has the same kind of trap: a leading zero makes it parse octal, so parseInt('08') returns 0 (the 8 is not a valid octal digit). You have to give the radix explicitly, parseInt('08', 10), to get decimal.
parseInt feature
Posted Apr 5, 2008 10:45 UTC (Sat) by ccyoung (subscriber, #16340)
thanks for that reminder - nothing like getting bit in the butt
Posted Apr 7, 2008 2:28 UTC (Mon) by Los__D (subscriber, #15263)
What's wrong with the 0 prefix giving octal? It does exactly the same in C, just as both C and
parseInt() treat 0x as hexadecimal.
Anyway, ActionScript v3 has dumped this, probably meaning it's also gone from ECMAScript, so I
guess JavaScript will pick up the changes sooner or later.
They kept the '0x' prefix for hexadecimal though, which seems a bit strange to me: either keep
them both, or dump them both. Oh well, maybe it's just because no one really uses octal in
ECMAScript and derived scripts.
Posted Apr 7, 2008 5:09 UTC (Mon) by dvdeug (subscriber, #10998)
What's wrong with the leading 0 making it octal is that if you have to deal with people who
aren't mathematicians or computer scientists, they probably won't know what octal is. If they
add a leading zero by accident, odds are they'll not be able to figure out what they did wrong
or why the results came out the way they did, and unless you're lucky enough to be standing
over their shoulder while they do it, their bug reports won't have enough information to
replicate it. The difference, to them, between typing in 0507 and 507 is nothing. Hence using
parseInt with a UI is going to cause rare, hard-to-trace problems in exchange for a feature
that no one cares about. (Really, even among computer geeks, few people use octal.)
Posted Apr 6, 2008 19:00 UTC (Sun) by gdt (subscriber, #6284)
I wrote this blog entry in reply to Rusty's first blog entry. I'll copy it here as my web server is on my home ADSL link.
The Linux kernel API, a user's view
Rusty's
on API design is timely, as I'm struggling with the
API for Linux's Netfilter. There's no shortage of HOWTOs on the topic
and no shortage of production code to examine.
The problem is bit rot. The API to establish connection tracking
has been deprecated, but the official HOWTO on the Netfilter website
hasn't been changed. There's no documentation of the new
nf_ct_expect_alloc() at all. A reasonable QA process would have
rejected a code change which didn't update the official
documentation.
The API to register a connection tracking helper has also silently
changed. nf_conntrack_helper_register() no longer accepts a
bitmask. Again the official documentation and sample code hasn't been
updated. The entirety of the documentation is a set of obscure commit
comments and a short NetDev list discussion. Without finding those and
understanding their significance you can't understand why the
production code works when it differs from the official sample code and the large collection of older code in Patch-o-matic.
The broader networking API also has a newish function:
skb_header_pointer(). All of the original SKB manipulation
functions have documentation in the header file. Somehow this new
function appeared with no documentation in that header file.
Source code is also poor at explaining why. Why and when should
skb_header_pointer() be used in preference to direct access to
the SKB's data? The source won't tell you that unless you are already
so immersed in the kernel that you half-know the answer anyway.
Source code can also mislead. For example, looking at existing
Netfilter code in the kernel would give you the idea that a 64KB
buffer is needed for parsing incoming packets in a Netfilter modules. That's
not true at all, it's just that all of the modules which have been
accepted into the kernel have needed a packet-sized buffer to parse
for IETF-style protocol text or to decode ASN.1.
After struggling through all of this, I'll lay odds that posting
the finished module will result in at least one put-down e-mail about
some misuse of some Linux API.
A final thought. Is there a kernel API at all? Can something
permanently partially obscured be said to exist? Or is the API like
the Loch Ness Monster? In place of blurry photographs we have
Linux device drivers,
where even those who closely
track the kernel API can be misled by poor design and worse
documentation such as with
in_atomic().
Again, text processing (as in prev. discussion)
Posted Apr 7, 2008 3:41 UTC (Mon) by olecom (guest, #42886)
Seems like you have working source code, isn't that enough? I mean,
somebody did that for you (and many many other/users). Maybe after that
any kind of documentation writing wasn't in IWANTNOW list of the author?
It's open source, many who use, few who contribute. So it can be boring and
upsetting for particular authors. Others can be outraged.
> After struggling through all of this, I'll lay odds that posting the
> finished module will result in at least one put-down e-mail about some
> misuse of some Linux API.
Maybe also a documentation patch and willingness to improve the kernel,
everyone needs? I'm sure original author will be happy.
> A final thought. Is there a kernel API at all?
I think it's just hard and boring to maintain. After some repetition
almost anything in programming can be automated. It's not harvesting or
fruit/mushroom collecting, which is by far manual-only work (making a
suitable robot is more complicated than rocket science).
Any repetition is boring for human or dumbing down.
Maybe tool set must be upgraded (diff+patch in any form is technology
of the 1980s)?
I've started to work with text processing to make some automation for
maintaining big changes, i.e not just one-liners, which can be grep'ed.
With some input from coding-style policy department and developers making
tags/clues comments in hard to textually analyse cases, something can be
done, and i think quite useful.
_____
On Mar 24, 2009, at 4:51 PM, Dylan Yudaken wrote:
> I apologise for being ridiculous and attaching the old file
>
> Dylan Yudaken wrote:
>> Hi,
>>
>> I have attached a new LGPL'd libavcodec/fdctref.c file as per my
>> summer of code small task.

Since it has an IDCT too, how about renaming it to dctref.c?
* p...@cpan.org [2019-01-17T06:25:04]
> I'm really disappointed as you have not wrote anything about this topic
> as people are already there...
I'm not dead. I am avoiding this list because I find your messages and attitude about it to be really off-putting, and whenever I come back, I feel like I'd rather go do something else.

My position remains: this code works, and any change needs to be well vetted, and so far I haven't spent enough time to be satisfied that it's okay to go. Maybe I'll have a go at it again soon.

When you first showed up to work on Email::MIME headers code, I said "you'll probably be happier if you just fork it," but you didn't. Now you're talking about trying to take the namespace over, which won't happen as long as I'm responsive to PAUSE admins. Which I am.

I am happy to work on this project some, but when I show up and see obnoxious messages about how I owe anybody anything, or how I'm a deadbeat, it's a pretty good way to get me to say, "Hey, I'm just going to say it works well enough and also now I'm going away."

I'll make another pass through the PRs next week. For now, I'm going to go do something more enjoyable.

--
rjbs
User talk:Thandruin
From Uncyclopedia, the content-free encyclopedia
Yada-yada-yada, admin being helpful and stuff... (By all means, I appreciate that)
Anyway - started working on some new project, just to prove how ridiculously abundant my spare-time is at the moment. Too Much Information Man - better wait to check it out until after finishing. It really looks like Junk at the moment --Thandruin 20:08, 13 September 2007 (UTC)
Welcome!
Hello, Thandru Change
Hello,
So you know for future reference; generally you do not write your article on the Pee Review page. You create an article, and then link to it from Pee review if you wish.
If you want to start a new article, the easiest way to do so is to create a link to a non existent article from an existing page - your userpage, for example. You can then click on this red link, and the wiki will ask if you want to create an article with this name.
On this occasion, I've moved the bulk of your article from Pee Review to namespace. If I get time later, I'll also review it.
--Cap'n Sir Ben GUN WotM VFH VFP 02:21, 14 June 2007 (UTC) | http://uncyclopedia.wikia.com/wiki/User_talk:Thandruin | CC-MAIN-2014-52 | refinedweb | 205 | 61.97 |
THttpEngine implementation, based on fastcgi package.
Allows redirecting HTTP requests from a normal web server like Apache2 or lighttpd.
Configuration example for lighttpd:
server.modules += ( "mod_fastcgi" )
fastcgi.server = (
   "/remote_scripts/" =>
     (( "host" => "192.168.1.11",
        "port" => 9000,
        "check-local" => "disable",
        "docroot" => "/"
     ))
)
When creating THttpServer, one should specify:
THttpServer* serv = new THttpServer("fastcgi:9000");
In this case, requests to lighttpd server will be redirected to ROOT session. Like:
Following additional options can be specified
top=foldername - name of top folder, seen in the browser
thrds=N - run N worker threads to process requests, default 10
debug=1 - run fastcgi server in debug mode
Example:
serv->CreateEngine("fastcgi:9000?top=fastcgiserver");
Definition at line 20 of file TFastCgi.h.
#include "TFastCgi.h"
normal constructor
Definition at line 314 of file TFastCgi.cxx.
destructor
Definition at line 322 of file TFastCgi.cxx.
create engine data
initializes fastcgi variables and start thread which will process incoming http requests
Reimplemented from THttpEngine.
Definition at line 343 of file TFastCgi.cxx.
Definition at line 36 of file TFastCgi.h.
Definition at line 42 of file TFastCgi.h.
Definition at line 40 of file TFastCgi.h.
Definition at line 38 of file TFastCgi.h.
Method called when server want to be terminated.
Reimplemented from THttpEngine.
Definition at line 28 of file TFastCgi.h.
! debug mode, may required for fastcgi debugging in other servers
Definition at line 23 of file TFastCgi.h.
! socket used by fastcgi
Definition at line 22 of file TFastCgi.h.
! set when http server wants to terminate all engines
Definition at line 26 of file TFastCgi.h.
! thread which takes requests, can be many later
Definition at line 25 of file TFastCgi.h.
! name of top item
Definition at line 24 of file TFastCgi.h. | https://root.cern/doc/master/classTFastCgi.html | CC-MAIN-2022-27 | refinedweb | 304 | 51.55 |
string Fname
{ get; set; }
public string Mname
{ get; set; }
public string Address
{ get; set; }
public string Email
{ get; set; }
public string Phone
{ get; set; }
public string GetFullName()
{
if(string.IsNullOrEmpty(Lname))
throw new MissingFieldException("Lname is empty");
return string.Format("{0} {1} {2}",Fname,Mname,Lname);
}
}
Step 1: Add a reference to nunit.framework.dll from the bin folder of the installed NUnit application.
Step 2: Create a class to write your tests. Say you want to test a "Person" class. Name your test class as TestPerson. Best Practice: Use a separate class library. Have separate classes to test each class in your application.
Step 3: Mark each test method with the [Test] attribute. A test method must be public, return void, and take no parameters:

[Test]
public void TestFullName()

Now, implement the following code in the method:
using NUnit.Framework;
namespace NunitExample
{
[TestFixture]
public class TestPerson
{
[Test]
public void TestFullName()
{
Person person = new Person();
person.Lname = "Doe";
person.Mname = "Roe";
person.Fname = "John";
string actual = person.GetFullName();
string expected = "John Roe Doe";
Assert.AreEqual(expected, actual,
"The GetFullName returned a different Value");
}
}
}
Let us understand the code: The test method assigns fixed values to the properties of the Person class. Now we want to check that GetFullName() functions correctly, so we compare the returned value with what we expected. If they match, the test passes; otherwise the test fails. Now let us look at a few other important and very useful attributes.
[SetUp]
public void init()
{
Person person = new Person();
}
As you can see in the code, the SetUp attribute should be used for any initialization kind of functions. Any code that must be executed prior to executing a test can be put in the functions marked with [SetUp]. This frees you from repeating these lines of code in each test. Please note carefully that this code is executed before each test.
[TearDown]
This attribute is exactly the opposite of the SetUp attribute. The code in this method is executed after each test. In this method, for example, you can write code to close any FileSystem object or database connection.
At times, we may want to test that a method should throw an exception in a particular scenario. We can test this using this attribute. Example:
[Test]
[ExpectedException(typeof(MissingFieldException))]
public void TestFullNameForException()
{
Person person = new Person();
person.Lname = "";
person.Mname = "Roe";
person.Fname = "John";
string actual = person.GetFullName();
}
This code does not have any Assert statement. The test passes if the expected MissingFieldException is thrown; if no exception (or a different one) is raised, the test fails.
I keep getting different attribute errors when trying to run this file in ipython…beginner with pandas so maybe I’m missing something
Code:
from pandas import Series, DataFrame
import pandas as pd
import json

nan = float('NaN')
data = []
with open('file.json') as f:
    for line in f:
        data.append(json.loads(line))

df = DataFrame(data, columns=['accepted', 'user', 'object', 'response'])
clean = df.replace('NULL', nan)
clean = clean.dropna()
print clean.value_counts()

AttributeError: 'DataFrame' object has no attribute 'value_counts'
Any ideas?
value_counts is a Series method rather than a DataFrame method (and you are trying to use it on a DataFrame,
clean). You need to perform this on a specific column:
clean[column_name].value_counts()
It doesn’t usually make sense to perform
value_counts on a DataFrame, though I suppose you could apply it to every entry by flattening the underlying values array:
pd.value_counts(df.values.flatten())
To get all the counts for all the columns in a dataframe, it’s just
df.count()
value_counts() is now a DataFrame method since pandas 1.1.0
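On such versions you can call it directly on the frame; it then counts unique rows rather than values in a single column. A quick sketch (the data is made up, with column names borrowed from the question):

```python
import pandas as pd  # DataFrame.value_counts() requires pandas >= 1.1.0

df = pd.DataFrame({"accepted": ["yes", "yes", "no"],
                   "user": ["a", "a", "b"]})

# Counts unique rows (combinations across all columns), sorted descending
counts = df.value_counts()
print(counts)
# the row ("yes", "a") occurs twice; every other row once
```

So for per-column counts you still want `df['accepted'].value_counts()`; the DataFrame method answers a different question.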
value_counts works only for Series. It won't work for an entire DataFrame. Try selecting only one column and using this method.
For example:
df['accepted'].value_counts()
It also won’t work if you have duplicate columns. This is because when you select a particular column, it will also represent the duplicate column and will return dataframe instead of series.
At that time remove duplicate column by using
df = df.loc[:, ~df.columns.duplicated()]
df['accepted'].value_counts()
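To see why the duplicate-column case bites, here is a small sketch (the data and labels are made up):

```python
import pandas as pd

# Duplicate column label: selecting "accepted" returns a two-column
# DataFrame, so .value_counts() on the selection is not what you want.
df = pd.DataFrame([["yes", 1, "yes"], ["no", 2, "no"]],
                  columns=["accepted", "user", "accepted"])
print(type(df["accepted"]).__name__)       # DataFrame

# Keep only the first occurrence of each column label
deduped = df.loc[:, ~df.columns.duplicated()]
print(type(deduped["accepted"]).__name__)  # Series
print(deduped["accepted"].value_counts())
```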
| https://techstalking.com/programming/question/solved-attributeerror-dataframe-object-has-no-attribute/ | CC-MAIN-2022-40 | refinedweb | 252 | 50.94 |
In this article, the goal is to show how to set up a containerized application in Kubernetes with a very simple CI/CD pipeline to manage deployment using GitHub Actions and Keel.
Before we start:
Kubernetes, also known as K8s, is an open-source container orchestration system for automating deployment, scaling, and management.
Keel is a K8s operator to automate Helm, DaemonSet, Stateful & Deployment updates. It’s open-source, self-hosted with zero requirements of CLI/API, and comes with a beautiful and insightful dashboard.
GitHub Actions makes it easy to automate all your software workflows, now with world-class CI/CD. Build, test, and deploy your code right from GitHub.
Workflow:
There are basically two steps in the workflow.
- You will push some changes to the GitHub repo. A workflow will be triggered, which will build the docker image of our application and push the image to the Docker registry.
- Keel will get notified of the updated image. Based on the update policy, the deployment will be updated in the configured cluster.
Step 1:
In the first step, we will prepare the GitHub repo to trigger workflows.
The repo, aka-achu/go-kube contains a simple web application written in golang and a Dockerfile, which will be used to build a docker image of the application. You can maintain any number of environments for your application like Production, QnA, Staging, Development, etc. For sake of simplicity, we will be maintaining only two deployment environments of the application.
There are only two branches in the repo.
- main branch (for Production environment)
name: Stable Build

on:
  push:
    tags:
      - "*.*.*"
...
      - name: Set tag in env
        run: echo "TAG=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
...
          tags: runq/go-kube:${{ env.TAG }}, runq/go-kube:latest
The stable workflow will be triggered when a tag is pushed to GitHub. In the workflow, the tag associated with the commit will be used as the docker image tag.
- dev branch (for Development/Staging environment)
name: Development Build

on:
  push:
    branches: [ dev ]
...
      - name: Set short commit hash in env
        run: echo "COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)" >> $GITHUB_ENV
...
          tags: runq/go-kube:dev-${{ env.COMMIT_SHA }}
The dev workflow will be triggered when any changes are pushed to the dev branch. For development builds, instead of git tags, we will use the short commit hash of length 7, which we often see in GitHub. The docker image tag of the development build image will be dev-SHORT_COMMIT_SHA.
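For reference, a complete development workflow along these lines might look as follows (a sketch: the checkout, login, and build-push steps use the standard community actions, and the DOCKERHUB_* secret names are assumptions):

```yaml
name: Development Build

on:
  push:
    branches: [ dev ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Set short commit hash in env
        run: echo "COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)" >> $GITHUB_ENV

      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: runq/go-kube:dev-${{ env.COMMIT_SHA }}
```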
Both workflows aim to integrate the code, maybe run some tests, build the docker image, and update the image registry. Till this point, we have completed Continuous Integration and Continuous Delivery.
Step 2:
In this step, we will automate our deployment update. We will be using the K8s LoadBalancer service in our workflow. So, if you're using an on-premise cluster then you can use MetalLB, which is a load-balancer implementation for bare metal Kubernetes clusters.
Install keel:
Keel doesn't need a database. Keel doesn't need any persistent disk. It gets all required information from your cluster.
kubectl apply -f
The above command will deploy Keel to keel namespace with basic authentication enabled and admin dashboard. You can provide an admin password while applying the manifest or you can download the manifest and replace the default password.
# Basic auth (to enable UI/API)
- name: BASIC_AUTH_USER
  value: admin
- name: BASIC_AUTH_PASSWORD
  value: admin
- name: AUTHENTICATED_WEBHOOKS
  value: "false"
Keel policies:
In keel, we use policies to define when we want our application/deployment to get updated. Following semver best practices allows you to safely automate application updates. Keel supports many different policies to update resources. For now, we will use only
all and
glob policies to update our deployment.
- all: update whenever there is a version bump (1.0.0 -> 1.0.1) or a new prerelease created (ie: 1.0.0 -> 1.0.1-rc1)
- glob: use wildcards to match versions (eg:
dev-*in our scenario)
How Keel will get notified:
- Webhooks- when a new image is pushed to the registry, keel will get notified by the added webhook in DockerHub, and based on the configured update policy, the deployment will be updated.
- Polling- when an image with a non-semver style tag is used (ie: latest) Keel will monitor SHA digest. If a tag is semver - it will track and notify providers when new versions are available.
Configuring webhook:
First, we need to get the External IP address of the keel service.
kubectl get all -n keel
The output will something like this:
Now, that we have the external address of the
service/keel, we will add a webhook for our repository in DockerHub. The URL for the hook will be
http://<External-IP>:9300/v1/webhooks/dockerhub. Now pushing a new docker image will trigger an HTTP call-back.
If you don't want to expose your Keel service - the recommended solution is webhookrelay which can deliver webhooks to your internal Keel service through a sidecar container. Here is how you can set up the sidecar.
Configuring deployment manifest:
- To configure our staging deployment, which runs an image having a tag of
dev-SHORT_COMMIT_SHA, we will use the
globpolicy. We will specify our policy/update rule using annotations under the metadata of the deployment manifest. Like-
...
  annotations:
    keel.sh/policy: "glob:dev-*"
...
- To configure our production deployment, which runs an image having a semver tag, we will use the
allpolicy. This will update our production deployment when it encounters any version bump in the tag. We will specify our policy/update rule using annotations under the metadata of the deployment manifest. Like-
...
  annotations:
    keel.sh/policy: all
...
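Putting one of these policies into context, a minimal Deployment manifest for the staging environment might look like this (the name, labels, namespace, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-kube-staging
  labels:
    app: go-kube
  annotations:
    keel.sh/policy: "glob:dev-*"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-kube
  template:
    metadata:
      labels:
        app: go-kube
    spec:
      containers:
        - name: go-kube
          image: runq/go-kube:dev-c722d00
          ports:
            - containerPort: 8080
```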
Testing the workflow:
Now, that we are done with the setup, a push to the
dev branch, will update the staging deployment and a tagged release will update the production deployment.
- Push changes to the
devbranch
- Development workflow will be triggered which will build a docker image
- A Docker image with the tag
dev-SHORT_COMMIT_SHAwill be pushed to the DockerHub registry.
- Keel will get notified by DockerHub using the webhook
- Keel will validate, whether the new image tag satisfies the policy we have specified (all tags starting with
dev-will qualify for the update process. eg:
dev-c722d00,
dev-0ca740e)
- If the tag qualifies for the update, keel will create a new replicaset with the new image. Once the pods are ready, keel will scale the number of replicas in the old replicaset to 0.
The release of a tag will trigger a similar flow of events.
Visualizing using Keel dashboard:
You can access the dashboard at External-IP:9300 or by using the NodePort. Use the same credentials which you had set while setting up Keel.
You can -
- View resources managed by keel
- Get an audit of the changes done by keel
- Change/Pause update policies of resources
- Approve updates
What's next?
- Enabling approvals for udpates
- Setting up notification pipelines
- Supported webhook triggers and polling
- Use helm templating for update
- Updating DaemonSets, StatefulSets, etc.
Here are all the workflow and deployment manifests used.
Debugging C++Builder 64-Bit Windows Applications
Go Up to C++Builder 64-bit Windows Application Development
Contents
- 1 C++ LLDB-based Debugger for WIN64
- 2 Formatters
- 3 C++Builder 64-Bit Windows
- 4 Call Stack Differences
- 5 Reducing Linker Memory Usage with Split DWARF
- 6 Evaluating Function Calls Such as strcmp()
- 7 See Also
C++ LLDB-based Debugger for WIN64
RAD Studio 10.4 introduces a new debugger for C++ for Win64, based on LLDB. The compiler uses the DWARF version 4 format for debug information.
The new RAD Studio LLDB-based debugger offers improved stability when debugging, and via both a new debugger technology and the debug information generated provides improved evaluation, inspection, and other debugger features. Additionally, the debugger has support for evaluating complex types such as STL collections or strings via formatters.
Formatters
A common issue when debugging C++ applications is inspecting or evaluating complex types. Evaluating the contents of a std::map (for example) requires knowledge of the layout of the type, and the array access operator [] may not be available to the debugger if it is never called in code or if it is always inlined and so not callable. Similar problems exist for other types including strings, and possibly even your own types.
This is solved through the use of formatters. A formatter is a small Python script that assists in debugging a specific type. RAD Studio ships with formatters for common types, and you can add your own for your own types if needed.
The following formatters are provided for common STL and Delphi types.
- std::string and std::wstring
- String (UnicodeString), AnsiString, UTF8String, WideString
- std::vector
- std::deque
- std::stack
- std::map
- std::shared_ptr
Custom formatters
To add your own formatter, create a new python file and edit the bin\Windows\lldb\.lldbinit file to contain a line referring to your new Python script. You can find more information about writing your own formatters here:
- LLDB’s formatting documentation
- How to extend LLDB to provide a better debugging experience
- This Stack Overflow question about writing a custom data formatter
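For a flavor of what such a script contains, here is a minimal sketch of a summary formatter (the type MyPair and the file name are made up; the SBValue calls are LLDB's standard Python API):

```python
# my_formatters.py -- hypothetical formatter module for a type "MyPair".
# It would be referenced from .lldbinit with a line such as:
#   command script import C:\path\to\my_formatters.py

def mypair_summary(valobj, internal_dict):
    """Render a MyPair value as "(first, second)" in the debugger."""
    first = valobj.GetChildMemberWithName("first").GetValueAsSigned()
    second = valobj.GetChildMemberWithName("second").GetValueAsSigned()
    return "({}, {})".format(first, second)

def __lldb_init_module(debugger, internal_dict):
    # Register the summary callback for the (hypothetical) type MyPair
    debugger.HandleCommand(
        "type summary add MyPair -F my_formatters.mypair_summary")
```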
C++Builder 64-Bit Windows
C++Builder Delphi language extensions are not currently supported.
- For example, on the Debug Inspector, the Methods and Properties tabs are not displayed for C++ 64-bit Windows applications.
- Unicode, code pages, and localization are not fully supported.
- For example, Unicode is not supported in identifier names, and code pages are not supported by the C++ 64-bit Windows debugger.
- When evaluating a 64-bit Windows register, the register name must be prefixed with $, such as
$rax.
- See Evaluate/Modify.
- Function calls that throw exceptions are handled as follows:
- If a function contains a try/except/catch block, and a C++ or OS/SEH exception is raised during execution, the function call finishes correctly, but the result is undefined or 0. In this case, the internal exception block is not executed because the exception is handled directly by the debugger.
- If a function contains a try/except/catch block, and no language or OS/SEH exceptions are raised, the function call finishes fine and the result is correct, depending on the function.
Call Stack Differences
Some values might be displayed differently in the 64-bit evaluator than in the 32-bit evaluator. For example, the call stack is displayed without function parameters and values.
The call stack typically contains two copies of each constructor and destructor. For example, the call stack might contain:
:0000000000401244 ; MyClass::~MyClass
:0000000000401229 ; MyClass::~MyClass
:0000000000401187 ; main
:000000000040ef90 ; _startup
Reducing Linker Memory Usage with Split DWARF
RAD Studio 10.4.2 introduces a new feature in order to help reduce the amount of data the linker needs to process, especially when linking applications built in debug mode. This feature is known as Split DWARF and splits the debug information to a separate .dwo file (DWARF object) sitting parallel to the normal object file containing compiled code. The linker then only links executable code and small amounts of other information, thus reducing memory strain.
Enabling Split DWARF in the IDE or msbuild
Open your C++ project options dialog. Navigate to Building > C++ Compiler > Debugger and ensure the Target platform is set to one of the Windows 64-bit targets.
- Enable the ‘Use Split DWARF’ checkbox.
- Specify a folder for the debug information files in the 'DWO output directory' setting. This must be an absolute path, not a relative path or a path using environment variables.
For example, c:\myproject\win64debug is ok, but ..\win64debug is not.
To disable this feature, deselect the checkbox in Project Options > Building > C++ Compiler > Debugging. These settings will be used when building from within the IDE, or building with msbuild on the command line.
Enabling Split DWARF on the Command Line
To manually enable Split DWARF on a pure command-line build, such as one using BCC64:
1. Specify the Split DWARF setting on the compiler’s command line.
"-enable-split-dwarf" AND "-split-dwarf-file AFilename.dwo"
By doing this the compiler creates the AFilename.dwo file and emits the location of that .dwo file into the .o object file. Note that this step does not yet mean that the .dwo file contains the debug information.
2. Use the "-dwo-dir <directory>" option to specify the directory where the compiler writes out the .dwo file. Ensure this is an absolute path.
3. Run the objcopy.exe tool on the .o file. This action removes all the dwarf sections from the normal object (.o) file and places them in a separate .dwo file. This needs to be done for each object (.o file) and needs to match the name you specify in step 1.
objcopy --split-dwo=AFilename.dwo AFilename.o
4. Finally, link the .o files in order to create your EXE or DLL as you normally would - this step is not modified. The files will be smaller than normal as there is less debug information in them, each object contains the name and the location of the .dwo. As described in step 2, this is the location for your specific machine and must be a specific absolute path.
Issues Loading Debug Information When Using Split DWARF
When using Split-Dwarf for Win64 debugging, source files that are not in the same directory as the project file may get incorrect directory information generated into the object file. This will mean that symbols from those files (types, local variables, and parameters) will not be available. You can recognize this when debugging by seeing that the blue dots in the editor indicating line numbers are visible, and you will be able to place a breakpoint, but you will not be able to evaluate or inspect expressions or symbols.
To work around this, specify an absolute path in the 'DWO output directory' field of the C++ Compiler > Debugging > Use Split Dwarf project option, or the -dwo-dir command-line option if building on the command line. See above for information on these settings.
Evaluating Function Calls Such as strcmp()
Evaluation for a function call such as strcmp(str, "ABC") could return an error as follows:
#include <system.hpp>

int main()
{
   char *str = "ABC";
   return strcmp(str, "ABC");
}

error: 'strcmp' has unknown return type; cast the call to its declared return type
error: 1 errors parsing expression
In the Evaluate/Modify window, you need to cast the return type for strcmp():
(int) strcmp(str, "ABC");
See strcmp, _mbscmp, wcscmp.
See Also
- Overview of Debugging
- How to Use the Debugger
- Debugging Multi-Device Applications
- C++Builder 64-bit Windows Differences
- Handling Linker Out of Memory Errors
- Upgrading Existing C++ Projects to 64-bit Windows
- Stricter C++ Compilers (Clang-enhanced C++ Compilers)
- Evaluate/Modify
- File Extensions of Files Generated by RAD Studio | https://docwiki.embarcadero.com/RADStudio/Alexandria/en/Debugging_C%2B%2BBuilder_64-Bit_Windows_Applications | CC-MAIN-2022-40 | refinedweb | 1,287 | 53.61 |
Subject: Re: [OMPI devel] Missing Symbol
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-03-05 18:26:13
On Mar 5, 2010, at 6:02 PM, Jeff Squyres (jsquyres) wrote:
I made a patch for exactly what I described: it comments out the preopen module's setting of FILE_NOT_FOUND. But now I'm getting foiled by the use of RTLD_LAZY. For example, if I add a bogus symbol that can't be resolved into the TCP BTL, I get this when I run ompi_info:
-----
...lots of ompi_info config output...
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
dyld: lazy symbol binding failed: Symbol not found: _jeffs_symbol_that_does_not_exist
Referenced from: /Users/jsquyres/bogus/lib/openmpi/mca_btl_tcp.so
Expected in: flat namespace
[ ompi_info aborts ]
-----
This is happening because libltdl's dlopen() is being invoked with RTLD_LAZY so the fact that a symbol can't be resolved at dlopen() time is not a problem. It becomes a fatal problem later when the component's open function is invoked and my unresolved symbol is exposed in all of its glory.
If I manually change the LT_LAZY_OR_NOW to RTLD_NOW in the libltdl/loaders/dlopen.c, then I get the behavior I was expecting:
------
...lots of ompi_info config output...
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
[rtp-jsquyres-8717.cisco.com:89384] mca: base: component_find: unable to open /Users/jsquyres/bogus/lib/openmpi/mca_btl_tcp: dlopen(/Users/jsquyres/bogus/lib/openmpi/mca_btl_tcp.so, 10): Symbol not found: _jeffs_symbol_that_does_not_exist
Referenced from: /Users/jsquyres/bogus/lib/openmpi/mca_btl_tcp.so
Expected in: flat namespace
in /Users/jsquyres/bogus/lib/openmpi/mca_btl_tcp.so (ignored)
MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.7)
MCA paffinity: darwin (MCA v2.0, API v2.0, Component v1.7)
...lots of ompi_info config output...
-----
I.e., the dlopen() fails and my patch causes us to actually get a reasonable error message from libltdl.
So:
1. Given that I'm seeing this on both Linux (RHEL4) and OSX, the LT_LAZY_OR_NOW must be resolving the RTLD_LAZY on both Linux and OSX -- so how are you getting the error message that you're getting? Is your system somehow using RTLD_NOW?
2. If OSX and Linux both use RTLD_LAZY, is my patch useful? I'm hesitant to add it if it's only partially (or not at all) useful...
--
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: | http://www.open-mpi.org/community/lists/devel/2010/03/7556.php | CC-MAIN-2015-40 | refinedweb | 385 | 59.8 |
Quarkus is, in its own words, "Supersonic subatomic Java" and a "Kubernetes native Java stack tailored for GraalVM & OpenJDK HotSpot, crafted from the best of breed Java libraries and standards." For the purpose of illustrating how to modernize an existing Java application to Quarkus, I will use the Red Hat JBoss Enterprise Application Platform (JBoss EAP) quickstarts
helloworld quickstart as a sample of a Java application built using technologies (CDI and Servlet 3)
It's important to note that both Quarkus and JBoss EAP rely on providing developers with tools based—as much as possible—on standards. If your application is not already running on JBoss EAP, there's no problem. You can migrate it from your current application server to JBoss EAP using the Red Hat Application Migration Toolkit. After that, the final and working modernized version of the code is available in the repository inside the
helloworld module.
This article is based on the guides Quarkus provides, mainly Creating Your First Application and Building a Native Executable.
Get the code
To start, clone the JBoss EAP quickstarts repository locally, running:
$/
Try plain, vanilla
helloworld
The name of the quickstart is a strong clue about what this application does, but let's follow a scientific approach in modernizing this code, so first things first: Try the application as it is.
Deploy
helloworld
- Open a terminal and navigate to the root of the JBoss EAP directory
EAP_HOME(which you can download).
- Start the JBoss EAP server with the default profile by typing the following command:
$ EAP_HOME/bin/standalone.sh
Note: For Windows, use the
EAP_HOME\bin\standalone.bat script.
After a few seconds, the log should look like:
)
- Open a browser, and a page like Figure 1 should appear:
- Following instructions from Build and Deploy the Quickstart, deploy the
helloworldquickstart and execute (from the project root directory) the command:
$ mvn clean install wildfly:deploy
This command should end successfully with a log like this:
[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 8.224 s
The
helloworld application has now been deployed for the first time in JBoss EAP in about eight seconds.
Test
helloworld
Following the Access the Application guide, open in the browser and see the application page, as shown in Figure 2:
Make changes
Change the
createHelloMessage(String name) input parameter from
World to
Marco (my ego is cheap):
writer.println("<h1>" + helloService.createHelloMessage("Marco") + "</h1>");
Execute again the command:
$ mvn clean install wildfly:deploy
and then refresh the web page in the browser to check the message displayed changes, as shown in Figure 3:
Undeploy
helloworld and shut down
If you want to undeploy (optional) the application before shutting down JBoss EAP, run the following command:
$ mvn clean install wildfly:undeploy
To shut down the JBoss EAP instance, enter Ctrl+C in the terminal where it's running.
Let's modernize
helloworld
Now we can leave the original
helloworld behind and update it.
Create a new branch
Create a new working branch once the quickstart project finishes executing:
$ git checkout -b quarkus 7.2.0.GA
Change the
pom.xml file
The time has come to start changing the application. starting from the
pom.xml file. From the
helloworld folder, run the following command to let Quarkus add XML blocks:
$ mvn io.quarkus:quarkus-maven-plugin:0.23.2:create
This article uses the 0.23.2 version. To know which is the latest version is, please refer to, since the Quarkus release cycles are short.
This command changed the
pom.xml, file adding:
- The property
<quarkus.version>to define the Quarkus version to be used.
- The
<dependencyManagement>block to import the Quarkus bill of materials (BOM). In this way, there's no need to add the version to each Quarkus dependency.
- The
quarkus-maven-pluginplugin responsible for packaging the application, and also providing the development mode.
- The
nativeprofile to create application native executables.
Further changes required to
pom.xml, to be done manually:
- Move the
<groupId>tag outside of the
<parent>block, and above the
<artifactId>tag. Because we remove the
<parent>block in the next step, the
<groupId>must be preserved.
- Remove the
<parent>block: The application doesn't need the JBoss parent pom anymore to run with Quarkus.
- Add the
<version>tag (below the
<artifactId>tag) with the value you prefer.
- Remove the
<packaging>tag: The application won't be a WAR anymore, but a plain JAR.
- Change the following dependencies:
- Replace the
javax.enterprise:cdi-apidependency with
io.quarkus:quarkus-arc, removing
<scope>provided</scope>because—as stated in the documentation—this Quarkus extension provides the CDI dependency injection.
- Replace the
org.jboss.spec.javax.servlet:jboss-servlet-api_4.0_specdependency with
io.quarkus:quarkus-undertow, removing the
<scope>provided</scope>, because—again as stated in the documentation—this is the Quarkus extension that provides support for servlets.
- Remove the
org.jboss.spec.javax.annotation:jboss-annotations-api_1.3_specdependency because it's coming with the previously changed dependencies.
The
pom.xml file's fully changed version is available at.
Note that the above
mvn io.quarkus:quarkus-maven-plugin:0.23.2:create command, besides the changes to the
pom.xml file, added components to the project. The added file and folders are:
- The files
mvnwand
mvnw.cmd, and
.mvnfolder: The Maven Wrapper allows you to run Maven projects with a specific version of Maven without requiring that you install that specific Maven version.
- The
dockerfolder (in
src/main/): This folder contains example
Dockerfilefiles for both
nativeand
jvmmodes (together with a
.dockerignorefile).
- The
resourcesfolder (in
src/main/): This folder contains an empty
application.propertiesfile and the sample Quarkus landing page
index.html(more in the section "Run the modernized
helloworld").
Run
helloworld
To test the application, use
quarkus:dev, which runs Quarkus in development mode (more details on Development Mode here).
Note: We expect this step to fail as changes are still required to the application, as detailed in this section.
Now run the command to check if and
It failed. Why? What happened?
The
UnsatisfiedResolutionException exception refers can not be found (even if the two classes are in the very same package).
It's time to return to Quarkus guides to leverage the documentation and understand how
@Inject—and hence Contexts and Dependency Injection (CDI)—works in Quarkus, thanks to the Contexts and Dependency Injection guide. In the Bean Discovery paragraph, it says, "Bean classes that don’t have a bean defining annotation are not discovered."
Looking at the
HelloService class, it's clear there's no bean defining annotation, and one has to be added to have Quarkus to discover the bean. So, because it's a stateless object, it's safe to add the
@ApplicationScoped annotation:
@ApplicationScoped public class HelloService {
Note: The IDE should prompt you to add the required package shown here (add it manually if need be):
import javax.enterprise.context.ApplicationScoped;
If you're in doubt about which scope to apply when the original bean has no scope defined, please refer to the JSR 365: Contexts and Dependency Injection for Java 2.0—Default scope documentation.
Now, try again to run the application, executing again the
./mvnw compile quarkus:dev command:
$ .]]
This time the application runs successfully.
Run the modernized
helloworld
As the terminal log suggests, open a browser to(the default Quarkus landing page), and the page shown in Figure 4 appears:
This application has the following context's definition in the
WebServlet annotation:
@WebServlet("/HelloWorld") public class HelloWorldServlet extends HttpServlet {
Hence, you can browse to reach the page shown in Figure 5:
It works!
Make changes
Please, pay attention to the fact that the
./mvnw compile quarkus:dev command is still running, and we're not going to stop it. Now, try to apply the same—very trivial—change to the code and see how Quarkus improves the developer experience:
writer.println("<h1>" + helloService.createHelloMessage("Marco") + "</h1>");
Save the file, and then refresh the web page to check that
Hello Marco appears, as shown in Figure 6:
Take time to check the terminal output:
Refreshing the page triggered the source code change detection and the Quarkus automagic "stop-and-start." All of this executed in just 0.371 seconds (that's part of the Quarkus "Supersonic Subatomic Java" experience).
Build the
helloworld packaged JAR
Now that the code works as expected, it can be packaged using the command:
$ ./mvnw clean package
This command creates two JARs in the
/target folder. The first is
helloworld-<version>.jar, which is the standard artifact built from the Maven command with the project's classes and resources. The second is
helloworld-<version>-runner.jar, which is an executable JAR.
Please pay attention to the fact that this is not an uber-jar, because all of the dependencies are copied into the
/target/lib folder (and not bundled within the JAR). Hence, to run this JAR in another location or host, both the JAR file and the libraries in the
/lib folder have to be copied, considering that the
Class-Path entry of the
MANIFEST.MF file in the JAR explicitly lists the JARs from the
lib folder.
To create an uber-jar application, please refer to the Uber-Jar Creation Quarkus guide.
Run the
helloworld packaged JAR
Now, the packaged JAR can be executed using the standard
java command:
$ java -jar ./target/helloworld-<version>-runner.jar INFO [io.quarkus] (main) Quarkus 0.23.2 started in 0.673s. Listening on: INFO [io.quarkus] (main) Profile prod activated. INFO [io.quarkus] (main) Installed features: [cdi]
As done above, open the URL in a browser, and test that everything works.
Build the
helloworld quickstart-native executable
So far so good. The
helloworld quickstart ran as a standalone Java application using Quarkus dependencies, but more can be achieved by adding a further step to the modernization path: Build a native executable.
Install GraalVM
First of all, the tools for creating the native executable have to be installed:
- Download GraalVM 19.2.0.1 from.
- Untar the file using the command:
$ tar xvzf graalvm-ce-linux-amd64-19.2.0.1.tar.gz
- Go to the
untarfolder.
- Execute the following to download and add the native image component:
$ ./bin/gu install native-image
- Set the
GRAALVM_HOMEenvironment variable to the folder created in step two, for example:
$ export GRAALVM_HOME={untar-folder}/graalvm-ce-19.2.0.1)
More details and install instructions for other operating systems are available in Building a Native Executable—Prerequisites Quarkus guide.
Build the
helloworld native executable
As stated in the Building a Native Executable—Producing a native executable Quarkus guide, "Let’s now produce a native executable for our application. It improves the startup time of the application and produces a minimal disk footprint. The executable would have everything to run the application including the 'JVM' (shrunk to be just enough to run the application), and the application."
To create the native executable, the Maven
native profile has to be enabled by executing:
$ ./mvnw package -Pnative
The build took me about 1:10 minutes and the result is the
helloworld-<version>-runner file in the
/target folder.
Run the
helloworld native executable
The
/target/helloworld-<version>-runner file created in the previous step. It's executable, so running it is easy:
$ ./target/helloworld-<version>-runner INFO [io.quarkus] (main) Quarkus 0.23.2 started in 0.006s. Listening on: INFO [io.quarkus] (main) Profile prod activated. INFO [io.quarkus] (main) Installed features: [cdi]
As done before, open the URL in a browser and test that everything is working.
Next steps
I believe that this modernization, even of a basic application, is the right way to approach a brownfield application using technologies available in Quarkus. This way, you can start facing the issues and tackling them to understand and learn how to solve them.
In part two of this series, I'll look at how to capture memory consumption data in order to evaluate performance improvements, which is a fundamental part of the modernization process.Last updated: July 1, 2020 | https://developers.redhat.com/blog/2019/11/07/quarkus-modernize-helloworld-jboss-eap-quickstart-part-1 | CC-MAIN-2022-05 | refinedweb | 2,002 | 55.74 |
Introduction
This article describes a problem that occurs if you include the "errno.h" and "winsock.h" header files in your C++ code in Windows Embedded Compact 2013. An update is available to resolve this problem. Before you install this update, all previously issued updates for this product must be installed.
Symptoms
Assume that you use the Windows Embedded Compact 2013 SDK to create a Console project in Visual Studio 2012. When you include both the "errno.h" and "winsock.h" header files in your C++ code, and then build the project, you receive the following warning message:
Macro Redefinition
Cause
This problem occurs because the values for the error codes that are defined in the errno.h and winsock.h headers files do not match between the files.
Examples of error codes
From the errno.h header file:
#define EWOULDBLOCK 140
From the winsock.h header file:
#define WSAEWOULDBLOCK 10035L
#define EWOULDBLOCK WSAEWOULDBLOCK
The following is a code example to retrieve the error codes:
#include <errno.h>
#include <winsock.h>
int wmain(int argc, wchar_t *argv[])
{
printf("Welcome to Windows Embedded Project System \n");
return 0;
}
Software update information
Download information
Windows Embedded Compact 2013 Monthly Update (April. | https://support.microsoft.com/en-us/topic/fix-macro-redefinition-message-occurs-if-you-include-the-errno-h-and-winsock-h-header-files-in-your-c-code-in-windows-embedded-compact-2013-6215d545-c066-84f2-bd45-2fd7721005f7 | CC-MAIN-2021-04 | refinedweb | 199 | 59.6 |
Opened 18 months ago
Closed 18 months ago
Last modified 17 months ago
#22137 closed Cleanup/optimization (fixed)
Widgets class should drop the is_hidden bool property
Description
To me, it seems somewhat unnecessary, here are the reasons why:
- In every place that is_hidden = True is set in forms/widgets.py, there is also a corresponding input_type = 'hidden' set. It seems like we should use one or the other for determining if a field is hidden (keep it DRY)
- is_hiddendoesn't necessarily reflect whether the widget is actually an input of type type='hidden' or not
- is_hidden cannot be set in the __init__() method, whereas input_type can (by passing an attrs={'type' : 'hidden' } dict)
The scenario where I came across this was that I wanted to make a DateInput widget hidden in one of my ModelForms. A basic example:
# Model class MyMod(models.Model): date = models.DateField() # Form class MyModForm(forms.ModelForm): class Meta: widgets = { 'date' : forms.DateInput(format="%d/%m/%Y", attrs={'type' : 'hidden'}) # I need to use a DateInput here because a HiddenInput does not format the date properly (this is probably another bug in Django - the default date format is the US %m/%d/%y and it seems very difficult to change) } # Template {% if field.field.widget.is_hidden %} {{ field }} {% else %} {{ field.label }} // custom html stuff {{ field }}
The call in the template does not work, since the DateInput has is_hidden = True. What I have to do in my template is:
{% if field.field.widget.is_hidden or field.field.widget.input_type == "hidden" %}
Not quite as elegant. If is_hidden were to be a method, which would return something like:
def is_hidden(self): return self.input_type == "hidden"
I think it would be a more reliable way of determining if a widget is hidden or not.
Attachments (1)
Change History (16)
comment:1 Changed 18 months ago by anonymous
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 18 months ago by claudep
- Triage Stage changed from Unreviewed to Accepted
- Type changed from Uncategorized to Cleanup/optimization
- Version changed from 1.6 to master
Changed 18 months ago by claudep
comment:3 Changed 18 months ago by claudep
- Has patch set
Attached patch passes tests. Opinion?
comment:4 Changed 18 months ago by alasdair
Removing is_hidden would be backwards incompatible, so would have to go through the deprecation cycle. I like the approach of turning it into a property.
comment:5 Changed 18 months ago by claudep
Backwards incompatibility does not automatically means a deprecation period, we might also simply document it in the release notes, depending on the severity.
comment:6 Changed 18 months ago by alasdair
is_hidden is documented. My understanding of the api stability promise is that we wouldn't remove it without a deprecation cycle unless there was a good reason.
My projects use {{ field.is_hidden }}, so I would have to update my templates if we removed it. I'm -1 on removing without deprecation warnings.
comment:7 Changed 18 months ago by alasdair
- Cc alasdair@… added
comment:8 Changed 18 months ago by django@…
My thought was to create a property as I said in the original ticket.
@cached_property def is_hidden(self): return self.input_type == "hidden"
Since it's most likely to be used in templates, it won't make a difference even if it's not a property. Plus it would fail quietly if you were to remove it completely without deprecation warnings.
The patch looks good to me: backwards compatible, and does what I want ;-)
comment:9 Changed 18 months ago by alasdair
claudep's patch looks good to me, sorry if I wasn't clear before.
I saw "drop the is_hidden bool property" in the title, and thought you were suggesting "remove the attribute or turn it into a property". Re-reading the ticket, I can see that your suggested solution was always to create a method/property.
comment:10 Changed 18 months ago by claudep
Yes, but there's still a minor backwards incompatibility in that my patch doesn't allow the property to be set any longer. Thinking about it, we could even add a property setter so a deprecation warning is raised when some code tries to set the property (which was previously possible). Note also that we don't touch at all BoundField.is_hidden, but only Widget.is_hidden.
comment:11 Changed 18 months ago by claudep
comment:12 Changed 18 months ago by django@…
Thanks claudep :)
Really looking forward to Django 1.7. Especially built in migrations!
Thanks everyone for your awesome work.
comment:13 Changed 18 months ago by timo
- Triage Stage changed from Accepted to Ready for checkin
comment:14 Changed 18 months ago by Claude Paroz <claude@…>
- Resolution set to fixed
- Status changed from new to closed
Sorry a typo. I should have said (4 lines from the end) | https://code.djangoproject.com/ticket/22137 | CC-MAIN-2015-35 | refinedweb | 805 | 61.67 |
A couple of days ago I had a fantastic experience. Two ambitious developers asked me to review an open source project they are working on in a short video chat. I was flattered and happily accepted the offer.
We found ourselves talking about mocks in TypeScript. Since I started using TypeScript, I adopted a practice where I try to push as much as I can to the type system and use tools like
io-ts to hook just enough runtime validations to ensure I can trust it.
A couple of months ago I needed to mock something in one of our tests. We have a pretty big config, generated from a confirmation system, and I needed to use a property from it in my test.
The first idea was to do something like
setAppConfig({ myKey: value } as any). This worked well but it stinks from the
any, which also have a very big drawback: what if the test is implicitly using a property I did not set?
Enter
factoree. A simple factory generator function which will immediately throw an error when accessing a missing property. So, the previous example would be something like:
import { factory } from "factoree"; const createAppConfig = factory<AppConfig>(); setAppConfig(createAppConfig({ myKey: value }));
Can you see that we no longer have
as any? Instead of skipping the type system, we generate an object which will throw an error if we accessed a missing key. Instead of playing pretend — we specify rules for the computer to enforce at runtime:
import { factory } from "factoree"; const createAppConfig = factory<AppConfig>(); const config = createAppConfig({ myKey: "hello" }); config.myKey; // => 'hello' config.otherKey; // => Error: Can't access key 'otherKey' in object { myKey: 'hello' }
Why does it matter?
Imagine the following code under test:
export type User = { firstName: string; lastName: string; // more data website: string; }; export function printName(user: User): string { return `${user.firstName} ${user.lastName}`; }
And when we test it, we can use
as unknown as User to provide only the things that are in use in our function:
test(`prints the name`, () => { const userDetails = ({ firstName: "Gal", lastName: "Schlezinger", } as unknown) as User; const result = printName(userDetails); });
Now, the product manager asked us to add another feature: allow a user's name to be written in reverse. So, our code changes into:
export type User = { firstName: string; lastName: string; prefersReversedName: boolean; // more data website: string; }; export function printName(user: User): string { if (user.prefersReversedName) { return `${user.lastName} ${user.firstName}`; } return `${user.firstName} ${user.lastName}`; }
After the code change, tests should still pass. This is a problem because we access a property (
prefersReversedName) which should be a non-null
boolean, but we don't pass value into it, which is
undefined. This creates a blind spot in our test. When using factoree, this would not happen: if you forget to define a property, factoree will throw an error — ensuring you can trust your types.
This tiny library helped us make more maintainable code with easier assertions and better tests. No more
undefined errors in tests if types have changed. Let the computer do the thinking for us.
Try it, and let me know how that worked for you! | https://gal.hagever.com/posts/mocking-in-typescript-with-factoree | CC-MAIN-2021-49 | refinedweb | 522 | 61.77 |
public class TestStringImmutability { public static class PrintName { public static void main(String[] args) { String greeting = "Hello my name is "; printMyName(greeting, "Doug") ; // "Hello my name is Doug" printMyName(greeting, "Sam") ; // "Hello my name is Sam" } public static void printMyName(String g, String name) { // Does NOT change the original greeting String - which is immutable g += name ; System.out.println(g); } } /*public static void main(String[] args) { // TODO Auto-generated method stub }*/ }
Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.
Have a better answer? Share it in a comment.
From novice to tech pro — start learning today.
Yes that's right.
If you add "static" they will behave exactly like every other class you've ever encountered.
If you don't add "static" they will behave somewhat differently - with special problems and challenges.
So just add "static" and your life will be simpler.
Doug
Open in new window
If you make the internal class "static" it actually (oddly enough) means make this a "normal class". So you should use it 99% of the time.
If you don't include the keyword static then the inner class (PrintName here) retains a special pointer back to the outer class (TestStringImmutability).
You access this magic extra property like this:
TestStringImmutability.thi
from within the inner class.
In my opinion, this goes under the heading of "cool ideas that were introduced into Java, that they maybe regret now".
So learn to add "static" to inner class definitions and you can safely ignore this special inner class behavior.
Doug
ok i noticed now there are tow classes
can you please elaborate on this. i was not clear. are you saying it is always good idea to make inner classes as static rather than non static? | https://www.experts-exchange.com/questions/28961542/static-class.html | CC-MAIN-2018-22 | refinedweb | 297 | 65.52 |
D - error during compiling, using c dll in d code
- %u <grimduck gmail.com> Oct 06 2006
I just started trying to use D so please ignore my ignorance. What I want to end up with is a d program that uses a c dll, the C dll have function for rendering 3D objects. I converted a c header file to d via htod. When I compile test.d I get to following error : D:\d\projects\titanWrapper>dmd test.d D:\D\dmd\bin\..\..\dm\bin\link.exe test,,,user32+kernel32/noi; OPTLINK (R) for Win32 Release 7.50B1 Copyright (C) Digital Mars 1989 - 2001 All Rights Reserved test.obj(test) Error 42: Symbol Undefined _trCreateGLWindow --- errorlevel 1 Can anybody give me a tutorial on how to use C dll in D for dummies. Thanx, Glen test.d import std.c.stdio; import windowing; int main() { trCreateGLWindow(640,480,16,0); return 0; } windowing.d /* Converted to D from windowing.h by htod */ module windowing; alias int BOOL; //C #define FALSE 0 const TRUE = 1; //C #endif const FALSE = 0; // Exports //C BOOL trCreateGLWindow(int, int, int, BOOL); extern (C): int trCreateGLWindow(int , int , int , int );
Oct 06 2006 | http://www.digitalmars.com/d/archives/error_during_compiling_using_c_dll_in_d_code_29438.html | CC-MAIN-2014-52 | refinedweb | 200 | 67.65 |
We’ve covered a lot up to this point in the book: variables, constants, dictionaries, arrays, looping constructs, control structures, and the like. You’ve used both the REPL command-line interface and now Xcode 6’s playgrounds feature to type in code samples and explore the language.
Up to this point, however, you have been limited to mostly experimentation: typing a line or three here and there and observing the results. Now it’s time to get more organized with your code. In this chapter, you’ll learn how to tidy up your Swift code into nice clean reusable components called functions.
Let’s start this chapter with a fresh, new playground file. If you haven’t already done so, launch Xcode 6 and create a new playground by choosing File > New > Playground, and name it Chapter 4. We’ll explore this chapter’s concepts with contrived examples in similar fashion to earlier chapters.
The Function
Think back to your school years again. This time, remember high school algebra. You were paying attention, weren’t you? In that class your teacher introduced the concept of the function. In essence, a function in arithmetic parlance is a mathematical formula that takes an input, performs a calculation, and provides a result, or output.
Mathematical functions have a specific notation. For example, to convert a Fahrenheit temperature value to the equivalent Celsius value, you would express that function in this way:
The important parts of the function are:
- Name: In this case the function’s name is f.
- Input, or independent variable: Contains the value that will be used in the function. Here it’s x.
- Expression: Everything on the right of the equals sign.
- Result: Considered to be the value of f(x) on the left side of the equals sign.
Functions are written in mathematical notation but can be described in natural language. In English, the sample function could be described as:
A function whose independent variable is x and whose result is the difference of the independent variable and 32, with the result being multiplied by 5, with the result being divided by 9.
The expression is succinct and tidy. The beauty of this function, and functions in general, is that they can be used over and over again to perform work, and all they need to do is be called with a parameter. So how does this relate to Swift? Obviously I wouldn’t be talking about functions if they didn’t exist in the Swift language. And as you’ll see, they can perform not just mathematical calculations, but a whole lot more.
Coding the Function in Swift
Swift’s notation for establishing the existence of a function is a little different than the mathematical one you just saw. In general, the syntax for declaring a Swift function is:
func funcName(paramName : type, ...) -> returnType
Take a look at an example to help clarify the syntax. Figure 4.1 shows the code in the Chapter 4.playground file, along with the function defined on lines 7 through 13. This is the function discussed earlier, but now in a notation the Swift compiler can understand.
Figure 4.1 Temperature conversion as a Swift function
Start by typing in the following code.
func fahrenheitToCelsius(fahrenheitValue : Double) -> Double {
    var result : Double

    result = (((fahrenheitValue - 32) * 5) / 9)

    return result
}
As you can see on line 7, there is some new syntax to learn. The func keyword is Swift’s way to declare a function. That is followed by the function name (fahrenheitToCelsius), and the independent variable’s name, or parameter name in parentheses. Notice that the parameter’s type is explicitly declared as Double.
Following the parameter are the two characters ->, which denote that this function is returning a value of a type (in this case, a Double type), followed by the open curly brace, which indicates the start of the function.
On line 8, you declare a variable of type Double named result. This will hold the value that will be given back to anyone who calls the function. Notice it is the same type as the function’s return type declared after the -> on line 7.
The actual mathematical function appears on line 10, with the result of the expression assigned to result, the local variable declared in line 8. Finally on line 12, the result is returned to the caller using the return keyword. Any time you wish to exit a function and return to the calling party, you use return along with the value being returned.
The Results sidebar doesn’t show anything in the area where the function was typed. That’s because a function by itself doesn’t do anything. It has the potential to perform some useful work, but it must be called by a caller. That’s what you’ll do next.
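Before calling the function, one side note on return: the first return statement that executes ends the function immediately, so a function can have more than one exit point. The tiny helper below is not part of the chapter's project — it's a standalone sketch, written with an argument label at the call site as current Swift versions require (in the Xcode 6-era Swift this chapter uses, you would call it as absoluteValue(-4.2) instead).

```swift
// Hypothetical helper showing an early return: when x is non-negative,
// the first return fires and the final line never runs.
func absoluteValue(x: Double) -> Double {
    if x >= 0 {
        return x      // early exit for non-negative values
    }
    return -x
}

let magnitude = absoluteValue(x: -4.2)   // 4.2
```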
Exercising the Function
Now it’s time to call on the function you just created. Type in the following two lines of code, and pay attention to the Results sidebar in Figure 4.2.
var outdoorTemperatureInFahrenheit = 88.2
var outdoorTemperatureInCelsius = fahrenheitToCelsius(outdoorTemperatureInFahrenheit)
Figure 4.2 The result of calling the newly created function
On line 15, you’ve declared a new variable, outdoorTemperatureInFahrenheit, and set its value to 88.2 (remember, Swift infers the type in this case as a Double). That value is then passed to the function on line 16, where a new variable, outdoorTemperatureInCelsius, is declared, and its value is captured as the result of the function.
The Results sidebar shows that 31.222222 (repeating decimal) is the result of the function, and indeed, 31.2 degrees Celsius is equivalent to 88.2 degrees Fahrenheit. Neat, isn’t it? You now have a temperature conversion tool right at your fingertips.
Now, here’s a little exercise for you to do on your own: Write the inverse method, celsiusToFahrenheit, using the following formula for that conversion:

f(x) = ((x × 9) / 5) + 32
Go ahead and code it up yourself, but resist the urge to peek ahead. Don’t look until you’ve written the function, and then check your work against the following code and in Figure 4.3.
func celsiusToFahrenheit(celsiusValue : Double) -> Double {
    var result : Double

    result = (((celsiusValue * 9) / 5) + 32)

    return result
}
outdoorTemperatureInFahrenheit = celsiusToFahrenheit(outdoorTemperatureInCelsius)
Figure 4.3 Declaring the inverse function, celsiusToFahrenheit
The inverse function on lines 18 through 24 simply implements the Celsius to Fahrenheit formula and returns the result. Passing in the Celsius value of 31.22222 on line 26, you can see that the result is the original Fahrenheit value, 88.2.
You’ve just created two functions that do something useful: temperature conversions. Feel free to experiment with other values to see how they change between the two related functions.
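One experiment worth trying: chain the two conversions and confirm that they undo each other. The sketch below restates both functions so that it runs on its own; it uses argument labels at the call sites, as current Swift versions require — in the Xcode 6-era Swift this chapter targets, the calls would omit the labels (for example, fahrenheitToCelsius(88.2)).

```swift
// Round trip: Fahrenheit -> Celsius -> Fahrenheit should reproduce the
// original value (within a tiny floating-point rounding error).
func fahrenheitToCelsius(fahrenheitValue: Double) -> Double {
    return (((fahrenheitValue - 32) * 5) / 9)
}

func celsiusToFahrenheit(celsiusValue: Double) -> Double {
    return (((celsiusValue * 9) / 5) + 32)
}

let original = 88.2
let roundTrip = celsiusToFahrenheit(celsiusValue: fahrenheitToCelsius(fahrenheitValue: original))
// roundTrip is 88.2 again
```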
More Than Just Numbers
The notion of a function in Swift is more than just the mathematical concept we have discussed. In a broad sense, Swift functions are more flexible and robust in that they can accept more than just one parameter, and even accept types other than numeric ones.
Consider creating a function that takes more than one parameter and returns something other than a Double (Figure 4.4).
func buildASentence(subject : String, verb : String, noun : String) -> String {
    return subject + " " + verb + " " + noun + "!"
}
buildASentence("Swift", "is", "cool")
buildASentence("I", "love", "languages")
Figure 4.4 A multi-parameter function
After typing in lines 28 through 33, examine your work. On line 28, you declared a new function, buildASentence, with not one but three parameters: subject, verb, and noun, all of which are String types. The function also returns a String type as well. On line 29, the concatenation of those three parameters, interspersed with spaces to make the sentence readable, is what is returned.
For clarity, the function is called twice on lines 32 and 33, resulting in the sentences in the Results sidebar. Feel free to replace the parameters with values of your own liking and view the results interactively.
Parameters Ad Nauseam
Imagine you’re writing the next big banking app for the Mac, and you want to create a way to add some arbitrary number of account balances. Something so mundane can be done a number of ways, but you want to write a Swift function to do the addition. The problem is you don’t know how many accounts will need to be summed at any given time.
Enter Swift’s variable parameter passing notation. It provides you with a way to tell Swift, “I don’t know how many parameters I’ll need to pass to this function, so accept as many as I will give.” Type in the following code, which is shown in action on lines 35 through 48 in Figure 4.5.
// Parameters Ad Nauseam
func addMyAccountBalances(balances : Double...) -> Double {
var result : Double = 0
for balance in balances {
result += balance
}
return result
}
addMyAccountBalances(77.87)
addMyAccountBalances(10.52, 11.30, 100.60)
addMyAccountBalances(345.12, 1000.80, 233.10, 104.80, 99.90)
Figure 4.5 Variable parameter passing in a function
This function’s parameter, known as a variadic parameter, can represent an unknown number of parameters.
On line 36, our balances parameter is declared as a Double followed by the ellipsis (...) and returns a Double. The presence of the ellipsis is the clue: It tells Swift to expect one or more parameters of type Double when this function is called.
The function is called three times on lines 46 through 48, each with a different number of bank balances. The totals for each appear in the Results sidebar.
You might be tempted to add additional variadic parameters in a function. Figure 4.6 shows an attempt to extend addMyAccountBalances with a second variadic parameter, but it results in a Swift error.
Figure 4.6 Adding additional variable parameters results in an error.
This is a no-no, and Swift will quickly shut you down with an error. Only the last parameter of a function may contain the ellipsis to indicate a variadic parameter. All other parameters must refer to a single quantity.
Since we’re on the theme of bank accounts, add two more functions: one that will find the largest balance in a given list of balances, and another that will find the smallest balance. Type the following code, which is shown on lines 50 through 75 in Figure 4.7.
func findLargestBalance(balances : Double...) -> Double {
var result : Double = -Double.infinity
for balance in balances {
if balance > result {
result = balance
}
}
return result
}
func findSmallestBalance(balances : Double...) -> Double {
var result : Double = Double.infinity
for balance in balances {
if balance < result {
result = balance
}
}
return result
}
findLargestBalance(345.12, 1000.80, 233.10, 104.80, 99.90)
findSmallestBalance(345.12, 1000.80, 233.10, 104.80, 99.90)
Figure 4.7 Functions to find the largest and smallest balance
Both functions iterate through the parameter list to find the largest and smallest balance. Unless you have an account with plus or minus infinity of your favorite currency, these functions will work well. On lines 74 and 75, both functions are tested with the same balances used earlier, and the Results sidebar confirms their correctness.
Functions Fly First Class
One of the powerful features of Swift functions is that they are first-class objects. Sounds pretty fancy, doesn’t it? What that really means is that you can handle a function just like any other value. You can assign a function to a constant, pass a function as a parameter to another function, and even return a function from a function!
To illustrate this idea, consider the act of depositing a check into your bank account, as well as withdrawing an amount. Every Monday, an amount is deposited, and every Friday, another amount is withdrawn. Instead of tying the day directly to the function name of the deposit or withdrawal, use a constant to point to the function for the appropriate day. The code on lines 77 through 94 in Figure 4.8 provides an example.
var account1 = ( "State Bank Personal", 1011.10 )
var account2 = ( "State Bank Business", 24309.63 )
func deposit(amount : Double, account : (name : String, balance : Double)) -> (String, Double) {
var newBalance : Double = account.balance + amount
return (account.name, newBalance)
}
func withdraw(amount : Double, account : (name : String, balance : Double)) -> (String, Double) {
var newBalance : Double = account.balance - amount
return (account.name, newBalance)
}
let mondayTransaction = deposit
let fridayTransaction = withdraw
let mondayBalance = mondayTransaction(300.0, account1)
let fridayBalance = fridayTransaction(1200, account2)
Figure 4.8 Demonstrating functions as first-class types
For starters, you create two accounts on lines 77 and 78. Each account is a tuple consisting of an account name and balance.
On line 80, a function is declared named deposit that takes two parameters: the amount (a Double) and a tuple named account. The tuple has two members: name, which is of type String, and balance, which is a Double that represents the funds in that account. The same tuple type is also declared as the return type.
At line 81, a variable named newBalance is declared, and its value is assigned the sum of the balance member of the account tuple and the amount variable that is passed. The tuple result is constructed on line 82 and returned.
The function on line 85 is named differently (withdraw) but is effectively the same, save for the subtraction that takes place on line 86.
On lines 90 and 91, two new constants are declared and assigned to the functions respectively by name: deposit and withdraw. Since deposits happen on a Monday, the mondayTransaction is assigned the deposit function. Likewise, withdrawals are on Friday, and the fridayTransaction constant is assigned the withdraw function.
Lines 93 and 94 show the results of passing the account1 and account2 tuples to the mondayTransaction and fridayTransaction constants, which are in essence the functions deposit and withdraw. The Results sidebar bears out the result, and you’ve just called the two functions by referring to the constants.
Throw Me a Function, Mister
Just as a function can return an Int, Double, or String, a function can also return another function. Your head starts hurting just thinking about the possibilities, doesn’t it? Actually, it’s not as hard as it sounds. Check out lines 96 through 102 in Figure 4.9.
func chooseTransaction(transaction: String) -> (Double, (String, Double)) -> (String, Double) {
if transaction == "Deposit" {
return deposit
}
return withdraw
}
On line 96, the function chooseTransaction takes a String as a parameter, which it uses to deduce the type of banking transaction. That same function returns a function, which itself takes a Double, and a tuple of String and Double, and returns a tuple of String and Double. Phew!
Figure 4.9 Returning a function from a function
That’s a mouthful. Let’s take a moment to look at that line closer and break it down a bit. The line begins with the definition of the function and its sole parameter: transaction, followed by the -> characters indicating the return type:
func chooseTransaction(transaction: String) ->
After that is the return type, which is a function that takes two parameters: the Double, and a tuple of Double and String, as well as the function return characters ->:
(Double, (String, Double)) ->
And finally, the return type of the returned function, a tuple of String and Double.
What functions did you write that meet this criteria? The deposit and withdraw functions, of course! Look at lines 80 and 85. Those two functions are bank transactions that were used earlier. Since they are defined as functions that take two parameters (a Double and a tuple of String and Double) and return a tuple of Double and String, they are appropriate candidates for return values in the chooseTransaction function on line 96.
Back to the chooseTransaction function: On line 97, the transaction parameter, which is a String, is compared against the constant string "Deposit" and if a match is made, the deposit function is returned on line 98; otherwise, the withdraw function is returned on line 101.
Ok, so you have a function which itself returns one of two possible functions. How do you use it? Do you capture the function in another variable and call it?
Actually, there are two ways this can be done (Figure 4.10).
// option 1: capture the function in a constant and call it
let myTransaction = chooseTransaction("Deposit")
myTransaction(225.33, account2)
// option 2: call the function result directly
chooseTransaction("Withdraw")(63.17, account1)
Figure 4.10 Calling the returned function in two different ways
On line 105 you can see that the returned function for making deposits is captured in the constant myTransaction, which is then called on line 106 with account2 increasing its amount by $225.33.
The alternate style is on line 109. There, the chooseTransaction function is being called to gain access to the withdraw function. Instead of assigning the result to a constant, however, the returned function is immediately pressed into service with the parameters 63.17 and the first account, account1. The results are the same in the Results sidebar: The withdraw function is called and the balance is adjusted.
A Function in a Function in a...
If functions returned by functions and assigned to constants isn’t enough of an enigma for you, how about declaring a function inside of another function? Yes, such a thing exists. They’re called nested functions.
Nested functions are useful when you want to isolate, or hide, specific functionality that doesn’t need to be exposed to outer layers. Take, for instance, the code in Figure 4.11.
// nested function example
func bankVault(passcode : String) -> String {
func openBankVault(Void) -> String {
return "Vault opened"
}
func closeBankVault(Void) -> String {
return "Vault closed"
}
if passcode == "secret" {
return openBankVault()
}
else {
return closeBankVault()
}
}
println(bankVault("wrongsecret"))
println(bankVault("secret"))
Figure 4.11 Nested functions in action
On line 112, a new function, bankVault, is defined. It takes a single parameter, passcode, which is a String, and returns a String.
Lines 113 and 116 define two functions inside of the bankVault function: openBankVault and closeBankVault. Both of these functions take no parameter and return a String.
On line 119, the passcode parameter is compared with the string "secret" and if a match is made, the bank vault is opened by calling the openBankVault function. Otherwise, the bank vault remains closed.
Lines 127 and 128 show the result of calling the bankVault method with an incorrect and correct passcode. What’s important to realize is that the openBankVault and closeBankVault functions are “enclosed” by the bankVault function, and are not known outside of that function.
If you were to attempt to call either openBankVault or closeBankVault outside of the bankVault function, you would get an error. That’s because those functions are not in scope. They are, in effect, hidden by the bankVault function and are unable to be called from the outside. Figure 4.12 illustrates an attempt to call one of these nested functions.
Figure 4.12 The result of attempting to call a nested function from a different scope
In general, the obvious benefit of nesting functions within functions is that it prevents the unnecessary exposing of functionality. In Figure 4.12, The bankVault function is the sole gateway to opening and closing the vault, and the functions that perform the work are isolated within that function. Always consider this when designing functions that are intended to work together.
Default Parameters
As you’ve just seen, Swift functions provide a rich area for utility and experimentation. A lot can be done with functions and their parameters to model real-world problems. Functions provide another interesting feature called default parameter values, which allow you to declare functions that have parameters containing a “prefilled” value.
Let’s say you want to create a function that writes checks. Your function would take two parameters: a payee (the person or business to whom the check is written) and the amount. Of course, in the real world, you will always want to know these two pieces of information, but for now, think of a function that would assume a default payee and amount in the event the information wasn’t passed.
Figure 4.13 shows such a function on lines 130 through 132. The writeCheck function takes two String parameters, the payee and amount, and returns a String that is simply a sentence describing how the check is written.
func writeCheck(payee : String = "Unknown", amount : String = "10.00") -> String {
return "Check payable to " + payee + " for $" + amount
}
writeCheck()
writeCheck(payee : "Donna Soileau")
writeCheck(payee : "John Miller", amount : "45.00")
Figure 4.13 Using default parameters in a function
Take note of the declaration of the function on line 130:
func writeCheck(payee : String = "Unknown", amount : String = "10.00") -> String
What you haven’t seen before now is the assignment of the parameters to actual values (in this case, payee is being set to "Unknown" by default and amount is being set to "10.00"). This is how you can write a function to take default parameters—simply assign the parameter name to a value!
So how do you call this function? Lines 134 through 136 show three different ways:
- Line 134 passes no parameters when calling the function.
- Line 135 passes a single parameter.
- Line 136 passes both parameters.
In the case where no parameters are passed, the default values are used to construct the returned String. In the other two cases, the passed parameter values are used in place of the default values, and you can view the results of the calls in the Results sidebar.
Another observation: When calling a function set up to accept default parameters, you must pass the parameter name and a colon as part of that parameter. On line 135, only one parameter is used:
writeCheck(
payee :"Donna Soileau")
And on line 136, both parameter names are used:
writeCheck(
payee :"John Miller",
amount :"45.00")
Default parameters give you the flexibility of using a known value instead of taking the extra effort to pass it explicitly. They’re not necessarily applicable for every function out there, but they do come in handy at times.
What’s in a Name?
As Swift functions go, declaring them is easy, as you’ve seen. In some cases, however, what really composes the function name is more than just the text following the keyword func.
Each parameter in a Swift function can have an optional external parameter preceding the parameter name. External names give additional clarity and description to a function name. Consider another check writing function in Figure 4.14, lines 138 through 140.
func writeCheck(payer : String, payee : String, amount : Double) -> String {
return "Check payable from \(payer) to \(payee) for $\(amount)"
}
writeCheck("Dave Johnson", "Coz Fontenot", 1000.0)
Figure 4.14 A function without external parameter names
This function is different from the earlier check writing function on lines 130 through 132 in two ways:
- An additional parameter named payer to indicate who the check is coming from
- No default parameters
On line 142, the new writeCheck function is called with three parameters: two String values and a Double value. From the name of the function, its purpose is clearly to write a check. When writing a check, you need to know several things: who the check is being written for; who is writing the check; and for how much? A good guess is that the Double parameter is the amount, which is a number. But without actually looking at the function declaration itself, how would you know what the two String parameters actually mean? Even if you were to deduce that they are the payer and payee, how do you know which is which, and in which order to pass the parameters?
External parameter names solve this problem by adding an additional name to each parameter that must be passed when calling the function, which makes very clear to anyone reading the calling function what the intention is and the purpose of each parameter. Figure 4.15 illustrates this quite well.
func writeBetterCheck(from payer : String, to payee : String, total amount : Double) -> String {
return "Check payable from \(payer) to \(payee) for $\(amount)"
}
writeBetterCheck(from : "Fred Charlie", to: "Ryan Hanks", total : 1350.0)
Figure 4.15 A function with external parameter names
On line 144, you declare a function, writeBetterCheck, which takes the same number of parameters as the function on line 138. However, each of the parameters in the new function now has its own external parameter: from, to, and total. The original parameter names are still there, of course, used inside the function itself to reference the assigned values.
This extra bit of typing pays off on line 148, when the writeBetterCheck function is called. Looking at that line of code alone, the order of the parameters and what they indicate is clear: Write a check from Fred Charlie to Ryan Hanks for a total of $1350.
When It’s Good Enough
External parameter names bring clarity to functions, but they can feel somewhat redundant and clumsy as you search for something to accompany the parameter name. Actually, you may find that in certain cases, the parameter name is descriptive enough to act as the external parameter name. Swift allows you to use the parameter name as an external name, too, with a special syntax: the # character.
Instead of providing an external parameter name, simply prepend the # character to the parameter name, and use that name when calling the new function writeBestCheck, as done on line 150 in Figure 4.16. This is known as shorthand external parameter naming.
Figure 4.16 Using the shorthand external parameter syntax
The three parameter names, from, to, and total, all are prepended with #. On line 154, the parameter names are used as external parameter names once again to call the function, and the use of those names clearly shows what the function’s purpose and parameter order is: a check written from Bart Stewart to Alan Lafleur for a total of $101.
func writeBestCheck(#from : String, #to : String, #total : Double) -> String {
return "Check payable from \(from) to \(to) for $\(total)"
}
writeBestCheck(from : "Bart Stewart", to: "Alan Lafleur", total : 101.0)
To Use or Not to Use?
External parameter names bring clarity to functions, but they also require more typing on the part of the caller who uses your functions. Since they are optional parts of a function’s declaration, when should you use them?
In general, if the function in question can benefit from additional clarity of having external parameter names provided for each parameter, by all means use them. The check writing example is such a case. Avoid parameter ambiguity in the cases where it might exist. On the other hand, if you’re creating a function that just adds two numbers (see lines 156 through 160 in Figure 4.17), external parameter names add little to nothing of value for the caller.
func addTwoNumbers(number1 : Double, number2 : Double) -> Double {
return number1 + number2
}
addTwoNumbers(33.1, 12.2)
Figure 4.17 When external parameter names are not necessary
Don’t Change My Parameters!
Functions are prohibited from changing the values of parameters passed to them, because parameters are passed as constants and not variables. Consider the function cashCheck on lines 162 through 169 in Figure 4.18.
func cashCheck(#from : String, #to : String, #total : Double) -> String {
if to == "Cash" {
to = from
}
return "Check payable from \(from) to \(to) for $\(total) has been cashed"
}
cashCheck(from: "Jason Guillory", to: "Cash", total: 103.00)
Figure 4.18 Assigning a value to a parameter results in an error.
The function takes the same parameters as our earlier check writing function: who the check is from, who the check is to, and the total. On line 163, the to variable is checked for the value "Cash" and if it is equal, it is reassigned the contents of the variable from. The rationale here is that if you are writing a check to “Cash,” you’re essentially writing it to yourself.
Notice the error: Cannot assign to ‘let’ value ‘to’. Swift is saying that the parameter to is a constant, and since constants cannot change their values once assigned, this is prohibited and results in an error.
To get around this error, you could create a temporary variable, as done in Figure 4.19. Here, a new variable named otherTo is declared on line 163 and assigned to the to variable, and then possibly to the from variable assuming the condition on line 164 is met. This is clearly acceptable and works fine for our purposes, but Swift gives you a better way.
Figure 4.19 A potential workaround to the parameter change problem
With a var declaration on a parameter, you can tell Swift the parameter is intended to be variable and can change within the function. All you need to do is add the keyword before the parameter name (or external parameter name in case you have one of those). Figure 4.20 shows a second function, cashBetterCheck, which declares the to parameter as a variable parameter. Now the code inside the function can modify the to variable without receiving an error from Swift, and the output is identical to the workaround function above it.
func cashBetterCheck(#from : String, var #to : String, #total : Double) -> String {
if to == "Cash" {
to = from
}
return "Check payable from \(from) to \(to) for $\(total) has been cashed"
}
cashBetterCheck(from: "Ray Daigle", to: "Cash", total: 103.00)
Figure 4.20 Using variable parameters to allow modifications
The Ins and Outs
As you’ve just seen, a function can be declared to modify the contents of one or more of its passed variables. The modification happens inside the function itself, and the change is not reflected back to the caller.
Sometimes having a function change the value of a passed parameter so that its new value is reflected back to the caller is desirable. For example, in the cashBetterCheck function on lines 172 through 177, having the caller know that the to variable has changed to a new value would be advantageous. Right now, that function’s modification of the variable is not reflected back to the caller. Let’s see how to do this in Figure 4.21 using Swift’s inout keyword.
func cashBestCheck(#from : String, inout #to : String, #total : Double) -> String {
if to == "Cash" {
to = from
}
return "Check payable from \(from) to \(to) for $\(total) has been cashed"
}
var payer = "James Perry"
var payee = "Cash"
println(payee)
cashBestCheck(from: payer, to: &payee, total: 103.00)
println(payee)
Figure 4.21 Using the inout keyword to establish a modifiable parameter
Lines 181 through 186 define the cashBestCheck function, which is virtually identical to the cashBetterCheck function on line 172, except the second parameter to is no longer a variable parameter—the var keyword has been replaced with the inout keyword. This new keyword tells Swift the parameter’s value can be expected to change in the function and that the change should be reflected back to the caller. With that exception, everything else is the same between the cashBetterCheck and cashBestCheck functions.
On lines 188 and 189, two variables are declared: payer and payee, with both being assigned String values. This is done because inout parameters must be passed a variable. A constant value will not work, because constants cannot be modified.
On line 190, the payee variable is printed, and the Results sidebar for that line clearly shows the variable’s contents as “Cash”. This is to make clear that the variable is set to its original value on line 189.
On line 191, we call the cashBestCheck function. Unlike the call to cashBetterCheck on line 179, we are passing variables instead of constants for the to and from parameters. More so, for the second parameter (payee), we are prepending the ampersand character (&) to the variable name. This is a direct result of declaring the parameter in cashBestCheck as an inout parameter. You are in essence telling Swift that this variable is an in-out variable and that you expect it to be modified once control is returned from the called function.
On line 193, the payee variable is again printed. This time, the contents of that variable do not match what was printed on line 189 earlier. Instead, payee is now set to the value “James Perry”, which is a direct result of the assignment in the cashBestCheck function on line 183. | http://www.peachpit.com/articles/article.aspx?p=2271189&seqNum=3 | CC-MAIN-2016-50 | refinedweb | 5,411 | 62.68 |
SOAP Request Message Structure
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
If you want your SOAP client to build its own SOAP requests instead of using the proxy classes provided by Visual Studio 2005, you must use the following message formats.
The following sample shows a typical SOAP request sent to an instance of SQL Server. In the SOAP message the GetCustomerInfo operation is requested. Note that only a fragment of the HTTP header is shown.>
HTTP Header
In the previous code, the value of the SoapAction HTTP header field is the method name preceded by its namespace. This value is the same method and namespace that you added to the endpoint created by using CREATE ENDPOINT. Note that this is an optional field. The Host HTTP header field identifies the server to which the HTTP request is sent.
<soap:Envelope> Element
The details of a SOAP request are included in the <Body> element in the SOAP envelope. The previous example requests the GetCustomerInfo method. The xmlns attribute in <GetCustomerInfo> is the same namespace that is specified for this method you created the endpoint by using CREATE ENDPOINT. For more information about the stored procedure and namespace, see Sample Applications for Sending Native XML Web Services Requests. The following method parameters are passed in as child elements of the <GetCustomerInfo> element:
The <CustomerID> element that has value 1 is the input parameter
The <OutputParam> element is the output parameter.
Input Parameter Handling
Input parameters are handled in the following ways:
If a SOAP method requires an input parameter, and this parameter is not included in the SOAP request, no value is passed to the called stored procedure. The default action defined in the stored procedure occurs.
If a SOAP method requires an input parameter, and this parameter is included in the request but no value is assigned to it, the parameter is passed to the stored procedure with an empty string as its value. Note that it is not NULL.
If a SOAP operation requires an input parameter and if you want to send a NULL value for this parameter, you must set an xsi:nil attribute to "true" in the SOAP request. For example:
In Visual Studio 2005, when you pass NULL values to string variables, this generates the xsi:nil="true" attribute in the SOAP request. But when you pass NULL values for parameters of types such as integer and float (value types), Visual Studio 2005 does not generate the xsi:nil="true" attribute; instead, it provides default values for these parameters; for example, 0 for integer types, 0.0 for float types, an so on. Therefore, if you want to pass NULL values to these types of parameters, you must build the SOAP message in your application by using the xsi:nil="true" attribute. For more information, see Guidelines and Limitations in Native XML Web Services
You can provide several facets on the parameters. A table shown later in this topic lists several facets that you can specify when you request ad hoc SQL queries. In this table, all the facets that you can specify for a <Value> node can be specified on the RPC method parameter nodes.
When you send a SOAP request for ad hoc SQL query executions, you must call the sqlbatch method and pass the queries and whatever parameters may be required.
The following sample HTTP SOAP request calls the sqlbatch method. Note that only a fragment of the HTTP header is shown.
POST /url HTTP/1.1 Host: HostServerName Content-type: text/xml; charset=utf-8 Content-length: 656 SoapAction: ... <?xml version="1.0" encoding="utf-8" ?> <soap:Envelope xmlns: <soap:Body> <sqlbatch xmlns=""> <BatchCommands> SELECT EmployeeID, FirstName, LastName FROM Employee WHERE EmployeeID=@x FOR XML AUTO; </BatchCommands> <Parameters> <SqlParameter Name="x" SqlDbType="Int" MaxLength="20" xmlns=" 2001/12/SOAP/types/SqlParameter"> <Value xsi:1</Value> </SqlParameter> </Parameters> </sqlbatch> </soap:Body> </soap:Envelope>
HTTP Header
In the HTTP header, note that the SoapAction HTTP header field value is the method name (sqlbatch) that the client uses to specify SQL queries. Note that this header is optional.
<soap:Envelope> Element
The SOAP request details appear in the <Body> element. The SOAP <Body> element has only one child element (<sqlbatch>), and it identifies the method requested. The namespace identified in the element is where the sqlbatch operation is defined. This element has following child elements:
The <BatchCommands> element specifies the query, or queries separated by semicolons (;), to execute.
The <Parameters> element provides an optional list of parameters. In the previous example request envelope, there is only one parameter passed to the query. Each parameter added to the SOAP message as a <SqlParameter> child element of the<Parameters> element. In passing the parameters, you must pass at least, the parameter name (Name attribute of <SqlParameter> element) and the parameter value (<Value> child element of <SqlParameter> element).
To avoid unexpected conversions, provide as much parameter information that you can. The following table lists additional parameter facets that you can specify for the <SqlParameter> element. You can also specify some of these facets for the <Value> element.
For facets that can be specified on both the <SqlParameter> and the <Value> element, when you specify the <Value> element, the facets must be in the namespace as shown in the following example: | http://msdn.microsoft.com/en-us/library/ms190796(v=sql.105).aspx | CC-MAIN-2014-41 | refinedweb | 907 | 52.29 |
How to write a Python web API with Pyramid and Cornice
Use Pyramid and Cornice to build and document scalable RESTful web services.
Python is a high-level, object-oriented programming language known for its simple syntax. It is consistently among the top-rated programming languages for building RESTful APIs.
Pyramid is a Python web framework designed to scale up with an application: it's simple for simple applications but can grow for big, complex applications. Among other things, Pyramid powers PyPI, the Python package index. Cornice provides helpers to build and document REST-ish web services with Pyramid.
This article will use the example of a web service to get famous quotes to show how to use these tools.
Set up a Pyramid application
Start by creating a virtual environment for your application and a file to hold the code:
$ mkdir tutorial
$ cd tutorial
$ touch main.py
$ python3 -m venv env
$ source env/bin/activate
(env) $ pip3 install cornice twisted
Import the Cornice and Pyramid modules
Import these modules with:
from pyramid.config import Configurator
from cornice import Service
Define the service
Define the quotes service as a Service object:
QUOTES = Service(name='quotes',
                 path='/',
                 description='Get quotes')
Write the quotes logic
So far, this only supports GETing quotes. Decorate the function with QUOTES.get; this is how you can tie in the logic to the REST service:
@QUOTES.get()
def get_quote(request):
    return {
        'William Shakespeare': {
            'quote': ['Love all, trust a few, do wrong to none',
                      'Some are born great, some achieve greatness, and some have greatness thrust upon them.']
        },
        'Linus': {
            'quote': ['Talk is cheap. Show me the code.']
        }
    }
Note that unlike in other frameworks, the get_quote function is not changed by the decorator. If you import this module, you can still call the function regularly and inspect the result.
This is useful when writing unit tests for Pyramid RESTful services.
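As a sketch of that point, the view can be exercised directly, with no server and no HTTP client. The function body is inlined here so the snippet runs standalone; in a real project you would import get_quote from your application module instead:

```python
# Because @QUOTES.get() leaves get_quote unchanged, a test can call it
# like any plain function. The handler ignores its request argument,
# so a simple stand-in (here None) is enough.
def get_quote(request):  # inlined copy of the view above
    return {
        'William Shakespeare': {
            'quote': ['Love all, trust a few, do wrong to none']
        },
        'Linus': {
            'quote': ['Talk is cheap. Show me the code.']
        }
    }

def test_get_quote():
    result = get_quote(request=None)
    assert 'Linus' in result
    assert result['Linus']['quote'] == ['Talk is cheap. Show me the code.']

test_get_quote()
print('ok')
```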
Define the application object
Finally, use scan to find all decorated functions and add them to the configuration:
with Configurator() as config:
    config.include("cornice")
    config.scan()
application = config.make_wsgi_app()
The default for scan is to scan the current module. You can also give the name of a package if you want to scan all modules in a package.
Run the service
I use Twisted's WSGI server to run the application, but you can use any other WSGI server, like Gunicorn or uWSGI, if you want:
(env)$ python -m twisted web --wsgi=main.application
By default, Twisted's WSGI server runs on port 8080. You can test the service with HTTPie:
(env) $ pip install httpie
...
(env) $ http GET
HTTP/1.1 200 OK
Content-Length: 220
Content-Type: application/json
Date: Mon, 02 Dec 2019 16:49:27 GMT
Server: TwistedWeb/19.10.0
X-Content-Type-Options: nosniff
{
    "Linus": {
        "quote": [
            "Talk is cheap. Show me the code."
        ]
    },
    "William Shakespeare": {
        "quote": [
            "Love all, trust a few, do wrong to none",
            "Some are born great, some achieve greatness, and some have greatness thrust upon them."
        ]
    }
}
Why use Pyramid?
Pyramid is not the most popular framework, but it is used in some high-profile projects like PyPI. I like Pyramid because it is one of the frameworks that took unit testing seriously: because the decorators do not modify the function and there are no thread-local variables, functions are callable directly from unit tests. For example, functions that need access to the database will get it from the request object passed in via request.config. This allows a unit tester to put a mock (or real) database object in the request, instead of carefully setting globals, thread-local variables, or other framework-specific things.
If you're looking for a well-tested library to build your next API, give Pyramid a try. You won't be disappointed. | https://opensource.com/article/20/1/python-web-api-pyramid-cornice | CC-MAIN-2020-34 | refinedweb | 652 | 64.2 |
- controls
- camera movement
- physics
- GUI Textures
3. Devices
The first thing we need to do after selecting the target platform is choosing the size of the artwork we'll be using in the game. This will help us select a proper size for the 3D textures and 2D GUI without making the artwork blurry or using textures that are too large for the target device. For example, the artwork needs to have a higher resolution if you're targeting an iPad with a retina display than if you're targeting a Lumia 520.
Android
Because Android is an open platform, there's a wide range of devices, screen sizes, and resolutions to consider.
Windows Phone & BlackBerry
- Blackberry Z10: 720px x 1280px, 355 ppi
- Nokia Lumia 520: 400px x 800px, 233 ppi
- Nokia Lumia 1520: 1080px x 1920px, 367 ppi
Note that the code we'll write in this tutorial can be used to target any of the platforms.
4. Export Graphics
Depending on the devices you're targeting, you may need to convert the artwork to the recommended size and pixel density. You can do this in your favorite image editor. I've used the Adjust Size... function under the Tools menu in OS X's Preview application.
5. Unity User Interface
You can modify the resolution that's being displayed in the Game panel.
6. Game Interface
The interface of our game will be straightforward. The above screenshot gives you an idea of the artwork we'll be using and how the final game interface will end up looking. You can find the artwork and additional resources in the tutorial's source files on GitHub. The music and sound effects used in the game come from playonloop.com and Freesound.
9. 3D Models
To create our game we first need to get our 3D models. I recommend 3docean for high quality models, textures, and more, but if you're testing or learning then a few free models are good as well. The models in this tutorial were downloaded from SketchUp 3D Warehouse where you can find a good variety of models of all kinds.
Now, as Unity doesn't recognize the SketchUp file format, we need to convert it to something Unity can import. We first need to download the free version of SketchUp, which is called SketchUp Make.
Open your 3D model in SketchUp Make, select Export > 3D Model from the File menu, and choose Collada (*.dae).
Choose a name and location, and click the save button. This will create a file and a folder for the 3D model. The file holds the data for the 3D object while the folder contains the model's textures. You can now import the model into Unity. We'll also use Unity's native 3D primitive objects to create the level, as shown in the next steps.
12. Setup Camera
Let's first position our Main Camera a little bit higher to achieve the view we want. Select it from the Hierarchy panel and adjust the Transform values in the Inspector to match the ones shown below.
Don't worry if you don't see any changes. We haven't created anything for the camera to see yet. Next, use the Inspector to set the Background color to RGB: 0, 139, 252.
13. Background
Our platform level will be floating above a background, which will be a representation of a sea. It will be created using Unity primitives, a simple Plane with a texture applied to it.
While Unity can work with 3D objects of any type created by other programs, it is sometimes easier and/or more convenient to use primitives for prototypes.
To create the sea, for example, select Create Other > Plane from the GameObject menu and adjust the Transform values in the Inspector to match the ones shown below.
You should see a square in the Scene panel. We'll use it to detect when the player falls from the platform, ending the game.
It's worth mentioning that these primitive objects already have a Mesh Collider attached to them, which means that they will automatically detect collisions or trigger events when they come in contact with a RigidBody.
14. Texture Material
To apply a texture to the sea plane, we need to create a Material. A Material is used to define how a GameObject looks and it is essential to add a texture to a GameObject.
Select Create > Material from the Assets menu to create one, find it in the Assets panel, and use the Inspector to select the texture you want to use as your sea. These are the settings I've used:
You'll notice a message in the material section stating that it's recommended to use Mobile/Diffuse as a shader, because the default white color doesn't do anything. Changing it to Mobile/Diffuse will also help with performance.
15. Adding Light
You may have noticed that the sea is a bit darker than it should be. To fix this, we need to add a Light to our scene. Select Create Other from the GameObject menu and select Directional Light. This will create an object that produces a beam of light. Change its Transform values as shown in the following screenshot to make it illuminate the sea.
This looks much better.
16. Creating Platforms
The platforms are parts of our level and are used by the player to move the ball to the portal on the other side of the sea.
Create a Plane as you did for the sea and adjust the Transform values in the Inspector as shown below. This will create and put the first platform in place.
We can now use Unity's Move and Rotation tools to create the other platforms. They're all of the same size so we can use them vertically or horizontally by duplicating them using Command+D on OS X and Control+D on Windows.
17. Platform Texture
Create a new Material like we did in Step 14 and apply the texture to it. Mine looks like this:
Adjust the x and y tiling until you're happy with the result.
18. Border Cylinders
We need to create a border to prevent our player from falling off too easily. To do this, we'll use a new type of primitive, a Cylinder.
Select Create Other > Cylinder from the GameObject menu and adjust the Transform values in the Inspector as shown below.
This will add a small border to the edge of the first platform. Create a new Material and change its color in the Inspector to RGB: 255, 69, 0.
The result should look like this:
Use Command+D (Control+D on Windows) to duplicate the border and the Scale tool to change its size. Position the duplicates at the platforms' edges using Unity's tools.
19. Portal
The portal is the goal line of the game. The player will use the accelerometer to control the ball and take it to this point while picking up items and avoiding falling off the platform. The portal is a 3D model, which we imported in Step 10.
Drag and drop it on the Scene or Hierarchy panel and change its Transform values to the following:
This will position it at the end of the platforms.
20. Portal Collider
Because imported 3D models don't have a collider by default, we need to attach one. Since we only need to test if the ball hits the blue area of the portal, we'll attach the collider to it.
Take a look at the portal model in the Hierarchy view and you'll notice a small triangle to the left of its name. Click the triangle to expand the portal's group and select the first item. I've added the -Collider suffix for clarification.
Click the Add Component button in the Inspector and choose Physics > Mesh Collider. This will add a collider using the shape of the model's selected area.
21. Portal Audio Source
To provide feedback to the player, we'll play a sound effect when the ball touches the portal's collider. Because we'll be triggering the event using the previously created collider, we need to add the audio source to that same object.
Select it from the Hierarchy panel, click the Add Component button in the Inspector panel, and select Audio Source from the Audio section.
Uncheck Play on Awake and click the little dot on the right, below the gear icon, to select the sound you want to play.
22. Adding Islands
The islands are nothing more than decorative elements to make the level less empty. I've used an imported 3D model and a Cylinder to make them. I won't go into detail creating the islands since they're not essential to the game. With what you've learned so far, you should be able to create them yourself.
23. Adding Bananas
As in Monkey Ball, the player will be able to collect bananas during the game. Start by dragging the model from the Assets panel to the Scene. Don't worry about its location just yet, because we'll convert it to a Prefab later since we'll be reusing it multiple times.
24. Banana Mesh Collider
As I mentioned earlier, imported models don't have a collider by default, so we need to attach one to the banana. Click the Add Component button in the Inspector and choose Physics > Mesh Collider. This will add a collider using the model's shape. Make sure to check the Trigger checkbox, because we want to detect collisions, but we don't want the ball to react with the banana.
24. Adding the Player
It's time to create our game character, which will be a simple Sphere primitive. Select Create Other > Sphere from the GameObject menu to create the primitive and modify the Transform values in the Inspector as shown below.
This will create the sphere and position it at the start of our level.
To make the sphere semi-transparent, we need to change its Shader options. Open the Inspector and change the shader to Transparent/Diffuse.
25. Player RigidBody
To detect a collision with the player, we need to attach a RigidBody to it. To add one, select Add Component from the Inspector panel, followed by Physics > RigidBody. You can leave the settings at their defaults.
26. GUI Textures
To display the game's user interface, we'll use Unity's GUI Textures. Unity's documentation provides a clear explanation of GUI Textures:
GUI Textures are displayed as flat images in 2D. They are made especially for user interface elements, buttons, or decorations. Their positioning and scaling is performed along the x and y axes only, and they are measured in Screen Coordinates, rather than World Coordinates.
By default, images imported to the Assets folder are converted to Textures that can be applied to 3D objects. We need to change this to GUI Texture for the images we want to use in the game's user interface.
Select the images you want to convert in the Assets panel and open the Inspector, click on the Texture Type drop-down menu and select GUI.
You can now drag and drop the images to the Scene. The images will always appear in front of every object on the stage and will be treated as 2D elements.
27. GUI Text
Inside each GUI element, we'll display a number indicating the number of bananas the player has collected and the time the player has left.
Select Create Other > GUI Text from the GameObject menu to create a text object, place it at the center of the GUI element, and change the text in the Hierarchy panel to 0. Do the same for the time on the right. I've set the default time to 30 seconds.
You can use a custom font for the text by adding the font to the Assets folder and then changing the Font property of the text in the Inspector.
28. Adding Scripts
It's time to write some code. With the user interface in place, we can start writing the necessary code to add functionality to our game. We do this by means of scripts. Scripts are attached to different game objects. Follow the next steps to learn how to add interaction to the level we've just created.
29. Move Scene
We'll start by making use of the device's accelerometer. Moving the player using the accelerometer is fairly simple in Unity. There's nothing to set up and it's easy to understand.
Select the stage, click the Add Component button in the Inspector panel, and choose New Script. Name the script MoveScene and don't forget to change the language to C#. Open the newly created file and add the following code snippet.
using UnityEngine;
using System.Collections;

public class MoveScene : MonoBehaviour
{
    void Update()
    {
        transform.rotation *= Quaternion.Euler(Input.acceleration.y / 6, -Input.acceleration.x / 3, 0);
    }
}
We use the Update method to request data from the accelerometer in every frame using the Input.acceleration property, which measures the device's movement in a three-dimensional space. This allows us to get the x, y, and z values, and use them to control the player's position.

We then apply the obtained values to the transform.rotation property of the level by invoking Quaternion.Euler, which returns the rotation values. Note that we divide the accelerometer's values to keep the player from moving too fast, which would make the gameplay difficult.
We only modify the level's x and y values, because we only need it to tilt and not to move closer to or farther from the camera.
30. Camera Follow
The following script is attached to the Main Camera. It calculates the space between the camera and the player and maintains it while the ball moves.
using UnityEngine;
using System.Collections;

public class FollowPlayer : MonoBehaviour
{
    public GameObject player;
    private Vector3 playerOffset;

    // Use this for initialization
    void Start()
    {
        playerOffset = transform.position - player.transform.position;
    }

    // Update is called once per frame
    void Update()
    {
        transform.LookAt(player.transform);
        transform.position = player.transform.position + playerOffset;
    }
}
The script uses two variables that are worth explaining:

player: This is a reference to the player in the Scene. You can set this in the Inspector.

playerOffset: This is the distance between the camera and the player. Because we maintain the same distance between camera and player, the camera follows the player as it moves. The offset is calculated in the Start method.
We direct the camera at the player and set its position to the player's position plus the value of playerOffset. Because we do this in the Update method, the camera's position is recalculated in every frame, so the camera follows the player as it moves. This is a simple yet effective strategy to create a camera that follows the player.
31. Picking Bananas
The following script is attached to the banana and handles any interactions with it. We start by getting references to the corresponding sound and the text displaying the number of collected bananas, which we'll need to play the sound and increase the counter in the top left when the player collides with a banana. Once you've declared the variables in the script, you need to set these references in the Inspector.
using UnityEngine;
using System.Collections;

public class PickBanana : MonoBehaviour
{
    public AudioClip bananaSound;
    public GUIText bananaText;

    void OnTriggerEnter(Collider other)
    {
        AudioSource.PlayClipAtPoint(bananaSound, transform.position);
        int score = int.Parse(bananaText.text) + 1;
        bananaText.text = score.ToString();
        Destroy(gameObject);
    }
}
Next, we call a method that detects when the ball collides with a banana. When this happens, we play the sound and increase the counter.
To modify the counter, we read the value of the GUI Text, use the int.Parse method to convert the string to a number, and increment the number by 1. We then write the value back to the GUI Text, first converting the number to a string by invoking the ToString method. Finally, we invoke Destroy to remove the banana game object.
32. Falling Off the Platform
The following class is used to detect when the player falls off the platform into the sea. Attach the script to the sea game object.
using UnityEngine;
using System.Collections;

public class Lose : MonoBehaviour
{
    void OnCollisionEnter()
    {
        audio.Play();
        Invoke("Reload", 1.59f);
    }

    void Reload()
    {
        Application.LoadLevel(Application.loadedLevel);
    }
}
This simple class uses the OnCollisionEnter method to detect when the ball collides with the sea, which means the player has fallen off the platform. When this happens, we play the sound attached to the sea and use the Invoke method to call the Reload method, which restarts the game by reloading the current scene.

The second parameter of the Invoke method defines the delay with which the Reload method is invoked. This is necessary as we first want the sound to finish before we start a new game.
33. Monitoring Time
The next class, Timer, is attached to the time GUI in the top right. It reduces the time and ends the game when the counter reaches 0.
using UnityEngine;
using System.Collections;

public class Timer : MonoBehaviour
{
    public GUIText timeText;

    void Start()
    {
        InvokeRepeating("ReduceTime", 1, 1);
    }

    void ReduceTime()
    {
        int currentTime = int.Parse(timeText.text) - 1;
        timeText.text = currentTime.ToString();

        if (currentTime == 0)
        {
            audio.Play();
            Invoke("Reload", 1.59f); // waits until the sound is played to reload
            Destroy(timeText);
        }
    }

    void Reload()
    {
        Application.LoadLevel(Application.loadedLevel);
    }
}
We keep a reference to the text in the timeText variable to make modifying the user interface easy. In the Start method, we call the InvokeRepeating method, which invokes the ReduceTime method repeatedly.

To update the text in the user interface, we create a variable to convert the text to a number, just like we did earlier, subtract one second, and update the user interface with the result.

When the counter reaches 0, the appropriate sound is played and we destroy the counter text. We invoke the Reload method with a delay to restart the game when the sound has finished playing.
34. Level Complete
The last class, EndLevel, is used to detect when the player reaches the portal. When the player passes through the portal, we display a message on screen and destroy the ball. We do this to prevent the ball from falling into the sea.
using UnityEngine;
using System.Collections;

public class EndLevel : MonoBehaviour
{
    public GameObject complete;
    public GameObject player;

    void OnTriggerEnter(Collider other)
    {
        audio.Play();
        Invoke("Restart", 2);
        GameObject alert = Instantiate(complete, new Vector3(0.5f, 0.5f, 0), transform.rotation) as GameObject;
        Destroy(player.rigidbody);
    }

    void Restart()
    {
        Application.LoadLevel(Application.loadedLevel);
    }
}
The Instantiate method is used to create an instance of the message that is displayed to the player. It lets us use the GUI element from the project's Assets instead of having it on the scene. Finally, we restart the game with a delay of two seconds.
These settings are application specific data that includes the creator or company, app resolution and display mode, rendering mode (CPU, GPU), device OS compatibility, etc. Configure the settings according to the devices you're targeting and the store or market where you plan to publish the app.
37. Icons and Splash Images
Use the graphics you exported earlier for the app's icons and splash images. In this tutorial, you learned how to use the accelerometer to control the movement of the player, GUI Textures, primitives, and other aspects of game development in Unity.
"Edge," the leading-edge version of Rails, is something of a proving ground for new features, and some of those features merit a sneak peek and early adoption. Let's quickly set up an Edge Rails environment and give the new bells and whistles a go for the rest of the week. Today, let's look at conveniences for validation and the integration of a state machine into ActiveRecord.
Walking on the Edge
Edge Rails is surprisingly easy to establish in its own sandbox, tucked safely away from your other projects. The whole process requires three commands:
$ rails playground
$ cd playground
$ rake rails:freeze:edge
cd vendor
Downloading Rails from
Unpacking Rails
rm -rf rails
rm -f rails.zip
rm -f rails/Rakefile
rm -f rails/cleanlogs.sh
rm -f rails/pushgems.rb
rm -f rails/release.rb
touch rails/REVISION_ef935240582ef6a7d47a9716e8269db817c91503
cd -
Updating current scripts, javascripts, and configuration settings
Assuming you have a recent version of Rails on your machine (the system used for this article was Mac OS X Leopard and Rails 2.3.3), the commands shown create a new Rails application, place a standalone copy of Edge in vendor/rails, and update the application accordingly, leaving the application (here, playground) based on Edge. Since Edge is a moving target, you can update simply by re-running the latter rake command.
Better Validations
Validations are a fundamental part of a typical Rails application; hence it's befitting that a number of recent enhancements bolster the feature. One of the best is the new validates_with. You can now validate a model with a separate class. Here's a (somewhat contrived) example.
class Wheels < ActiveRecord::Validator
  def validate
    number_of_wheels = options[ :number_of_wheels ] || 4
    if record.number_of_wheels != number_of_wheels
      record.errors[ :number_of_wheels ] << "Your #{record.class} won't run"
    end
  end
end

class Engine < ActiveRecord::Validator
  def validate
    # Valid?
  end
end

class Weight < ActiveRecord::Validator
  def validate
    # Valid?
  end
end

class Vehicle < ActiveRecord::Base
  attr_accessor :number_of_wheels
end

class Car < Vehicle
  validates_with Wheels, :number_of_wheels => 4
  validates_with Engine, Weight
end

class Motorcycle < Vehicle
  validates_with Wheels, :number_of_wheels => 2
end
validates_with can list one or more subclasses of ActiveRecord::Validator and each is called in turn to substantiate an instance of a model. Within each validator, record is the model and options contains the parameters of validates_with. You set errors as you would in a model’s own validate method.
To test this code, drop into the Rails console.
$ ./script/console
>> c = Car.new
=> #<Car id: nil, created_at: nil, updated_at: nil>
>> c.number_of_wheels = 4
=> 4
>> c.valid?
=> true
>> c.number_of_wheels = 2
=> 2
>> c.valid?
=> false
>> c.errors
=> {:number_of_wheels=>["Your Car won't run"]}
validates_with encapsulates rules and makes those rules reusable akin to any other class. I can imagine modules and gems full of validator classes for email, telephone numbers, and common formats that are best written once and shared among many applications. Thankfully, I was able to reuse some code — Wheels — even in this limited example.
Much like other validation rules, validates_with respects :on to specify when to validate in the instance lifecycle, and accepts :if and :unless to validate conditionally.
Oddly, the generate script does not yet create validators, meaning there is no convention (yet) for where to store the classes. For the moment, I create mine in app/models/validators and load the classes from environment.rb with config.load_paths += %W( #{RAILS_ROOT}/app/models/validators ). You may choose to keep your validators in lib.
One shortfall of validates_format_of was addressed recently, too. Up until last week, the rule checked for conformity of a string with the aptly named :with, but if you wanted to assert non-conformity, you typically had to write your own code using either validates_each or a plain old validate. For example, to exclude email addresses from the domain example.com, I might write:
class EmailAddress < ActiveRecord::Base
  attr_accessor :domain

  validate :acceptable_domain

  def acceptable_domain
    errors.add_to_base( "Invalid domain") unless
      self.domain.match(/example\.(org|com|net|biz)$/).nil?
  end
end
Yuck. (By the way, the attr_accessor :domain and the similar code used in the previous example are shortcuts to add a property in the models without defining actual fields in the migrations.) However, with a recent patch, the task becomes much more succinct.
class EmailAddress < ActiveRecord::Base
  attr_accessor :domain
  validates_format_of :domain, :without => /example\.(org|com|net|biz)$/
end
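To see exactly what :without asserts, here is a plain-Ruby sketch (no Rails required) of the underlying check: the value is valid only when it does not match the pattern. The method name here is made up for illustration:

```ruby
# A value passes a :without check only when the regex does NOT match it.
EXCLUDED_DOMAINS = /example\.(org|com|net|biz)$/

def acceptable_domain?(domain)
  (domain =~ EXCLUDED_DOMAINS).nil?
end

puts acceptable_domain?("example.com")    # false: matches, so it is rejected
puts acceptable_domain?("linux-mag.com")  # true: no match, so it is accepted
```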
Mates of State
Many problems can be represented by a state machine. For example, in an online store, an order can transition between many states as it winds its way to a customer, where each transition requires some processing. The order might start as unpaid, a kind of limbo. Next, the order is deemed paid, which generates a pick list for the warehouse. Once all the items are collected, the order might be marked complete, which generates a shipping label, and so on.
Several plugins provide state machines for Rails, but the feature is so fundamental, the best features of available solutions were integrated into the core to create ActiveRecord::StateMachine. To add a state machine to any class, you simply include it and create a state string field in your table. Here’s an example of a state machine to implement some fictional order processing rules.
class Order < ActiveRecord::Base
  include ActiveRecord::StateMachine

  state_machine do
    state :placed    # In limbo
    state :paid
    state :assembled
    state :packaged
    state :shipped
    state :received
    state :exception # Uh oh!

    event :advance_order do
      transitions :to => :paid,      :from => [ :placed ],
                  :on_transition => :pick_list
      transitions :to => :assembled, :from => [ :paid ],
                  :on_transition => :shipping_label
      transitions :to => :packaged,  :from => [ :assembled ],
                  :on_transition => :ups_pickup
      transitions :to => :shipped,   :from => [ :packaged ],
                  :on_transition => :send_tracking_code
      transitions :to => :received,  :from => [ :shipped ],
                  :on_transition => :book_income
    end

    event :exception do
      transitions :to => :exception,
                  :from => [ :paid, :assembled, :packaged, :shipped ],
                  :on_transition => :flag
    end
  end

  def pick_list
    puts "Go get it"
  end

  def shipping_label
    puts "Send it here"
  end

  def ups_pickup
    puts "Put it on the truck"
  end

  def send_tracking_code
    puts "Watch it go"
  end

  def book_income
    puts "Kaching!"
  end

  def flag
    puts "Uh oh!"
  end
end
After running this model’s migration to create the orders table, you can drop into the console again to test the code.
$ ./script/console
>> o = Order.create
=> #
Nifty! In addition to states and transitions, you can also query if the model is in a particular state with the boolean method state?, where state is any of the states you defined.
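The predicate pattern is easy to picture in plain Ruby. The sketch below is illustrative only and not the actual ActiveRecord::StateMachine internals; it simply shows how such state? methods can be generated by comparing against the state string:

```ruby
# Generate one boolean predicate per declared state; each just compares
# the model's state string (a simplified stand-in for the real machinery).
class TinyOrder
  attr_accessor :state

  [:placed, :paid, :shipped].each do |s|
    define_method("#{s}?") { state == s.to_s }
  end
end

o = TinyOrder.new
o.state = "placed"
puts o.placed?   # true
puts o.shipped?  # false
```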
More To Come
This only scratches the surface of Edge. Tomorrow, we'll look at database seeding made easy and more. Until then, happy tinkering!