#include <linux/module.h>
create_module() attempts to
create a loadable module entry and reserve the kernel memory
that will be needed to hold the module. This system call
requires privilege.
On success, returns the kernel address at which the module
will reside. On error, −1 is returned and
errno is set appropriately.
A module by that name already exists.
name is
outside the program's accessible address space.
The requested size is too small even for the module header information.
The kernel could not allocate a contiguous block of memory large enough for the module.
create_module() is not
supported in this version of the kernel (e.g., the
kernel is version 2.6 or later).
The caller was not privileged (did not have the
CAP_SYS_MODULE
capability).
This system call is present on Linux only up until kernel 2.4; it was removed in Linux 2.6.
#include "l_bitmap.h"
L_INT pEXT_CALLBACK YourFunction(node, userData)
Callback to receive the SVG elements (nodes) from the L_SvgEnumerateElements function.
Handle for the SVG element (node) being enumerated.
Keep in mind that this is a void pointer, which must be cast to the appropriate data type within your callback function.
The L_SvgEnumerateElements function calls your callback function for each SVG node in the specified SVG document.
To continue to the next element (node) if any, return SUCCESS. To stop enumerating, return 0.
Required DLLs and Libraries
For an example, refer to L_SvgEnumerateElements.
Hey everyone,
I'm having a serious problem with this linked list implementation. It is supposed to be part of a flight reservation system (a basic introductory problem in C), but every time the execution of initiateNode begins, the program crashes, saying that "The instruction at xxxxxx referenced memory at xxxxxxx. The memory could not be written".
I understand that this is a problem with reserved memory trying to be used, but I don't know how to fix it.
Here are the parts of the code that are most likely the culprits!
-------------------Code------------------------------
//-------------------------- initiateNode --------------------------//
/* This function initialises a node, allocates memory for the node,
and returns a pointer to the new node. Name, Phone and Flight number
are added at this stage. Uses add function to establish node
on the actual linked list.*/
struct node * initiateNode(char *name, char *flight_no, int phone)
{
struct node *ptr;
ptr = (struct node *) malloc( sizeof(struct node) );
//In case of an error, ptr set to NULL, so...
if(ptr == NULL)
return (struct node *) NULL;
//Otherwise take the passenger details and fill in the node.
else
{
strcpy(ptr->name, name);
ptr->phone = phone;
strcpy(ptr->flight_no, flight_no);
return ptr;
//This is pointer to the new node.
}
}
//-------------------------- add --------------------------//
/* This adds a node to the end of the list (doesn't fill in details).*/
void add(struct node *newNode)
{
if( head == NULL )
{
//If this is the first node...
head = newNode;
}
(*end).next = newNode;
(*newNode).next = NULL;
// Set next field to signify the end of list
end = newNode;
// Make end to point to the last node
}
This is the implementation part - not all of it!
case 2:
printf("\nEnter name: ");
scanf("%s", name);
printf("\nEnter phone number: ");
scanf("%d", &phone);
printf("\nCurrent Flights Available: ");
for(i = 0; i < number; i++)
{
printf("%s\n", current_flight[i].flight_no);
}//Ignore this part
printf("\nEnter Flight number for passenger: ");
scanf("%s", flight_no);
ptr = initiateNode(name, flight_no, phone);
add(ptr);
break;
There is also two global variables
struct node *head = NULL;
struct node *end = NULL;
_________________end code_________________
I know its probably too much to ask, but does anyone have any ideas??? | https://cboard.cprogramming.com/c-programming/3798-memory-problem-i-think.html | CC-MAIN-2017-26 | refinedweb | 341 | 60.04 |
Stylesheets
- Steven_DAntonio
Hi,
I was reading up on the use of stylesheets for setting a background pic on a form. I am interested in trying this on the splash screen on my first project.
I can't find a reference as to whether I need to include any precompile directives (#include things or anything like that)
I want the pic to be only on the splash screen; the standard gray is fine for the actual working form.
Other than some code like this, is there anything special I need to do?
QSplashScreen splash;
splash.setStyleSheet("background-image: url(./file/left.png)");
splash.show();
Thanks,
Steven
- Chris Kawa Moderators
You don't need a stylesheet to set a splash screen image. QSplashScreen has a setPixmap method for that. Better yet - you can pass a pixmap directly in the constructor:
#include <QSplashScreen>
#include <QPixmap>

...

QSplashScreen splash(QPixmap("./file/left.png"));
splash.show();
- Steven_DAntonio
Hi Chris,
Thanks. That would be much easier. I'm looking specifically for a background image so I can write over it, but it might be better to just create a static pic with my writing on top of it (as part of the JPEG itself) and just include it as one pixmap as you suggest.
- Chris Kawa Moderators
QSplashScreen has a showMessage method for writing text over the image. It's commonly used to display dynamic things like product version or a loading progress. Also QSplashScreen is just a widget like any other, so you can override the paintEvent and overdraw whatever you want on it using QPainter.
Step 1:
Set up an account on tropo. I'm assuming if you're reading this tutorial you have the computer savvy to set up an account on a website so I won't waste your time with that.
Step 2: Writing the Script
Now that you have your account ready to use, log in, go to 'Your Hosted Files', and click 'Create New File.' (Note: it may be [much] easier to write out these files in your favorite Python editor than to enter them directly into Tropo.) Name the file what you want and move down to the area named 'File Text'. The first step is to import the modules we need. After that we'll create a dictionary that we'll use later to convert the abbreviated days of the week to full names, and then a function that will do that work. Enter this. . .
import re
import urllib2
from xml.dom import minidom, Node

day_of_week_dic = {
    "Mon": "Monday",
    "Tue": "Tuesday",
    "Wed": "Wednesday",
    "Thu": "Thursday",
    "Fri": "Friday",
    "Sat": "Saturday",
    "Sun": "Sunday",
    "mph": "Milesperhour"
}

def replace_words(text, word_dic):
    rc = re.compile('|'.join(map(re.escape, word_dic)))
    def translate(match):
        return word_dic[match.group(0)]
    return rc.sub(translate, text)
This is pretty self-explanatory. The replace_words function builds one regular expression from all the dictionary keys and uses a nested translate function to substitute each match with its dictionary value. Now we'll move on to my favorite part, which is scraping the web page.
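As a quick standalone sanity check of that substitution idea (plain Python, no Tropo needed; the sample strings are mine):

```python
import re

day_of_week_dic = {"Mon": "Monday", "Tue": "Tuesday"}

def replace_words(text, word_dic):
    # One alternation pattern built from every key, escaped so each
    # key is matched literally.
    rc = re.compile('|'.join(map(re.escape, word_dic)))
    def translate(match):
        # Look up the matched key and return its replacement.
        return word_dic[match.group(0)]
    return rc.sub(translate, text)

print(replace_words("Mon and Tue", day_of_week_dic))  # Monday and Tuesday
```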
def weather():
    result = ask("Enter or say your five digit zip code",
                 {"choices": "[5 DIGITS]",
                  "timeout": 30,
                  "attempts": 3,
                  "onBadChoice": lambda event: say("I'm sorry, I didn't understand that.")})
    urlRead = urllib2.urlopen('' % result.value)
    xml = minidom.parse(urlRead)
    if xml:
        for channelNode in xml.documentElement.childNodes:
            if channelNode.nodeName == 'channel':
                for itemNode in channelNode.childNodes:
                    if itemNode.nodeName == 'item':
                        for yWeatherNode in itemNode.childNodes:
                            if yWeatherNode.nodeName == 'yweather:forecast':
                                day = replace_words(yWeatherNode.getAttribute('day'), day_of_week_dic)
                                low = yWeatherNode.getAttribute('low')
                                high = yWeatherNode.getAttribute('high')
                                condition = yWeatherNode.getAttribute('text')
                                say("For " + day + ", there is a low of " + low + " degrees and a high of " + high + " degrees. The condition is " + condition + ".")

weather()
hangup()
This may be confusing at first glance but I assure you it isn't hard to understand. The first part that may look unfamiliar is the 'ask' command. That command is a built-in tropo command. The first option is the question that we're going to ask, the next three are pretty self-explanatory. Next, what we're doing is taking the xml document (urlRead), looking for a node called 'channel', then a node called 'item', then a node called 'yweather:forecast'. When we get to yweather:forecast we grab the day, low, high, and condition from it then use tropo's 'say' command to have our program output whatever we want it to.
At the bottom we call weather and then tell our phone to hang up once weather is done executing.
Step 3: Getting the phone number and testing it!
After entering all that, please don't forget to click save file or update file at the bottom of the text box you entered the code in! Ok, now that you have it saved, click on 'Your Applications'. Click 'Create New Application', name your app whatever you'd like, click the 'Hosted File' link, and then click the 'Map existing file to this application' link. From there just select map next to the script we created in the last step. After that, click 'Create Application'.
Now you can use the numbers given or add your own and call up the number. If all goes as planned, it should work (I had to wait about 5 minutes for mine to become ready). | https://www.dreamincode.net/forums/topic/305319-using-tropo-python-to-have-the-phone-tell-us-the-weather/ | CC-MAIN-2019-43 | refinedweb | 618 | 66.44 |
1.2: Understanding Preprocessing
- Page ID
- 29006
C/C++ Preprocessors
As the name suggests, preprocessors are programs that process the source code before compilation. C/C++ provide preprocessor directives, which tell the compiler to preprocess the source code before compiling. All of these preprocessor directives begin with a '#' (hash) symbol. The '#' symbol indicates that whatever statement starts with # goes to the preprocessor program, and the preprocessor program will execute this statement. Examples of some preprocessor directives are: #include, #define, #ifndef etc. Remember that the # symbol only hands the statement to the preprocessor. Let us now take a brief look at each of these preprocessor directives.
Macros
- In C++, macro definitions always begin with #define. Macros are a piece of code which is given some name. Whenever this name is encountered by the preprocessor, the preprocessor replaces the name with the actual piece of code. The '#define' directive is used to define a macro. Let us now understand the macro definition with the help of a program:
#include <iostream>
#define MAX 15

int main() {
    std::cout << "The max value is " << MAX << std::endl;
    return 0;
}
The word 'MAX' in the macro definition is called a macro template and '15' is the macro expansion. In the above program, when the pre-processor part of the compiler encounters the word MAX it replaces it with 15. It is as if you typed the code with the number 15 instead of the word MAX. The idea is that if you change your code, you only have to make a single edit; you do NOT have to go through your code and find every place where you have used that value. This may not seem like that big of a deal - until you have a coding project with 100,000 lines of code - you really don't want to sit and go through your code or even do a find-and-replace operation.
Note: There is no semi-colon(‘;’) at the end of macro definition. Macro definitions do not need a semi-colon to end.
Macros with arguments: We can also pass arguments to macros. Macros defined with arguments work similarly to functions. In the following example there are 2 macros. One is AREA, which takes 2 arguments and multiplies them. The other is DOUBLE_IT, which takes a single argument and multiplies it by itself; this macro is used to show how macros can look straightforward, but actually be broken.
#include <iostream>

// macro with parameters
#define AREA(l, b) (l * b)
#define DOUBLE_IT(arg) (arg * arg)

int main() {
    int l1 = 10, l2 = 5, area, bad_result;

    area = AREA(l1, l2);
    std::cout << "Area of rectangle is: " << area << std::endl;

    // The compiler sees this as:
    //   bad_result = (4 + 1 * 4 + 1);
    // Due to mathematical order of precedence this gives the wrong answer.
    bad_result = DOUBLE_IT(4 + 1);
    std::cout << "Bad Result: " << bad_result << std::endl;

    return 0;
}
As you can see in the second part, DOUBLE_IT causes a problem - the input argument is not evaluated properly and produces an incorrect answer. The solution to this particular problem is to add a set of parentheses in the macro definition:
#define DOUBLE_IT(arg) ((arg) * (arg))
One more mention of macros - they can be a bit complicated and even extend over multiple lines - BEWARE - this makes them much more difficult to debug:
#include <iostream>

#define PRINT(i, limit) while (i < limit) \
                        { \
                            std::cout << "CIS 31A Quiz " << std::endl; \
                            i++; \
                        }

int main() {
    int i = 0;
    PRINT(i, 3);
    return 0;
}
Adapted from: "C/C++ Preprocessors" by Harsh Agarwal, Geeks for Geeks is licensed under CC BY-SA 4.0 | https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/01%3A_Building_and_Running_C_Code/1.02%3A_Understanding_Preprocessing | CC-MAIN-2021-31 | refinedweb | 581 | 54.36 |
I could use some advice on this situation. I know this looks like a lot of code, but it is a straightforward implementation question; I am not stuck or anything. I have what I need working, but it involved two Ajax server requests and I'm not sure it is the best way. I have a list of appointments, each with dates and an edit link. This edit link is a link_to_remote call that makes a request to the server and renders a partial that replaces the div containing the appointment information with editable fields.

I want to make this an Ajax request, because this is all in an Ajax popup and I would like to speed things up by not loading all the editable-field HTML for every appointment for each popup. Is this a safe assumption and solution?
The edit_appt action looks like this:
page.replace_html "appt_#{params[:day]}_#{params[:id]}",
                  :partial => 'edit_appt',
                  :object => @appt_to_edit,
                  :locals => { :day => params[:day] }
Now if anyone is still with me (and I REALLY appreciate it if you are!), my question is about placing a cancel link/button in this edit_appt partial that will load the original appointment. The way it is set up now, it makes another Ajax request, passes the id again, performs a search, and then renders another partial:
def reload_appt
  @appointment = Appointment.find(params[:id])
  respond_to do |format|
    format.js do
      render :update do |page|
        page.replace_html "appt_#{params[:day]}_#{params[:id]}",
                          :partial => 'individual_appt',
                          :object => @appointment,
                          :locals => { :day => params[:day] }
      end
    end
  end
end
My question is whether this is the best way of doing it. I'm wondering if there is some way to use hide/show HTML or something, because the second database query seems redundant, seeing as the appointment HTML existed in the first place. Thanks for any advice.
(The bug was originally about two different classes. I've split part of this off into 6387579.)
The package-private class javax.swing.text.ParagraphView.Row is referenced from the following method of the public javax.swing.text.ParagraphView class:
protected void adjustRow(javax.swing.text.ParagraphView.Row r,
int desiredSpan,
int x)
EVALUATION
This method could not have been overridden. While it might be called (with null), it is empty, so there's no reason to call it. It is neither called nor overridden inside Swing, either. This method was also marked as internal, and its use was discouraged.
That said, let's remove it.
EVALUATION
###@###.### >
> In your case the method
> --------------
> protected void adjustRow(javax.swing.text.ParagraphView.Row r,
> int desiredSpan,
> int x)
> --------------
> could never be neither called nor overridden in user applications, so
> from this point there seem to be no risk to make it private or remove.
I am afraid I have to disagree with you. This method could have been called.
-- ParagraphViewTest.java
import javax.swing.text.*;
public class ParagraphViewTest {
public static void main(String[] args) {
Document doc = new DefaultStyledDocument();
new MyPargraphView(doc.getRootElements()[0]);
}
static class MyPargraphView extends ParagraphView {
public MyPargraphView(Element elem) {
super(elem);
adjustRow(null, 100, 0);
}
}
}
--
This code compiles and runs. If we are to remove ParagraphView.adjustRow this test case will not compile.
> There still presents another choice - to make
> javax.swing.text.ParagraphView.Row public.
It would be a bad choice from the API perspective. ParagraphView.Row is not meant to be public.
There are three choices here:
1. remove ParagraphView.adjustRow
2. make ParagraphView.Row public
3. close the bug as will not fix
From my perspective, the first two choices are unacceptable.
EVALUATION
see 6387579 Usage of package-private class as parameter of a method (javax.swing.tree.DefaultTreeSelectionModel) for more details.
I'm closing out as will not fix.
EVALUATION
I am not sure what we should do in this case.
This method should not have been created in the first place. But I am not sure if we can remove this method. We will see what the decision will be for 6387579 [Usage of package-private class as parameter of a method (javax.swing.tree.DefaultTreeSelectionModel)].
It would be much easier if the compiler would issue a warning on that. The visibility of the argument types should not be stricter than the visibility of the method itself.
EVALUATION
This was filed a long time ago as 4350413, at which time I simply documented that it's for internal use. We should lobby to remove this now. It could never have been called.
Resource Filtering with Gradle
My team has recently started a new Java web application project and we picked gradle as our build tool. Most of us were extremely familiar with maven, but decided to give gradle a try.
Today I had to figure out how to do resource filtering in gradle. And to be honest it wasn't as easy as I thought it should be, at least coming from a maven background. I eventually figured it out, but wanted to post my solution to make it easier for others.
What is Resource Filtering?:
Suppose, for example, that a config.properties resource file contains the line application.version=${application.version}. With resource filtering, the ${application.version} token gets replaced with 1.0.0 during assembly; then my application can load config.properties and display the application version.
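On the application side, reading the filtered file is plain java.util.Properties work. A minimal sketch (the class name and the simulated stream are mine; in a real app you would load /config.properties from the classpath):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class VersionPrinter {
    // Read the version out of a properties stream.
    static String readVersion(InputStream in) throws IOException {
        Properties props = new Properties();
        props.load(in);
        return props.getProperty("application.version");
    }

    public static void main(String[] args) throws IOException {
        // Simulates the post-filtering content of config.properties;
        // a real app would use getResourceAsStream("/config.properties").
        InputStream in = new ByteArrayInputStream(
                "application.version=1.0.0".getBytes());
        System.out.println("Version: " + readVersion(in));
    }
}
```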
It's an extremely valuable and powerful feature in build tools like maven and one that I took advantage of often.
Resource Filtering in Gradle:
In Maven you would define the property in your pom, for example:

<properties>
    <application.version>1.0.0</application.version>
</properties>
In gradle, the equivalent is to define the property in a gradle.properties file: application.version=1.0.0. Of the approaches I tried, this one seemed to be the best. So you will need to add the following to your build.gradle file:
import org.apache.tools.ant.filters.*

processResources {
    filter ReplaceTokens, tokens: [
        "application.version": project.property("application.version")
    ]
}
Next you need to update your resource file. So put a config.properties file under src/main/resources and add this:
application.version=@application.version@
Note the use of @ instead of ${}. This is because the ReplaceTokens filter comes from Ant, and Ant by default uses the @ character as the token delimiter, whereas Maven uses ${}.
Finally, if you build your project you can look under build/resources/main and you should see a config.properties file with a value of 1.0.0. You can also open up your artifact and see the same result.
Dot notation. Notice that the snippet above references the property via project.property("application.version") rather than as a bare variable; because the property name contains a dot, it cannot be referenced directly as a simple identifier in the build script.
Overriding:
gradle assemble -Papplication.version=2.0.0
If you want to override it for all projects you can add the property in your gradle.properties file under /user_home/.gradle.
Also, if you are overriding the value via the command line and your property value contains special characters like a single quote, you can wrap the value with double quotes like the following to get it to work:
gradle assemble -Papplication.version="2.0.0'6589"
Summary
Well I hope this helps and if anyone from the gradle community sees a better way to perform resource filtering I'd love to hear about it. I'd also like to see something as important as resource filtering becoming easier to perform in gradle. I think it's crazy having to add an import statement to perform something so simple.
Published at DZone with permission of James Lorenzen , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Recently my colleague Mehani and I got a chance to work on a data masking requirement. In this scenario we receive a customer demographic feed file in an S3 bucket. The file contains some sensitive information which should not be routed to the downstream system. The client wants the PII data inside the file to be masked, to protect it from unauthorized access.
We could leverage Snowflake's data masking feature, which allows you to apply role-based access control (RBAC) dynamically. We can define masking policies at the column level which restrict access to data in a column of a table: authorized roles see the original column values, while other roles see masked values. But the client wants the PII data masked at the S3 bucket itself and does not want this information routed at the Snowflake level. Once the data is masked in the S3 bucket, we will consume the updated file in Snowflake with no masking.
We have used AWS Glue service to mask the data inside the file and create a new updated file in S3 bucket.
First, we created a crawler in AWS Glue.
Run the Crawler:
Now create the GLUE JOB:
Now change the GLUE script as follows and run the program:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

def mask(dynamicRecord):
    dynamicRecord['phone'] = '**********'
    return dynamicRecord

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "maskdb", table_name = "customer_invoice_csv", transformation_ctx = "datasource0")
masked_dynamicframe = Map.apply(frame=datasource0, f=mask)

bucket_name = "collectionorgdelta"
datasink4 = glueContext.write_dynamic_frame.from_options(frame = masked_dynamicframe, connection_type = "s3",
    connection_options = {"path": f"s3://{bucket_name}/"}, format = "csv", transformation_ctx = "datasink4")

job.commit()
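Stripped of the Glue machinery, the per-record transformation that Map.apply performs is just a function over dictionaries. A standalone sketch (the sample rows are made up):

```python
def mask(record):
    # Same idea as the Glue job's mask(): overwrite the PII field.
    record['phone'] = '**********'
    return record

rows = [{'name': 'Alice', 'phone': '555-0100'},
        {'name': 'Bob', 'phone': '555-0199'}]

# Copy each row so the originals stay intact, then mask the copies.
masked = [mask(dict(r)) for r in rows]
print(masked[0]['phone'])
```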
After the execution, we can see the new file is available in the bucket.
We can verify the masked data inside the file:
2 thoughts on “AWS Glue : Data Masking in S3 Bucket –> Snowflake”
Hey, thanks for this.
Couple of questions:
1. Since we are masking data in S3 itself, I'll not be able to see the actual data in Snowflake, which is the requirement here. But what will happen if I need to use the actual value of the column in some ETL operation in Snowflake?
2. Do you have such a use case? I was thinking of using external data masking/encryption through the Protegrity software; any post for that?
Hi Naveen, thanks for the question.
For this use case, if you noticed, we generate a new masked feed file and upload it into another S3 bucket. There a Lambda function is triggered, which in turn calls the Glue job to connect with Snowflake. This way we consume only the masked file in Snowflake, and the original file remains unchanged in the original bucket, where it can be consumed by downstream systems.
Another approach is to use the KMS service to encrypt the file at the bucket level, but this will encrypt the complete file, while our requirement was masking only the PII data.
Import issue904244 Sep 4, 2013 6:44 AM
Hi Gurus,
Here is my requirement: I am exporting a file (a full export of level 0), then defragmenting the database, then rebuilding the dimensions, then refreshing, and finally I want to load the exported (level 0) file back. For that I used the following script.
To Export:
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportDecimal 6;
DataExportOverwriteFile ON;
DataExportRelationalFile ON;
};
DATAEXPORT "File" "," "E:\HYPERION\logs\export.err";
I am getting the exported file, and I would like to import it again. At that point I am getting the warning given below.
import database 'APPNAME'.'DBNAME' data from local text data_file 'E:\HYPERION\DATA\export.txt' using server rules_file 'loadplan' on error write to 'E:\HYPERION\logs\export.err';
statement executed with warnings
No data values modified by load of this data file
there were errors in E:\HYPERION\logs\export.err
Can you please suggest?
1. Re: Import issueSh!va Sep 4, 2013 6:59 AM (in response to 904244)
What is the error in .err file... Are you sure outline is in sync between both applications.
Why don't you try free export by right click on database (export option) and load it to target application.
Cheers!
Sh!va
2. Re: Import issue904244 Sep 4, 2013 7:03 AM (in response to Sh!va)
The .err file has nothing in it. Yes, the outline is in sync. To automate the process I used a MaxL query.
3. Re: Import issueSh!va Sep 4, 2013 7:20 AM (in response to 904244)1 person found this helpful
Can you use the below MaxL command:
and check... The next step is to manually copy the .otl and then try to import.
Cheers!
Sh!va
4. Re: Import issue904244 Sep 4, 2013 9:17 AM (in response to Sh!va)
Shiva,
thanks for your update.
And here I got an answer, FYI: I created a new rules (.rul) file for the exported file and then imported it. It works now.
Thanks a ton!
5. Re: Import issueVasavya Chowdary Sep 4, 2013 9:46 AM (in response to 904244)
Why don't you use the CDF function by Oracle? It's free.
Look for CDF_Export.zip.
Register it and run the calc. It would take less than 5 minutes to extract, but again it depends on your cube size.
6. Re: Import issueCelvin Kattookaran Sep 4, 2013 7:45 PM (in response to 904244)
You are not clearing the database, and you are importing the same level 0 data into the cube. Why are you expecting it to modify data which is already present? The message is telling you that there was nothing to modify (all the data values are the same).
Regards
Celvin
7. Re: Import issue904244 Sep 5, 2013 4:13 AM (in response to Celvin Kattookaran)
I am clearing the database using
alter database APP.DB reset;
and then importing again that exported data back to cube using
import database 'APPNAME'.'DBNAME' data from local text data_file 'E:\HYPERION\DATA\export.txt' using server rules_file 'loadplan' on error write to 'E:\HYPERION\logs\export.err';
8. Re: Import issueAnthony Dodds Sep 5, 2013 7:56 AM (in response to 904244)1 person found this helpful
I don't understand why you are even using a calc script to export all level 0 data! Just use the Export Data MaxL command as Shiva suggested. The benefit of this approach is that you won't need a rules file to load the data back in.
We also don't really need to be getting into CDF territory with this requirement! The requirement to export, defrag, update the outline, and reload is a fairly common and standard process that can be achieved very easily using MaxL for the export, a shell script to copy the outline, and then MaxL to load the exported data again.
You should only really be using a calc script to export data if you need to restrict your export to a 'slice' of the data. Otherwise you are creating more work for yourself.
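For reference, a sketch of that MaxL pair, reusing the placeholder app/db names and paths from the original post (no rules file is needed when reloading a native level 0 export):

```
export database 'APPNAME'.'DBNAME' level0 data to data_file 'E:\HYPERION\DATA\export.txt';
import database 'APPNAME'.'DBNAME' data from local text data_file 'E:\HYPERION\DATA\export.txt' on error write to 'E:\HYPERION\logs\export.err';
```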
Thanks
Anthony | https://community.oracle.com/message/11176285 | CC-MAIN-2016-50 | refinedweb | 652 | 67.15 |
Getting Ready for Gravatar Images5:31 with Ben Jakuben
Now we are ready to add user images to our project. We are going to use Gravatar images from the web, which is a service created by WordPress where users associate email addresses with profile images (we use it on the Treehouse site). In this video we'll start constructing the request we need to send to the Gravatar site.
Related Links
- Gravatar
- Gravatar Developer Resources
- MD5Util The class we use to generate an MD5 hash
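The request described in this video — trim and lower-case the email, MD5-hash it, and append the hash to the Gravatar URL — can be sketched in plain Java. The class and method names here are mine; in the course, the MD5Util class linked above plays this role:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class GravatarUrl {
    // Gravatar hashes the trimmed, lower-cased email with MD5.
    static String md5Hex(String email) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(
                email.trim().toLowerCase().getBytes(StandardCharsets.UTF_8));
        // Interpret the 16 bytes as a positive number and left-pad
        // the hex string to 32 characters.
        return String.format("%032x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        String hash = md5Hex("someone@example.com");
        System.out.println("https://www.gravatar.com/avatar/" + hash);
    }
}
```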
- 0:00
Implementing a custom user profile image isn't exactly hard.
- 0:04
All we need to do is create a parsed
- 0:05
class that relates that user ID with a user image.
- 0:08
And uploading or viewing it will be very similar
- 0:11
to what we've done regarding photo message, simple enough.
- 0:13
But, with so many different user profiles available,
- 0:16
it's much better for users if we can
- 0:18
tie into an existing solution, like pulling a
- 0:21
Facebook photo, or some other readily available image.
- 0:25
One such service is called Gravatar.
- 0:27
Users create Gravatar profiles and
- 0:29
associate e-mail addresses with profile images.
- 0:32
That's actually what we used on the tree house profile and forum, for example.
- 0:35
It's pretty easy to hook into so let's take a look.
- 0:38
If we visit Gravatar.com we can learn a little
- 0:41
more about what Gravatars are and how they work.
- 0:43
So if we come up here and click on how to use
- 0:45
Gravatar, we can learn more about how to make requests from our app.
- 0:49
One of the nice things about Gravatar is this first sentence right here.
- 0:53
Gravatar APIs require no authentication and are
- 0:56
all based around simple HTTP get requests.
- 0:59
So basically what we need to do is take an email address.
- 1:02
Create a hash value using the MD5 algorithm, don't worry, it's
- 1:06
not that hard, and then request the image using that hash.
- 1:10
Okay, one step at a time.
- 1:11
For the email address, we had users input an email address when they signed up.
- 1:15
They're stored on parse in the user class in a field called email.
- 1:19
And where is each friend adapted for the view?
- 1:21
That's right, the user adapter class that we just added,
- 1:25
and the display is set in the get view method.
- 1:28
Let's start by adding back that image view in our view holder class.
- 1:31
So let's un-comment it and we're going to change the name, we'll refactor it.
- 1:37
Refactor, rename, and let's call it user image view.
- 1:42
[BLANK_AUDIO]
- 1:44
Hit enter.
- 1:44
We're gonna add some imports.
- 1:47
And now we can go up and use it here, and get view method.
- 1:52
Oh, and unfortunately the refactor didn't work, because things were commented out,
- 1:56
that's okay, we can just change the name here, user image view.
- 2:00
Okay we also need to change the ID, [SOUND] and
- 2:03
we call this, user image view, in our custom layout.
- 2:06
Okay, next we need, the users email address.
- 2:09
So here after we get the user, let's
- 2:11
set a string, [SOUND] email to user.getemail, look
- 2:15
at that, there's another handy method from the
- 2:18
parse user class, that get's us that default value.
- 2:21
So email is gonna be an empty string if the user didn't supply an email address.
- 2:25
We can check for that and we only want to try
- 2:28
to get a Gravitar image if the email is not empty.
- 2:32
So let's add if statement if email.equals and we'll pass in a blank stream.
- 2:40
And if it is empty then we want to use the default image that we're already setting.
- 2:46
Now we're the default image in the grid view.
- 2:48
But as we're are using this adapter we may
- 2:51
be replacing image view that already had an image loaded.
- 2:54
Remember, the way that grid views and list views be
- 2:57
used is that the views themselves are recycled as they scroll.
- 3:00
Makes it a much faster scrolling and much more efficient use of memory.
- 3:04
So we need always remember to reset things,
- 3:07
we can never assume things will stay the same.
- 3:09
Anyhow let's type holder.userImgageView.
- 3:13
And we'll say setImageResource and we can paste in the ID R.drawable.avatar_empty.
- 3:20
There we go.
- 3:21
Okay let's the else condition and set that Gravatar.
- 3:26
Let's revisit the documentation to see what we need.
- 3:29
The first thing we need is the hash value based on the user's email address.
- 3:34
Reading the hash is, well it's a little technical
- 3:37
and you can read about it here if you want.
- 3:39
But there is a useful utility available for Java elsewhere.
- 3:42
So, let's go back and if we click
- 3:44
on Java here under the Gravatar Image Code Samples.
- 3:48
[SOUND] Then we find a sample class, that creates, MD5 hash values,
- 3:54
all we needed to do is copy this code, and paste it into a new class, in our project.
- 4:00
So let's add the new class, and it's gonna be
- 4:02
another utility class, so we'll right click on the util's package.
- 4:05
Select new, class, and we want to
- 4:10
call it ND 5 in all caps and then util.
- 4:15
Click finish, and we can copy and paste that code from the website.
- 4:20
So we'll select everything here including the
- 4:22
import statements, copy, come back here and
- 4:26
paste, and the only thing we want to leave is our own custom package.
- 4:29
Okay so if organize our imports and Save, then
- 4:34
this class should be ready for us to use.
- 4:36
Now let's go back to the documentation and check it out.
- 4:40
Here's a sample of how to use this new class.
- 4:43
Notice the reminder to make sure that the email address is in all lower case first.
- 4:48
So we'll do that.
- 4:49
We'll convert it to all lower case.
- 4:52
So let's go to our user adapter in here, in the Ls condition and here why
- 4:58
don't we just set the email to lower case when we get it from the user object.
- 5:02
There's another method we chain, called to lower case.
- 5:05
There we go.
- 5:06
That's a helpful string utility.
- 5:08
And now in the else condition, we can create the hash.
- 5:11
String hash equals, and then we use our new class, MD5.Util, and then we want the
- 5:19
MD5Hex method, and we pass in the message
- 5:22
which in this case is the email address, email.
- 5:26
Okay, let's save and pause for a moment and then
- 5:28
we'll come back and verify that we can get Gravatar images. | https://teamtreehouse.com/library/implementing-designs-for-android/customizing-a-gridview-for-friends/getting-ready-for-gravatar-images | CC-MAIN-2017-09 | refinedweb | 1,327 | 80.82 |
From: Douglas Paul Gregor (gregod_at_[hidden])
Date: 2004-06-05 09:47:52
On Fri, 4 Jun 2004, Nicolas Desjardins wrote:
> I've submitted a patch to your sourceforge patch section - item # 966723.
> The patch is also attached to this mail.
>
> Summary:
> function.hpp missing include guard patch
> function.hpp does not have an #ifndef/#endif include guard.
>
> This patch adds #ifndef BOOST_FUNCTION_HPP at the top
> of the file and #endif at the bottom.
function.hpp isn't actually supposed to have include guards. The code it
contains will never actually be included twice, but you can include
function.hpp with BOOST_FUNCTION_MAX_ARGS set to different values to
create support for greater numbers of arguments.
Doug
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/06/66443.php | CC-MAIN-2021-25 | refinedweb | 137 | 61.43 |
I'm using Git to manage my project but I had a problem with role of branches. Any branch in Git can push code of them to remote/master branch, so it make me confuse when I merge to stable versions. Because may be a developer pushed to my master branch.
In addition of basic branch management, you can:
make sure the remote doesn't accept non-fast-forward merge: so at least if a developer directly pushes new commits to
origin/master, it would be commits easily merged to the new branch (that forces a developer to first rebase his/her work on top of
origin/master before attempting a push).
See "What's a “fast-forward” in Git?" and "Why does git use fast-forward merging by default?".
you can more easily separate dev branches by creating them in their own namespace:
username/master instead of
master, keeping
master for being an (untouched) mirror image of
origin/master.
you can add a description to a branch, leaving one more clue as to what that branch is for: see "Branch descriptions in git":
git branch --edit-description. That information will be pushed along to a remote repo, for others to see.
finally, you can choose and follow a git workflow (like git-flow) in order to manage the convention around branch usage.
Similar Questions | http://ebanshi.cc/questions/4396179/how-to-manage-role-of-branch-in-git | CC-MAIN-2017-47 | refinedweb | 225 | 58.62 |
SharePoint Framework – Identify SharePoint Page Type(Modern/Classic/Local Workbench)
Scenario – We might need to find a way on identifying if current SharePoint page in which SPFx webpart is added is a classic page, a modern page or a local workbench page.
This would be very quick demo or code snippet on how to identify is page is modern/classic or local workbench page in SharePoint Framework. SP core library npm package has 2 attributes
export declare enum EnvironmentType { /** * Indicates that the SharePoint Framework is running inside a test harness, e.g. a unit test. * There may be no user interaction at all, and network access in general may be disabled. */ Test = 0, /** * Indicates that the SharePoint Framework is running on a page from a "localhost" web server, * for example the SharePoint Workbench when hosted via "gulp serve". SharePoint REST API calls * will not be available, and in general network access may not authenticate properly. * Certain page context information may be using mock values for testing. */ Local = 1, /** * Indicates that the SharePoint Framework is running on a modern SharePoint web page, * with full framework functionality. This is the normal usage scenario. */ SharePoint = 2, /** * Indicates that the framework was hosted by a classic server-rendered SharePoint page. * Some functionality may be limited, e.g. various extensibility APIs may not be supported. */ ClassicSharePoint = 3, }
Let us see how it can be used in SPFx webpart.
Step 1 – Create a HelloWorld SPFx webpart using steps mentioned at link
Step 2 – Import library in your Webpart.ts file
import { Environment, EnvironmentType} from '@microsoft/sp-core-library';
Step 2 – Modify render method to below to check page type.
public render(): void { if(Environment.type == EnvironmentType.ClassicSharePoint){ this.domElement.innerHTML ="I am classic SharePoint"; }else if(Environment.type === EnvironmentType.SharePoint){ this.domElement.innerHTML ="I am Modern SharePoint"; }else if(Environment.type === EnvironmentType.Local){ this.domElement.innerHTML ="I am Workbench"; } }
Step 3 – Run gulp serve to test on local workbench first.
Once it open local workbench you should see below output.
Step 4 – Test in your SharePoint workbench
Keep gulp serve command running and open your SharePoint local workbench
We can see below output
Step 5 – Test in your SharePoint classic page,
a. Package the solution using command ‘gulp package-solution’.
b. Package file (*.sppkg) file should be created at ‘sharepoint\solution‘.
c. Deploy app in app catalog.
d. Install app in targeted site collection, Go to all site content -> Add App -> Select your app
e. Once installed, create a new classic SharePoint Page. Go to Site Pages library, create new web part page.
f. Add webpart in page edit mode, select your webpart.
g. Save the page and refresh, you should see similar output.
That’s it, this way we can apply different conditional logic based on SharePoint Page type.
Post Inspiration answer by @GautamdSheth at this link.
Hope this helps..Happy Coding!!!!
For any queries/help, just a tweet a away… Follow me on @siddh_me for such blogs on Office 365/SharePoint. | https://siddharthvaghasia.com/2019/07/20/sharepoint-framework-identify-sharepoint-pagetype-modern-classic-local-workbench/ | CC-MAIN-2020-40 | refinedweb | 497 | 57.57 |
+1 for the merging.
On Fri, Feb 28, 2014 at 10:58 AM, Chris Nauroth <cnauroth@hortonworks.com>wrote:
> +1 for the merge.
>
> I just got caught up on the current state of the branch, and it looks good.
> End user documentation is in place. I deployed a cluster built from the
> branch, and then I used the documentation to test various scenarios of
> rolling upgrade, downgrade and rollback. Everything worked as expected.
> This looks ready to merge.
>
> Nice work, everyone!
>
> Chris Nauroth
> Hortonworks
>
>
>
>
> On Thu, Feb 27, 2014 at 7:02 PM, Kihwal Lee <kihwal@yahoo-inc.com> wrote:
>
> > +1
> >
> > > On Feb 25, 2014, at 3:42 PM, "Tsz Wo Sze" <szetszwo@yahoo.com> wrote:
> > >
> > > Hi hdfs-dev,
> > >
> > > We propose merging the HDFS-5535 branch to trunk.
> > >
> > > HDFS Rolling Upgrade is a feature to allow upgrading individual HDFS
> > daemons. In Hadoop v2, HDFS supports highly-available (HA) namenode
> > services and wire compatibility. These two capabilities make it feasible
> to
> > upgrade HDFS without incurring HDFS downtime. We make such improvement
> in
> > the HDFS-5535 branch.
> > >
> > > The HDFS-5535 branch is ready to be merged to trunk. As this being
> > written, there are 48 subtasks in HDFS-5535; 44 subtasks are already
> > completed. The core developments including feature development, unit
> tests
> > and user doc, are already done. The merge patch posted a few ago already
> > passed Jenkins. I will post a updated patch to trigger Jenkins again for
> > the latest code base.
> > >
> > > The remaining JIRAs are:
> > >
> > > HDFS-3225: Revist upgrade snapshots, roll back, finalize to enable
> > rolling upgrades (assigned to Sanjay)
> > > HDFS-6000: Avoid saving namespace when starting rolling upgrade
> > (assigned to Jing)
> > > HDFS-6013: add rollingUpgrade information to latest UI (assigned to
> > Vinay)
> > > HDFS-6016: Update datanode replacement policy to make writes more
> robust
> > (assigned to Kihwal)
> > >
> > > HDFS-6000 will be committed soon. All other issues are further
> > improvements which can be done after merge.
> > >
> > > The other remaining works are:
> > > - Revise the design doc
> > > - Post a test plan (Haohui is working on it.)
> > > - Execute the manual tests (Haohui and Fengdong will work on it.)
> > >
> > > The work was a collective effort of Nathan Roberts, Sanjay Radia,
> Suresh
> > Srinivas, Kihwal Lee, Jing Zhao, Arpit Agarwal, Brandon Li, Haohui Mai,
> > Vinayakumar B, Fengdong Yu, Chris Nauroth and Tsz-Wo Nicholas Sze, who
> have
> > proposed the design, worked on the code, reviewed patches, tested the
> > features and authored documentation. We thank everyone that who has gave
> > us valuable comments and feedback on the feature.
> > >
> > > The vote runs for 7 days. Here is my +1 on the merge.
> > >
> > > Thanks.
> > > Tsz-Wo
> >
>
> --
>. | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201402.mbox/%3CCABGkNE-07Jj9D=jpYxd3vKr2LTkkDYyh-oshzW7Mb=OMNcZdaw@mail.gmail.com%3E | CC-MAIN-2018-30 | refinedweb | 426 | 65.83 |
Introduction: Arduino & Neopixel Totally Derivative Fake TV
A maker named Jonathan Bush (JonBush) created an AT Tiny 85 powered Fake TV - Burglar Deterrent. This used Neopixel RGB LEDs and programming on the AT Tiny 85 chip to create a light show that, when seen through drapes or blinds at night, creates a nice illusion of someone being home watching TV. Please see his Instructable here.
Since my 7-yo son is interested in both electronics and programming, this seemed like a good simple project we could build together. So we quickly breadboarded one out using an Arduino Trinket and a strip of 8 Neopixels, all sourced from AdaFruit.com. JonBush's code compiled and ran perfectly on the Trinket.
Then we actually used the Fake TV when we went camping, leaving the thing standing guard in a bedroom window. Although the flickering light of the Fake TV was very realistic, I thought a few things could be improved, or if not improved, at least made ridiculously overcomplicated. So I set some goals...
Step 1: Goals
- Brighten it up. The original Fake TV (hereafter known as the oFTV) had a potentiometer (pot) to adjust the brightness. I had omitted that and hardcoded the brightness to the maximum. Still, with only 6 Neopixels in my version, it seemed a little dim.
- Add a timer function. When I ran the oFTV while on vacation, I put it on a plug-in timer. That seemed ridiculous given that the Arduino has a clock (as it turns out, it doesn't actually have a real-time clock, just a timer - learned something there).
- Change the "cut length" (time between lighting changes) to something more realistic than a linear distribution. The random() function returns a linear distribution, which seemed "unnatural". As it turn out there is significant interest in various directors' cut lengths and what distributions they best fit - scholarly papers on the subject even. I had assumed a normal Gaussian distribution (the famous bell curve), but most film scholars seem to think it's more of a lognormal distribution (although that remains contentious).
- Add an occasional fade-to-black, again to increase the realism.
- Stick some more controls on it. Mostly just because, you know, complications are fun.
This Instructable is my resulting dFTV (derivative Fake TV).
Step 2: Parts
The oFTV used an AT Tiny 85 controller, which is literally just a bare 8 pin chip. No little board, no LEDs, no buttons, no headers, just that itty bitty chip and a sparse few components – a beautifully minimalistic design. In my dFTV redesign I went through an Arduino Uno, then a Trinket, and finally a Pro Trinket 5 volt. Everything about my dFTV is both more expensive and more complex.
- Trinket Pro 5v, $9.95 from adafruit.com
- Perma-Proto Quarter-sized Breadboard, $8.50 for a 3-pack from adafruit.com.
- 16 RGB LED Neopixel compatible ring ($4.00). I did all my testing with a 12 LED Neopixel ring from adafruit.com, but my final version used a 16 LED ring from China. Ordering from China takes weeks, but you cannot beat the price. You can order it from here.
- 5.5 x 2.1 mm female panel mount power jack. I had this on-hand, but should cost about a buck.
- 100-200 µF electrolytic capacitor. The exact value doesn’t matter, you just want a fat little capacitor there to provide some filtering on the power rails. Does need to be a higher voltage rating than your power supply. Had on-hand, <$1.
- 10kΩ resistor. This could actually be 20k, 30k, or higher. 10kΩ is traditional for a pull-down resistor. Had on-hand, <$1.
- 470Ω resistor. You could go as low as 100Ω here and be fine I think. Had on-hand, <$1.
- Panel mount red LED. Had on-hand, ~$1.
- Three 100K pots. 10K would work fine as well. I got 10 for $4.12 from China here.
- Small panel mount push button. Had on-hand, ~$1.
- 5 – 16v power supply w. 5.5mm barrel connector. I cut the B side end off of a junker printer USB cable and soldered a barrel connector onto it. That way I could just use any available 5 volt USB phone/tablet charger as my power supply.
- Project box to hold it. I found a lovely plastic project box with a clear top, again from China, $2.69 here.
- Assorted hookup wire and some solder. Because I’m doing this with a 7-yo, I used lead-free solder. Got to protect those brain cells!
Step 3: About the Circuit
The heart and brains of the dFTV is the Pro Trinket 5v ($9.95) from adafruit.com. I feel that this board is cool enough that it deserves a few words. Initially I started out with the original Trinket ($6.95), but I quickly got into trouble with it and the project became unstable. When I upgraded to the Pro Trinket, all that went away.
The Pro Trinket is almost an Arduino Uno (same chip), but on a tiny little finger-tip sized PCB. It has a micro-USB connector that can be used both to power the board and to program it. About the only things you lose with the Pro Trinket over the Uno are serial output through the USB port and support for pins #2 and #7.
One feature that the Pro Trinket has that I used in this project is a built-in 5v power regulator (150mA), so this project can run on anything from 5 – 16v. As it happens, I’m running it from a 5v USB wall-wart, but I could run it off of something else, including batteries, if I wanted to.
You do have to make some minor changes to the Arduino IDE to program the Pro Trinket. Excellent documentation and tutorials on the Pro Trinket are here on the AdaFruit website, those folks do an amazing job.
Above is the fritzing drawing of my final project. I did all my testing and debugging on an Arduino Uno with a standard breadboard. Once I had everything working like I wanted, I replaced the Uno with the Pro Trinket, moving the wires one-for-one, pin-for-pin.
Then I squished all the parts together until it fit on just ¼ of the breadboard. That let me use the AdaFruit Perma-Proto Quarter-sized PCB for the final version. I love these little PCBs (also available in ½ and full size) because they exactly replicate a standard breadboard. All you have to do is move your components over and solder them down just as they were on the breadboard.
The fritzing drawing shows the AdaFruit 12 LED Neopixel ring, and I did actually use that during prototyping and testing, but the final version uses a 16 LED Neopixel compatible ring from China. For $4 you can’t beat the price, and since I was ordering a bunch of stuff from there anyway, why not?
The 100kΩ pots are wired together point to point off of the PCB. Basically I’m using wires to extend the power rails to the three pots. This lets me reduce the number of wires snaking back and forth. The pushbutton also catches the 5v rail this way. As I mentioned, 10kΩ pots would also work fine here. I used the 100k ones because they have less current leakage across the rails.
The center connections on the pots are the wipers, and are connected to analog input pins A1, A2 and A3 respectively. The pots are acting as variable voltage dividers. The analog input pins will see a varying voltage, going from 0 – 5v, as each pot is turned.
R1 on the diagram is a 10kΩ pull-down resistor connected to digital input pin 4 on the Pro Trinket, with its other side connected to the ground rail. The pushbutton is also connected to pin 4, with its other side connected to the 5v rail. If the button is not pushed, pin 4 "sees" the ground rail through the pull-down resistor and reads that as logic 0. If the button is pushed, pin 4 sees the full 5v from the power rail (minus a tiny bit that leaks through the pull-down resistor) and reads that as 1. If the pull-down resistor were not there, pin 4 would be "floating" when the button wasn't pushed and randomness would result.
Interestingly, Arduinos have built-in pull-up resistors that can be activated by the INPUT_PULLUP mode parameter to the pinMode() function. Using that would have eliminated the external pull-down resistor. In that case, the button would be wired to the ground rail instead of the 5v rail, and its state would be reversed (1 when not pushed, 0 when pushed).
Either way of handling the button would be correct. I chose to use a pull-down resistor just because I was trying to learn and understand the whole pull-up/down thing.
R3 is a current limiting resistor for the red LED activity light. The LED will light whenever pin 13 goes high. The Pro Trinket and the Uno already have a built-in LED connected to pin 13, which will light too - making my external LED both redundant and optional. The reason I have it there is to bring the LED outside of the project case so it's easily visible to the user.
The electrolytic capacitor is there to provide some filtration on the power rails. Presumably as the Neopixel LEDs are flashing away, their current draw is fluctuating the voltage on the rails. The capacitor is a reservoir of charge that can smooth those fluctuations out. I’ve actually run this project without that capacitor, and most of the Neopixel examples on the Adafruit website also omit it. Still, having it there is a good idea. Watch the polarity! Electrolytic capacitors tend to explode when you plug them in backwards.
The 5.5mm power jack is wired to the BAT (battery) and the G (ground) pins on the Pro Trinket. This means that power goes through the built-in voltage regulator. Input voltage can be anything from 5 – 16v. The power rails on the Perma-Proto board get their regulated 5volts from the G and 5v pins. The output of the Pro Trinket is rated at 150mA. I don’t know what the Neopixel ring is drawing, but nothing seems to be getting hot on the Pro Trinket, no magic blue smoke, so I guess everything’s okay.
Step 4: The Code
This code is pretty heavily modified from what JonBush initially published. Major changes are:
- Eliminated the use of the delay() function. In the original code the time between each cut (lighting change) was spent in a delay() state. This causes a problem reading the sensors (pots and button) since the Arduino won't detect any events or changes until the delay() expires. It's not such a problem for the pots, but for the button you might have to hold it down for 4 seconds before it gets picked up. This code spends all of its time looping as fast as possible, looking for changes in sensors or time counters.
- Used the multiMAP() function (from rob.tillaart@removethisgmail.com) to convert the linear distribution generated by the random() function into something else. I tried several distributions: Gaussian, lognormal, and some I just made up. I left all of them there in the comments. In the end, I’m not sure any of them made much difference, linear was probably just fine.
- Added a timer to end the light show after a certain amount of time. Controlled by a pot.
- Added a cut speed multiplier, controlled by a pot.
- Added a lightshow abort, controlled by the pushbutton.
- Added a soft reboot function after 24 hours (86,400,000 ms). This causes all the counters and the light show to restart at the same time every day.
- Added an Activity LED on pin 13.
- Initialized all of the various program parameters to reasonable values so that any or all of the pots, and the switch, can be eliminated.
- The routine softReset() forces a program jump to address 0. This has the effect of resetting all the counters and restarting the code from the beginning. It’s a trick to simulate pushing the reset button. It wouldn’t compile for the Trinket, but works on the Pro Trinket and Uno. I don’t know what other Arduinos it might or might not work on.
- Fade-to-black. This I never got around to implementing. Fail.
I also commented the code pretty completely, so read through it if you have questions.
Step 5: Code Source Listing
//DIY Fake TV
//Keep burglars at bay while you are away

//Created by Jonathan Bush
//Last Updated 3/24/2015
//Running on ATTiny 85

//Modified 8/4/2015 by Mark Werley for Trinket Pro
//Rewritten to avoid the use of the delay() function
//Initialized MAXBRIGHT to 255, the maximum
//Added the use of the multiMap function to convert a uniform distribution to something nonuniform
//Added an auto-off feature so the LEDs go dark after a certain amount of time
//Added a soft reboot feature after 24 hours, so the show starts up again every day at the same time
//Added a blink on the builtin LED, so user can tell program is still running when neopixels are off

#include <Adafruit_NeoPixel.h>

#define PIN 3        //PIN 3 "runs" the NeoPixels. Works on Uno or Trinket Pro
#define ledPin 13    //PIN 13 has built-in LED for Uno or Trinket Pro
#define buttonPin 4  //PIN 4 has a button attached through a 10k pull-down resistor

int ledState = LOW;   //Keep track of the state of the built-in LED
int buttonState = 0;  //Keep track of the state of the button
int endShow = false;  //True when the show is over

int POTPIN = A1;   //1st analog pot pin, used for adjusting brightness
int POTPIN2 = A2;  //2nd analog pot pin, used for adjusting light show cut speed
int POTPIN3 = A3;  //3rd analog pot pin, used for adjusting the runtime of the show

//Neopixel library provided by Adafruit, change 1st parameter to number of LEDs in your neopixels
Adafruit_NeoPixel strip = Adafruit_NeoPixel(16, PIN, NEO_GRB + NEO_KHZ800);

int BRIGHTNESS = 0;
int RED = 0;
int BLUE = 0;
int GREEN = 0;
int TIMEDEL = 0;
int mapTIMEDEL = 0;
int MAXBRIGHT = 255;   //set MAXBRIGHT to 255, the max, if no brightness pot
int speedDivider = 1;  //set speedDivider to 1, if no cut speed pot
int potval = 0;
int potval2 = 0;
int potval3 = 0;

unsigned long runTimeMillis = 0;     //How long the Fake TV light show will run in milliseconds
int runTime = 120;                   //How long the light show will run in minutes, 2 hours if no runTime pot
unsigned long startMillis = 0;       //The sampled starting time of the program, usually just 0 or 1 milliseconds
unsigned long previousMillis = 0;    //Remember the number of milliseconds from the previous cut trigger
unsigned long rebootTimeMillis = 0;  //How long the program will run before a soft reset/reboot happens (24 hrs = 86,400,000 ms)
unsigned long currentMillis = 0;     //How long the program has run so far

int in[] = {200, 520, 840, 1160, 1480, 1800, 2120, 2440, 2760, 3080, 3400, 3720, 4040};  //This is just linear

// int out[] = {200, 392, 968, 2120, 3272, 3848, 4040, 3848, 3272, 2120, 968, 392, 200};   //normal distribution
// int out[] = {200, 392, 2120, 3848, 4040, 3848, 3560, 3272, 2696, 2120, 968, 392, 200};  //lognormal-ish
// int out[] = {200, 250, 300, 400, 600, 1200, 4040, 1200, 600, 400, 300, 250, 200};       //made up
int out[] = {200, 250, 300, 350, 400, 500, 600, 700, 800, 1200, 2000, 3000, 4040};         //made up #2

void setup()  //Initialize everything
{
  //Initialize the NeoPixels
  strip.begin();
  strip.show();               //Initialize all pixels to 'off'
  pinMode(PIN, OUTPUT);       //set the neopixel control pin to output
  pinMode(ledPin, OUTPUT);    //set the onboard LED pin to output
  pinMode(buttonPin, INPUT);  //set the button pin to input

  //Initialize the serial com, used for debugging on the Uno or Trinket Pro (w FTDI cable), comment out for production
  Serial.begin(9600);
  Serial.println("--- Start Serial Monitor SEND_RCVE ---");
  Serial.println("Serial is active");
  Serial.println();

  rebootTimeMillis = 24ul * 60ul * 60ul * 1000ul;  //hardcode reboot in 24 hours
  startMillis = millis();                          //sample the startup time of the program
}

void loop()  //Start the main loop
{
  currentMillis = millis();  //sample milliseconds since startup

  if (currentMillis > rebootTimeMillis) softReset();  //When rebootTimeMillis is reached, reboot

  //Let's read our sensors/controls
  potval = analogRead(POTPIN);                //Reads analog value from brightness potentiometer/voltage divider, comment out if not using
  MAXBRIGHT = map(potval, 0, 1023, 11, 255);  //Maps voltage divider reading to set max brightness btwn 11 and 255, comment out if not using

  potval2 = analogRead(POTPIN2);               //Reads analog value from cut speed potentiometer, comment out if not using
  speedDivider = map(potval2, 0, 1020, 1, 8);  //Maps the second pot reading to between 1 and 8, comment out if not using

  potval3 = analogRead(POTPIN3);             //Reads analog value from show length potentiometer, comment out if not using
  runTime = map(potval3, 0, 1020, 15, 480);  //Maps the third pot to between 15 and 480 minutes (1/4 to 8 hours), comment out if not using
  runTimeMillis = long(runTime) * 60ul * 1000ul;

  Serial.print("potval3=");        Serial.print(potval3);
  Serial.print(" runTime=");       Serial.print(runTime);
  Serial.print(" runTimeMillis="); Serial.println(runTimeMillis);

  buttonState = digitalRead(buttonPin);     //Sample the state of the button
  if (buttonState == HIGH) endShow = true;  //Button was pressed, time to end tonight's show

  if ((currentMillis - previousMillis) > long(mapTIMEDEL))  //Test to see if we're due for a cut (lighting change)
  {
    BRIGHTNESS = random(10, MAXBRIGHT);  //Randomly set display brightness between 10 and MAXBRIGHT each cycle
    RED = random(150, 256);              //Set the red component value from 150 to 255
    BLUE = random(150, 256);             //Set the blue component value from 150 to 255
    GREEN = random(150, 256);            //Set the green component value from 150 to 255
    TIMEDEL = random(200, 4040);         //Change the time interval randomly, between 0.2 and 4.04 seconds

    mapTIMEDEL = multiMap(TIMEDEL, in, out, 13);  //use the multiMap function to remap the delay to something non-uniform
    mapTIMEDEL = mapTIMEDEL / speedDivider;       //Divide by speedDivider to set rapidity of cuts

    if ((currentMillis - startMillis) > runTimeMillis) endShow = true;  //runTimeMillis has expired, time to end tonight's show

    if (endShow)  //Show's over for the night, aw...
      strip.setBrightness(0);
    else          //The show is on!
      strip.setBrightness(BRIGHTNESS);

    colorWipe(strip.Color(RED, GREEN, BLUE), 0);  //Instantly change entire strip to new randomly generated color

    if (ledState == HIGH)  //toggle the ledState variable
      ledState = LOW;
    else
      ledState = HIGH;

    digitalWrite(ledPin, ledState);  //Flip the state of (blink) the built in LED

    previousMillis = currentMillis;  //update previousMillis and loop back around
  }
}

//Fill the dots one after the other with a color
void colorWipe(uint32_t c, uint8_t wait)
{
  for (uint16_t i = 0; i < strip.numPixels(); i++)
  {
    strip.setPixelColor(i, c);
    strip.show();
  }
}

//Force a jump to address 0 to restart sketch. Does not reset hardware or registers
void softReset()
{
  asm volatile(" jmp 0");
}

//multiMap is used to map one distribution onto another using interpolation
//note: the _in array should have increasing values
//Code by rob.tillaart@removethisgmail.com
int multiMap(int val, int* _in, int* _out, uint8_t size)
{
  //take care the value is within range
  if (val <= _in[0]) return _out[0];
  if (val >= _in[size - 1]) return _out[size - 1];

  //search right interval
  uint8_t pos = 1;
  while (val > _in[pos]) pos++;

  //this will handle all the exact "points" in the _in array
  if (val == _in[pos]) return _out[pos];

  //interpolate in the right segment for the rest
  return (val - _in[pos - 1]) * (_out[pos] - _out[pos - 1]) / (_in[pos] - _in[pos - 1]) + _out[pos - 1];
}
Step 6: Assembly
I used the Adafruit Perma-Proto for this project because it makes it so easy to move from breadboard to final version. Of course, you could also use any regular perf board, and that would be cheaper.
My project box came from China and has a transparent top, which is great for letting the Neopixel light out into the room. With an opaque box you would have to mount the Neopixels on the outside of the box.
Holes are drilled through the sides of the box for the: pots, pushbutton, activity LED, and power jack. These are connected back to the Perma-Proto board with hookup wire.
I just taped the Neopixel ring to the inside of the lid. I should think of something better than that.
Step 7: Controls and Usage
Yeah, that’s a lot of knobs. My Grandpa’s shortwave receiver had fewer dials on it. But they all do something useful(ish), and, as I said, I like complications.
- Pot1: Adjusts brightness. Varies between 11 and 255. Is initialized to the max 255 if pot is omitted.
- Pot2: Adjusts cut speed. Varies between 1x (RomCom) and 8x (ActionAdventure). Is initialized to 1x if no pot.
- Pot3: Adjusts the length of the light show. Varies between 15 minutes and 8 hours (480 minutes). Is initialized to 2 hours if no pot.
- Pushbutton: Aborts the current light show, but doesn’t end the program or change any settings. Useful for when the wife says, “Honey, can you turn that thing off now?”, but you don’t want to unplug it.
- Activity LED, continues to blink after the light show ends, to let you know the program hasn’t crashed.
Once the code is loaded on your Arduino, usage is easy. Just plug it in to power and the light show will start. This establishes the start time. Turn the Brightness knob to adjust the average brightness of the show.
Turn the Cut Rate pot to change how often the Neopixels change. Set all the way up (8x), they get quite frantic; very Michael Bay.
As the dFTV is running, it’s counting down to the end of the light show. When the time for the end of the light show is reached, the Neopixels go dark, but the program continues to run and the activity LED continues to blink. Turn the Show Length pot to adjust when the show will end.
Let’s say you want the light show to run from 8PM to 10PM, just plug it in at 8 and that sets the start time. Then at 10PM you can turn back the Show Length pot until the Neopixels just go out, and that sets the end time. 24 hours after the dFTV was initially plugged in, the software resets and the light show starts up again.
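Because everything is timed from the moment power is applied, the whole schedule reduces to comparing millis() deltas. The sketch below is a framework-free illustration in plain C++; the function names and the linear pot-to-duration mapping are my own assumptions, not the exact code of the sketch above.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helpers illustrating the dFTV timing scheme: the show-length
// pot (raw reading 0..1023) selects a duration between 15 minutes and
// 6 hours, and the whole cycle repeats every 24 hours after power-up.
const uint32_t MS_PER_MIN = 60UL * 1000UL;
const uint32_t DAY_MS = 24UL * 60UL * MS_PER_MIN;

// Map a raw pot reading linearly to a show length in milliseconds.
uint32_t showLengthMs(uint16_t potReading) {
    uint32_t minutes = 15UL + (uint32_t)potReading * (360UL - 15UL) / 1023UL;
    return minutes * MS_PER_MIN;
}

// The light show is active while the elapsed time inside the current
// 24-hour cycle is below the pot-selected show length.
bool showActive(uint32_t nowMs, uint32_t startMs, uint16_t potReading) {
    uint32_t elapsed = (nowMs - startMs) % DAY_MS;
    return elapsed < showLengthMs(potReading);
}
```

On the device you would call something like showActive(millis(), startMillis, analogRead(showLengthPin)) each pass through loop() and blank the pixels when it returns false; showLengthPin and startMillis here are placeholders, not names from the actual sketch.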
Step 8: Future Upgrades
One of the things I like about this project is that the pushbutton and pots are all defined by the software, as is the Neopixel display. Want to have a tint knob? Just change the code. Want to turn the whole thing into a disco light show? Change the code again.
Adding a backup battery to keep the program running if there’s a power flicker would be a good addition.
Normally an Arduino doesn’t have a real-time clock, which is why all of my code is based on counting down from the moment power is applied. Adding a RT clock module would make programming and setting the thing much more intuitive.
Each Neopixel is individually addressable. Here they all show the same thing at the same time, but it doesn’t have to be that way. You could, for example, implement a clock face on a Neopixel ring that counts down till when the light show starts again.
Implementing a fade-to-black, which I failed to do, would also be a nice thing. Please share your code if you do.
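One way to approach the fade: scale each color channel toward zero as the remaining show time runs out. The helpers below are a hypothetical starting point, kept free of the Neopixel library so the arithmetic is easy to test; on the device the packed result would go into strip.setPixelColor().

```cpp
#include <cassert>
#include <cstdint>

// Scale one 8-bit color channel by the fraction remaining/total.
// As `remaining` counts down to 0, the channel fades to black.
uint8_t fadeChannel(uint8_t channel, uint32_t remaining, uint32_t total) {
    if (remaining >= total) return channel;
    return (uint8_t)((uint32_t)channel * remaining / total);
}

// Pack a faded RGB color the way Adafruit_NeoPixel's Color() does
// (0x00RRGGBB), applying the same scale to all three channels.
uint32_t fadeColor(uint8_t r, uint8_t g, uint8_t b,
                   uint32_t remaining, uint32_t total) {
    return ((uint32_t)fadeChannel(r, remaining, total) << 16) |
           ((uint32_t)fadeChannel(g, remaining, total) << 8) |
            (uint32_t)fadeChannel(b, remaining, total);
}
```

Called once per loop with, say, total = 10000 ms and remaining counting down, this dims whatever color the cut logic picked until the strip is dark.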
7 Comments
Nice build. For my uses I'd probably do away with the show timer function, then add wireless to it and control it via my home automation/security system. I'd be too worried that a random reset would mess up the timing while away for a week and have the Fake TV off at night and on during the day, something to think about. I'd probably also turn the button into a on/off toggle instead of a kill switch. Thanks for posting!
hi can you please explain the multiMap function? I wanted to remove so I didn't need the cutspeed pot, but then neopix changes colour constantly. Thanks and great work!
I have built this circuit and the button seems to turn everything off and not back on again. Is this meant to be the case?
Error in the code
#include
should be
#include <Adafruit_NeoPixel.h>
other than that works great
That's weird, cut/paste seems to drop anything in angled brackets. Corrected, and thanks.
Great job, I made the first version and I'll try this. I want to try it also with an Arduino clock module; if I get it working I'll publish the code. Regards.
Blur guess game in AS3 part 1
In this tutorial you will learn how to create a Blur guess game in Actionscript 3. In this game you will see blurry images and you need to guess the object shown in each blurry image. For this tutorial you will need logo images; I have used famous brand logos, which can be downloaded at: brandsoftheworld.com/
Update: Blur guess game part 2
Blur guess game in Actionscript 3
Step 1
Open a new AS3 file and import your images onto the stage by selecting File > Import > Import to Stage. Before you import your images, make sure they are all the same width and height. My images' dimensions are 100x100 pixels. You can use Photoshop or a similar program to edit the dimensions of the images.
Step 2
Arrange the images on the stage so that they are in 3 columns, as below. Then convert them into movie clips with the instance names: youtube_mc, mastercard_mc, ibm_mc, canon_mc, shell_mc and bmw_mc.
Step 3
Select the Text tool, set it to input text, and drag a text field onto the stage. Give it the instance name: input_txt. Then drag out another text field, but this time with dynamic text, and give it the instance name: found_txt. You will need to embed the numeral glyphs and the '/' (forward slash) glyph.
Step 4
Open up a new AS3 class file, save it with the name 'BlurGuess', and add the following code.
package {

	import flash.filters.BlurFilter;
	import flash.display.*;
	import flash.events.MouseEvent;
	import flash.events.Event;
	import flash.utils.setTimeout;

	public class BlurGuess extends MovieClip {

		private var wordsMcArray:Array = new Array();
		private var strArray:Array = new Array();
		private var mcCheckArray:Array = new Array();
		private var counter:uint = 0;
		private var totalWords:uint;
		private var cols:uint = 3;
		private var xOffset:int = 133;
		private var yOffset:int = 122;

		public function BlurGuess() {
			init();
		}

		private function init():void {
			wordsMcArray = [youtube_mc, mastercard_mc, ibm_mc, canon_mc, shell_mc, bmw_mc];
			strArray = ["youtube", "master", "ibm", "canon", "shell", "bmw"];
			mcCheckArray = [];
			counter = 0;
			totalWords = strArray.length;

			//Displays the number of correct words answered and sets the focus to the input text field.
			found_txt.text = String(counter) + "/" + String(totalWords) + " correct";
			stage.focus = input_txt;

			for (var j:int = 0; j < wordsMcArray.length; j++) {
				//Adds a blur and an outline to each of the images
				wordsMcArray[j].filters = [new BlurFilter(15, 15, 3)];
				var outLine:Shape = new Shape();
				outLine.graphics.lineStyle(2, 0x000000);
				outLine.graphics.drawRect(wordsMcArray[0].x + xOffset * (j % cols),
										  wordsMcArray[0].y + yOffset * int(j / cols),
										  wordsMcArray[0].width + 2,
										  wordsMcArray[0].height + 2);
				outLine.graphics.endFill();
				addChild(outLine);
			}

			//Adds the change event to the input text field.
			input_txt.addEventListener(Event.CHANGE, detectKeys);
		}

		private function detectKeys(e:Event):void {
			for (var i:int = 0; i < strArray.length; i++) {
				if (strArray[i] == input_txt.text.toLowerCase()) {
					trace('correct');
					//If the correct word is typed then the counter gets incremented and the
					//found display is updated to show a word has been found.
					counter++;
					found_txt.text = String(counter) + "/" + String(totalWords) + " correct";
					//This adds the correct word's movie clip into the mcCheckArray.
					mcCheckArray.push(wordsMcArray[i]);
					//This removes the correct word from wordsMcArray and strArray.
					wordsMcArray.splice(wordsMcArray.indexOf(wordsMcArray[i]), 1);
					strArray.splice(strArray.indexOf(strArray[i]), 1);
					//Removes the movie clip's blur filter.
					mcCheckArray[mcCheckArray.length - 1].filters = [];
					//This clears the text field after a half second delay
					setTimeout(function():void {
						input_txt.text = "";
					}, 500);
				}
			}
		}
	}
}
Step 5
In the document class field add the name 'BlurGuess', then export the movie (Ctrl + Enter). You should now have a blur guess game in Actionscript 3.
You can download the source files here.
Update: Blur guess game part 2
4 comments:
I would like to know, if you have any tutorial for shall game in as3/as2. I appreciate your help.
Thanks,
Siva
@Sivereddy
Please clarify what you mean by 'Shall game'?
I did exactly like you say but It won't work...
@Ega
Please read the steps carefully. I assume you have copied and pasted my code without actually understanding the code and reading through the tutorial. | http://www.ilike2flash.com/2011/04/blur-guess-game-in-as3-part-1.html | CC-MAIN-2017-04 | refinedweb | 663 | 58.48 |
Play Framework Cookbook
The Play framework is the new kid on the block of Java frameworks. By breaking the existing standards it tries not to abstract away from HTTP as with most web frameworks, but tightly integrates with it. This means quite a shift for Java programmers. Understanding the concepts behind this shift and its impact on web development with Java are crucial for fast development of Java web applications.
The Play Framework Cookbook starts where the beginner’s documentation ends. It shows you how to utilize advanced features of the Play framework—piece by piece and completely outlined with working applications!
The reader will be taken through all layers of the Play framework and provided with in-depth knowledge, with as many examples and applications as possible. Leveraging the most from the Play framework means learning to think simple again in a Java environment. Think simple and implement your own renderers, integrate tightly with HTTP, use existing code, and improve sites' performance with caching and by integrating with other web 2.0 services. Also get to know about non-functional issues like modularity, integration into production, and testing environments. In order to provide the best learning experience while reading the Play Framework Cookbook, almost every example is provided with source code. Start immediately integrating recipes into your Play application.
What This Book Covers
Chapter 1, Basics of the Play Framework, explains the basics of the Play framework. This chapter will give you a head start about the first steps to carry out after you create your first application. It will provide you with the basic knowledge needed for any advanced topic.
Chapter 2, Using Controllers, will help you to keep your controllers as clean as possible, with a well defined boundary to your model classes.
Chapter 3, Leveraging Modules, gives a brief overview of some modules and how to make use of them. It should help you to speed up your development when you need to integrate existing tools and libraries.
Chapter 4, Creating and Using APIs, shows a practical example of integrating an API into your application, and provides some tips on what to do when you are a data provider yourself, and how to expose an API to the outside world.
Chapter 5, Introduction To Writing Modules, explains everything related to writing modules.
Chapter 6, Practical Module Examples, shows some examples used in productive applications. It also shows an integration of an alternative persistence layer, how to create a Solr module for better search, and how to write an alternative distributed cache implementation among others.
Chapter 7, Running Into Production, explains the complexity that begins once the site goes live. This chapter is targeted towards both groups, developers, as well as system administrators.
Appendix, Further Information About the Play Framework, gives you more information about where you can find help with Play.
Using Controllers
In this chapter, we will cover:
- URL routing using annotation-based configuration
- Basics of caching
- Using HTTP digest authentication
- Generating PDFs in your controllers
- Binding objects using custom binders
- Validating objects using annotations
- Adding annotation-based right checks to your controller
- Rendering JSON output
- Writing your own renderRSS method as controller output
Introduction
This chapter will help you to keep your controllers as clean as possible, with a well defined boundary to your model classes. Always remember that controllers are really only a thin layer to ensure that your data from the outside world is valid before handing it over to your models, or something needs to be specifically adapted to HTTP. The chapter will start with some basic recipes, but it will cover some more complex topics later on with quite a bit of code, of course mostly explained with examples.
URL routing using annotation-based configuration
If you do not like the routes file, you can also describe your routes programmatically by adding annotations to your controllers. This has the advantage of not having any additional config file, but also poses the problem of your URLs being dispersed in your code.
You can find the source code of this example in the examples/chapter2/annotationcontroller directory.
How to do it…
Go to your project and install the router module via conf/dependencies.yml:
require:
    - play
    - play -> router head
Then run play deps and the router module should be installed in the modules/ directory of your application. Change your controller like this:
@StaticRoutes({
    @ServeStatic(value="/public/", directory="public")
})
public class Application extends Controller {

    @Any(value="/", priority=100)
    public static void index() {
        forbidden("Reserved for administrator");
    }

    @Put(value="/", priority=2, accept="application/json")
    public static void hiddenIndex() {
        renderText("Secret news here");
    }

    @Post("/ticket")
    public static void getTicket(String username, String password) {
        String uuid = UUID.randomUUID().toString();
        renderJSON(uuid);
    }
}
How it works…
Installing and enabling the module should not leave any open questions for you at this point. As you can see in the controller, it is now filled with annotations that resemble the entries in the routes.conf file, which you could possibly have deleted by now for this example.
However, then your application will not start, so you have to have an empty file at least. The @ServeStatic annotation replaces the static command in the routes file. The @StaticRoutes annotation is just used for grouping several @ServeStatic annotations and could be left out in this example.
Each controller call now has to have an annotation in order to be reachable. The name of the annotation is the HTTP method, or @Any, if it should match all HTTP methods. Its only mandatory parameter is the value, which resembles the URI—the second field in routes.conf. All other parameters are optional. Especially interesting is the priority parameter, which can be used to give certain methods precedence. This allows a lower prioritized catchall controller like in the preceding example, but special handling is required if the URI is called with the PUT method. You can easily check the correct behavior by using curl, a very practical command line HTTP client:
curl -v localhost:9000/
This command should give you a result similar to this:
> GET / HTTP/1.1 > User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18 > Host: localhost:9000 > Accept: */* > < HTTP/1.1 403 Forbidden < Server: Play! Framework;1.1;dev < Content-Type: text/html; charset=utf-8 < Set-Cookie: PLAY_FLASH=;Path=/ < Set-Cookie: PLAY_ERRORS=;Path=/ < Set-Cookie: PLAY_SESSION=0c7df945a5375480993f51914804284a3bb ca726-%00___ID%3A70963572-b0fc-4c8c-b8d5-871cb842c5a2%00;Path=/ < Cache-Control: no-cache < Content-Length: 32 < <h1>Reserved for administrator</h1>
You can see the HTTP error message and the content returned. You can trigger a PUT request in a similar fashion:
curl -X PUT -v localhost:9000/ > PUT / HTTP/1.1 > User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18 > Host: localhost:9000 > Accept: */* > < HTTP/1.1 200 OK < Server: Play! Framework;1.1;dev < Content-Type: text/plain; charset=utf-8 < Set-Cookie: PLAY_FLASH=;Path=/ < Set-Cookie: PLAY_ERRORS=;Path=/ < Set-Cookie: PLAY_SESSION=f0cb6762afa7c860dde3fe1907e8847347 6e2564-%00___ID%3A6cc88736-20bb-43c1-9d43-42af47728132%00;Path=/ < Cache-Control: no-cache < Content-Length: 16 Secret news here
As you can see now, thanks to its higher priority, the controller method mapped to the PUT method is chosen and its response returned.
There’s more…
The router module is a small, but handy module, which is perfectly suited to take a first look at modules and to understand how the routing mechanism of the Play framework works at its core. You should take a look at the source if you need to implement custom mechanisms of URL routing.
Mixing the configuration file and annotations is possible
You can use the router module and the routes file—this is needed when using modules, as they cannot be specified in annotations. However, keep in mind that this is pretty confusing. You can check out more info about the router module in its module documentation.
Basics of caching
Caching is quite a complex and multi-faceted technique, when implemented correctly. However, implementing caching in your application should not be complex, but rather the mindwork before, where you think about what and when to cache, should be. There are many different aspects, layers, and types (and their combinations) of caching in any web application. This recipe will give a short overview about the different types of caching and how to use them. You can find the source code of this example in the chapter2/caching-general directory.
Getting ready
First, it is important that you understand where caching can happen—inside and outside of your Play application. So let’s start by looking at the caching possibilities of the HTTP protocol. HTTP sometimes looks like a simple protocol, but is tricky in the details. However, it is one of the most proven protocols in the Internet, and thus it is always useful to rely on its functionalities.
HTTP allows the caching of contents by setting specific headers in the response. There are several headers which can be set:
- Cache-Control: This is a header which must be parsed and used by the client and also all the proxies in between.
- Last-Modified: This adds a timestamp, explaining when the requested resource had been changed the last time. On the next request the client may send an If-Modified-Since header with this date. Now the server may just return a HTTP 304 code without sending any data back.
- ETag: An ETag is basically the same as a Last-Modified header, except it has a semantic meaning. It is actually a calculated hash value resembling the resource behind the requested URL instead of a timestamp. This means the server can decide when a resource has changed and when it has not. This could also be used for some type of optimistic locking.
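How a server combines these validators can be sketched without any framework. The function below is a simplified, hypothetical version of the decision that Play's request.isModified() makes for you later in this recipe; real HTTP validation has more corner cases (weak ETags, the * wildcard), so treat it as an illustration only.

```cpp
#include <cassert>
#include <string>

// Decide whether the server may answer "304 Not Modified".
// `etag` and `lastModified` describe the current resource; the other two
// hold what the client sent in If-None-Match / If-Modified-Since (empty
// string / negative number when the header was absent).
bool notModified(const std::string& etag, long lastModified,
                 const std::string& ifNoneMatch, long ifModifiedSince) {
    if (!ifNoneMatch.empty()) {
        // The ETag takes precedence; if a date was sent too, both must agree.
        return ifNoneMatch == etag &&
               (ifModifiedSince < 0 || lastModified <= ifModifiedSince);
    }
    if (ifModifiedSince >= 0) {
        return lastModified <= ifModifiedSince;
    }
    return false;  // no validators sent: the full response must go out
}
```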
So, this is a type of caching on which the requesting client has some influence. There are also other forms of caching which are purely on the server side. In most other Java web frameworks, the HttpSession object is a classic example, which belongs to this case. Play has a cache mechanism on the server side. It should be used to store big session data, in this case any data exceeding the 4KB maximum cookie size. Be aware that there is a semantic difference between a cache and a session. You should not rely on the data being in the cache and thus need to handle cache misses.
You can use the Cache class in your controller and model code. The great thing about it is that it is an abstraction of a concrete cache implementation. If you only use one node for your application, you can use the built-in ehCache for caching. As soon as your application needs more than one node, you can configure a memcached in your application.conf and there is no need to change any of your code.
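As a sketch of that switch, the relevant entries in conf/application.conf of a Play 1.x application look roughly like this (the host addresses are placeholders):

```
# use a distributed memcached instead of the local, in-process ehcache
memcached=enabled
memcached.host=127.0.0.1:11211

# with several cache nodes, list them individually:
# memcached.1.host=127.0.0.1:11211
# memcached.2.host=127.0.0.1:11212
```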
Furthermore, you can also cache snippets of your templates. For example, there is no need to reload the portal page of a user on every request when you can cache it for 10 minutes. This also leads to a very simple truth. Caching gives you a lot of speed and might even lower your database load in some cases, but it is not free. Caching means you need RAM, lots of RAM in most cases. So make sure the system you are caching on never needs to swap, otherwise you could read the data from disk anyway. This can be a special problem in cloud deployments, as there are often limitations on available RAM.
The following examples show how to utilize the different caching techniques. We will show four different use cases of caching in the accompanying test. First test:
public class CachingTest extends FunctionalTest {

    @Test
    public void testThatCachingPagePartsWork() {
        Response response = GET("/");
        String cachedTime = getCachedTime(response);
        assertEquals(getUncachedTime(response), cachedTime);
        response = GET("/");
        String newCachedTime = getCachedTime(response);
        assertNotSame(getUncachedTime(response), newCachedTime);
        assertEquals(cachedTime, newCachedTime);
    }

    @Test
    public void testThatCachingWholePageWorks() throws Exception {
        Response response = GET("/cacheFor");
        String content = getContent(response);
        response = GET("/cacheFor");
        assertEquals(content, getContent(response));
        Thread.sleep(6000);
        response = GET("/cacheFor");
        assertNotSame(content, getContent(response));
    }

    @Test
    public void testThatCachingHeadersAreSet() {
        Response response = GET("/proxyCache");
        assertIsOk(response);
        assertHeaderEquals("Cache-Control", "max-age=3600", response);
    }

    @Test
    public void testThatEtagCachingWorks() {
        Response response = GET("/etagCache/123");
        assertIsOk(response);
        assertContentEquals("Learn to use etags, dumbass!", response);

        Request request = newRequest();
        String etag = String.valueOf("123".hashCode());
        Header noneMatchHeader = new Header("if-none-match", etag);
        request.headers.put("if-none-match", noneMatchHeader);
        DateTime ago = new DateTime().minusHours(12);
        String agoStr = Utils.getHttpDateFormatter().format(ago.toDate());
        Header modifiedHeader = new Header("if-modified-since", agoStr);
        request.headers.put("if-modified-since", modifiedHeader);
        response = GET(request, "/etagCache/123");
        assertStatus(304, response);
    }

    private String getUncachedTime(Response response) {
        return getTime(response, 0);
    }

    private String getCachedTime(Response response) {
        return getTime(response, 1);
    }

    private String getTime(Response response, int pos) {
        assertIsOk(response);
        String content = getContent(response);
        return content.split("\n")[pos];
    }
}
The first test checks for a very nice feature. Since play 1.1, you can cache parts of a page, more exactly, parts of a template. This test opens a URL and the page returns the current date and the date of such a cached template part, which is cached for about 10 seconds. In the first request, when the cache is empty, both dates are equal. If you repeat the request, the first date is actual while the second date is the cached one.
The second test puts the whole response in the cache for 5 seconds. In order to ensure that expiration works as well, this test waits for six seconds and retries the request. The third test ensures that the correct headers for proxy-based caching are set. The fourth test uses an HTTP ETag for caching. If the If-Modified-Since and If-None-Match headers are not supplied, it returns a string. On adding these headers with the correct ETag (in this case the hashCode of the string "123") and a date from 12 hours before, a 304 Not Modified response should be returned.
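A side note on the expected ETag in the fourth test: it is simply Java's String.hashCode() of the path parameter "123". That hash is easy to reproduce outside the JVM when you want to check expected values by hand; the helper below is my own small C++ re-implementation, not part of the book's sources.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Java's String.hashCode(): h = s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1],
// evaluated in 32-bit two's-complement arithmetic.
int32_t javaStringHashCode(const std::string& s) {
    uint32_t h = 0;
    for (unsigned char c : s) {
        h = 31u * h + c;  // unsigned wrap-around mirrors Java's int overflow
    }
    return (int32_t)h;
}
```

For the test above, javaStringHashCode("123") yields 48690, so the If-None-Match header has to carry the string "48690" to receive the 304.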
How to do it…
Add four simple routes to the configuration as shown in the following code:
GET     /                    Application.index
GET     /cacheFor            Application.indexCacheFor
GET     /proxyCache          Application.proxyCache
GET     /etagCache/{name}    Application.etagCache
The application class features the following controllers:
public class Application extends Controller {

    public static void index() {
        Date date = new Date();
        render(date);
    }

    @CacheFor("5s")
    public static void indexCacheFor() {
        Date date = new Date();
        renderText("Current time is: " + date);
    }

    public static void proxyCache() {
        response.cacheFor("1h");
        renderText("Foo");
    }

    @Inject
    private static EtagCacheCalculator calculator;

    public static void etagCache(String name) {
        Date lastModified = new DateTime().minusDays(1).toDate();
        String etag = calculator.calculate(name);
        if (!request.isModified(etag, lastModified.getTime())) {
            throw new NotModified();
        }
        response.cacheFor(etag, "3h", lastModified.getTime());
        renderText("Learn to use etags, dumbass!");
    }
}
As you can see in the controller, the class to calculate ETags is injected into the controller. This is done on startup with a small job as shown in the following code:
@OnApplicationStart
public class InjectionJob extends Job implements BeanSource {

    private Map<Class, Object> clazzMap = new HashMap<Class, Object>();

    public void doJob() {
        clazzMap.put(EtagCacheCalculator.class, new EtagCacheCalculator());
        Injector.inject(this);
    }

    public <T> T getBeanOfType(Class<T> clazz) {
        return (T) clazzMap.get(clazz);
    }
}
The calculator itself is as simple as possible:
public class EtagCacheCalculator implements ControllerSupport {

    public String calculate(String str) {
        return String.valueOf(str.hashCode());
    }
}
The last piece needed is the template of the index() controller, which looks like this:
Current time is: ${date}

#{cache 'mainPage', for:'5s'}
Current time is: ${date}
#{/cache}
How it works…
Let’s check the functionality per controller call. The index() controller has no special treatment inside the controller. The current date is put into the template and that’s it. However, the caching logic is in the template here because not the whole, but only a part of the returned data should be cached, and for that a #{cache} tag is used. The tag requires two arguments to be passed. The for parameter allows you to set the expiry of the cached content, while the first parameter defines the key used inside the cache. This allows pretty interesting things. Whenever you are in a page where something is exclusively rendered for a user (like
his portal entry page), you could cache it with a key, which includes the user name or the session ID, like this:
#{cache 'home-' + connectedUser.email, for:'15min'}
    ${user.name}
#{/cache}
This kind of caching is completely transparent to the user, as it exclusively happens on the server side. The same applies for the indexCacheFor() controller . Here, the whole page gets cached instead of parts inside the template. This is a pretty good fit for non personalized, high performance delivery of pages, which often are only a very small portion of your application. However, you already have to think about caching before. If you do a time consuming JPA calculation, and then reuse the cache result in the template, you have still wasted CPU cycles and just saved some rendering time.
The third controller call, proxyCache(), is actually the most simple of all. It just sets the proxy expiry header called Cache-Control. It is optional to set this in your code, because your Play is configured to set it as well when the http.cacheControl parameter in your application.conf is set. Be aware that this works only in production, and not in development mode.
The most complex controller is the last one. The first action is to find out the last modified date of the data you want to return. In this case it is 24 hours ago. Then the ETag needs to be created somehow. In this case, the calculator gets a String passed. In a real-world application you would more likely pass the entity and the service would extract some properties of it, which are used to calculate the ETag by using a pretty-much collision-safe hash algorithm. After both values have been calculated, you can check in the request whether the client needs to get new data or may use the old data. This is what happens in the request.isModified() method.
If the client either did not send all required headers or an older timestamp was used, real data is returned; in this case, a simple string advising you to use an ETag the next time. Furthermore, the calculated ETag and a maximum expiry time are also added to the response via response.cacheFor().
A last specialty in the etagCache() controller is the use of the EtagCacheCalculator. The implementation does not matter in this case, except that it must implement the ControllerSupport interface. However, the initialization of the injected class is still worth a mention. If you take a look at the InjectionJob class , you will see the creation of the class in the doJob() method on startup, where it is put into a local map. Also, the Injector.inject() call does the magic of injecting the EtagCacheCalculator instance into the controllers. As a result of implementing the BeanSource interface, the getBeanOfType() method tries to get the corresponding class out of the map. The map actually should ensure that only one instance of this class exists.
There’s more…
Caching is deeply integrated into the Play framework as it is built with the HTTP protocol in mind. If you want to find out more about it, you will have to examine core classes of the framework.
More information in the ActionInvoker
If you want to know more details about how the @CacheFor annotation works in Play, you should take a look at the ActionInvoker class inside of it.
Be thoughtful with ETag calculation
ETag calculation is costly, especially if you are calculating more than the last-modified timestamp. You should think about performance here. Perhaps it would be useful to calculate the ETag after saving the entity and to store it directly at the entity in the database. It is useful to make some tests if you are using the ETag to ensure high performance. In case you want to know more about ETag functionality, you should read RFC 2616.
You can also disable the creation of ETags totally, if you set http.useETag=false in your application.conf.
Use a plugin instead of a job
The job that implements the BeanSource interface is not a very clean solution to the problem of calling Injector.inject() on start up of an application. It would be better to use a plugin in this case.
See also
The cache in Play is quite versatile and should be used as such. We will see more about it in all the recipes in this chapter. However, none of this will be implemented as a module, as it should be. This will be shown in Chapter 6, Practical Module Examples. | http://www.javabeat.net/using-controllers-in-play-framework/3/ | CC-MAIN-2014-42 | refinedweb | 3,491 | 54.63 |
class B { public: B() { // Object of class C is not created yet, so vtable points to B's vtable // But virtual functions don't work in constructors, so instead of // direct call we call non virtual method non_virtual(); } void non_virtual() { // This function does not know that it is called from constructor // so it makes virtual call and, as far as B's vtable is active... virt(); // pure virtual method called // terminate called without an active exception } virtual void virt() = 0; }; class C : public B{ public: virtual void virt() {} }; int main(int argc, char** argv) { C c; return 0; }
void __cxa_pure_virtual(void) { // We might want to write some diagnostics to uart in this case std::terminate(); } void __cxa_deleted_virtual(void) { // We might want to write some diagnostics to uart in this case std::terminate(); }
int counter(int start) { static int cnt = start; return ++cnt; }
static <type> guard; if (!guard.first_byte) { if (__cxa_guard_acquire (&guard)) { bool flag = false; try { // Do initialization. flag = true; __cxa_guard_release (&guard); // Register variable for destruction at end of program. } catch { if (!flag) __cxa_guard_abort (&guard); } } }
/* The generic C++ ABI says 64-bit (long long). The EABI says 32-bit. */ static tree arm_cxx_guard_type (void) { return TARGET_AAPCS_BASED ? integer_type_node : long_long_integer_type_node; }
namespace { // guard is an integer type big enough to hold flag and a mutex. // By default gcc uses long long int and avr ABI does not change it // So we have 32 or 64 bits available. Actually, we need 16. inline char& flag_part(__guard *g) { return *(reinterpret_cast<char*>(g)); } inline uint8_t& sreg_part(__guard *g) { return *(reinterpret_cast<uint8_t*>(g) + sizeof(char)); } } int __cxa_guard_acquire(__guard *g) { uint8_t oldSREG = SREG; cli(); // Initialization of static variable has to be done with blocked interrupts // because if this function is called from interrupt and sees that somebody // else is already doing initialization it MUST wait until initializations // is complete. That's impossible. // If you don't want this overhead compile with -fno-threadsafe-statics if (flag_part(g)) { SREG = oldSREG; return false; } else { sreg_part(g) = oldSREG; return true; } } void __cxa_guard_release (__guard *g) { flag_part(g) = 1; SREG = sreg_part(g); } void __cxa_guard_abort (__guard *g) { SREG = sreg_part(g); }
void* operator new(std::size_t numBytes) throw(std::bad_alloc): allocates a block of numBytes size. On failure it throws a std::bad_alloc exception.
void* operator new(std::size_t numBytes, const std::nothrow_t&) throw(): allocates a block of numBytes size. On failure it returns nullptr.
inline void* operator new(std::size_t, void* ptr) throw() { return ptr; }: placement new, which places the object where it is told to. Mostly used for implementing containers.
void* operator new(std::size_t numBytes) throw(std::bad_alloc), which occasionally returns 0 on failure. This leads to undefined behaviour because nobody checks returned value.
There are void* operator delete(std::size_t numBytes)and
void* operator delete[](std::size_t numBytes). You can overload them for other arguments, but you can't call these overloads because C++ does not have syntax for that. There is only one case when these overloads can be called. Imagine that during object creation in dynamic memory operator new has successfully allocated memory, but constructor of the object has thrown an exception. Your code hasn't received the pointer yet, so it can't free memory. So compiler has to call delete itself. But what would happen if memory has been "allocated" with placement new? You can't free it with regular delete. So in this case compiler calls overloaded version of delete with the same arguments as new has been called. That's why standard library provides three versions of operator delete and three versions of operator delete[]. | http://kibergus.su/en/comment/reply/92 | CC-MAIN-2018-17 | refinedweb | 581 | 50.97 |
Introduction:.
Populating the GridView Control:
The first task is to populate the GridView control. We will be using the LINQ to SQL Classes to populate the GridView but you can use any data container that you like.
private void BindData() { NorthwindDataContext northwind = new NorthwindDataContext(); gvReport.DataSource = GetProducts(); gvReport.DataBind(); }
// since the product list is long I am only selecting three products private List<Product> GetProducts() { NorthwindDataContext northwind = new NorthwindDataContext(); return (from p in northwind.Products select p).Take(3).ToList<Product>(); }
The database is the Northwind database and we are using the Products table of the database. The GetProducts method returns the top three products from the Products table (You can return all the rows it does not really matter).
Here is the ASPX part of the code:
<asp:GridView <Columns> <asp:TemplateField <ItemTemplate> <%# Eval("ProductName") %> <asp:TextBox </ItemTemplate> </asp:TemplateField> <asp:TemplateField <ItemTemplate> <asp:DropDownList </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView>
The first column displays the “ProductName”. If the ProductName is not available then a TextBox is created which is used to enter a new ProductName.
The GetCategories method is used to populate the DropDownList in the second column of the GridView control. Here is the implementation of the GetCategories method.
protected List<Category> GetCategories() { NorthwindDataContext northwind = new NorthwindDataContext(); return northwind.Categories.ToList<Category>(); }
Adding New Rows to the GridView Control:
Now, let’s see how to add new rows to the GridView control. The rows are added using the “Add” Button control. Here is the implementation of the add button click.
// adds the new row protected void Button1_Click(object sender, EventArgs e) { Count += 1;
var list = GetProducts(); // add empty elements at the end of the list list.AddRange(new Product[Count]); gvReport.DataSource = list; gvReport.DataBind(); }
Let’s first talk about how we are going to add empty rows to the GridView control. Each time a button is clicked the postback is triggered. So, we need a way to know how many empty rows have to be created. We will use ViewState to store the number of rows that have to be created and then add the rows in the product list as empty products.
The Count property in the button click code is used to store the number of empty rows to be created. Here is the implementation of the Count property.
public int Count { get { if (ViewState["Count"] == null) return 0;
return (int) ViewState["Count"]; } set { ViewState["Count"] = value; } }
The list.AddRange(new Product[Count]); line is used to append the rows to the product list.
The effect is shown in the GIF Animation below:
I have also used UpdatePanel to eliminate the server postback.
I hope you liked the article, happy coding! | http://www.gridviewguy.com/Articles/374_Adding_Multiple_Rows_in_the_GridView_Control.aspx | crawl-001 | refinedweb | 447 | 57.06 |
Developers encourage VB6ers to delve into .NET code (whether VB.NET or C#) and take the plunge into the latest languages. The move isn't trivial, but it'll be worth they effort..
Make the Leap to .NET
As a programmer with 23 years of experience and a longtime VB developer (since VB 2.0), I must take exception to Jeff Jones' recent letter, where once again the flames of the language wars were fanned [Letters to the Editor, "Discouraging Moves From VB6 to C#," October 2005]. It has been my personal experience and that of the team I work with that while moving from VB6 to C# isn't a trivial experience, it has more to do with learning the namespaces and the IDE than languages. If you're a good programmer, then the language you use is incidental. I've written effective software in Cobol, C, Clipper, and, of course, VB. Of these, VB allows me to be most productive, but I've always hated the necessity of having my hand held by the language and needing to always delve into the Win32 API to do things that VB alone could not.
When I began looking at .NET, I found that the years of VB habits caused me to trip up over syntax that was similar but not the same. It got so frustrating that I turned to C# so that I could focus exclusively on the namespaces and the IDE. Far from "ancient," I find C# to be an elegant language, devoid of needless "dongles" that hide what I need to know to do my job. I much prefer "{" to the more verbose "End If." In fact, I much prefer the terseness of the syntax and the fact that it forces me to delve into the namespaces to do what I need to do.
And as far as RAD goes, I find C# every bit as quick to develop with as VB6. I recently wrote a Windows service that watches a folder tree on a production server awaiting scanned documents, wraps them in an e-mail message, and mails them to the recipient. I went from concept to development to testing to production in 30 hoursthat's pretty RAD. It saved manpower by automating a manual processthat's money to my employers.
So use what you like, but I would encourage VB6 coders (especially advanced ones) to look at C#. Our team loves it.
Larry W. Seals, Mebane, N.C.
I've been programming in Visual Basic since version 1.0 Professional for DOS. I've also managed to squeeze in self-taught .NET applications since it was in beta. I've just finished reading Jeff Jones' letter and feel compelled to add a few thoughts to the mix.
I peruse message boards and articles looking for that killer snippet of code and am constantly overwhelmed with two requests: Can you convert that to VB for me? Can you convert that to C# for me?
I don't mean to sound arrogant, but if these programmers would dabble for about 15 minutes, they would realize that the languages are nearly identical. I've had my share of converting "End While" to "}" and my ";" key is now all rubbed off, but I can confidently say I am fluent in both C# and VB.
I've also noticed that C# jobs are posting higher pay rates. Thus, I've now changed my focus to being a C# developer. Let's face it, we go where the money goes. (It's not like the jobs are different, anyway.)
I would love to write more, but I'm busy converting some VB to C# and back again to pass the time, because I can.
Christopher Schipper, Roseville, Calif.
Correction
In the article, "Editors Choice Awards Inspire and Innovate" [Visual Studio Magazine: Buyers Guide & Product Directory 2005], we mistakenly used the term MB instead of GB in the portion of the article on SQL Server 2005 Express Edition. The effected sentence should read, "SQL Server 2005 Express Edition also doubles the maximum database size it can work with from 2 GB to 4 GB, and there is no limit to the number of databases per instance." Eds.
About the Author
This story was written or compiled based on feedback from the readers of Visual Studio Magazine.
Printable Format
> More TechLibrary
I agree to this site's Privacy Policy. | https://visualstudiomagazine.com/articles/2005/11/01/make-the-leap-to-net.aspx | CC-MAIN-2018-51 | refinedweb | 736 | 71.65 |
Sends the specified, existing draft to the recipients in the
To,
Cc, and
Bcc headers.
Try it now or see an example.
This method supports an /upload URI and accepts uploaded media with the following characteristics:
- Maximum file size: 35MB
- Accepted Media MIME types:
message/rfc822:
For more information, see the authentication and authorization page.
Request body
In the request body, supply a Users.drafts resource with the following properties as the metadata. For more information, see the document on media upload.
Response
If successful, this method returns a response body with the following structure:
{ "id": string, "threadId": string, "labelIds": [ string ], "snippet": string, "historyId": unsigned long, "internalDate": long, "payload": { "partId": string, "mimeType": string, "filename": string, "headers": [ { "name": string, "value": string } ], "body": users.messages.attachments Resource, "parts": [ (MessagePart) ] }, "sizeEstimate": integer, "raw": bytes }
Examples
Note: The code examples available for this method do not represent all supported programming languages (see the client libraries page for a list of supported languages).
Java
Uses the Java client library.
import com.google.api.services.gmail.Gmail; import com.google.api.services.gmail.model.Draft; import com.google.api.services.gmail.model.Message; import java.io.IOException; // ... public class MyClass { // ... /** * Send an existing draft to its set recipients. * * @param service Authorized Gmail API instance. * @param userId User's email address. The special value "me" * can be used to indicate the authenticated user. * @param draftId ID of Draft to be sent. * @throws java.io.IOException */ public static void sendDraft(Gmail service, String userId, String draftId) throws IOException { Draft draft = new Draft(); draft.setId(draftId); // To update the Draft before sending, set a new Message on the Draft before sending. Message message = service.users().drafts().send(userId, draft).execute(); System.out.println("Draft with ID: " + draftId + " sent successfully."); System.out.println("Draft sent as Message with ID: " + message.getId()); } // ... }
Try it!
Note: APIs Explorer currently supports metadata requests only.
Use the APIs Explorer below to call this method on live data and see the response. | https://developers.google.cn/gmail/api/v1/reference/users/drafts/send | CC-MAIN-2020-10 | refinedweb | 327 | 51.44 |
Help with Remote Data
Steve Dyke
Ranch Hand
Posts: 2038
1
posted 13 years ago
I am trying to develope a web application. I need to get and post data to several AS400 files. I have created a connection class that works only on a specific file. How can I create a connection class that stores the connection to a variable. Then use this variable in classes that have SQL strings that interact with the AS400 files.
This is my connection class:
package com.ibm.drawinginquiry.rational; import java.sql.*; import com.ibm.as400.access.*; public class LogOnConnection{ public LogOnConnection(){ AS400JDBCDataSource datasource = new AS400JDBCDataSource("gvas400"); datasource.setUser("userid"); datasource.setPassword("password"); try { Connection connection = datasource.getConnection(); } catch (SQLException se) { System.err.println("Exception creating the database connection: "+se); } } }
I use the LogOnConnection conn = new LogOnConnection(); in a
Servlet
.
I need to be able to use the connection variable in the following:
package com.ibm.drawinginquiry.rational; import java.sql.*; import com.ibm.as400.access.*; public class LogOnDataClass { LogOnDataClass(){ try{ String userfname = null; String userlname = null; String sql = "SELECT fname, lname FROM webprddt6.appusrprf1 WHERE usrid = 'SDE'"; Statement sment = conn.createStatement(); ResultSet rs = sment.executeQuery(sql); if ( rs.next() ) { userfname = rs.getString("fname"); /* the column index can also be used */ userlname = rs.getString("lname"); } else { } rs.close(); sment.close(); } catch ( SQLException se ) { // System.err.println("Exception performing query: "+se); } } }
However, I get a connot resolve on conn
michael warren
Ranch Hand
Posts: 50
posted 13 years ago
not sure if this is more of question for the servlet group ?
Anyway, I think you could use the init method of servlets, something like the following (and use the destroy method to release any resources).
Connection connection; public void init(){ AS400JDBCDataSource datasource = new AS400JDBCDataSource("gvas400"); datasource.setUser("userid"); datasource.setPassword("password"); try { connection = datasource.getConnection(); } catch (SQLException se) { System.err.println("Exception creating the database connection: "+se); } }
William Brogden
Author and all-around good cowpoke
Posts: 13078
6
posted 13 years ago
Statement sment = conn.createStatement();
Where is conn defined? The following code:
Connection connection = datasource.getConnection();
creates a local variable which is valid only inside the try.
However, I get a connot resolve on conn
Why do people so frequently leave off critical information? Is that an error from the compiler or from the runtime?
Bill
And will you succeed? Yes you will indeed! (98 and 3/4 % guaranteed) - Seuss. tiny ad:
Building a Better World in your Backyard by Paul Wheaton and Shawn Klassen-Koop
reply
Bookmark Topic
Watch Topic
New Topic
Boost this thread!
Similar Threads
Connecting to Remote Data
How do I get a Subset of a ResultSet
Creating Cewolf Combined Charts
Dynamic JSP
Test User Input Sting to Object
More... | https://www.coderanch.com/t/382625/java/Remote-Data | CC-MAIN-2020-40 | refinedweb | 452 | 52.05 |
Does anyone know the header file to include for isdigit and isalpha?
Printable View
Does anyone know the header file to include for isdigit and isalpha?
ctype.h
What does isdigit do
Code:
# include<iostream>
using namespace std;
int main ()
{
char number = '1';
if (isdigit(number))
cout << "Yes " << endl;
return 0;
}
compile this
isdigit can be found in iostream as well...
Imagine in the above program if the character held in 'number' was an 'a'.
Would "Yes " still be outputted to the screen. Well no... 'a' isn't a digit. '1' is a digit.
Pretty straightforward...
Basically how it's done:
Or, even more compactly, you could make a macro:Or, even more compactly, you could make a macro:Code:
bool isdigit(char c)
{
if(c >= '0' && c <= '9')
return true;
return false;
}
That's all there is to itThat's all there is to itCode:
#define isdigit(c) ((c >= '0' && c <= '9') ? true : false) | http://cboard.cprogramming.com/cplusplus-programming/7363-header-file-isdigit-printable-thread.html | CC-MAIN-2014-52 | refinedweb | 155 | 70.23 |
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago
#6518 closed (invalid)
label_suffix missing
Description
Current documentation () reports that you can change the default colon (:) for labels using the label_suffix argument in the NewForms class.
However, looking at the source code and trying to run an example, I can not find where this label is supported in version 0.96.
I get this error:
init() got an unexpected keyword argument 'label_suffix'
Request Method: GET
Request URL:
Exception Type: TypeError
Exception Value: init() got an unexpected keyword argument 'label_suffix'
Exception Location: /usr/lib/python2.5/site-packages/django/newforms/fields.py in init, line 129
Python Executable: /usr/bin/python
Python Version: 2.5.1
Python Path: ['/home/bitcircle', '/gtk-2.0']
Either the documentation is wrong (describing a feature that doesn't exist) or the argument has been removed by mistake.
Attachments (0)
Change History (10)
comment:1 Changed 6 years ago by tvrg
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to invalid
- Status changed from new to closed
comment:2 Changed 6 years ago by cbmeeks <cbmeeks@…>
Sorry, I thought 0.96 was the latest version (SVN release)
comment:3 Changed 6 years ago by tvrg
If you want the SVN release (subversion) you need to check the code out according to these instructions: (option 1)
It might not be a bad idea to use trunk if you want to use newforms.
comment:4 Changed 6 years ago by cbmeeks <cbmeeks@…>
That's what I am using now(Trunk)....or so I thought.
I'm at revision 7049.
hmm....
comment:5 Changed 6 years ago by tvrg
you said: "I can not find where this label is supported in version 0.96.", that's why i thought you were using that version.
I suppose you are putting a suffix in a field definition, not in a form definition? Could you paste your code to dpaste?
comment:6 Changed 6 years ago by cbmeeks <cbmeeks@…>
Well, I was messing with ModelForm to generate.
My forms.py
from django import newforms as forms class NewContactForm(forms.Form): firstname = forms.CharField( required=False, max_length=64, label='First Name, label_suffix='' )
Now, I am using a more direct approach and building my templates using
{{ form.firstname.errors }} <label for="firstname">First Name</label> {{ form.firstname }}<br /><br />
Which works for my needs.
comment:7 Changed 6 years ago by cbmeeks <cbmeeks@…>
Sorry, there should be an ' after "First Name"
comment:8 Changed 6 years ago by brosner
In the future please direct usage question to django-users mailing list or #django on freenode. label_suffix is not a parameter that a Field takes. It is passed in to the Form object.
comment:9 Changed 6 years ago by tvrg
brosner beat me to it.
comment:10 Changed 6 years ago by cbmeeks <cbmeeks@…>
Well, I thought this was a bug.
I have the latest trunk (I think...revision 7049) which I *thought* was version 0.96...but I guess it's slightly newer.
But if you look at you will see label_suffix is there. However, I see now that the docs for the "official" version 0.96, label_suffix is NOT there.
Once again, I thought 0.96 was the newest which used the same docs as but I guess it doesn't. My mistake. But when you go to you see "This document describes Django version 0.96. For current documentation, go here" which points to, well, you get the idea.
I will consult the IRC channel to resolve my confusion.
Have a look at the first line of the docs:
""" This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: 0.96, 0.95."""
if you look at the documentation for your version: you'll see that there is indeed no label suffix mentioned there | https://code.djangoproject.com/ticket/6518 | CC-MAIN-2014-23 | refinedweb | 650 | 65.01 |
Okay, I can feel Harith twitching with the desire to ask questions, so let’s start a grab-bag thread. Ask whatever you want. I’ll tackle a few of the questions that are general. Please make sure you read the most recent comment guidelines so you know to avoid “what’s up with my specific site?” or other questions that won’t apply to most people.
Today I’m actually away from work up in San Francisco with my wife, so I may let questions accumulate before I tackle them. I’m going to get cleaned up and prowl Union Square for a copy of Me and My Katamari, and I guess I’ll need a PSP to go with it. I’m sure later this week I’ll be asking how to run homebrew code on a PSP with firmware v2.6.
Examples of fine questions include:
–?
Update: Okay, enough questions for now. I’ll tackle a few of these, and I’ll try to do another grab-bag thread in a week or two. 🙂
181 Responses to Miscellaneous Monday: March 27, 2006
Hi Matt
You read my mind. Thanks for this great opprtunity 🙂
Here I go, Matt:
– On which datacenter to look for improvements of the supplemental issues?
– When should we expect to see first signs of improvements regarding to canonical issues?
– Did you take Emmy with you to San Francisco? 🙂
Thanks. Wish you and Mrs Cutts a nice trip.
Hi Matt!
–?
– Do you check your email (i@) ever?
— Stephen Deken.
Well, Matt, those are perfect questions, so I’ll ask them…
–?
Hi Matt,
Will the deployment of BigDaddy stabilise the rolling PR issues we are experiencing at present?
This datacentre works differently to all of the others. Noticed just a few hours ago.
Many of the oldest Supplemental results no longer appear on a site:domain.com search (but adding a keyword makes many of them re-appear) in that DC too.
Where does that DC fit into the scheme of things? Is it mainly made from newly spidered data?
For sites and pages expected to come out of supplemental status will they need to wait for an update of some kind to rank properly again? Or as the pages go live is this where they can expect to be?
Thank you.
Not so much a question…
GET A PSP! Hunt around for a pack with a larger (min. 1GB) memory stick, and preferably the power pack add-on. You could go to eBay for the memory though; I just got a 1 Gig card for under £25, including postage. The built-in browser on wi-fi is just great – geek utopia. War-walking/driving is fun 😉
Did you check out the guys all painted in silver doing the robot on milk crates in San Fran?
Hello Matt,
Is Google working to eliminate parked domains used solely for displaying ads from top results (even if they match the search term exactly)?
Matt,
Can you give us a general way of getting a good idea in front of Google ?
JO
Can you comment on the so-called Google Bowling phenomenon – the ability for someone else to clobber you in the SERPs?
The URL in my signature has an extensive writeup (I actually think my case is self-induced due to natural exuberance of folks wanting to help out) but I think this is of general interest rather than just about my site. HEY, how many times have you heard that assertion! 😉
As I wrote, this is arguably a next-generation anti-spam technique in the on-going arms race between search engines and SEOs. It does not appear that MSN/Yahoo have deployed any algorithmic approach like this yet. So while it may seem weird/unfair that a 3rd party could clobber your rankings, my two cents is that it is a pragmatic solution to a real-world problem, so they should be looking into it.
If we suspect this has happened, should we just file a re-inclusion request with information showing why we think this may have happened?
But I can envision it being very difficult for you guys to determine if the unnatural back links were self-generated by someone trying to game the search engines, or by nefarious black hats … as the linking signature basically looks the same.
You asked for questions. 😉
Why do you focus your attention so much on SEOs and not on webmasters who make actual quality websites?
How come some sites rank so highly when they are obviously stuffed full of spam (keyword stuffing etc) and have been reported?
How can a site rank so well for just about everything except for one key phrase? (Obviously penalized, but when is that lifted? All other pages’ KWs seem to rank fine.)
thx
Hello Matt,

The whole adult sector has been taken over by spam and 404s; no matter what search term I search for, there are at least 40 spam/redirect/404 pages for a search in adult.

Since October, and even more so since December, it seems as if allinanchor doesn’t matter anymore, because spam pages rank well in allinanchor and in regular SERPs even if the site has no links.

Will this problem be fixed, and is Google aware of the problem?
Thank you for doing this grab-bag Matt.
I am hoping that you can address Keyword Density and Keyword Stacking (using the same keyword over and over again on a page to increase density).
In content-rich sites there is a tendency to use the same Key Words and Phrases repetitively on a page so as not to dilute them. Other times, important words can have a lack of synonyms or alternate terms.
Both types of sites have the same Keyword Density. One site is content rich and user friendly, but the language might read a little unnatural. The other site reads more naturally, but there is little content to read.
The obvious examples that I am thinking of are shopping sites, though I’ve seen other examples of this as well, and it seems that very often less content is rewarded more strongly than rich content. I would think that the opposite would be true so I am wondering why this is and what guidance you can provide? Also, how do search engines use Keyword Density?
Thank you Matt.
I’m also very interested in the answer to RobinKay’s question – I’m seeing some of our pages come out of supplemental, but they’re ranked much lower than they used to be. I was thinking it could be because Google takes into account the ‘size’ of the website for the rankings, and since only a small fraction of the pages are back into the main index yet, it thinks the site is too small to be important.
Matt – without getting into the search algorithm details, could you confirm that I’m thinking in the right direction here? Thanks.
Does this mean there is an end in sight to the www/non-www canonical issue? No need anymore for re-directs in htaccess? Google now understands it’s the same site?
Also, year old (plus) cached data no longer features in Google’s index?
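(For readers unfamiliar with the htaccess workaround referenced above: the usual fix for the www/non-www canonical issue is a site-wide 301 redirect from one hostname to the other. A minimal sketch, assuming Apache with mod_rewrite enabled and example.com as a placeholder domain — the exact rules vary with your server configuration:)

```apache
# Send every request for the bare domain to the www host with a
# 301 (permanent) redirect, so search engines see one canonical hostname.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```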
I’ve noticed over the past few weeks that PR for pages other than the main page of my blog has dropped to 0, while the main page seems to be as popular (even more popular) than ever in search results. This is sort of a problem, as I’d like my archives to be the primary source and the main blog page is just for things that are recent.
It was just fine in the past, but now I see people hitting the main page for articles I’ve posted in the past. At that point they see a posting about a tree or something and don’t find the technical information they were searching for.
It seems like this behavior is a step backwards, at least in cases like mine where the blog is a device used to archive important info instead of a place for topical discussion.
Hi Matt,
Did you ever look into the issue with new web sites being penalized for launching on an expired domain (that may or may not have a penalty associated with it). The site I alluded to when I spoke to you still is in the pooper, but I know this isn’t supposed to be a “specific site question.” So I won’t mention the specific site 🙂
I’ll note 1 more thing:
I vaguely suspect that the fact that I tested some tier-2 and tier-3 PPC (and some 302s? by extension) early-on might have tripped a red flag, though I kind of doubt it. Just wondering if that could actually be a bad thing.
Hi Matt –
we saw a constant switching from supplemental to normal and back, but it seems that even when they are back, our pages are ranking for nothing; other webmasters report the same problem.

So could it be that, while trying to fix some issues, Google forgot about the ranking, thus making things worse and worse?
Regards from Germany
Werner G.
Matt,
I’ve reported this problem for a few times, but still nothing has changed.
On the Google SERPS page when a specific biography is searched for (or anything else which wikipedia is useful for), the ‘quicklink’ at the top like this:
Charlie Sheen — Carlos Irwin Estevez, whose stage name is Charlie Sheen, (born 3 September 1965) is an American …
According to Sheen
is shown. However, the link given is incorrect. The ‘According to Sheen’ part is correct, but not the actual link to click on.
Every Wikipedia link which has a space is the same way in the SERPs quicklinks. Google places ‘+’ in between the two words, when in fact it’s a %20 (space).

I think it would benefit every searcher if this was changed ASAP, as the way it is now is no help at all.
Matt,
A subfolder of my site I rented out to a friend as mentioned in your blog here
This part of my site looks to be banned, but it didn’t affect the rest of my site.
Q) Is it possible for Google to ban part of a site? Or is something else amiss?
Needless to say I’m not hosting content for this person anymore 🙂
I think you’re going to get A LOT of posts in the thread so I’m going to get my question in now!
3 of my sites have all lost thousands of indexed pages with the introduction of Big Daddy (not a supplemental issue). I know many other webmasters are having the same problem. Scratching my head a lot over this one. The only thing I can think of is that sometimes some of the pages are quite similar. I can’t help this, it’s done the best way from a usability point of view. They’ve been like this for 2 years.
My question is therefore, is Big Daddy penalising pages where the content is not different enough from page to page?
Matt,
What is the reason that there are still many cases of sites ranking for the YSM! ads running on their sites? Is this being properly addressed in an upcoming update, or will I still need to watch for when those sites rank above me and turn off the content network, so I don’t pay for the organic listings that my well-written ads seem to achieve?
Does Big Daddy have a different approach to ranking third level domains?
Are there improvements in the filtering of splogs?
one binary question 😉
Is Bigdaddy 100% deployed in every international Google version?
Matt-
Any specific kind of feedback you’re looking for now that Bigdaddy is almost fully deployed?
Matt,
Is the use of DMOZ content justification for an outright ban of a domain? By ban, I mean all pages removed from the index and info: returns
Sorry, no information is available for the URL
Find web pages from the site
Find web pages that contain the term “”
I feel oppressed (in the “other” comments thread), so why won’t you approve my comment? :p
Seriously (a stupid-looking question that is actually really serious):
When is Google expected to stop doing internet searches?
Matt,
– Is there any good reason why a Supplemental result would outrank a non-Supplemental result? I see this quite a lot.
– Is it possible to elaborate any more on the 27th Dec 2005 data refresh? The SERPs changed drastically for many on this date.
– Are Google aware that raw affiliate URLs and massive pages (I’ve seen 5Mb upwards) are outranking really relevant results?
Thanks for your time Matt.
Matt-
Should we submit a spam report to Google for a site that has multiple domain names with just a few design changes on each domain name?
The company name is different on each site as well.
OK, I won’t ask the RK questions since I assume you’ll answer them soon.

Here’s a good one: What is Google’s recommended course of action for a site that is copying your content?
I just had somebody steal my layout, javascript, html, and text character for character (he only today removed my email address from his FAQ).
What is Google’s recommended course of action in this case?
Do the vote buttons on Google Toolbar really do anything?
Matt,
Thanks for your hard work at Google.
My question is about pages lost in the Google index. Before big daddy my forums had around 950,000 pages indexed. After big daddy, we are now only showing around 150 pages indexed. Myself and other forums have taken a huge hit in traffic since big daddy because of this. Googlebot previously had indexed the site so well that we were able to use Google site search, which is now of no use to us since we only have a few hundred pages indexed.
Here is an excellent post on webmasterworld about the big daddy and forums issue:
Any help here? Are you guys penalizing forums now?
Hi Matt,
What kind of traffic does your blog generate?
Can a re-direct link like this hurt the targeted site’s Google ranking…
htt*p://spammer-site.com/go.php?
I tried to make an AdWords campaign, and use one of my sites in it. It has a Peruvian TLD extension (.com.pe). Much to my dismay, the site wasn’t there at all. It didn’t appear in the site targeting tool, not even when searching for it by URL. I did a few more searches, and found out there wasn’t a single .com.pe domain featured there. I’ve already reported this to Google, and had a mail saying they would look into it…
My questions are:
1) Is this a bug or a limitation?
2.a) If it’s a bug, can you encourage or discuss it with an engineer to have it fixed?
2.b) If it’s a limitation, what is the reason or arguments behind it? And why is it not documented anywhere in the help, or support pages?
Thanks, regards!
@Luis: Huh, what are all of these?
Hi Matt,
For accessibility purposes, my site has ’skip navigation’ etc…to allow screen readers to get straight to the content. However, this breaks my CSS layout when implementing the stylesheet, so I have ‘hidden’ these accessibility links using display:none in the stylesheet.
In addition, again using stylesheets, I have a screen layout and print layout. The screen layout hides my full contact details whereas the print layout prints these at the top of the page.
Will Google regard this as hidden text and penalise my site?
Cheers,
Emma
P.S. Sorry for double posting – I posted this previously on “SEO Advice: Check your own site” but I guess it got lost in the noise…
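For readers facing the same CSS question, the pattern Emma describes looks roughly like this (the class names here are invented for illustration):

```css
/* Screen: hide the skip link and the full contact details */
.skip-nav,
.print-contact {
  display: none;
}

/* Print: show the contact details again at the top of the page */
@media print {
  .print-contact {
    display: block;
  }
}
```

The open question is whether an algorithm can tell this accessibility/print use of display:none apart from keyword-stuffing abuse of the same property.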
What is more addictive: Google or Katamari?
Matt explain what this means?
->
there is not even the word “articles” on this site
want more weird examples ?
strange, isn’t it? Webmasters are seeing “strange things” all the time, but they are scared to talk about it because the Google team will probably ban their site and continue to work with a smile on their face. These changes are killing our business, man!
please excuse my bad english
best regards: Ivan
– What’s the story on the Mozilla Googlebot? Is that what Bigdaddy sends out?
– Any new word on sites that were showing more supplemental results?
Please answer these two. I’m going on a month now of being partly out of Google and I’m going insane! Though I think Googlebot has revisited my entire site over the last week with both the Mozilla and non-Mozilla versions, yet none of it is making the index.
What is the meaning of life? And what is the question?
Although not a “Publicly Noted” feature of the results (and a little hacked) the RK feature is my question. I noticed that the RK turned to 0 a couple of days ago. For those asking “what is the RK feature, here is some information: and more currently about RK being set to 0. My questions are:
– Is the RK “hidden feature” turned off
– As speculated did the RK feature have anything to do with “live and current” pagerank
– Should we expect this RK feature to become “public” knowledge without the need of “hacking” a checksum
I completely understand why Google would turn off this “feature”, seeing as it isn’t an advertised feature and apparently was being abused. I’m just curious about its status.
Hi Matt,
A simple one, duplicate sites on different TLD’s – I see some taking up multiple top positions in the SERP’s
Bad for business bad for Google.
I understand there will be no penalty for these spammers (it’s deliberate) but when will it be fixed ?
It is a REAL problem where I am.
Thanks.
When will Big Daddy be completed, and when will the SERPs settle?
Which updates, and how many, may we expect in the future at Google?
We often heard that Big Daddy is an infrastructure update. Was Big Daddy also responsible for some changes in the SERPs in the last 3 months?
I have a question about the sandbox and the various “aging” algorithms. What’s the point?
SERPS don’t seem to have been improved by these new algorithms. New spam sites at the top of the SERPS have been replaced with old spam sites. Good news sites at the top of the SERPs have been replaced with good old sites.
Is Google implying that spammers have less patience than legitimate webmasters?
Is Google implying that you can’t make an awesome web site in 6 weeks?
On a personal level, these changes are good for me. I rank better, and there’s a bigger barrier of entry for my competition. Life is better for me, and life is also better for Google (explanation below) – but it isn’t any better for searchers.
Why is life better for Google? Well, new businesses used to pay SEOs and other webmasters to link to them in order to get MAD traffic. Now this isn’t as attractive as it used to be, which makes the second-best option, AdWords, more attractive – which in turn means more money for Google and for me. But again, not for searchers.
I (really) wanted to buy a Buffalo Technology TeraStation 1.0 TB. When I search Froogle for Buffalo, it now pushes me into Local Search with results for Buffalo, NY and Star-Kist Tuna – not quite what I’m looking for. Froogle should not automatically be used for local search when so many products have city names, zip codes, etc., IMHO.
Matt,
When filing a re-inclusion request, what are the procedures at Google? My site has no hidden text, doorway pages, cloaking, etc…
This is my story, but could apply to others as well:
1. Submitted a “reinclusion request”
2. Got automated response from Google
3. Waited a week or two
4. Replied to automated response to get a status update
5. Got a response from Google telling me to check the Webmaster guidelines
6. Responded to say I read the guidelines and site is clean and please pass case to Google engineers
7. Get reponse from Google saying my case was forwarded to the engineers
8. Stuck at step 8
Why was the site removed from the index in the first place?
How long does the reinclusion request take?
If there are no violation of the webmaster guidelines, how long does it take the site to be re-indexed?
How long until Googlebot returns to the site?
How long until a site shows back up in the SERPs?
Matt
Hello. Can Google ‘read’ images? If not, how long before it can?
With so many spammers fooling Google with hidden text (white on white, or the like), can / does Google read pixels to defeat this? The scripts that allow reading pixels / pictures are out there for us webmasters, but they are rudimentary at best and built for other uses. It certainly seems possible that Google could develop algos that at least read a background image and compare its collective pixel color with the text ‘on top’ of the background image. If that is possible, is it then possible to read an image that contains text – yes, read the text in an image? Again, it seems Google should be able to do this. After all, think of the image security code that we all verify to make comments here… the image is distorted slightly to prevent a script from reading it and hacking the comments. If so, does Google have the capability to read images in navigation bars? For some time we have been told by the SEO world that text is better for the search engines, and to use images sparingly. How far are the search engines from reading images and getting over this hurdle? Obviously, images make a much prettier website, but a pretty website does not rank well unless text is carefully integrated. If Google can get to the point of reading images, it sure would make for a better web and even help Google beat some spammers. Are there any such applications in the Google pipeline, or is this a ‘secret sauce’ question? Thanks
On the “no-follow” tag: is G still taking these links into account in its algo? I ask this since I have started to notice pharma spam requests full of links on our blog which include the “no-follow” tag. In other words, has G tried a bit of reverse psychology and included in its algo “if the link has a no-follow tag, trust the site, since it must be trustworthy because it has this tag”? Can you confirm that links that have the “no-follow” tag are not used in any way to determine SERPs?
Is this story true – ?
Relevance, Matt.
How do you measure relevance?
That’s all you really need to discuss from this point forward.
Thank you.
Matt,
There is a new spam trend in the UK financial SERPs.
Many companies are using a secure server hosting company that has a high Google PR to host a form page and then point zillions of links at it most appearing to be of the link farm variety.
The result is very high ranking in very competitive markets and the trend is growing fast. The company offering the service is who are offering a legitimate service.
Some users of the service are gaining top positions on major keywords using this method as they are unable to rank using their own domains.
Do you consider this spam? Having thought about it, I think it is if zillions of link-farm-style links are used to rank the page.
We hosted a page without links as a test to see if was just the PR of the host but the page did not rank so it is the crudy links that are ranking these pages.
Your opinion as “Spam Master General” would be interesting.
PS: Search for “Loans”, “Personal Loans”, Secured Loans” on and you will see what I mean in the first page of results.
I just wanted to thank you fine people at Google for finally removing those pesky scrape sites that like to use keywords in their code and then redirect that listing to questionable sites. It was getting quite bad there for a while for some of my client sites as some of their sites were getting hit relentlessly. As bad as some of those results were the ones that I found to be much worse were the ones that used my personal, full name as a search query with the resulting link being redirected to some gay porn sites. I didn’t so much mind the lesbian sites but Gay Porn… Come on – hold it, scratch that thought as I don’t like the visual it conjures up.
Thanks for the opportunity to fire a million questions at you, Matt.
And I’m going to fire off a very strange one…but rest assured I’m not the only person it affects.
With RSS gaining popularity and numerous sites syndicating other content (including Google itself), the potential for an incredibly large database full of repetitive information is huge. I’m sure big G is very aware of this potential problem, but by the same token I suspect most aren’t.
Side Note for those that don’t know how RSS works (if you know, don’t bother reading…it’s for the newbies):
RSS, or Really Simple Syndication, allows a webmaster from Site A to legally acquire syndicated content from Site B and format it in any manner (s)he sees fit, usually via an XML content feed. Think of it as the online version of newspaper article syndication. The code output from RSS is normal HTML.
The only clues as it pertains to content being fed are hyperlinks that lead away from the original content page and the mentioning of a source (if they do it properly) at the bottom.
For an example of how RSS feeds work, see .
End side note.
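To make the side note concrete, here is a toy sketch (the feed contents and URLs are made up) of how a consuming site might pull items out of a syndicated feed – the output is ordinary text ready to be re-rendered as HTML, which is exactly why duplicates are hard to spot:

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed, as "Site B" might publish it.
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Site B</title>
    <item>
      <title>An Article</title>
      <link>http://site-b.example/article</link>
      <description>Syndicated content.</description>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
for item in root.iter("item"):
    # "Site A" would re-render this however it sees fit.
    print(item.findtext("title"), "->", item.findtext("link"))
```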
Okay, back to the question:
How is the issue of duplicate content and unreasonably large SERPs with said duplicate content going to be dealt with, and is there going to be a way to do so without harming the authors of said content?
Or, for those who need a practical example, if Abhilash publishes an article that gets distributed across 100 sites, would he get full credit for the backlink from those 100 sites and yet only have 1 article appear in SERPs?
What happened on March 8th, 2006 in the Google SERPS?
I’ve seen, tucked away in dozens of threads, webmasters commenting how their sites either recovered or tanked in the SERPs on that date. One of my sites tanked severely on 3/8/06, going from 15,000 referrals per day to 250 per day from Google.
Surely there must be something specific that was changed on 3/8 to cause such a drastic decline in my site while also “recovering” other sites.
Thanks,
Kurt
Why do RSS feeds rank before content pages? I have seen this several times in the SERPs and IMHO it does not make sense.
Hi Matt
When I change a robots.txt to exclude more existing files from being crawled, how long does it take for them to be removed from the index? Perhaps the answer is a function of how often the site is crawled and its PR?
How is the issue of duplicate content dealt with for (valid) articles being spread across various sites?
Since reciprocal links and some directories were downgraded in value because of abuse, will article directories also be downgraded in time?
cheers
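On the robots.txt part of the question above: the rule itself takes effect as soon as a well-behaved crawler re-reads the file; the lag is in how fast already-indexed pages drop out. A quick sketch of how a crawler evaluates the rules (the paths and host are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt newly excluding /private/
rfp = RobotFileParser()
rfp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# On its next visit, the crawler checks each URL against the rules.
print(rfp.can_fetch("Googlebot", "http://example.com/private/page.html"))
print(rfp.can_fetch("Googlebot", "http://example.com/index.html"))
```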
My sitemap has about 1350 URLs in it. I use CoffeeCup Sitemapper to generate the sitemap in both HTML and XML formats and then submit the sitemap to Google. The bot visits my site at least once a day, every day, and downloads the sitemap.
When I do the site:mydomain.com query, it returns anywhere from 700-1000 pages as being in the index, right now with all the Big Daddy mess happening. Why does the site: command not return a figure closer to the actual number of pages? I would think that my site would have been fully crawled by now – it’s been around for 2+ years – but I cannot seem to get all the pages indexed. Am I missing something here?
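For reference – and this is only a sketch with made-up URLs, not CoffeeCup’s actual output – the XML sitemap format itself is tiny, which is a reminder that listing a URL only tells Google it exists; it doesn’t force the page into the index:

```python
import xml.etree.ElementTree as ET

# Hypothetical URL list; a real file would carry all ~1350 URLs.
urls = [
    "http://mydomain.example/",
    "http://mydomain.example/page1.html",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for u in urls:
    # Each URL becomes a <url><loc>...</loc></url> entry.
    ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = u

out = ET.tostring(urlset, encoding="unicode")
print(out)
```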
Matt,
I posted this back in October, but wanted to repost just in case you didn’t read those comments:. I believe that advice on how to handle this type of situation would be useful to many webmasters who find themselves in this predicament.
Thank you.
Thanks so far Matt. My questions:
– image replacement // hidden text
Which method of image replacement (for usability) does Google prefer? And, as Toby Adams asked: can Google “see” background pictures?
– redirection
Which redirect minimizes the loss of rankings in the SERPs?
– links
Is link context getting more important? What about footer links?
For me, future values should be: more content, fewer links.
Matt, Google still seems to have no handle on expired domains. Is there any progress being made in this area?
By the way, the Blogger profile still has all his blogs intact, and even the first profile I gave you a while back is still alive and doing well, although under a different profile number.
If Blogger won’t act on these expired blogs and Google is helpless with expired domains, what’s a webmaster to do… join the merry spam-go-round?
Spam in “other” languages.
Hi Matt,
When will Google consider taking a closer look at spam in other languages? I understand that this requires human resources, especially people who are capable of speaking/understanding the “other” language. But many spammers ranking well in the SERPs are so obvious (with thousands of doorway pages, redirects, etc.) that anyone can easily see it at first sight.
For example, a site (in Turkish) uses doorway pages that has a “Click to Enter Site” (in Turkish) in 30 pt font above the fold on a seemingly blank page, but when you scroll down, you see literally a thousand keywords stuffed with a 1px font, or a visibility:hidden div.
Hi Matt,
I’ve got a question that I’ve tried to research and always seem to get a different answer wherever I go.
When moving to a new domain name, Google recommends using 301 redirects. My questions are: do you lose PageRank/relevance? If so, how long does it normally take to get it back? Right now our current front page has a PageRank of 7, and we are pretty nervous about changing the domain name.
For those curious as to why we would change the domain name, we are going from a rather hard name to spell for most folks, to a domain name that better describes our business and is much easier to spell.
Much appreciated
Couple of questions
Using Digital Point I see rankings for phrases that I can’t see using a few different data centres manually, including the original Big Daddy one. Are there still going to be major differences in results for different locations (geo-targeting)?
Also, how does a major redesign from old HTML files to new PHP files affect the SERPs, even with 301s in place? I’ve seen a badly written web site be improved code-wise and content-wise – just the right balance of it all – and disappear from its original ranking of 4th. It’s a concern, with the amount of people wanting to migrate over to modern accessible standards and utilise modern methods such as PHP/ASP files.
Thanks .
What is Google’s opinion on well written semantic code versus nasty tag soup – does it care?
I’ve lost a lot of results from my site (totally clean) in Big Daddy. It seems to have spidered a lot of crud from my site and lost the meat. I.e. down from 30,000 results to 9,000. Just wondered if this was typical and if it will recover as BD settles in?
Hi Matt,
I see a lot of my ads show up on spam websites. It angers me whenever I see this because the clicks cost a lot of money.
I get AdSense pop-ups on my computer. They seem to be triggered by content that I’m browsing, i.e. surfing for travel spawns an AdSense pop-up with travel-related PPC ads.
Why does Google encourage the spam industry by supporting scum like this?
You fight spam with one hand and monetize it with the other.
Why not fight spam by Not monetizing it in the first place?
As a webmaster I can select domain names that I don’t want advertised on my website, i.e. direct competitors. So why, as an advertiser, do I not have the same privilege – i.e. the ability to select spam and parked domains on which I do not want my PPC ads to be shown?
Thanks,
I am told that links per page should be limited to 30 or 40 per page to improve search engine performance. I use around 150, and I have 100% positive feedback from my customers on doing so. So why such a limit, and just how much does ignoring it hurt me in the rankings?
Oh, and also: do you think you could come up with something better for after all this time?
“””Some fun stuff is here
You probably want this. Because, um, there’s nothing else here right now.””” – come on.
* Yes, I Python.
Hi Matt,
Just like you told me a couple of months ago, the Supplemental Googlebot (SG) got around to my site and things got sorted out. Thanks.
2 quick Qs.
1. Can you ask the team why they treat 410s and 404s the same? You really have to work hard at the .htaccess to get 410s, and to me they mean you really want to tell the search engines that the pages in question are kaput, gonzo, terminated, removed, and never to be seen again. Therefore, I would like Google and the other SEs to permanently remove them from the index.
2. If you are in San Fran and want to check out the Monterey Bay Aquarium, could you please write a short review? I’ve been thinking of visiting and wondering if it is worth the trip.
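On the 410 question above: for anyone else wrestling with this, the Apache directive is actually short (the path is a placeholder) – the hard part is only that most tutorials never mention it:

```apache
# Answer "410 Gone" instead of "404 Not Found" for a permanently removed page
Redirect gone /old-page.html
```

Whether the search engines then treat the 410 differently from a 404 is, of course, exactly the question.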
How long do 301 redirects from non-www to www take to spread across the DCs, please?
(been waiting 5 months – still only visible on one DC)
Thanks
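For anyone setting up the same thing, the usual non-www to www rule is a couple of lines in .htaccess (the hostname is a placeholder); how fast the 301 is honoured across the DCs is the part only Google can answer:

```apache
RewriteEngine On
# Send any request for the bare domain to the www host, permanently
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```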
Even Matt is afraid to use a redirect from to because Google might penalize his website and put it into supplemental hell.
Hey Matt,
When you gonna’ come up to Canada and do some serious pike and walleye fishing?
Too much stress is not good for you. You got to have some fun once in a while.
Like what, exactly? He’s got enough to do just handling the blog.
My question is similar to Little Guy’s
“Can a re-direct link like this hurt the targeted site’s Google ranking…
htt*p://spammer-site.com/go.php?”
I found, via copyscape, a porn-affiliated site that copied my site (which is not porn related) into another language and linked to my site this way. Anything to worry about?
Thanks
LOL! I didn’t read anything about Matt answering ALL these questions only some..”
That IS a good one indeed though!
Hey Matt:
I am seeing some wild swings on .com. However, everything is fairly stable and “nice” on this datacenter:
Should I be checking into the Cleveland Clinic or just chillin for a day or two?
Cheers,
Ted
I blog. Would it be possible to add a date range to queries? I might get 91,000,000 results, but the first 200 are 2-3 years old.
I would like to limit results to items no more than 6-12 months old.
Thanx…
When can we expect new updates to the Google PageRank and/or algorithms? It seems it’s been a long time since the last update, I assume because of the Big Daddy infrastructure overhaul, but now that it’s nearing completion, won’t we see an update sometime soon?
Thanks. Travis.
Almost forgot:
Gmail should have an option to download all the messages as XML/Text or something to backup.
Ta!
Hi Matt,
I have a question or two pertaining to Big Daddy. With many sites coming out of supplemental, there is still the burning question of whether my pages are ever going to be indexed like in the old SERPs.
I myself went from 40k pages to a mere 650 and just can’t seem to get anything resolved. Even if I place a link to a page on my home page it just will not be cached.
Also I know this is not on only my site. I check my links etc from hundreds and hundreds of sites that seem to have the same indexing issue.
If Google bases their search results on “votes” from one page to another, isn’t this non-indexing of so many once-indexed pages, link pages, etc. going to skew the results for the worse?
My total links on a link:site.com search took a tremendous dive, and I can only assume it is because so many pages that had links to my site are just not indexed in Big Daddy.
Even my own pages that have links to other resources, which Google calls “votes”, are no longer indexed. So any site that was getting a “vote” from my site is no longer getting that same “vote”. If it were just a few pages, OK, but it is thousands, if not millions. The forums are just FULL of webmasters asking where their pages are.
I guess my question is – are these pages ever coming back? Is this part of the new BD, to only index certain pages of a site?
Thanks, Joe
Can I shoot a few international Questions?
Since I’m on Tokyo time I missed your entry
1. Before, the updates of the data centers with double-byte text seemed to take a bit longer. Is there a difference in the update schedules for certain languages – for example, Japanese & English sites?
2. Since progress is moving towards international users using IDN Domains (Thank You Google for being one of the first to show IDN Domains natively) will the Domain Parking Program be extended to Japan, China, etc for the use of IDN Domains since the root of the domain is actually native searched keywords?
3. Any results on why IDN Domains don’t show pagerank.
4. Will any of you guys be attending SES in Tokyo this year?
Why does Google continuously juggle the number of sites that link to web sites on a daily basis with BD? It seems like a convenient way to increase or decrease any site’s rankings. In other words, many high-ranking sites show stability or a continuous increase in sites linking to them, but others fluctuate. Certainly there is natural change, but some seems manufactured by Google.
Why is it that when I search for [endangered species Africa] Google gives a whole section of results for “cookie recipes”?
Now look what you did Matt! At least all the questions are in one place though…hehe
Looking forward to understanding RK parameters!
Well, you said enough questions and this is not a question so you can’t delete me, you can’t delete me…uhm yeah I guess he deleted me. ;-(
Ohh, please tell me what is this bigdaddy??? Plz help me at immi1979@gmail.com or immi1979@hotmail.com
Hi Matt,
This is an adwords issue – I am at the end of a rope – an 8 month long rope – as a responsible marketing campaign I deploy SEO, very responsibly, and am trying DESPERATELY to do adwords but am not allowed.
It seems, for some reason, Google decided to suspend my account for some breach of terms? Man, not only is my industry straightforward, but in no way did we ever think we breached any terms. Anyway, our account has been suspended, and I emailed, well, 15 times and actually communicated with someone via live chat off the AdWords site (I have names if interested), both ‘live’ operators of course promising to escalate our issue – twice. No word, 8 months and counting. And still no indication of what we did wrong and, more importantly, what we need to do to reinstate our AdWords account.
Really, it’s like we are the best kid in class but are being punished for something… But are not being told what we are being punished for. And forget about punishment, what can we do to be ‘okay’ again.
Doesn’t sit right. In any country, regime, or democracy
Zee Mee:
You’re probably better off hitting up a web discussion thread than waiting for an answer from Matt.
Those are my two favourites, but others will have other equally valid suggestions, I’m sure.
If a company changes its name (i.e. goes from LLC to Inc.) and updates the name in the whois info, does this sandbox the site or kill its rankings? Don’t forget ICANN requests that everyone updates their whois info for errors.
Why do copyright infringers with multiple domains using the same stolen content replace the original content site in the Google SERPS?
Why does the old Magellan site still rank well for directory terms?
Why do Monster & New York Life Insurance rank for New York Marketing Company?
If a site sells office product A, and advertises to his real-life buyers on photo site B, then would he be penalized for unrelated category links?
Can we please have all the algorithm factors so we can make this easier on all of us?
🙂
Hi Mat,
I noticed Google rolled out their Google Base for real estate listings around the country. Great concept, but I can see a major duplication problem that could get out of hand. I am seeing real estate magazines, real estate agents, and real estate companies add the same real estate listings. I am even seeing some real estate agents loading the whole MLS with their IDX feeds from the board of Realtors. If you have too many of the same people adding the same listings, that is going to be a major mess. Maybe Google should make some guidelines so that only the actual listing agent or the owner of the property is able to load listings.
If you have any questions about how all this works in our industry feel free to contact me.
Aloha
Dear Matt:
Thank you for letting us take part a little in your work.
This question in a way relates to canonicalization and avoiding duplicate content, and I read that someone above, seeing it as spam, asked about it:
On sites directed to international audiences with the same (high quality) content in several languages is it better to do several TLDs like mydomain.com, mydomain.de, mydomain.fr, mydomain.eu and so on or do subdomains like en.mydomain.eu, de.mydomain.eu, fr.mydomain.eu or something else like mydomain.com/en, mydomain.com/de, mydomain.com/fr?
Au revoir!
What’s the difference between a paid link and advertising? If one were to offer to sell space on their site (or consider purchasing it on another), would it be a good idea to offer to add a NOFOLLOW tag so as to generate the traffic from the advertisement, but not have the appearance of artificial PR manipulation through purchasing of links?
Thanks for this venue, and I hope you still find it enjoyable enough to keep the lines of communication open.
Running a search engine is kind of like running a cutest-baby contest. Every parent (webmaster) thinks that their kid (web site) is the cutest (most relevant), and when they don’t win (turn up on page 1 of the SERPs) there must be something wrong with the judges (algo).
I don’t really know if these fit in this thread. I have mentioned them in the good-and-bad-about-Google comments, but perhaps they are more appropriate in the bag thread – although I guess Matt is more the Google search guy. But anyway, here I go.
1) Google Analytics: I just love it. It’s easy, it looks nice and it is accurate (if we forget about the two days that it was unreachable, slowing down all sites using GA). At our company we are currently using WebTrends – expensive software with support contracts and all that – and the reports generated just don’t look as good and intuitive as with GA. We would really like to switch to GA (we’ll make a home-brew solution for streaming-server log analyzing then)… but… GA is on ‘Mountain View time’, and it’s not easy to explain to management, when you present them with the GA Executive report, that they have to subtract 9 hours to make the graphs relevant for our European company/site.
2) Google Account: It’s great – one single account for all Google services, having your data available cross-service (perhaps an idea: make my Google Reader feeds available in Gmail?). But what I don’t understand is why Google AdSense uses a different account with other password restrictions? It’s really annoying when you switch from a Google site that uses the Google Account login to AdSense and back.
Just my two eurocents.
Simple one that could possibly be answered with one word – are you planning to visit/speak in the UK at all in the near future?
Hi Matt,
Would Google Analytics have some kind of influence on positioning a website (because Google would know searcher behaviour better)? This would help rank sites that have real interest for the consumer (time spent on the site, number of pages visited…).
Enjoy your trip (I would recommend the XBOX 360 !! 🙂
Nicolas from paris,
In addition to VJ’s comment:
Is Google working to eliminate parked domains used solely for displaying ads from top results (even if they match the search term exactly)?
——– mine:
What is Google’s policy in the adsense for domains?
I recently filed a protest via email that one of their clients is using a cybersquatting domain (wwwJobsAbroad.com).
GoAbroad.com owns and operates JobsAbroad.com and has been registered way back before this Cybersquatting domain registered theirs.
I was replied to by adsense-domains-trademark@google.com, and it said:
start quote adsense-domains-trademarks:
Because the domain is comprised of generic or descriptive terms, we have
decided to continue to provide services to this domain. If you wish to
take further action, please contact the domain owner directly.
end quote adsense-domains-trademarks
Can you help us in this regard? Thanks.
By the way, the protest email has been tagged #51434749.
Ismael Angelo A. Casimpan Jr.
GoAbroad.com Employee/Authorized Representative
Hi,
is there a way to set an expiry date for my classifieds site, because people still find expired (and so probably useless) listings? Is it okay to redirect to another page via “410 Gone”, or what’s the way to go?
What about the problem of directories and shopping-comparison spam overriding real pages?
If I search for a problem with, say, random lockups on my external FireWire drive, I want to find useful info about that drive, NOT 200 frikking shopping sites.
A published road map for changes, like Microsoft and Intel provide, would be useful so that we can pre-warn our clients – rather than just making changes.
How are the Big Daddy datacenters handling the renaming of domain1.com to domain2.com?
Hi Matt. As stated above, is Big Daddy properly recognizing the renamed domain1.com as domain2.com?
I’m just concerned about this issue since I read in several articles that in Google, when you choose to change the domain name of your currently listed domain from domain1.com to domain2.com, it’s subjected again to the sandbox, even though it has already been 301-redirected.
If this is not yet handled, I suggest that:
* Aside from the 301 redirection, there should be a special meta tag that would say “this was the old domain1.com and has just been renamed domain2.com”, probably like . This is so that other malicious people can’t exploit/pollute the legitimate intention of those good people.
Thanks,
Ismael
MC, how about Google taking a lead in educating surfers more – regarding negative matches, etc.?
I am very much one who prefers getting all relevant results and then being able to filter out the cr@p with negative matches, drilling down, etc. I don’t always expect Google to find the result on my first search and in first place – but I expect Google to be able to return all pages relevant to the topic. At the moment, perhaps due to over-sensitive filters, I just don’t feel that Google is returning these pages.
But I know that it is a delicate balancing act for the engine.
Lol – that is not an email address in my last post 😀
Recently Google Sitemaps removed its bot signature when validating websites, and this gets caught in my – fairly commonly used – bot rejecter in .htaccess.
This is how the bot shows up in the log:
64.233.172.2 – – [28/Mar/2006:12:03:51 +0200] “GET /google50f4dfb164fc9ca2.html HTTP/1.1” 403 1037 “-” “-”
64.233.172.2 – – [28/Mar/2006:12:03:52 +0200] “GET /GOOGLE50f4dfb164fc9ca2.html HTTP/1.1” 403 1037 “-” “-”
And this is the result in Sitemaps:
NOT VERIFIED
General HTTP error
Note the double “-” “-” – that triggers the 403 from the .htaccess. Why does any Google bot need to cloak itself like that? The code in my .htaccess that looks for and rejects bots that won’t identify themselves rejects a lot of harvesters and “irregular” bots, and was copied from a Webmasterworld thread, so chances are I’m not the only “script kiddie”. 🙂
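For readers wondering what such a rejecter looks like, here is a stripped-down sketch (not the exact Webmasterworld code) of a rule that 403s requests with an empty User-Agent and Referer – the “-” “-” in the log lines above means both headers were blank, which is why the verification fetch gets caught:

```apache
RewriteEngine On
# "-" in the access log means the header was empty
RewriteCond %{HTTP_USER_AGENT} ^$
RewriteCond %{HTTP_REFERER} ^$
RewriteRule .* - [F]
```

The workaround until Google restores a signature would be to exempt the verification file itself before this rule fires.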
A personal problem with google search ranking and possible penalization: this page was in the days around November 7th 2005 suddenly demoted from top ten in the SERPS to like 170 for a few days, and since then it’s been completely gone. I wouod like to file a reinclusion, but it seems peculiar to do that for a single page, and most importantly of all, I can’t see why it’s being penalized, so I don’t know what to do about it.
Does Google consider Denmark’s primary tourism icon as too sexy? Or do you only accept the Disney version? 🙂
I would _really_ appreciate a heads up on what’s happening here.
Hi Matt
I am seeing a lot of sites with “%09” (tab) and “%20″ (space) in front of the URL in Googles index. What can a site do that is indexed like this (since the URL will usually not work for a visitor and a 301-redirect is not possible)?
You can check with (ok, % is not interpreted) inurl:”%20www.”:
and
(+ the sites without using a)
Search + replace in the db? 🙂
Thanks
I like to repeat Evan’s question with the addition “and synonyms”:
How can a site rank so well for just about everything except for one key phrase? (obviously penalized, but when is that lifted, all other pages KWs and synonyms seem to rank fine…..
If the question can’t be answered, is it possible it to use a re inclusion request for a single key phrase?
The one thing that seems to be getting to people generally, is what are the post Big Daddy intentions? Fixes, spam issues, regeneration of ‘pure’ indices, supp. issues, PR and BL update, etc.
People are getting frustrated by the lack of information, although at the same time grateful for the snippets you provide, albeit often a little vague. A clear and concise ‘way forward’ for Google SERP’s would be appreciated by many.
Also if you are unable or unwilling to answer certain general questions, then to state so would alleviate webmasters anxieties and anticipation.
Everything mentioned with the best intentions for everyone, Matt 🙂
Matt when can we start expecting the new software and bigdaddy to handle 301 domain moves better? like moving from ww.old-domain.com to ?
My question … when are you going to answer all these questions? 😉
Hi Matt,
I have noticed all the supplmental problems *for me* occur with 301 redirect sites where I am re-directing anything to to – care to comment on this matt?
Is Google planning on expanding the Web API terms of use in the future? It was ahead of its time when it first came out, but Yahoo and MSN have far surpassed it in terms of use. They allow 5K and 10K queries per day per IP address. For academic researchers like myself, this is extremly useful. Google is stuck at 1000 queries per day regardless of the IP address. This makes using Google in my research extremely difficult.
Google has recently been clamping down on automated queries, blacklisting IP addresses for around 12 hours when a trigger is set off, so using the API is now our only option.
Thanks
Does Google have any plans to address the problem with sites that have been obviously penalized in the SERPS, and have no idea why?
here’s one for you:
Why do 90% of webmasters think if they don’t show up on the first page, that they’re being penalized? 🙂
Looks like Matt has enough fodder for 3 more years of blogging now.
Any word on the google UPDATE for the sites. It has been a while since Jagger.
Ryan, that is an extreme comment, and unfounded.
Most webmasters are happy with a balance throughout the SERP’s. If they find spam sites above them, then they are justified in their views. Meanwhile, I write in defence of your pick a figure and post it ‘90%’ statement.
Matt:
SPAM reports still having no effect. I’ve reported one site several times, and it is still contaminating the SERPS.
(1) Are there any special keywords that I need to use?
(2) Or is this the result of the new Google policy on notifying the webmaster of the infraction before removing it from the index?
Hey mat I have a question when is Google gonna start seeing the spam in the urls as it is getting to be a huge problem as the example below
It is getting out of hand and will get worse Optimum_Nutrition_Pro_Complex_4_4lbs_Chocolate Optimum_Nutrition_Pro_Complex_4_4lbs_Chocolate
I did a search for optimum nutrition pro complex and this selling site with top 2 shows up. Ok when is this gonna stop or can I assume I need to go to
Optimum_Nutrition_Pro_Complex_Optimum_Nutrition_Pro_Complex_Optimum_Nutrition_Pro_Complex_Chocolate.htm
and get my site up there as well. Just not in this industry it is getting so wide spread the url are longer than the description
What about split run A/B testing, using php redirects. Do google and others consider this spammy? If so, what else can you recommend for testing on page elements and their affect on conversion.
Matt,
Say I have an online store selling widgets in the US and have a .com domain. If I am successful, I may want to move into a new sales teritory, with say a .co.uk domain. I’m still selling the same widgets, but have only modified the page to reflect the currency changes.
Would I get hit by duplicate content filters and have pages removed? I would have:
mysite.com
mysite.co.uk
They would have duplicate content because they are advertising the same products, but just targeting a different audience.
Matt – I have some questions for you:
1) Why are there no blue foods in nature?
2) Who killed JFK?
3) Will Google Earth ever improve the aerial views of some of the less-famous areas of the UK?
That’s all I can think of right now, everyone seems to have already covered the interesting ones 🙂
Hi Matt,
I am looking forward for some info about the RK parameter from you. 🙂
What is it? Why was this interesting thing turned off … 😀
Blue robins’ eggs are food for some animals. Does that count?
Blue is a very rare color in nature and all foods that are blue are poisonous?
Another observation about spam filled adult serps:
I did a search earlier last week showing from 1-100 out of about 2 million results and all sites in the first 100 results where black hat redirects to dialers or 404’s.
Again for every adult search from 1-100 there is at least 40 black hat redirects and there are many searches that have 80-90% redirects.
It seems like Google’s Algorithm has gone backward since Jagger to now for adult serps.
Henry Elliss wrote > 1) Why are there no blue foods in nature?
Aaron Pratt answeres > Blue is a very rare color in nature and all foods that are blue are poisonous?
This is very interesting guys :-O
Robots.txt Supplemental?
Hi Matt,
Would using the robots.txt to block a URL that’s gone supplemental (and you don’t want in the Index at all) get rid of it once and for all??
Thanks!
Sarah
Ulysee – Why do you care about prOn links? In every sleazy money making area you will always find these examples, so what is your point?
Matt,
Seriously, How do you plan on picking which of these questions to answer? .. 🙂
Blog Spam. Its growing at a rapid rate and as a fellow blogger, I am sure you are as frustrated with it as I am. Can we get an idea of how high up in the priorities dealing with it is?
As a gammer, I am sure you have tried or are playing some form of MMO. Can I get an invite to your guild? 🙂
Poor Matt… you’ve had every question imaginable thrown at you, lol.
The most noted problem here, and one that relates to my websites, is the loss of indexed pages on Bigdaddy.
I’ve generally lost around half of my indexed pages, which isn’t good for anyone.
This is surely a big problem for google, so I’m interested to know if it’s an issue which is being worked on?
Aaron Pratt
Sleazy or not it’s the business I work in and I bet ya that it’s the most searched for area in Google!.
Everybody should care, if Google let’s one sector slip away then I think that it’s a slippery slope that should be watched for in other areas.
When it comes to bad serps in Google you better be objective and not just think of yourself. People’s businesses and lives were lost, that’s nothing to smirk at.
Matt – too bad there was so little interest in a grab bag post…hope you didn’t have any work plans this week.
Thanks for stepping up Ulysee and pointing out the huge amount of spam in the adult SERP’s these days. Adult terms are no doubt the most searched for segment on the web and always have been. However “sleazy” one might believe the business to be, sex is a very real part of the internet and it deserves to be treated as the billion dollar revenue stream that it is. The amount of redirects, spam and duplicate content that are showing in the top 30 results is certainly not going to help with search confindence. Makes it very difficult to continue being white hat as well.
Matt is the adult search segment important enough to Google to discuss?
Should I bother continuing to submit duplicate content, redirect and spam reports to Google? Please let me know if I am wasting my time.
Matt,
The serps on 64.233.187.104 are quite different than the other DCs. Is this old data without filters or a sign of things to come?
“Bad serps” LOL
Hi Matt,
I read about a new navigation feature that Google is using in Italy where there are pagerank bars in the upper left part of the search results pages on Google. So far I get nothing much from that, do you know anything about this new feature and why it’s being tested in Italy?
“Matt,
The serps on 64.233.187.104 are quite different than the other DCs. Is this old data without filters or a sign of things to come? ”
My rankings are amazing on that DC!
Probably without filters or something.
Matt,
There´s one thing I would love to see available in Google and I want to ask you if Google is willing to consider it.
Can you add this command: inhtml:
I would love to be able to search for html code and see how that ranks.
Curious if Google believes there is any benefit in being able to search inside the HTML code of pages.
Thanks,
>> Matt when can we start expecting the new software and bigdaddy to handle 301 domain moves better? like moving from ww.old-domain.com to ?
Have you looked at and at lately? Those are very different….
>>if Google let’s one sector slip away then I think that it’s a slippery slope that should be watched for in other areas.
Unless it isn’t a slip. What better way to research the “latest and greatest” of tactics than to allow them to be out there so they can be watched in action?
>>When it comes to bad serps in Google you better be objective and not just think of yourself.
Right. People should plan their lives and their futures sensibly and face the realities they have to live with in this world. Objectively speaking, there are some who will always think they deserve a free lunch but the world doesn’t always cooperate with that. Reality rules.No one can control the serps or do anything more than adapt to what is at any given moment.
>>People’s businesses and lives were lost, that’s nothing to smirk at.
True, it would be unkind to smirk. But when it comes to people’s businesses that their lives depend on, they’d better walk the safe ground and develop a business plan that’s more stable and secure than the equivalent of traversing a swamp with quicksand pits they can sink into.
The SERPs never have been a secure or stable thing, it would be unkind for people not to be told to build the road to their future on solid ground rather than on shifting sands – which is exactly what relying 100% on organic search is doing – no more secure than gambling.
People make their own choices, for better or worse. It’s called being a responsible adult.
Matt, you are such a tease 🙂 Is the rest of Google plex laughing with you?
I think all have missed your statement: “I’ll tackle a few of the questions that are general”
Ach! 140 comments?! Okay, no more questions for now. There will be future grab bag threads, I promise. 🙂
Good morning Matt
This grab bag thread reminds me of GoogleGuy situation last year. He was asked around 180 questions in a Questions for GoogleGuy thread on WMW 🙂
Once again. Thanks a bunch for this great opportunity.
Pick mine, pick mine pick mine Matt. 🙂
Mine is general.
Dear Matt,
What must a site do when it uses a lot of tags to categorize data? Should it convert all those tags to use rel=”nofollow” even if they were placed by the owner?
Thanks
Can you lock the WordPress blog post/thread dealies (I’m asking anyone in general who may have one since I don’t and therefore I don’t know)? Because now would be a real good time.
Matt, if you’re overwhelmed, look at it this way: if you sucked and were a total ass clown loser, no one would have asked you anything at all. I’d rather feel overwhelmed if I were in your boat. 🙂
Hi Matt,
Here are my questions:
– Session IDs really cause problems for googlebot? why some sites do not effect but some do?
– If doesn’t redirect (none www) domain.com to, could it be penalized? if not, why?
Thanks.
Matt, I think I’m starting to understand how google works, is it ok, or should I consult a doctor? 😉
Hey Matt
Ive been an avid reader of your blog since its inception. My question is overall BIGDADDY seems great, but I work in the ever competitive and spammy ‘casino’ / ‘gambling’ industry. Prior to BIGDADDY the results for the terms “online casino(s)” were great, but since BIGDADDY has rolled out they are full of spam and highly irrelevant.
Why is this? This makes these results seem highly spammy and irrelevant.
Cheers
Anne
I am a webmaster for an ecommerce site. I have done well in the organic
SERPS by creating a BIG link exchange. As my link pages have a good PR I get a lot of link exchange requests from other webmasters. A recent trend is for these link requests to offer a return link from a
different domain name. I think this is referred to as a triangular link
exchange.
Are triangular links “Evil” ?
Thank you Sir,
Steve
Hi,
I think that triangle linking is not that bad idea if you know that, or some how.. The main reason that its not evil is because its linking until that linking should not be FFA pages or a part of FFA linking strategy.
Just checking in to see if you still wanted that review of Spamalot from SES NY, Matt. Wasn’t sure how to track you down so I figured I’d try here.
Duane
Matt,
What happened on last 27th December ? It seemed to be more than just a “data refresh”, some sites have been seriously hit on that day. They appear to be back now, but what was it ?
Wow, going on 148 comments, and keep adding… 2 quick questions:
1 – Is this the most answered to post in your blogging history?
2 – Why does google show sites having 70k+ pages in their serps when in fact such site really has 5k+?
Cheers!
Sorry, this is not a question but an answer… to Luis Alberto’s first question. This is NOT the most commented post from Matt yet, the [url=]BMW reinclusion[/url] one reached 186 replies 🙂 (maybe you can find posts with even more comments)
You have one less question to handle, Matt 😉
Hello Matt,
Question about Sitemap.
If we check a well formed Sitemap file here :
It’s ok if we check only “Show warning”, but if we check with “Keep Going” or “Check as complete schema”, there is this error :
“Attempt to load a schema document from (source: command line) for no namespace, failed:
Not recognised as W3C XML Schema or RDDL: urlset ”
Can we have a google sitemap validator on google web site ?
Thanks
Degas
Explain option :
Show Warnings
display warning messages, e.g. about use of wildcards
Keep Going
continue schema-validation after finding errors
Check as complete schema
Normally XSV interprets its first input as a document to be validated, and the remaining inputs, if any, as schema documents for use in that validation. This means that if the only input is a schema document, XSV normally just validates that document against the Schema for Schema Documents (XMLSchema.xsd), but does not also check the Schema REC’s constraints on the corresponding schema. Ticking the “Check as complete schema” box causes XSV to treat all its inputs as schema documents, check them against the Schema for Schema Documents and check the Schema REC’s constraints on the corresponding schema.
What are supplemental pages? How do I tell which pages are supplemental? How do I move pages out of supplemental status?
hi matt,
thank you for the opportunity to pose questions! here are mine:
(1) why would not I rank for my own unique domain name? in both Y and MSN and until Jagger, I was able to rank for my own unique name. eg, drofoung.com – when searching drofoung (no TLD), I came up as #1. now, I see sites that mention my sites ranking way above me, while I rank at #50 (or so). is this a penalty, if so, what type? Im clueless here.
(2) this is somewhat related to the above: the same site contains unique content, but now the scrapers (the thieves) are ranking for strings from that content and my site does not. how does Google handle this? why should the thieves rank (and benefit) from my content?
many thanks,
sid
Okay, feel free to keep leaving comments, but I won’t be able to answer questions after this. I’ll have another grab bag thread sometime soon, I promise.
Hi Matt
Does Google care whether my sites DNS entry is an ANAME or CNAME? If so which does it prefer?
Thanks
Pablo
Cal I penalize a new site by bombing the site with links? aka, Google Bowling.
grrrrr..no more questions. 🙁
Hi Matt,
When will Google release Urchin v6?
On Google Analytics it says 2006, but could you be more precise? imminently, in a couple of months, during the summer, fall, winter?
Thank you
Regards
Leo
being a new guy to webhosting I am finding all this to be really interesting, my site like so many others is only a year old…. first we had jagger… and now the big daddy, where are sites have been up and down…. to me there is one big problem… a lack of real information on what we are doing wrong. people like me are not your so called SEO types, we are people who are intergrating the internet into our business, remember if we get it wrong we still have to pay our mortgage to pay, so lets do something for the little guy…we need to know if we are doing something wrong……
PS bad choise of name “big daddy” in my world its too close to big brother..
Matt,
I’m not sure if you will read down this far, but really hope you do.
I have a lovely old site which has done well in Google over the years (thank you). Yesterday though a bolt came out of the blue. It sank out of sight! After hectic hours of research I found the reason: what I call a ‘proxy page jack’ attack.
What I found was a site looking like this:
This was ahead of me on every term I usually get visitors on. It was some sort of proxy service, which Google had indexed my pages through (and other people’s I guess) in their frame. Their site had high PR and so on.
It looked like my site was now considered a duplicate of that copy!
I contacted and begged the proxy site owner and he was helpful. He put a 404 in htaccess to direct that one proxy return to a blank, and then put User-agent: * and Disallow: / in robots.txt. Following more begging, he submitted his proxy segment to your removal tool.
This morning, his proxy has now gone, but my site hasn’t recovered.
Is there a time lag? Or am I just doomed now?
Please please please could you advise me (I’ll add more ‘pleases’ if you want!). I don’t know what I can or should do from here. I feel I have just been hit by a quirk of fate.
Jane
Will Google every remove blog entries from the Main Serps and put in a Blog search function?
This would eliminate a whole lot of blog spam if blogs were treated as a different entitie that could not influence the main serps.
Matt,
This miscellaneous monday of yours was the start of a strange week. Our website (click my name) for some reason completele disappeared from the SERPs in Brazil. We´re a brazilian site in portuguese, with lots of information. All you always claim about what a website should do,.. we did it. adding unique content, we have our own writers for this, we were added to dmoz, we really try to make it a great website for our visitors and everything was fine. Until monday. Now we have no visitors from Google. (and we used to be found for over 20.000 different phrases per month in Google) We did nothing drastic to our site. We just keep adding pages to the site on a weekly basis. The strange thing is,.. for a couple of days we were seeing an increase in referals from Google, then it completely stopped. But I don’t understand why. I know you won’t reply to this nor email me… no need for that, I would just be happy if somebody would check the site and do what ever Google sees fit to do. I also sent a message to the webmaster help and google groups.
Thanks,
Hi Matt;
Nice joke 🙂 You had me going for a second but then I figured it out :-0
I’ve been trying to find a way to ask you a simple question but I can’t find a link saying “Ask Me a Question” 🙂
Last week our web hosting company disappeared off the map taking our money and wiping out all the server-side data.(evidently they subsequently claimed hackers had ‘dictionary’ bombed their servers) I had to quickly transfer our site to a new host and through a distrust of the poor firewalls used by uk hosting companies I moved the site to Yahoo. Problem. It is a .com site and yahoo’s server is the US. Result our site has disappeared completely off googleUK.
Met tag information or content seems to make little difference. I cannot use our .co.uk site mainly because all our expensive directory incoming links point to our .com site and trying to change some or all of them would be impossible.
Is there anything I can do?
Pages not cached by Google: Still getting some rankings. Even though Google isn’t caching those pages, does that mean Google doesn’t know anything about what’s on those pages except for the Title and Meta tags? Or does it mean Google knows what’s on those pages (such as keyword focused content, keywords in h1 tags, etc), but is just hiding the cached version from users because of some kind of “no cache request” (Example: Washingtonpost.com, and I don’t know how they’re requestion “no cache)
Thanks,
Paul.
I hope this is still the grab bag thread!
Is the a new major update comming this year considering Google bought a new algo from an Israeli student?
Hi Matt
I’ve noticed something very strange in the SERPS that I’ve not seen before. Have you ever seen something like this?
SERP Title
SERP Description
If you would like to see specifically what I’m talking about, you can see the Spin Palace result here:
I would be extremely interested in your feedback.
Best regards,
Tim
I’d like to know what is going on with Google????? My site has not been crawled in about a month, despite numerous updates!! It’s critical in my business to update frequently, otherwise, I fall in the rankings, due to enormous competition. I would also like to know, how a page rank can fall so drastically without any reason, and with same amount of traffic and links. I REFUSE to link farm, as I think that does not make a site more “important”. I have been highly ranked in Google with the same site for over 4 years, now all of a sudden, I am not getting crawled, and am starting to fall due to the other NON-relevant sites spamming the keywords that ARE relevant to my site.
Please answer this for me, as no one else at Google seem to give a rats A$$.
Thank you!!!!!
Can someone please tell me where the answers are ? or did Matt repond directly to the person who posted the questions.
Would creating a duplicate site that has robot exclusions for Googlebot and Google AdSense Robot get my primary site penalized? This would be a desperation move to try and solve an intractable problem with another search engine.
Hi Matt.
I have a question about the unethical black hat tactic known as google bowling where people are destroying the reputation of my site by polluting googles search engines with spam sites and linking to me on these sites thus dragging my site down and causing it to loose ranking on the pages they are attacking.
This has happened to me before and they have allready suceeded in wiping out one of my pages from the results, a week ago when there was a serps change i noticed hundreds more of these spam site have appeared and they are targeting another one of my page.
Is there anything i can do to defend myself against this? The bad guys tactic IS working and as i said they have allready destroyed one of my pages and they are working on destroying another…
I am continually reporting these sites as spam and when ever possible informing the hosting providers but theres to many sites and it feels like im fighting a battle i cant win. For a typical example of the googlebowlers spam sites i am reporting google search “fun/poker.htm site:.info”
Is there anything else i can do to defend against this kind of unethical practice?
Matt,
In video response session #5 you spoke about URL parameters and serving static HTML to Googlebot. Your comment about Google considering this to be cloaking needs more verification. A URL can seem static to Googlebot but serve the dynamic URL to a user while still being the exact same page. Would this not be considered cloaking? If I remove a number of parameters or assign Googlebot to a specific userid in the URL but redirect or add the parameters to a non-bot user-agent, would this be seen as cloaking?
Thanks for all the great responses so far.
Thank you,Matt! Awaiting your advise on next update report!
Big Daddy update did good on all my websites. Thanks G.
Hope bigdaddy update is eomplete i think so, what about jagar update, is it there still with any algo changes or google updates with normal algos.. please respond.
is 301 best for this? like moving from to | https://www.mattcutts.com/blog/miscellaneous-monday-march-27-2006/ | CC-MAIN-2016-44 | refinedweb | 13,226 | 71.34 |
signedimp provides signed imports for verified loading of Python modules. It is designed to complement platform code-signing schemes (e.g. signed executables), which may be able to verify the Python executable itself but not the code that is dynamically loaded at runtime.
It will mostly be useful for frozen Python applications, or other situations where code is not expected to change. It will be almost useless with a standard Python interpreter.
If you’re just after a black-box solution, you could try one of the following function calls to sign your app with a new randomly-generated key:
    signedimp.tools.sign_py2exe_app(path_to_app_dir)
    signedimp.tools.sign_py2app_bundle(path_to_app_dir)
    signedimp.tools.sign_cxfreeze_app(path_to_app_dir)
These functions modify a frozen Python application so that it verifies the integrity of its modules before they are loaded, using a one-time key generated just for that application.
But really, you should read on to understand exactly what’s going on. There are plenty of caveats to be had.
Enabling Signed Imports
To enable signed imports, you need to create a SignedImportManager with the appropriate cryptographic keys and install it into the import machinery:
    from signedimp import SignedImportManager, RSAKey
    key = RSAKey(modulus,pub_exponent)
    mgr = SignedImportManager([key])
    mgr.install()
From this point on, all requests to import a module will be checked against signed manifest files before being allowed to proceed. If a module cannot be verified then the import will fail.
Verification is performed in cooperation with the existing import machinery, using the optional loader method get_data(). It works with at least the default import machinery and the zipimport module; if you have custom import hooks that don't offer this method, or that don't conform to the standard file layout for python imports, they will not be usable with signedimp.
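To make the wrapping idea concrete, here is a minimal sketch of how a verifying manager might interpose on a loader via get_data(). The class and its behaviour are illustrative assumptions for this document, not the actual signedimp internals:

```python
import hashlib

class HashCheckingLoader:
    """Illustrative sketch: wrap a real loader (e.g. a zipimporter) and
    refuse to load a module whose bytes don't match an expected hash."""

    def __init__(self, loader, path, expected_md5):
        self.loader = loader            # the real loader being wrapped
        self.path = path                # path to the module's code file
        self.expected_md5 = expected_md5  # hex digest from the manifest

    def load_module(self, fullname):
        # Ask the real loader for the module's bytes via get_data(),
        # hash them, and only delegate the import if the hash matches.
        data = self.loader.get_data(self.path)
        digest = hashlib.md5(data).hexdigest()
        if digest != self.expected_md5:
            raise ImportError("integrity check failed for %s" % fullname)
        return self.loader.load_module(fullname)
```

Note that the real loader is likely to re-read the file when load_module() is finally called, which is exactly the race window discussed in the Caveats section below.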
Keys
Currently signedimp uses RSA keys for its digital signatures, along with the “Probabilistic Signature Scheme” padding mechanism. To generate a new key you will need PyCrypto installed, and to do the following:
    from signedimp.crypto.rsa import RSAKey
    key = RSAKey.generate()
    pubkey = key.get_public_key()
Store this key somewhere safe, you’ll need it to sign files. The simplest way is using the “save_to_file” method:
    with open("mykeyfile","wb") as f:
        key.save_to_file(f,"mypassword")
To retreive the key in e.g. your build scripts, do something like this:
with open("mykeyfile","rb") as f: key = RSAKey.load_from_file(f,getpass())
You’ll also need to embed the public key somewhere in your final executable so it’s available for verifying imports. The functions in signedimp.tools will do this for you - if you’re writing you own scheme you can either pickle it, or embed its repr() somewhere in your source code.
Manifests
To verify imports, each entry on sys.path must contain a manifest file, which contains a cryptographic hash for each module and is signed by one or more private keys. This file is called “signedimp-manifest.txt” and it will be requested from each import loader using the get_data() method - in practice this means that the file must exist in the root of each directory and each zipfile listed on sys.path.
The manifest is a simple text file. It begins with zero or more lines giving a key fingerprint followed by a signature using that key; these are separated from the hash data by a blank line. It then contains a hash type identifier and one line for each module hash. Here’s a short example:
    ----
    key1fingerprint b64-encoded-signature1
    key2fingerprint b64-encoded-signature2

    md5
    76f3f13442c26fd4f1c709c7b03c6b76 os.pyc
    f56dbc5ee6774e857a7ef07accdbd19b hashlib.pyc
    43b74fc5d2acb6b4e417f4feff06dd81 some/data/file.txt
    ----
The format of the fingerprint and signature depend on the types of key being used, and should be treated as ASCII blobs.
To create a manifest file you will need a key object that includes the private key data. You can then use the functions in the “tools” submodule:
    key = RSAKey(modulus,pub_exponent,priv_exponent)
    signedimp.tools.sign_directory("some/dir/on/sys/path",key)
    signedimp.tools.sign_zipfile("some/zipfile/on/sys/path.zip",key)
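For illustration, the per-file hash lines in such a manifest could be computed with a sketch like this (not the actual signedimp implementation; the helper name is made up):

```python
import hashlib
import os

def manifest_hash_lines(root, hashtype="md5"):
    """Illustrative sketch: emit the hash-data section of a manifest,
    i.e. the hash type identifier followed by one
    '<hexdigest> <relative/path>' line per file under `root`."""
    lines = [hashtype]
    for dirpath, dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            # Manifest entries use '/' separators regardless of platform.
            relpath = os.path.relpath(path, root).replace(os.sep, "/")
            with open(path, "rb") as f:
                digest = hashlib.new(hashtype, f.read()).hexdigest()
            lines.append("%s %s" % (digest, relpath))
    return lines
```

The real signing tools also compute the key fingerprints and signatures over this data, which this sketch omits.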
Bootstrapping
Clearly there is a serious bootstrapping issue when using this module - while we can verify imports once this module is loaded, how do we verify the import of this module itself? To be of any use, it must be incorporated as part of a signed executable. There are several options:
- include signedimp as a “frozen” module in the Python interpreter itself, by mucking with the PyImport_FrozenModules pointer.
- include signedimp in a zipfile appended to the executable, and put the executable itself as the first item on sys.path.
- use the signedimp.tools.get_bootstrap_code() function to obtain code that can be included verbatim in your startup script, and embed the startup script in the executable.
Since the bootstrapping code can’t perform any imports, everything (even the cryptographic primitives!) is implemented in pure Python by default. It is thus rather slow. If you’re able to securely bundle e.g. hashlib or PyCrypto in the executable itself, import them before installing the signed import manager so that it knows they are safe to use.
Of course, the first thing the import manager does once installed is try to import these modules and speed up imports for the rest of the process.
A word of caution - most freezer programs (e.g. py2exe or bbfreeze) execute their own startup scripts before running the user-supplied script, and these startup scripts often import common modules such as “os”. You’ll either need to hack the frozen exe to run the signedimp bootstrapping code first, or securely bundle these modules into the executable itself.
So far I’ve worked out the necessary incantations for signing py2exe, py2app and cxfreeze applications, and there are helper functions in “signedimp.tools” that will do it for you.
I don’t believe it’s possible to sign a bbfreeze application without patching bbfreeze itself. Since bbfreeze always sets sys.path to the library.zip and the application dir, there is no way to bundle the bootstrapping code into the executable itself.
Caveats
All of the usual crypto caveats apply here. I’m not a security expert. The system is only as safe as your private key, as the signature on the main python executable, and as the operating system it’s run on. In addition, there are some specific caveats for this module based on the way it works.
This module operates by wrapping the existing import machinery. To check the hash of a module, it asks the appropriate loader object for the code of that module, verifies the hash, then gives the loader the OK to import it. It’s quite likely that the loader will re-read the data from disk when loading the module, so there is a brief window in which it could be replaced by malicious code. I don’t see any way to avoid this short of replacing all the existing import machinery, which I’m not going to do.
As mentioned above, this module is useless if you load it from an untrusted source. You will need to sign your actual executable and you will need to somehow bundle some signedimp bootstrapping code into it. See the section on “bootstrapping” for more details.
You must also be careful not to import anything before you have installed the signed import manager. (One exception is the “sys” module, which should always be built into the executable itself and so safe to import.)
Finally, you may have noticated that I’m going against all sensible crypto advice and rolling my own scheme from basic primitives such as RSA and SHA1. It would be much better to depend on a third-party crypto library like keyczar, however:
- I want the verification code to be runnable as pure python without any third-party imports, to make it as easy to bootstrap as possible.
- I’ve copied the signature scheme directly from PKCS#1 and it’s broadly the same as that used by keyczar etc. This is a very simple and well understood signing protocol.
- The signing code is supposed to be run offline, in a controlled setting with controlled inputs, so the risk of e.g. timing attacks is small.
- The verifying code can’t leak any info about the private key because it simply doesn’t have any, so it can be as slow and sloppy and clunky as needed.
I am of course open to negotiation and expert advice on any of these points.
You have been warned.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/signedimp/0.3.1/ | CC-MAIN-2017-51 | refinedweb | 1,421 | 63.49 |
Checks if the entry pointed to by entry_to_check contains the role indicated by role_dn.
#include "slapi-plugin.h" int slapi_role_check(Slapi_Entry *entry_to_check, Slapi_DN *role_dn,int *present);
This function takes the following parameters:
The entry in which the presence of a role is to be checked.
The DN of the role for which to check.
Pointer to an integer where the result is placed. For results:
non 0 means present
0 means not present
This function returns one of the following values:
0 for success if role_dn is present in entry_to_check and present is set to non-zero. Otherwise it is 0.
non-zero (error condition) if the presence of the role is undetermined. | http://docs.oracle.com/cd/E19528-01/820-2492/aaimk/index.html | CC-MAIN-2014-42 | refinedweb | 113 | 72.87 |
Banners are one of the most popular mobile advertisement formats. They don’t consume much space, as, for example, interstitial advertisements. And this allows developers to combine banners with other UI elements. Banners can be added to many app’s screens. If you read this article, you will find a method of how to add banners to your app, in such a way that banners will not overlap other UI elements. You need not change layout of XML and make much app’s code changes. Banner integration code is minimal. You can easily insert banners into ready app with only a few lines of code. Also, this method allows you to integrate banners in such a way that you can easily make advertisement free app version without complex code changes. The method described in the article is universal and can be used with different advertisement APIs. The article will be interesting for both novice and experienced developers. In order to understand the subject of the article, you do not need to have any in-depth knowledge. You just need to understand the basic concepts of the Android development. And experienced developers can find in it a ready-to-use solution that they can implement in their apps. But, advertisement service initialization, working with specific advertisement services and caching are not in scope of this article. To solve these issues, please refer to developer’s manual for your advertisement service.
An idea of this article was born when we had a situation with one of our Android apps. We had to add banners in several places, but we had to do so in a way that doesn't damage the look and feel of the app and not to overlap other UI elements with banners. We had written all app’s code, and we didn’t want to rewrite it. And because of this, we tried to make the banners integration as simple and correct as possible, and to not affect the work of the existing code. Another reason, because we needed to simplify the banners integration – the possibility to make release of paid version without ads. If we had integrated banners anywhere in the layout XML, it would greatly complicate the creation of ad-free version of UI.
To make it more clear about what I’m writing, please look at the following screen:
UI elements occupy the entire space of the screen. There is no empty space here. In this case, we can add a banner at the top or at the bottom. A variant of placing the banner at the bottom is more appropriate, because the banner will be far from other buttons, and user will not accidentally tap the banner trying to tap “Back” or “Done”. We need to place the banner below the photo GridView. And as the banner is downloaded over the network, it may not always be available. If the banner is not downloaded, it will be empty space at the bottom. And it will look ugly, as UI design defect. If we place the banner over GridView – it will overlap a part of photos create inconvenience for a user, which is also unacceptable.
GridView
To better understand it, you can install our free app BMEX. You can find it here. In BMEX, please, tap “Send” and then select “Photo” or “Video” and wait a bit until ads are loaded.
So, we formulate our task as: we need to make UI without additional empty space. And when the banner is loaded – dynamically add margin at the bottom and show the banner. On the other hand, we need to make the banner placement code as simple as possible, without any complex initializations. E.g. passing UI elements’ ids or ViewGroup references is not appropriate. Inserting banners into layout XML of every screen – is also not appropriate, because it requires severe changes. An ideal banner placement code must look like:
ViewGroup
Ads.showBottomBanner(activity);
Only one line of code, only one method call. Method gets reference to an Activity in which the banner will be placed. This code can be easily inserted into Activity’s onCreate method.
onCreate
In order to implement this, we need to get access to Activity’s ContentView. ContentView is a ViewGroup in which all Activity’s UI elements are contained. There is no ContentView direct access method in the Activity class. But, thanks to StackOverflow user nickes, we have found a solution. You can find it here. We need a reference to a Window in which the Activity resides. Window has DecorView, and DecorView contains the ContentView.
ContentView
Activity
Window
DecorView
So, we need to get Activity’s Window, then get DecorView, then get ContentView, then get a first child of ContentView. And change padding of this first child. The code can be found below:);
}
We have found a solution to dynamically add padding. And now we need to add the banner itself. Various ad services has different APIs. Some APIs has banner View that can be created and added to ViewGroup. But, some APIs have no access to banner’s View, and have only method that will show the banner. Let's consider both these variants.
View
Let’s call this view class name Banner. (Please, refer to your ad API developers’ manual for details about its name and how to instantiate it). First, we need to create Banner view:
Banner
final Banner banner = new Banner(activity);
Then, we setup event listener to receive banner loaded event (and, again, this is example code. For actual listener name and how to set it up, please your ad API developers’ guide):
banner);
}
});
When banner is loaded, we call setupContentViewPadding. It will dynamically add space at bottom.
setupContentViewPadding
Then, we add our banner to Window.
We will add it above ContentView with existing UI elements. Window has addContentView method for this:
addContentView
FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(
FrameLayout.LayoutParams.MATCH_PARENT,
height);// Utils.toDIP(activity, BANNER_HEIGHT));
layoutParams.gravity = Gravity.CENTER_HORIZONTAL | Gravity.BOTTOM;
activity.getWindow().addContentView(banner, layoutParams);
We don’t have banner view and we can’t create and add it explicitly. But, we have API that has methods, like showBanner.
showBanner
I’ll call ads service API class – AdAPI. You can replace it with your actual ads service API class name. In this case, the banner placement code will look like (and again, this is not real code. Please, refer your ad API developers' guide for details about class names and how to use them):
AdAPI
Ad banner = AdAPI.loadBanner();
banner.addListener(new AdListener() {
public void adLoaded() {
// add bottom padding, when banner is loaded.
setupContentViewPadding(activity, true, BANNER_HEIGHT);
}
});
When BANNER_HEIGHT is a constrant with banner height value;
Here are some issues. You need to know or setup explicitly the banner’s height through ad service administration interface. We had this problem, when we ran our app on 3.7 inch smartphone and 10.1 inch tablet. Banner sizes were different. On smartphone, the app looks fine, but on tablet, the banner was so big that it consumed too much UI space.
BANNER_HEIGHT
As you can see, banner is shown and it doesn’t overlap any UI elements. Space at the bottom is added dynamically.
This is what we required.
To see how it works, you can run our free app BMEX. You can find it here. You need to tap “Send”, then tap “Photo” or “Video”, and wait a bit until ads are loaded.
Summarizing all that is written above, I will describe how to integrate this into your app.
Ads class.
public class Ads {
// replace it with your actual value
final private static int BANNER_HEIGHT = 75;
public static void showBottomBanner(Activity activity) {
// replace with your actual ad API code
final Banner banner = new Banner(activity););
}
});
FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(
FrameLayout.LayoutParams.MATCH_PARENT,
height);// Utils.toDIP(activity, BANNER_HEIGHT));
layoutParams.gravity = Gravity.CENTER_HORIZONTAL | Gravity.BOTTOM;
activity.getWindow().addContentView(banner, layoutParams);
});
}
}
Banner placement can be made by Ads.showBottomBanner(this) call from Activity’s onCreate method.
Ads.showBottomBanner(this)
Just replace code in showBottomBanner method with your ad API calls.
showBottomBanner
In the article, I described a method of how to easily and correctly integrate banners into Android app. There are more banner placement types. For example: take the first screen in the article and place banner between photos, not at the bottom. I hope the article was useful for you. Please post your suggestions as comments. Thank you for your attention. I wish you success in development!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | https://www.codeproject.com/Articles/1005286/How-to-Add-Banners-into-Android-App-and-Not-Overla | CC-MAIN-2018-51 | refinedweb | 1,435 | 58.28 |
Activation Timer -- a Simple Task Scheduler
Overview
One of the most common problems in Win32 programming is dealing with a number of functions that must be executed each time the specified period expires. For example, you have three routines that must run:
A common solution involves several objects known as timers, which may seem fairly easy but cumbersome to use. This article introduces a simple yet powerful set of classes that greatly reduce the effort (and headache) required to handle multiple periodic jobs.
Algorithm
- For each appended task, create a new thread.
- Create a separate timer within every thread.
- Use these timers in a common way.
This may seem great, but only if your system has enough resources to handle as many threads and tasks you wish to run. It is better to use a single timer and open a new thread only when your background task is really about to start. The modified algorithm is:
- Create a timer.
- After any task is appended:
- Kill the timer.
- Calculate a minimal timeout value = Greatest Common Measure of all timeout values.
- Restart the timer.
- Each time the timer activates, check whether any tasks are up to run.
- If so, create a new thread(s) and run the task(s) in this thread(s).
This is nearly perfect, but one problem remains. If the task's execution period is less than the task's working time, a single task can start multiple threads; therefore, we must keep an eye on all of the threads to avoid resource leaks. In other words, we must implement some kind of garbage collection algorithm, which will track all thread objects and will close all unused handles.
The final algorithm is:
- Create a timer.
- After any task is appended:
- Kill the timer.
- Calculate a minimal timeout value = Greatest Common Measure of all timeout values.
- Restart the timer.
- Each time the timer activates, check whether any tasks are up to run.
- For each ready-to-run task:
- If a task is run for the first time, create a list of active threads, create and add a single thread object to this list.
- If the thread list is not empty, append a new thread object to the end of the list.
- Check all thread objects on the list, except of one recently appended, for completition of execution.
- If any thread is not active, release its thread handle.
Advantages of this approach:
- Minimal resource overhead.
- Elimination of all memory and resource leaks.
- Relatively high computational efficiency.
- Ease of use (and debug)—only one timer to track.
Class Hierarchy
The given set consists of five classes; the higher class is always a manager-class (and an aggregate) for a subordinate class.
The actual hierarchy is:
ActivationTimer --- manages ---> TaskList --- contains ---> Tasks --- (each task) controls (its own) ---> ThreadList --- handles ---> Threads struct Thread // Structure that incapsulates the // thread handle. { // Only Thread list is allowed to create/close threads: friend class ThreadList; protected: // Active thread handle: HANDLE ThreadHandle; Thread* Next; // Next thread in a list. Thread* Prev; // Previous thread in a list. // Construction / destruction: Thread(HANDLE hNewThreadHandle); virtual ~Thread(); }; class ThreadList // Double-linked list of thread objects. { // Only task is allowed to handle ThreadList: friend class Task; protected: Thread* Begin; // Head of list Thread* End; // Tail of list int NumOfThreads; // Number of threads in a list. // Construction / destruction: ThreadList(HANDLE hNewThreadHandle = NULL); virtual ~ThreadList(); // Add a new thread to the list: int AppendThread(HANDLE hNewThreadHandle); // Get the pointer to a 'num'th thread object: Thread* operator[] (const int num) const; // Determine whether 'num'th thread is active or not: bool IsThreadActive(const int num) const; // Close the 'num'th thread object: int CloseThreadObject(const int num); // Get the number of threads in a list: int GetNumOfThreads() const; }; class Task // Class that incapsulates task data. { // No one is allowed to create and handle Task except for // TaskList: friend class TaskList; protected: // Time (in milliseconds), at which task is periodically // performed: unsigned long timeout; void* pFunc; // Pointer to the task // (i.e.: function). void* pParameter; // Pointer to a *structure* // parameter of task. ThreadList* listOfThreads; // List of active threads // opened by a task. Task* Next; // Pointer to the next task Task* Prev; // Pointer to the previous task // Construction / destruction: Task(const unsigned long msTimeout, void* pNewFunc, void* pParam); virtual ~Task(); void Execute(); // Call a task AND clean the // threadlist. 
}; class TaskList // Double-linked list of // task data { // No one is allowed to create and handle TaskList except // for ActivationTimer: friend class ActivationTimer; protected: Task* Begin; // The very first Task. Task* End; // The last Task. int NumOfTasks; // Number of Tasks in the list. // Construction / destruction: TaskList(const unsigned long msTimeout = 0, void* pNewFunc = NULL, void* pParam = NULL); virtual ~TaskList(); // Retrieve a pointer to the 'num'th task: Task* operator[] (const int num) const; // Get the number of tasks: int GetNumOfTasks () const; // Get the timeout value for the 'num'-th task: unsigned long GetTaskTimeout(const int num) const; // Add a task: int AppendTask(const unsigned long msTimeout, void* pNewFunc, void* pParam); // Remove a task from the list: int DeleteTask(const int num); // Call a task and create an individual thread for it: void CallTask (const int num); }; class ActivationTimer // A simple task scheduler - // designed as a singleton. { protected: static TaskList* listOfTasks; // Pointer to the list // of tasks. // Minimal timeout - timer checks tasks for execution every // 'minTimeout' milliseconds: static unsigned long minTimeout; static unsigned long maxTimeout; // Maximum timeout in // the list. static unsigned long curTimeout; // Current time. static unsigned int TimerId; // ID of the internal // timer. // Check if it is a time to call a task: static void ExecAppropriateFunc(); // Calculate 'minTimeout' value. 
void RecalcTimerInterval(); static void CALLBACK TimerProc(HWND, UINT, UINT, DWORD); public: // Construction / destruction: ActivationTimer(const unsigned long msTimeout = 0, void* pNewFunc = NULL, void* pParam = NULL); virtual ~ActivationTimer(); // Add a single task: int AddTask(const unsigned long msTimeout, void* pNewFunc, void* pParam = NULL); // Remove a task: int RemoveTask(const int cpPos); // Get the number of active tasks: int GetNumOfTasks() const; // Stop the timer: void Halt(); // Restart the timer: void Restart(); // Deallocate a singleton object and // reallocate it once again. void Reset(); };
Note 1: As you can see, two double-linked lists (the list of tasks and the list of threads for every task) form the foundation of this set of classes. The question may arise: Why not use STL's <list> container template class? The answer is: STL's <list> brings not only the <list>, but also a HUGE amount of unnecessary code, which slows and enlarges the application. So I've decided to use the most simple (and the most efficient, in our case!) "hand-made" implementation of a double-linked list.
Note 2: Another point of interest—the core class, ActivationTimer, which is implemented as a Singleton. A singleton is a class that assures existence of a maximum of one object of its type at a given time, and provides a global access point to this object. This article gives a comprehensive overview of a singleton design pattern.
Note 3: The real cause of ActivationTimer being a singleton is a stupid limitation of Win32 API: the timer callback procedure, TimerProc, is the only callback function in Win32 that does not take any user-defined argument. This behavior can be overridden in two ways:
- By using not an ordinary timer (SetTimer / KillTimer), but a special synchronization object—waitable timer
(CreateWaitableTimer / SetWaitableTimer / CancelWaitableTimer).
Unfortunately, this approach produces significant performance overhead.
- By using a special synchronization object, which is new to VC++ 7 - timer-queue timer
(CreateTimerQueue / CreateTimerQueueTimer / DeleteTimerQueueEx / DeleteTimerQueueTimer).
Timer-queue timers are lightweight objects that enable you to specify a callback function to be called at a specified time or a period of time. These objects are new to VC++ 7, and I still don't have the ability to implement them, so ... their time is yet to come.
Implementation
Note 4: For better understanding, only member-functions that have a direct influence on the algorithm will be explained here. The source code contains detailed comments on every function.
Step by step:
- Create a timer:
// Create a timer with one task in the task list: ActivationTimer::ActivationTimer(const unsigned long msTimeout, void* pNewFunc, void* pParam) { if((msTimeout <= 0UL) || !pNewFunc) { listOfTasks = new TaskList(); TimerId = 0; minTimeout = 0; maxTimeout = 0; curTimeout = 0; return; } listOfTasks = new TaskList(msTimeout, pNewFunc, pParam); minTimeout = msTimeout; maxTimeout = msTimeout; curTimeout = msTimeout; Restart(); } // Restart the timer: void ActivationTimer::Restart() { TimerId = SetTimer(NULL, NULL, minTimeout, TimerProc); }
// Add a new task to the task list. // Return value: zero-based position of task in a task list. int ActivationTimer::AddTask(const unsigned long msTimeout, void* pNewFunc, void* pParam) { // A little bit of parameter validation: if((msTimeout <= 0UL) || !pNewFunc) return -1; // Kill the timer: Halt(); // Append a new task: int pos = listOfTasks->AppendTask(msTimeout, pNewFunc, pParam); // Recalculate 'maxTimeout' value: maxTimeout = maxTimeout < msTimeout ? msTimeout : maxTimeout; // Recalculate 'minTimeout' value: RecalcTimerInterval(); // Recreate the timer: Restart(); return pos; }
// Calculate the value of minTimeout. // This is a time when 'ExecAppropriateFunc()' routine is // periodically called. void ActivationTimer::RecalcTimerInterval() { minTimeout = listOfTasks->GetTaskTimeout(0); for(int i = 0; i < listOfTasks->NumOfTasks; i++) { unsigned long tempTimeout = GCM(minTimeout, listOfTasks-> GetTaskTimeout(i)); minTimeout = minTimeout > tempTimeout ? tempTimeout : minTimeout; } curTimeout = minTimeout; }
// Check whether it is a time to call one of the tasks from // the task list. void ActivationTimer::ExecAppropriateFunc() { bool reset = true; for(int i = 0; i < listOfTasks->NumOfTasks; i++) { // If time has come, call the 'i'th task: if((curTimeout % listOfTasks->GetTaskTimeout(i)) == 0) { listOfTasks->CallTask(i); reset &= true; } else { reset &= false; } } // If all tasks are called simultaneously, // reset curTimeout value: if(reset) curTimeout = minTimeout; else curTimeout += minTimeout; }
// Call a task and create an individual thread for it: void TaskList::CallTask(const int num) { Task* task = operator[](num); task->Execute(); } // Create an individual thread for a task: void Task::Execute() { listOfThreads->AppendThread(CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)pFunc, pParameter, 0, 0)); // ...and wipe out all unused thread handles: int numThreads = listOfThreads->GetNumOfThreads(); for(int i = 0; i < numThreads - 1; i++) { if(!(listOfThreads->IsThreadActive(i))) listOfThreads->CloseThreadObject(i); } }
Task::~Task() { delete listOfThreads; } ThreadList::~ThreadList() { Thread* target = Begin; Thread* temp; if(target) // If ThreadList is not empty... while(target) { temp = target->Next; delete target; target = temp; } } Thread::~Thread() { // Check if thread has exited. // If it did not - terminate it. DWORD exitCode; GetExitCodeThread(ThreadHandle, &exitCode); // !!! UNSAFE CODE !!! // Try not to use lengthy operations OR // use some kind of internal completition // flag to explicitly instruct thread to exit. // Refer MSDN on hazards of TerminateThread. if(exitCode == STILL_ACTIVE) TerminateThread(ThreadHandle, 0); CloseHandle(ThreadHandle); }
Note 5: The Thread::~Thread destructor needs some explanation. Calling TerminateThread can be quite dangerous if the target thread is still running, because:
-.
A possible solution may be one of the following:
- Do not use lengthy operations as a thread function.
- Use a special bool variable (something like: m_bStopExecution), and periodically check the state of this variable.
If this variable is true, the thread can gracefully quit.
- Use a synchronization object—for example, semaphore or waitable timer—to instruct a thread to stop execution (just the same as above).
Please refer to MSDN for more information on TerminateThread and synchronization objects.
Usage
To use ActivationTimer, do the following:
- Include "ActivationTimer.cpp" and "ActivationTimer.h" in your project.
- Write the following line at the beginning of the "StdAfx.h" file:
#include "ActivationTimer.h"
#include "StdAfx.h"
ActivationTimer actTimer;
void MyFunction() { .... } .... actTimer.AddTask(10000, MyFunction);This will periodically execute MyFunction every 10 seconds.
void MyFunction2(void* pParam) { int* value = (int*)pParam; .... } .... int someValue = ... ; actTimer.AddTask(15000, MyFunction2, &someValue);
typedef struct { int valueA; float valueB; bool valueC; ... } MyStruct;and write:
void MyFunction3(void* pParam) { MyStruct* structVal = (MyStruct*)pParam; .... } .... MyStruct* somestruct = new MyStruct; ... // initialize struct here actTimer.AddTask(5100, MyFunction3, somestruct);
The demo project (see link below) is an ordinary "Hello, World!" application, built with AppWizard, with these examples included. Enjoy!
Acknowledgements
Thank you all the CodeGuru visitors who posted comments, and double-Thank-you to everyone who wrote me, reporting bugs, ideas, and suggestions!
P.S.
Why not a Task Scheduler instead of the Activation Timer? That's because Windows already has its Task Scheduler application, so I've decided not to mess up. At least, Activation Timer application is not included in a standard Windows package, yet........
DownloadsDownload demo project - 12 Kb
Download source - 5 Kb | https://www.codeguru.com/cpp/w-p/system/misc/article.php/c5783/Activation-Timer--a-Simple-Task-Scheduler.htm | CC-MAIN-2020-16 | refinedweb | 2,028 | 54.22 |
03 January 2012 17:30 [Source: ICIS news]
HOUSTON (ICIS)--Chinese energy and petrochemicals major Sinopec has agreed to pay $2.2bn (€1.7bn) to acquire one-third of the interest of Devon Energy in five ?xml:namespace>
The assets are Niobrara, Mississippian,
Devon CEO John Richels said the deal with Sinopec would improve
“We can accelerate the de-risking and commercialisation of these five plays without diverting capital from our core development projects,” Richels added.
The companies expect to close the deal in the first quarter of 2012, subject to regulatory approvals.
For Sinopec, the deal marks its entry into the upstream
The Chinese firm is already active in Canada's upstream oil and gas sector. Last year, Sinopec acquired a Canadian natural gas producer, and in 2010 it took a stake in a Canadian oil sands firm.
In related news on Tuesday, French energy and petrochemicals major Total said it had acquired an interest in shale gas assets in Ohio from US firms Chesapeake and EnerV | http://www.icis.com/Articles/2012/01/03/9519954/sinopec-to-pay-2.2bn-for-part-of-devons-us-shale-gas-interests.html | CC-MAIN-2015-06 | refinedweb | 168 | 56.69 |
Trace[F]in the context bound for those and call it a day.
Hi! Need some help with wiring with natchez.
I have one big wire function:
def wireAll[F[_]: ContextShift: Timer: Parallel: Concurrent: Async]: Resource[F, HttpRoutes[F]]
And one run function:
def run[F[_]: ContextShift: ConcurrentEffect: Timer: Parallel: Bracket[*[_], Throwable]]: F[Fiber[F, Unit]] = { val resource = for { traceEntryPoint <- entryPoint[F]: Resource[F, EntryPoint[F]] httpRoutes <- wireAll[Kleisli[F, Span[F], *]]: Resource[Kleisli[F, Span[F], *], HttpRoutes[Kleisli[F, Span[F], *]] //after that I can obtain HttpRoutes[F] from HttpRoutes[Kleisli[F, Span[F], *] using middleware and run server ... } yield () ... }
The problem is in the first
Resource type parameter -
Kleisli[F, Span[F], *] (
F needed)
Do you know how to rewrite this correctly?
Kleisli.applyK, but you'll still need a
Span[F]. You can probably use one that you get from the entrypoint - this will basically be the span of "starting the application"
httpRoutes <- traceEntryPoint.root("init").use { span => wireAll[Kleisli[F, Span[F], *]].mapK(Kleisli.applyK(span)) }- roughly
F[Fiber[F, Unit]], it's easy to make a mistake. See if you can use
.backgroundinstead of
.startwherever it is.
It's me again. I am trying to wrap Client[F]. (I have Trace[F] instance there)
def apply[F[_]: Trace: Sync](client: Client[F]): Client[F] = { Client[F] { request => val spanName: String = ??? for { b <- Trace[F].span(spanName) { //should be F[] } //only have Resource[F, Response[F]] type for //client.run(modifiedRequest)) } yield ??? } }
Can't get how to transform
Resource to wrap it with
Trace[F].span | https://gitter.im/tpolecat/natchez?at=5f9028a598a7774f1b58d386 | CC-MAIN-2022-40 | refinedweb | 264 | 67.15 |
Ok, this is going to sound strange, but what is the capability of your PC. As I recall, I had something similar happen and it turned out to be a machine specific issue. When I ran the code on my Unix box, it worked fine, but when I ran it on my PC, it didn't.
So, that is just a thought. You are correct from a mathematical point, but check in case your PC can't handle that percision.
Good Luck.
Hm, I'm not quite sure what you mean by my system specifications but it would make sense that the machine may simply be incapable of handling this program. Though you wouldn't think 32 bits are such a big deal -_-
Hints:
What is the type of x, and what type does pow return?
What value does (pow(2.0, (int) (sizeof (unsigned int))*8)) return, and what is the value of x after initializing it?
Also... Try:
cout << UINT_MAX;
There is always a problem if you mix signed and unsigned ints in a statement, and definitely if you use floats or doubles too. Always be carefull for implicit conversions. Unsigned types have precedence over signed types.
Be aware that a float only has 24 bits of mantisse. An unsigned int can contain a larger whole number than a float. For a double the mantisse is 53 bits. (See float.h)
The function "double pow(double,int)" is the problem.
The value of pow(2.0,32) does NOT fit in an unsigned int. The max number that an unsigned int can store is 2^32-1.
If you only calculate with ints and bits stick with ints.
What you want to do is to set bit 31 of x.
A better 'for' loop is
for (x= (1<<(sizeof(unsigned int)*8-1)); x; x>>=1) {
cout << ( (num&x) ? "1" : "0" );
}
In Visual C 'int' and 'long' are both 32 bits.
If you want larger ints in Visual Studio use
long long
__int64
They also come in unsigned versions.
Look at __int64 and then 'Fundamental Types' in the Help.
Mario Veraart
Problems with C++ 32-bit integers
I was trying to make a function called show_binary() that would display the binary representation of the unsigned integer sent to it in its parameters. Here is my code:
#include <iostream>
#include <cmath>
using namespace std;
void show_binary (unsigned int num);
int main() {
unsigned int i = 0;
cout << "Enter an integer: ";
cin >> i;
cout << '\n';
show_binary(i);
return 0;
}
void show_binary (unsigned int num) {
unsigned int x;
for (x = (pow(2.0, (int) (sizeof (unsigned int))*8)) ;x > 0;x /= 2){
if (num&x) cout << "1 ";
else cout << "0 ";
}
}
When I ran this in the Visual Studio Command Prompt, the FOR loop did not run and nothing was displayed by the function. However when I lowered the initial value of x to only show 24 or less bits of the integer, it worked fine. I even tried putting 4,294,967,296 (2 raised to 32) and had the same result. Visual Studio uses 32 bit integers and "pow(2.0, (int) (sizeof (unsigned int))*8)" does in fact result in 4,294,967,296 so this should work.
Any suggestions?
-James Waltz
This conversation is currently closed to new comments. | https://www.techrepublic.com/forums/discussions/problems-with-c-plus-plus-32-bit-integers/ | CC-MAIN-2019-51 | refinedweb | 551 | 81.02 |
Calling C# or other .NET code from Clojure
As part of our efforts to alleviate developer fatigue, we’re showcasing the ways that JNBridgePro can support interoperability when using alternative and emerging languages based on the JVM or CLR. So far, we’ve demonstrated interoperability with Groovy, Jython, and Iron Python. In this post, we’re going to look at Clojure, a LISP variant that runs in a JVM and includes the ability to call Java. (There’s also a .NET-based version of Clojure.)
We’ll put JNBridgePro through its paces by reworking our earlier Jython example, where Jython code called a .NET library. Here’s the .NET library we’ll be using, written in C# (although things will work just as well with a library written in VB.NET or any other .NET-based language):
namespace DotNetLibrary
{
    public class HelloWorldFromDotNet
    {
        private string theString = "";

        public HelloWorldFromDotNet()
        {
            theString = "Hello World from .NET!";
        }

        public HelloWorldFromDotNet(string s)
        {
            theString = s;
        }

        public string returnString()
        {
            return theString;
        }
    }
}
Once we build the library and proxy DotNetLibrary.HelloWorldFromDotNet and supporting classes into a proxy jar file, we can easily call the classes from a Clojure REPL (read-eval-print loop).
Assuming you’ve already downloaded and installed Clojure (go here to download it), and you’ve collected the Clojure jar along with the proxies, jnbcore.jar, and bcel-5.1-jnbridge.jar into a single directory, start up Clojure as follows, with all of the above jar files in the classpath:
java -cp "clojure-1.6.0.jar;jnbcore.jar;bcel-5.1-jnbridge.jar;proxies.jar" clojure.main
Once Clojure starts, and you get a user prompt, enter your commands to configure and initialize JNBridgePro, and then to call the proxies. Since the proxies are Java classes like any other, call them in the same way as you would call other Java classes.
Clojure 1.6.0
user=> (def props (new java.util.Properties))
#'user/props
user=> (.setProperty props "dotNetSide.serverType", "sharedMem")
nil
user=> (.setProperty props "dotNetSide.assemblyList.1", "C:/Clojure Example/DotNetLibrary.dll")
nil
user=> (.setProperty props "dotNetSide.javaEntry", "C:/Program Files (x86)/JNBridge/JNBridgePro v7.2/4.0-targeted")
nil
user=> (.setProperty props "dotNetSide.appBase", "C:/Program Files (x86)/JNBridge/JNBridgePro v7.2/4.0-targeted")
nil
user=> (com.jnbridge.jnbcore.DotNetSide/init props)
nil
user=> (def h1 (new DotNetLibrary.HelloWorldFromDotNet))
#'user/h1
user=> (.returnString h1)
"Hello World from .NET!"
user=> (def h2 (new DotNetLibrary.HelloWorldFromDotNet "Test String from DotNet"))
#'user/h2
user=> (.returnString h2)
"Test String from DotNet"
user=>
While the form of the calls to Properties, DotNetSide, and the proxy class may look strange if you’re familiar with Java but not with Clojure, a close examination of each line above should allow you to translate it into the equivalent Java or Jython code.
Note that, while we use a REPL above, we can also pre-compile the Clojure code and it should run equally well. Also, we can clean up the above code a bit by importing namespaces, but in the code above we wanted to be as transparent as possible. Finally, if we want, we can dispense with the references to java.util.Properties and instead use the variant of DotNetSide.init() that takes a string representing a path to a properties file containing the JNBridgePro configuration.
As you can see, using JNBridgePro to call .NET code from Clojure is as easy as calling it from Java or any other JVM-based language.
Are you currently using or planning to use Clojure? Do you have any scenarios where you need to call .NET assemblies from your Clojure code? If so, let us know! | https://jnbridge.com/blog/calling-net-code-from-clojure | CC-MAIN-2019-26 | refinedweb | 607 | 60.01 |
At this point, we're getting really close to PDC and I can't wait. At PDC, I'm going to go through some examples of the new formats in all three applications (Word, Excel, and PowerPoint). I'll continue to talk about Office 2003 as well, but there will be more focus on the 12 formats from that point on. That's still a few weeks away though, so I figured today I would still focus on Office 2003. I want to write really quickly about Excel's ability to map XML structures as both the input and output of a spreadsheet. In the Intro #2, I showed how you could use an XSLT to transform your data into SpreadsheetML for rich display. Most folks who read that and knew about the XML support in Excel 2003 realized that there was a much easier way to do this. You can use the XML mapping functionality to completely skip the XSLT step, which makes it a lot easier.
Let's start with that same example we used in Part 2. Take this XML and save it to a file on your desktop:
<?xml version="1.0"?>
<p:PhoneBook xmlns:
  <p:Entry>
    <p:FirstName>Brian</p:FirstName>
    <p:LastName>Jones</p:LastName>
    <p:Number>(425) 123-4567</p:Number>
  </p:Entry>
  <p:Entry>
    <p:FirstName>Chad</p:FirstName>
    <p:LastName>Rothschiller</p:LastName>
    <p:Number>(425) 123-4567</p:Number>
  </p:Entry>
</p:PhoneBook>
Open up a blank Excel spreadsheet (you need to be using a version of Excel 2003 that supports custom defined schema), and go to the Data menu. Find the XML flyout, and choose XML Source. The XML Source task pane should now be up on the side. It's currently blank because we haven't specified an XML schema to map yet. Click on the XML Maps... button and it will bring up a dialog that lets you specify the XML schema that you want to map. Click the Add... button and find the XML file you saved to your desktop. You will be notified that there isn't a schema but that Excel can infer a schema for you. In this example we're starting with an XML instance, so we want Excel to infer a schema. We could have also just started with a schema file if we had that. Go ahead and press OK, and you will now have a tree view of the inferred schema in the XML Source task pane.
Click on the node for the Entry element and drag it out onto the spreadsheet. This will map the child nodes and give them titles. After doing this, you've told Excel where you want the elements to be mapped to in the grid. You can change the titles of the columns if you want so that they have a more user friendly title. By default they have the namespace prefix and element name in the title.
Now that the nodes have been mapped, you can tell Excel to import the data. Right click on the mapped region, navigate to the XML fly-out menu, and select Refresh XML data. That will import the data from your XML file. The region that the data was imported into has a blue border around it. This is a new feature in Excel 2003 called a "list". A list is a structured region in Excel that consists of repeating content. The list was automatically generated for us when we mapped the Entry element into the spreadsheet.
Now that we have our list mapped to the XML schema, we can also choose to import multiple XML files at once if you have a couple XML files that adhere to your schema. Just make a copy of the XML file you saved to the desktop, open it in notepad and make some changes. Now let's import both of the files. Right-click on the list and under the XML flyout choose Import... Now just select both of your XML files and hit OK. Now both sets of data are imported into the list.
If you want to export your data, it's just as easy. Right click on the list again and this time under the XML flyout choose Export... You can choose to export to a brand new XML file, or to overwrite one of the files you imported.
This example shows how easy it is to bring your own XML data into Excel, work on it, and then output it back into its original XML schema. One common use I've seen of this functionality is that people will have two schemas. The first schema is used to import a huge data set that comes from a web service or some other external data source. Using the XML mapping functionality you can bring that data into Excel, and then run whatever models you want to on the data. The 2nd schema is used to map the results of the model in Excel. Map the result regions with the 2nd schema, and use that to export the results as XML. This allows Excel to serve as a very powerful transformation tool with rich UI. It's pretty cool.
-Brian | http://blogs.msdn.com/brian_jones/archive/2005/08/25/456298.aspx | crawl-002 | refinedweb | 876 | 80.51 |
Microsoft’s Language INtegrated Query, or LINQ extensions to C# allow SQL-like queries to be run on a variety of data sources, ranging from simple types such as arrays and lists to more complex data types such as XML files and databases. The built-in support for databases extends only as far as Microsoft’s own SQL Server, which is fair enough considering that’s their main database product. However, I’ve used MySQL for my own modest database requirements for many years, largely because it’s free and does everything I need.
We’ve already seen how to connect to MySQL from within a C# program, but in that example, we interacted with the database by constructing SQL commands as strings within C# and then using interface methods to pass these commands to the MySQL database, which took care of making the actual changes to the data. The main purpose of LINQ is to move the data processing commands into the C# language directly.
Doing this, however, does require that there is a lot of underlying code that handles the connection between C# and the database. LINQ as provided by Visual Studio contains all the tools you need to interact with regular data structures, XML and SQL Server, but if you want to talk to MySQL, you’ll need a third-party package to handle the interaction.
One such package that I’ve used only briefly is DbLinq, which provides interfaces between LINQ and not only MySQL, but a number of other popular databases as well. Since I’m interested only in MySQL, that’s all I’ll look at here.
Using DbLinq is fairly straightforward, although the lack of documentation can make it a bit of a trial to get running. I’ll run through the steps that I followed here, although if you’re reading this some time after a new version has come out, things may have changed.
First, go to the DbLinq site, follow the link to the zipped releases downloads page, and then get the zip file containing the source code (with an ‘src’ in its name). This is useful since this contains a ready-made Visual Studio project with several examples for the various databases supported. Unzip this file, go into the src folder and load DbLinq.sln in Visual Studio. Build the solution (which for me ran without errors).
If you want to run the MySQL example provided with the zip file, you’ll need to create the Northwind database on your local MySQL installation. An SQL file is provided which will allow you to do this. In the extracted folder from the zip file, go to the folder examples\DbLinq.MySql.Example\sql where you’ll find a file called create_Northwind.sql. You can use this file to create the database by opening a cmd window, cd to the folder where the SQL file is, then running mysql in command-line mode. You can then give the command “source create_Northwind.sql”, and this should create the database. Alternatively, there are several front-end programs such as sqlyog which can be used to load the file in a GUI.
Once you’ve got the Northwind database installed, you should be able to run the example program. In Visual Studio’s Solution Explorer, open the ‘examples’ folder and set DbLinq.MySql.Example as the startup project. Open the Program.cs file and find the definition of the string connStr in the Main() method. You’ll need to edit this to provide the login credentials for your own MySQL database. Once you’ve done that, you should be able to run the example and have it show you the results of a few LINQ commands.
If you know a bit of LINQ, you can play around with the Program class’s code at this point and experiment with accessing the Northwind database (although I found the program crashed when attempting to remove an item from the database). Obviously, though, you’ll want to use LINQ with your own MySQL databases at some point, so we’ll need to examine how you do that.
I mentioned above that Visual Studio provides a lot of background code to make LINQ work with various data sources, and that to get it to work with MySQL, you’ll need to provide this code. You might notice in the Northwind example that there is a large file called Northwind.cs, and if you look at the top of that file you’ll see it’s automatically generated code from a program called DbMetal. This is the one fiddly bit about using DbLinq: you’ll need to run DbMetal externally (outside Visual Studio) in order to generate the required code for the database you want to access.
DbMetal reads the structure of your database from MySQL and generates the interface code required to get LINQ to work with that database. Since each database has a different structure (different tables and so on), you’ll need to run DbMetal for each database you want to use. You’ll need to run it only once per database, unless you change the structure of the database by adding or deleting tables or adding or deleting columns from tables.
You’ll find a .bat file for running DbMetal in the src\DbMetal folder from your zip file. There is one .bat file for each database type, so for MySQL, look at run_myMetal.bat. If you open this file in notepad, you’ll see it looks like this:
REM: note that the '-sprocs' option is turned on
bin\DbMetal.exe -provider=MySql -database:Northwind -server:localhost -user:LinqUser -password:linq2 -namespace:nwind -code:Northwind.cs -sprocs
There are a few changes you’ll need to make to get this to work for your own database. First, there is no ‘bin’ folder below the one in which the .bat file is located, so the file won’t find DbMetal unless you delete the ‘bin\’ and then move the file to the folder where DbMetal.exe is located.
Second, of course, you’ll need to change the user and password to whatever is needed to access your MySQL installation. You will also need to change the name of the database, and you’ll probably also want to change the name of the namespace and code file that DbMetal will produce. I also deleted the -sprocs option since I got an error when it was there. Once you’ve done all that, you can run the .bat file in a cmd window, and it will produce the C# file (Northwind.cs in the example above). You can then copy this file into your Visual Studio project so you can start writing your own LINQ code on your own database.
To use the class generated by DbMetal, define the string connStr for connecting to the database as in the Northwind example, and then create an object from the DbMetal-generated class. For example, if your database is called Comics and you told DbMetal to create a file called Comics.cs in a namespace called comics you would add a “using comics;” line at the top of your file and then open a Comics object with lines:
string dbServer = Environment.GetEnvironmentVariable("DbLinqServer") ?? "localhost";
string connStr = String.Format("server={0};user id={1}; password={2}; database={3}",
    dbServer, "<Your username>", "<Your password>", "Comics");
Comics db = new Comics(new MySqlConnection(connStr));
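From there, queries against the database are ordinary LINQ. As a sketch — the Comics table and its Title and Year columns here are hypothetical stand-ins for whatever classes DbMetal actually generated from your schema:

```csharp
var recent = from c in db.Comics
             where c.Year >= 2000
             orderby c.Title
             select c.Title;

foreach (string title in recent)
    Console.WriteLine(title);
```

DbLinq translates the query into SQL and sends it to MySQL only when the result is enumerated, so the foreach loop is what actually hits the database.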
One final thing you will need to do though: you’ll need to make sure the various dll files are available to Visual Studio. To do this, right-click on References in Solution Explorer and select Add Reference. Select the Browse tab and then navigate to the ‘build’ folder produced by building the original DbLinq project. To use DbLinq with MySQL, you’ll need to add DbLinq.dll and DbLinq.MySql.dll. To use MySQL itself, you’ll also need MySql.Data.dll, which is found in the ‘lib’ folder. You’ll know when you’ve got all the right files as without them, you’ll get compiler errors about symbols that can’t be found.
One final caution about DbLinq. As the web site itself says, it’s still prototype software and may not work for complex queries, so make sure you test it thoroughly before relying on it too much. For most simple queries, though, it should be fine. | https://programming-pages.com/2012/05/11/linq-with-mysql-using-dblinq/ | CC-MAIN-2018-26 | refinedweb | 1,401 | 69.92 |
iShader Struct Reference
#include <ivideo/shader/shader.h>
Detailed DescriptionSpecific shader.
Can/will be either render-specific or general The shader in this form is "compiled" and cannot be modified.
Definition at line 284 of file shader.h.
Member Function Documentation
Activate a pass for rendering.
Completly deactivate a pass.
Get name of the File where it was loaded from.
Get shader metadata.
Get number of passes this shader have.
Query a "shader ticket".
Internally, a shader may choose one of several actual techniques or variants at runtime. However, the variant has to be known in order to determine the number of passes or to do pass preparation. As the decision what variant is to be used is made based on the mesh modes and the shader vars used for rendering, those have to be provided to get the actual variant, which is then identified by the "ticket".
Query the object.
Set name of the File where it was loaded from.
Setup a pass.
Tear down current state, and prepare for a new mesh (for which SetupPass is called).
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.2.1 by doxygen 1.5.3 | http://www.crystalspace3d.org/docs/online/api-1.2/structiShader.html | CC-MAIN-2017-04 | refinedweb | 204 | 61.12 |
Using Switch with Raspberry Pi – Python
Contents
I hope that you have already gone through our tutorial, LED Blinking using Raspberry Pi. Detecting switch status is one of the basic steps in learning Raspberry Pi GPIO operations. Here we use the Python programming language.
I hope that you have already installed the Python GPIO library on your Raspberry Pi; if not, please follow our first tutorial, LED Blinking using Raspberry Pi.
Raspberry Pi GPIO Pin Out
Pull Up and Pull Down
Raspberry Pi has internal pull up and pull down resistors which can be enabled through software. Alternatively, external pull up and pull down resistors may also be used. Here we use the internal pull up resistors.

Pull up resistors give a default HIGH state to an input pin while pull down resistors give a default LOW state. If no pull resistor is added to an input pin, it remains floating. Any wire connected to that pin can act as an antenna and can build up stray voltages. Pull resistors avoid such discrepancies and provide smooth operation.

The GPIO ports include software configurable internal pull up and pull down resistors. Pull up resistors have a value between 50KΩ and 65KΩ, and pull down resistors between 50KΩ and 60KΩ. Use the following commands to configure them.
Pull Down
GPIO.setup(port_or_pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
Pull Up
GPIO.setup(port_or_pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
Circuit Diagram
Connect the push button switch between pin 6 (ground) and pin 12 (GPIO).
Python Programming
- Open terminal.
- Launch IDLE IDE by typing the following command.
sudo idle
This launches IDLE with superuser privileges which are essential for executing GPIO scripts.
- After IDLE launches, open a new window
Type the code below in the window.
import RPi.GPIO as GPIO                              #Import GPIO library
import time                                          #Import time library

GPIO.setmode(GPIO.BOARD)                             #Set GPIO pin numbering
GPIO.setup(12, GPIO.IN, pull_up_down=GPIO.PUD_UP)    #Enable input and pull up resistor

while True:
    input_state = GPIO.input(12)                     #Read and store value of input to a variable
    if input_state == False:                         #Check whether pin is grounded
        print('Button Pressed')                      #Print 'Button Pressed'
        time.sleep(0.3)                              #Delay of 0.3s
- Save the code
- Run your code
The program will print a "Button Pressed" message when the push button switch is pressed. The time.sleep(0.3) command provides the necessary delay between switch presses. If it is not used, the program will print multiple "Button Pressed" messages for one single press. Hope you understand the tutorial. If you have any doubts, please comment below.
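As a side note, polling in a tight loop works, but RPi.GPIO also supports interrupt-style edge detection with a callback, which avoids busy-waiting. A sketch (the pin number and 300 ms bounce time mirror the example above; keep the main thread alive however suits your program):

```python
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(12, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def button_pressed(channel):                  # Runs in a background thread
    print('Button Pressed')

# Fire on the falling edge (button pulls the pin to ground);
# bouncetime debounces the switch, in milliseconds
GPIO.add_event_detect(12, GPIO.FALLING, callback=button_pressed, bouncetime=300)

while True:                                   # Keep the program running
    time.sleep(1)
```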
Introduction
I’ve been talking about user-space file systems for several articles now. The concept of being able to quickly create a file system using almost any language you want using FUSE (File System in Userspace) libraries and kernel module is a very powerful one (however I’m still waiting on the Fortran bindings). One can build a file system that meets a particular set of requirements without having to develop and maintain kernel patches for a long period of time, without having to ask testers to apply the kernel patches and test, and then going through the kernel gauntlet. It can be developed quickly, with a variety of languages, get immediate feedback from testers, does not have to be tied to a particular kernel release, does not require a kernel patch and/or rebuild.
GlusterFS is a very sophisticated GPLv3 file system that uses FUSE. It allows you to aggregate disparate storage devices, which GlusterFS refers to as “storage bricks”, into a single storage pool or namespace. It is what is sometimes referred to as a “meta-file-system” which is a file system built on top of another file system. Other examples of meta-file-systems include Lustre and PanFS (Panasas’ file system). The storage in each brick is formatted using a local file system such as ext3 or ext4, and then GlusterFS uses those file systems for storing data (files and directories).
Arguably one of the coolest features of GlusterFS is the concept of "translators" that provide specific functionality such as IO schedulers, clustering, striping, replication, different network protocols, etc. They can be "stacked" or linked to create a file system that meets your specific needs. Using translators, GlusterFS can be used to create simple NFS storage, scalable cloud storage with replication, or even High-Performance Computing (HPC) storage. You just create a simple text file that tells GlusterFS the translators you want for the server and the client along with some options, and then you start up GlusterFS. It's that simple.
But getting the translators in the proper order with the proper functionality you want can sometimes be a long process. There is a company, Gluster, that develops GlusterFS and also provides for-fee support for it. They have also taken GlusterFS and combined it with a simple Linux distribution to create a software storage appliance. You just pop it into a server that either has some attached storage or SAN storage, and you can quickly build a storage cluster.
GlusterFS is arguably the most sophisticated FUSE based file system with a great deal of capability. Let’s take a look at it to understand the capabilities and how it can be used.
GlusterFS
The current version of GlusterFS is 3.0.5 (July 8, 2010) but it has evolved over time. To properly discuss GlusterFS I think we need to jump back to the versions prior to 2.0.8. In these versions creating the file system “translator stack” was a more manual process that took some time and experimentation to develop.
GlusterFS Before 2.0.8
For versions prior to 2.0.8 of GlusterFS and earlier (basically version 2.0.7 on down), configuring GlusterFS required more manual work but also offered the opportunity for highly tuned configurations. GlusterFS used the concept of a stackable file system where you could “stack” capabilities in some fashion to achieve the desired behavior you want (that may sound vague but keep reading because it gets more specific). In particular, GlusterFS uses translators which each provide a specific capability such as replication, striping, and so on. So connecting or stacking translators allows you to combine capabilities to achieve the design you want. Let’s examine version 2.0.7 and how one can build a GlusterFS file system.
GlusterFS begins with the concept of a storage brick. It is really a server that is attached to a network with some sort of storage either directly attached (DAS) or available via a SAN (Storage Area Network). On top of this storage you create a local file system using ext3, ext4, or another local Linux file system (ext3 is the most commonly used file system for GlusterFS). GlusterFS is a "meta-file-system" that collects these disparate file systems and uses them as the underlying storage. If you like analogies, these local file systems are the blocks and inodes of GlusterFS (Note: there are other meta-file-systems such as Lustre).
GlusterFS allows you to aggregate these bricks into a cohesive name space using the stacked translators. How you stack the translators, what network protocols you use, how you select the storage bricks, and how you create the local file systems all contribute to the capacity, performance, and manageability of GlusterFS. If you haven't read between the lines, one can easily say that the "devil is in the details," so let's start with system requirements and configuration details for GlusterFS.
Recall that GlusterFS is in user-space so at the very least you’ll need a kernel on both the storage servers and the clients that is “FUSE-ready”. You also need to have libfuse installed, version 2.6.5 or newer. The GlusterFS User Guide suggests that you use Gluster’s patched FUSE implementation to improve performance.
If you want to use InfiniBand then you’ll need to have OFED or an equivalent stack installed on the servers and the clients. If you want to improve any web performance you’ll need mod_glusterfs for Apache installed. Also, if you want better small file performance, you can install Berkeley DB (it uses a distributed Berkeley DB backend). Then finally you’ll need to download GlusterFS – either in binary form for a particular distribution or in source form. This web page gives you details on the various installation options.
Assuming that we have all the software pieces installed on at least the servers the next step is to configure the servers. The configuration of GlusterFS is usually contained in /etc/glusterfs. In this directory you will create a file that is called a volume specification. There are two volume specification files you need to create – one for the server and one for the client. It’s a good idea to have both files on the server.
The volume specification in general is pretty simple. Here is an example from the Installation Guide
volume colon-o
type storage/posix
option directory /export
end-volume
volume server
type protocol/server
subvolumes colon-o
option transport-type tcp
option auth.addr.colon-o.allow *
end-volume
The first section of the volume specification describes a volume called “colon-o”. It uses the POSIX translator so that it is POSIX compliant. It also exports a directory, /export.
/export
The second part of the volume specification describes the server portion of GlusterFS. In this case it says that this volume specification is for a server (type protocol/server). Then it defines that this server has a subvolume called “colon-o”. The third line, after defining the volume, states that the server will be using tcp. And finally the line “option auth.addr.colon-o.allow *” allows any client to access colon-o.
After creating the server volume specification file, the next step is to define the client volume specification. The Installation Guide has a sample file that is reproduced here.
volume client
type protocol/client
option transport-type tcp
option remote-host server-ip-address
option remote-subvolume colon-o
end-volume
The file defines a volume called “client” and states that it is a client (“type protocol/client”). It uses tcp (next line down), and then defines the IP address of the server (just replace “server-ip-address” with the address of the particular server). Then finally it states that it will use a remote subvolume named colon-o.
Once these files are created, then you just start GlusterFS on the server and you start GlusterFS on the clients. The commands are pretty simple – on the server the command is,
# glusterfsd -f /etc/glusterfs/glusterfsd.vol
where /etc/glusterfs/glusterfsd.vol is the volume specification created for the server.
On the client, the command is fairly similar,
# glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs
where /etc/glusterfs/glusterfs.vol is the name of the client volume specification file (be sure it is on every client or it is in a common name space shared by all clients – perhaps a simple NFS mounted directory). The second argument to “glusterfs” is the mount point, /mnt/glusterfs. Be sure this mount point exists before trying to mount the file system.
For configuring and starting GlusterFS on a cluster you can use a parallel shell tool such as pdsh to create the mount point on all of the clients and then run the “glusterfs” command to mount the file system on all of the clients.
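For example — the node names here are hypothetical, and this assumes the client volume specification is already in place on every node:

```shell
# Create the mount point on all clients, then mount GlusterFS everywhere
pdsh -w node[01-16] mkdir -p /mnt/glusterfs
pdsh -w node[01-16] "glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs"
```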
As I mentioned previously, there are a large number of translators available for GlusterFS. These translators give GlusterFS the ability to tailor the file system to achieve specific behavior. The list of translators is fairly long but deserves to be listed to show the strength of GlusterFS and perhaps more importantly, user space file systems.
But the power of GlusterFS, its configurability through the numerous translators, can also make it difficult to set up. What's the proper order for the translators? Which translators are better on the client and which ones are better on the server? What is the order of translators for the best performance or best reliability or best capacity utilization? In the next generation of GlusterFS, the developers have made installation and configuration a bit easier.
GlusterFS From 2.0.8 to 3.0
This version of GlusterFS has some of the same basic system requirements as the earlier versions.
In this version, GlusterFS is still a meta-file-system so it’s built on top of other local file systems such as ext3 and ext4. However, according to the GlusterFS website, xfs works but has much poorer performance than other file systems. So be sure to build the file systems on each server prior to the configuring and starting GlusterFS.
Recall that after GlusterFS is installed on the servers and the clients, the next step is to create the volume specification files on the server. Prior to version 2.0.8 we had to create these files by hand. While not difficult, it was time consuming and errors could easily have been introduced. Starting with version 2.0.8 and extending into version 3.x of GlusterFS, there is a new command, glusterfs-volgen, that creates the volume specification files for you. A simple example from the Server Installation and Configuration Guide illustrates how to do this.
# glusterfs-volgen --name store1 hostname1:/export/sdb1 hostname2:/export/sdb1 \
hostname3:/export/sdb1 hostname4:/export/sdb1
The options are pretty simple: “–name” is the name of the volume (in this case “store1″). After that is a list of the hosts and their GlusterFS volumes that are used in the file system.
glusterfs-volgen
# glusterfs-volgen --name store1 hostname1:/export/sdb1 hostname2:/export/sdb1 \
hostname3:/export/sdb1 hostname4:/export/sdb1
# glusterfs-volgen --name store1 hostname1:/export/sdb1 hostname2:/export/sdb1 \
hostname3:/export/sdb1 hostname4:/export/sdb1
For this particular example, a total of 4 files are created by the glusterfs-volgen command.
hostname1-store1-export.vol
hostname2-store1-export.vol
hostname3-store1-export.vol
hostname4-store1-export.vol
store1-tcp.vol
The first four files are for the servers (you can pick out which file belongs to which server) and the fifth file is for the clients.
hostname1-store1-export.vol
hostname2-store1-export.vol
hostname3-store1-export.vol
hostname4-store1-export.vol
store1-tcp.vol
hostname1-store1-export.vol
hostname2-store1-export.vol
hostname3-store1-export.vol
hostname4-store1-export.vol
store1-tcp.vol
This example creates a simple distributed volume (i.e. no striping or replication). You can create those volumes as well with some simple additional options to glusterfs-volgen.
One the volume specification files are created by “glusterfs-volgen” then you can copy them, using something like scp, to the appropriate server. But you will also need to copy the client file to all clients or you can use a nifty new feature of GlusterFS to allow each server to pull the correct file. The following command
# glusterfs --volfile-server=hostname1 /mnt/glusterfs
tells GlusterFS where to get the volume specification file and where to mount GlusterFS (just be sure the mount point exists on all clients before using this command). This command will look for the file, /etc/glusterfs/glusterfs.vol so you can either copy the client file to this file on hostfile1 or you can symlink the client file to it.
# glusterfs --volfile-server=hostname1 /mnt/glusterfs
# glusterfs --volfile-server=hostname1 /mnt/glusterfs
At this point we’ve configured the servers and we can start glusterfs on each one as we did before.
# glusterfsd -f /etc/glusterfs/glusterfsd.vol
where /etc/glusterfs/glusterfsd.vol is the volume specification created for the server. On Redhat style systems you can also used the command,
# /etc/init.d/glusterfsd [start|stop]
which looks for the file, /etc/glusterfs/glusterfsd.vol on each server. Be sure this file is the correct one for each server.
# /etc/init.d/glusterfsd [start|stop]
# /etc/init.d/glusterfsd [start|stop]
The client portion of GlusterFS is just as easy as the server. You download the correct binary or you build it from source. The next step is to actually mount the volume you want on the client. You need a client volume specification file on each client before trying to mount GlusterFS. Previously it was mentioned that it’s possible to have the client pull the client volume specification file from a server. Alternatively you could just copy the the client volume specification file from the server to every client using something like pdsh. Regardless, the .vol file needs to be on every client as /etc/glusterfs/glusterfs.vol.
You can mount glusterfs using the normal “mount” command with the glusterfs type option.
# mount -t glusterfs hostname1 /mnt/glusterfs
Or you can put the mount command in the /etc/fstab file just like any other file system.
# mount -t glusterfs hostname1 /mnt/glusterfs
# mount -t glusterfs hostname1 /mnt/glusterfs
/etc/fstab
GlusterFS can also use Samba to re-export the file system to Windows clients. But one aspect that many people are not fond of is that GlusterFS requires that you use a user-space NFS server, unfs3, for re-exporting GlusterFS over NFS. You cannot use the kernel NFS server to re-export GlusterFS – you have to use unfs3. You can use any NFS client you wish, but on the server you have to use unfs3.
GlusterFS – The Model for Future File System Development?
The last several articles I’ve been talking about user-space file systems. While good, stable, and useful file systems are notoriously difficult to write, it is perhaps more difficult to get a new file system into the kernel for obvious reasons. FUSE allows you to write a file system in user space which has all kinds of benefits – faster release of code, no kernel recompiles, languages other than C can be used (I’m still waiting for the Fortran bindings). But is it worthwhile to write very extensive file systems in FUSE?
GlusterFS is an example of how much you can achieve by writing file systems using FUSE. It is likely the most configurable file system available with many options to achieve the behavior you want. The concept of stackable translators allows you to tune the transport protocol, the IO schedulers (at least within GlusterFS), clustering options, etc., to achieve the behavior, performance, and capacity you want (or need). Even better, you could always write a translator to give you a specific feature for you application(s). I would bet big money that you could never get something like this into the kernel – and why would you?
Keeping the file system in user space allows developers to rapidly update code and get it in the hands of testers. If the file system was in the kernel, the pace of release would be much longer and the pace of testing could possibly slower. Who wants to roll out a new kernel to test out a new file system version? Almost everyone is very conservative with their kernel and rightful so.
GlusterFS is a very cool file system for many reasons. It allows you to aggregate and use disparate storage reasons in a variety of ways. It is in use at a number of sites for very large storage arrays, for high performance computing, and for specific application arrays such as bioinformatics that have particular IO patterns (particularly nasty IO patterns in many cases). Be sure to give it a try if it suits your needs.
But I can’t help but wonder if GlusterFS represents the future of file system development for Linux?
We’ve used Gluster now a couple of times, and we’re still having some performance problems and growing pains. We started even before 2.0.8, and ran into problems. More’n likely these were amenable to remedy, but we didn’t really have time on a quasi-production cluster to sort out the poor performance. Fast forward to 3.0.8 and today. We’ve seen some issued with OFED that have been problematic, and performance is dropping again. However, this time, I think we’re going to work with the Gluster folk and sort it out. When it works, it really works well.
gerry
CAUTION: Gluster configuration has changed since this article was written. The Refer to the Gluster Information Wiki for the current steps+commands to configure Gluster.
They are not only perfect for forward movement, but they are additionally remarkable regarding horizontal movements. A correct position ans that your bk ould always be straight, and your knees high so as to balance your body.I have a part ti job in a the office. It is one of the finest folding treadmill. This will need to improve quick time because before the know it they will be a tight and tense tussle just to be runners up and make the playoff.
Wholesale Jerseys
“The employee replies with a smiley fe: “Gave you a little hand. Meanwhile in Group 3, the Czechs are second last in their group, although with one game in hand and they will need to make their move in the next couple of games to remain in contention. This is the most recurrent lead to of critical incidents. All of them are in the midst of abundant nature and there is every reason to believe that you will be provided with the perfect opportunity to clear your mind and rejuvenate body and soul when paying a visit. She ca to and said ‘I need to get enough votes to win this’.
Cheap Jerseys China
Have you ever considered niche. Awesome blog!
I truly appreciate this post. I’ve been looking everywhere for this! Thank goodness I found it on Bing. You’ve made my day! Thanks again
* Hold with the footwear is another aspect you need to look into. Getting them in a fantastic cost causes it to be better yet.Students of dance are taught how to express their concept and emotions through innurable forms of dance such as modern, jazz, traditional, folk, and ballet.com“Online marketing firm seo77 confird they had developed sofare to help her win national TV awards to beat Helen Flanagan in the public vote”They also ran a plex system which boosted her Twitter following by hundreds of thousands.One of the most critical functions to think about when getting a stroller are the safety features.
Wholesale NFL Jerseys
. As with many great painters like Picasso, Earnhardt fanatics consistently decline to permit his ambition and legacy to give-up the ghost. . Lagos town is also worth a closer look, with a museum housed in an old fortress and the St..” Gangwon laws and regulations by the teeth. Why not opt for a systematic approh!The book brings out systematic weight loss suggestions countering the popular values like resorting on caloric cra food items, exercising a couple of hours eh day, slim pl much more. Practicality is absolutely the most important aspect of finding groomsman gifts.
Wholesale NFL Jerseys
It is also a status symbol of high-class people. Smaller and easier organizations are paratively simpler and can be pleted far more quickly. In this section we review the finest of the weight loss grams aessible like Fat Loss 4 Idiots that is exceptionally rated even in Whinfrey Oprah show, alot of people have loss weight through this gram and many testify. They are made from coffee beans which are cultivated usually in Brazil. Costa Rica and El Salvador are safety to the final round of qualifying to begin next year.
Wholesale Jerseys China
Talk about what impts and influences individual happiness levels, and what can be done to support people to improve and maintain them on a sustainable basis. His Life Skills Coaching program appropriates the following tools for transformation: self transcendence, holistic wellness, life path astrology, shadow work, pain-body healing, heart-brain intention, the power of presence, and the law of attraction.The Appreciative Inquiry based approh supports the trend towards a more positive focus on strengths and what-is-working, rather than the old style problem-solving (Na, Bla & Sha) approh as the most effective way to approh REAL team building ideas.. Festival preparations begin a month or two in advance.
Wholesale Jerseys Free Shipping
Resilience AQ. It is caused by a partial blockage sowhere from the nose to the vocal chords. The set as standard is:The balls used on both the tables are also something else. You ould also discover out when the psychic requires anything of you during the reading..
Wholesale NFL Jerseys. This subtle exchange of energy is known as the Law of Attraction. For various years, Ed plus his runken heads have been the butt of morbid surrounding jokes.One of the best ways on how to prevent and slow the aging process is by being happy.Looks pretty fy.
Contrarily, if we accept it with unconditional love, then we change the frequency of our response. It was developed by Pythagoras the ideal mathematician in which even now his theorems are nevertheless utilized inside contemporary mathematics. deliver exceptional results in a sustainable manner. * Snoring – It is the coarse sound of obstructed breathing..
Cheap Jerseys
An execllent way to market your pany is quiring Youtube subscribers. Moreover, let us not forget the ever-resounding Michael Jordan.So dance oe goods relay only dance oes that can be purchased on. Students show their excellence in sports, cultural programs, dance programs, athletics and others. I’ve seen Febreze coupons in the small coupon book Costco hands out whenever you key in in a lot a lot more than one oasion.Academic profileThe university has fifteen colleges in its different campuses which offer degrees at Associate, Bachelor??, Master?? and Doctorate level.co.There are a few different websites which charge you a one ti fee of around $35 or charge you per download.The Sports Network’s diverse content coverage allows them to solidify their global position also in key markets, with specific interests in particular sports.Lavish climax, pointless to say, software, this type of go walking upright vacuum cleaner, the main automatic robot are classified as the violin with regards to degree shown incredible rubbing guitar strings, ribbon proficiencies.
youtube. Collecting any sports memorabilia or sports collectibles is a very personalized pastime that has value based more on the individuals likes and passions as much as the success of the single player or team. For Each household drinking is sometimes considering 300 in which to 600 yuan can enjoy the very best Japan dining, “Regardless Of money-sucking, nonetheless , pleasant. Mars – chanics, warriors, army, soldier, chemist and druggists, carpenters, bankers and insurance agents. You can aess the entire of Live Online Chat as well as its free talk rooms no registration needed within the forum.” Gangwon laws and regulations by the teeth. Other items that are bound to be used include leather toiletry bags, leather wallets, money clips, business card holders and cases, passport holders, grooming kits, garment bags, leather notepads, as well as engraved thumb drives. The famous public research university was founded in 1890. Do you know that they can try to eat tissue to fill their bare stomh so when famied fails to rating a lot more within the thin journey, they holiday resort on operative implies!Shed energy at the fee for your wellbeing is quiring a good a lot of thumbs-lower from individuals around the globe. Spend less and quire a lot more.
Wholesale Jerseys Cheap
And take it from . Attempt to creating your vendee feel as comfy shopping for eyeglasses as they’re any of your different merchandise. The stories range from Anal, Erotic Horror, to Taboo.. Still, there would be thousands of modern videos you have not explored.
Cheap Jerseys
I like this web blog very much so much superb info .
You are my aspiration, I own few web logs and occasionally run out from brand :). “Analyzing humor is like dissecting a frog. Few people are interested and the frog dies of it.” by E. B. White.
. As Joseph Campbell said, “Follow your bliss. You have to move the table directly so that you will be able to detect if the table is having a fine leveling. But can you ditate with an i-pad?April 5, 2014 at 7:23 AM EG CaraGirl said. The emotional reaction that they receive as a result of the drama is pure energy for the pain-body to “feed on.
Cheap NFL Jerseys
People want to get into that (market). Learning of logic, grammar and philosophy: The good constellations for studying grammar are the Rohini, Mrigasara, Punarvasu, Puya, Hasta, Dhanistha and Revati. But, unfortunately we constantly overdo and let our hunger to hunger for more and more.. Museum, sports campus, canteen facility and libraries have been established in each college campus to provide dynamic learning experience. Merge, acquire, integrate and eliminate competition. In fact, you could plan to spend the first or latter half of your trip exploring Faro, as that would be the natural airport for you to land or leave from.. Admission for Graduate College is governed by the graduate dean. The learning methodologies are unique and used in a very sophisticated manner so that each student can grasp the concept in an easy way.
Wholesale Jerseys
About point of view, through the provincial-level arranging combined with the cement arena, real to progress several imperative local commercial rise in aggressive arranging, and as a result certainly take care of the sum concrete floor assembly then growing demand debt balances as well as the structural resetting, smooth a glass decide to superior improvement proposal, to help describe the creation of route, desired goals, projects, as well as corroborating insurance coverage.Physical tivity is necessary to keeping your body feeling young even as you age.Cozy indeed. On contemporary tables the reds in addition to yellows are solid colour, with the blk having an ’8′ in a white circle upon it..
Cheap MLB Jerseys China
April 4, 2014 at 10:25 PM v . Quit now, no matter what your age, to help your body have a healthy aging process. By bringing the positive, focus on more of what-is-working approh of Appreciative Inquiry Team Building ideasOnce youe satisfied your shopping needs,Men’s Minnesota Vikings #4 Brett Favre Purple NFL Jersey, you can continue your fun with the nightlife No matter if you plan to buy and live in the property or to just make a fast profit from it,Men’s St. Goddess Durga is symbol of divine power who fights against all odds in the society.
Wholesale Jerseys From China .@J4 JH93 4MG2 ’5@:
. Do not spend too much ti holding onto those painful feelings. You can also get vintage opera glasses that may cost much more merely because they are antique. Don’t get the leather-based soaked or expose it to the harsh atmosphere for extended times for instance leaving it in direct sunlight of one’s automobile.Be alive while you are alive.
Excellent web site. Lots of helpful info here. I am sending it to several pals ans additionally sharing in delicious. And obviously, thanks to your effort!
Z13qVu
If you dont mind, where do you host your blog? I am searching for a very good web host and your webpage seams to be extremely fast and up all the time
What i do not realize is in reality how you are not actually much more smartly-liked than you may be now. You’re very intelligent. You understand thus significantly when it comes to this subject, produced me in my opinion consider it from a lot of varied angles. Its like men and women are not involved except it’s one thing to do with Woman gaga! Your personal stuffs great. Always handle it up!
• Go 4. At times, pain may possibly not be felt across the diseased organ, but be felt at one other location. Too many people in the online business world, particularly affiliate marketers, get lost in the crowds and are unable to stand out from their chosen niche because they lack the commitment to financially support their dreams. If a buyer decides they want the item, but they do not have cash on them, always take a deposit to hold the item until they are able to come back. One can only control ones, mental processes after carefully studying them.It is always better to take precautions to avoid musculoskeletal disorders than to treat them after you get affected.Now before you read any further let me just jump in here and say that I really do hope you are finding this interesting and indeed helpful. And with every family membership at the club kids play for free. Winning four relief Pitcher of the Year Awards, and the 1979 Cy Young, Sutter dominated with his splitter to overcome his lack of velocity.0 training at.
Cheap Jerseys From China
AQkpvw my review here I want to create a blog that has a creative layout like what you find on MySpace, but with more traffic. I am not a fan of the Blogger site… Any suggestions?.
Hey there. I discovered your web site by the use of Google at the same time as looking for a related matter, your site came up. It looks great. I have bookmarked it in my google bookmarks to come back then.
Do you have any video of that? I’d care to find out some additional information.
By having the comfortable clothing on it gives you that added confidence whe?Oxford, UK is a holiday destination to which the traveler can happilyreturn again and again. Too many of us have died and been injured in wars over the last thousand years to allow legal and government people to tell us how to live our lives. No! You need to do specifically what he did to obtain there. If you are over 6 foot then your clubs should be one inch longer than standard.” Ms.
Wholesale Jerseys China
Hello There. I found your blog the use of msn. This is a really smartly written article. I’ll be sure to bookmark it and come back to learn extra of your helpful info. Thank you for the post. I’ll definitely return.
Now that activity trackers are all tthe fad,
I assumed I might examine them with the identgical methodology.
My web blog: plumbing require
Many thanks, this website is really practical
Hop a sightseeing bus.) Fill the container halfway with water. He does not shift his weight to the back.lpvitamins. Course management- Before playing in a big competition it is essential to know the course.
Cheap Jerseys
Compared to the sexy and delicate halter wedding gowns, the ball gown wedding dress seems a tad
conservative. Rule #5: Leverage Your Image for
Greater Visibility by Looking the Part Now.
It was midnight when he was woke up by a strange
breeze of wind and an engaging sound. Pair this gift with a stylish dress
from a wholesale apparel store, and you’ve got the perfect gift for her.
Journalists, as well as assisted staff and camera crew,
are required to adhere to strict dress codes set forth by Buckingham Palace.
From matching hair pins to footwear and ribbings & bangles everything is
colour and attractive for girls.
Here is my blog post: dresses
Maintain the excellent work and bringing in the crowd!
how to get a viagra prescription online [ ] viagra samples free pfizer uses for viagra viagra generic review how much does 100mg viagra cost viagra natural alternative viagra maximum dose daily legal viagra online drugs similar to viagra viagra weed sildenafil 20
The foxtrot is a dance with motionless expression; the foxtrot is inventive and wonderful, while improvising within the pulse of the lody. Men are practical and everything they use has a function or meaning.So why do you need to obtain low-cost Twitter Followers? Let us start from the truth that increasingly more individuals use social fields for discussion, however what Twitter and Febook are tually useful for in our days is advertising.. Nevertheless, the value in the process is challenged by two Danish scientists who reviewed the major clinical trials of screening mammography declared that five of the seven trials had been flawed which none shown that it saved lives. Even the researchers own institution distanced itself in the report, stating the findings had not been submitted to the Nordic Cochrane Centers usual Michael Kors arduous assessment. For transfer student (who has changed their university or courses), the university provides transfer credit online facility which helps them to earn online credit hours to get degree in any valuable courses. Mars – chanics, warriors, army, soldier, chemist and druggists, carpenters, bankers and insurance agents. The learning methodologies are unique and used in a very sophisticated manner so that each student can grasp the concept in an easy way.” Getting a girl on webcam is easyif you learn how.
Wholesale Jeryseys China
Here is a superb Blog You may Discover Exciting that we encourage you to visit.
The Truth About Women and Sex. It is incredible just how your attitude modification by just turning up to an occasion and being around like-minded business owners. The children at kids parties just love princess parties. Perhaps these are the ?For beginners or professionals there are ten rules which they should stick to. There are too many easier internet opportunities to compete with it.
“Very neat article post. Great.”
thank so a lota lot for your internet site it helps a great deal
You are my aspiration, I own few web logs and infrequently run out from post :). “Never mistake motion for action.” by Ernest Hemingway.
Hi Alina, No, I didn’t have any issues with cracked chalk paint. It may have been the brand of chalk paint you used. You could try using BB Frosch chalk paint powder to paint over it?
“I cannot thank you enough for the article post.Really looking forward to read more. Will read on…” | http://www.linux-mag.com/id/7833/comment-page-1/ | CC-MAIN-2018-13 | refinedweb | 6,080 | 63.59 |
I needed a software that allowed me to capture screenshots of a web application I developed. The software should do it automatically (batch capture). This way it’d save me a lot of time.
How do I used to do that?
I visited each web page I wanted to take a screenshot. It took me about 1 hour to finish the work.
I posted a question at Super User as always: Software to automate website screenshot capture and got an answer suggesting that I use a combination of a URL fetcher + Selenium capability to take screenshots.
Well, I tried Selenium (.NET bindings selenium-dotnet-2.0rc3.zip ) to test its screenshot capture feature but it doesn’t seem to fit the job because it doesn’t allow you to configure screenshot properties as size (height x width). Moreover it doesn’t work well with Ajax (requires you to write a lot of code to check for the existence of Ajax calls, etc). This kills a screenshot that needs everything in place (I mean every DOM object should be part of the screenshot). I tried the 3 drivers available: Internet Explorer, Firefox and Chrome. Screenshots taken with Internet Explorer driver were close to what I expected.
This is a sample code I used based on the code taken from here:
using System; using System.Drawing.Imaging; using System.Text; using NUnit.Framework; using OpenQA.Selenium; using OpenQA.Selenium.Chrome; using OpenQA.Selenium.Firefox; using OpenQA.Selenium.IE; using OpenQA.Selenium.Support.UI; namespace SeleniumTest { [TestFixture] public class SeleniumExample { private FirefoxDriver firefoxDriver; #region Setup [SetUp] public void Setup() { firefoxDriver = new FirefoxDriver(); } #endregion #region Tests [Test] public void DisplayReport() { // Navigate firefoxDriver.Navigate().GoToUrl(""); IWebElement startDate = firefoxDriver.FindElement(By.Name("StartDate")); startDate.Clear(); startDate.SendKeys("January 2011"); IWebElement generate = firefoxDriver.FindElement(By.Id("Generate")); generate.Click(); var wait = new WebDriverWait(firefoxDriver, TimeSpan.FromSeconds(5)); wait.Until(driver => driver.FindElement(By.Id("Map"))); SaveScreenShot(firefoxDriver.Title); } /// <summary> /// Saves a screenshot of the current error page /// </summary> public void SaveScreenShot(string fileName) { // Get the screenshot Screenshot screenshot = firefoxDriver.GetScreenshot(); // Build up our filename StringBuilder filename = new StringBuilder(fileName); filename.Append("-"); filename.Append(DateTime.Now.ToString("yyyy-MM-dd HH_mm_ss")); filename.Append(".png"); // Save the image screenshot.SaveAsFile(filename.ToString(), ImageFormat.Png); } #endregion #region TearDown [TearDown] public void FixtureTearDown() { if (firefoxDriver != null) firefoxDriver.Close(); } #endregion } }
Indeed, Selenium is powerful for what it does, that is, helping you automate browser interactions while you test your code. It even allows you to take a screenshot let’s say when something goes wrong (a test fail for example). That’s great and that’s what it does best. It’s a choice for every operating system since it’s an API that can be programmed against.
What I needed was something more specialized to take screenshots. A software that allows me to configure screenshot properties. The good news is that I managed to find such piece of software and it’s called Paparazzi! - a very suggestive name by the way. One drawback is that it’s only available for Mac OS. As one would expect, it uses Safari browser engine behind the curtains to capture the screenshots. Paparazzi! has minor bugs but it gets the job done. It doesn’t have documentation. I had a hard time trying to make it work. It has batch capture capability but no docs explaining how to do it. So I hope this post will shed some light…
The following lines describe what I did to achieve my objective with Paparazzi!:
1 - Created a list of URLs I’d like to take screenshots of. Like this (one URL per line):
.
.
.
2 - Open Paparazzi! and click the Window menu => Batch capture (really difficult to find this option
):
Picture 1 - Paparazzi! difficult to find Batch Capture menu option
3 - Drag and drop a text file .txt (the file that contains the URLs) to the Batch Capture window:
Picture 2 - Paparazzi! Batch Capture window surface
Here is where I think I found a limitation and it’s by design. This should definitely not happen IMHO. If you try to add a file clicking on ( + button), Paparazzi won’t let you select a text file. The only way I got it working was selecting the .txt file and then drag and dropping the file to the Batch Capture window.
4 - Configure screenshot properties by clicking the list button (see mouse cursor above it):
Picture 3 - Paparazzi! screenshot process basic configurations
You can define the screenshot size. There are pre-defined values for standard screen resolutions. It allows you to define new presets.
You can also delay the capture to wait the page finish loading, etc.
There are a set of configurations available related to the batch capture functionality. To access these configurations, go to Paparazzi! menu and select Preferences:
Picture 4 - Paparazzi! Preferences… menu option
The first configuration worth mentioning the Default Filename Format available in the General section:
Picture 5 - Paparazzi! General preferences section
Above I’m defining this format:
%t = page title
%Y = year
%m = month
%d = day
%H = hour
%M = minute
%S = second
The example in the picture is pretty clear…
Another set of configurations is available in the Batch capture section:
Picture 6 - Paparazzi! Batch Capture preferences section
Here you can choose where to save the screenshots as well as the type of the images.
After configuring the batch capture session, it’s the gran finale time...
5 - Click the Play button, go take a coffee and relax while the computer does the job for you
.
Picture 7 - Paparazzi! Batch Capture in action
Hope you have found this post interesting and that it’s useful to help in documenting a little bit of this small but really powerful application.
Now I get all the screenshots in less than 1 minute!
References
Paparazzi! | http://www.leniel.net/2011/07/software-automate-site-screenshot.html | CC-MAIN-2015-48 | refinedweb | 968 | 59.09 |
MongoDB C Driver¶
A Cross Platform MongoDB Client Library for C
The MongoDB C Driver, also known as "libmongoc", is the official client library for C applications, and provides a base for MongoDB drivers in higher-level languages.
The library is compatible with all major platforms. It depends on libbson to create and parse BSON data.
Download¶
Latest release:
mongo-c-driver-1.8.2.tar.gz
Example: Count documents in a collection¶
#include <mongoc.h> #include <bcon.h> #include <stdio.h> static void print_query_count (mongoc_collection_t *collection, bson_t *query) { bson_error_t error; int64_t count; count = mongoc_collection_count ( collection, MONGOC_QUERY_NONE, query, 0, 0, NULL, &error); if (count < 0) { fprintf (stderr, "Count failed: %s\n", error.message); } else { printf ("%" PRId64 " documents counted.\n", count); } }
Documentation¶
How To Ask For Help¶
For help using the driver: MongoDB Users Mailing List.
To file a bug or feature request: MongoDB Jira Issue Tracker. | http://mongoc.org/?jmp=docs | CC-MAIN-2017-51 | refinedweb | 146 | 50.84 |
Created on 2014-03-12 10:29 by ethan.furman, last changed 2015-05-17 22:41 by terry.reedy. This issue is now closed.
`bytes` is an immutable sequence of integers. Passing a single integer to `bytes()`, as in:
--> bytes(7)
b'\x00\x00\x00\x00\x00\x00\x00'
results in a bytes object containing that many zeroes.
I propose that this behavior be deprecated for eventual removal, and a class method be created to take its place.
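For concreteness, the two easily-confused spellings side by side (this is standard CPython 3 behavior as it stands today, not anything the proposal introduces):

```python
# An int argument means "allocate this many zero bytes":
assert bytes(7) == b'\x00' * 7

# An iterable of ints means "bytes with these values":
assert bytes([7]) == b'\x07'

# The first form is what you get when you meant the second.
print(bytes(7))    # b'\x00\x00\x00\x00\x00\x00\x00'
print(bytes([7]))  # b'\x07'
```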
Class method is not needed. This is just b'\0' * 7.
I don't have a strong opinion on this, but I think you are going to have to articulate a good use/usability case for the deprecation. I'm sure this is used in the wild, and we don't just gratuitously break things :)
I would think the argument for deprecation is that people often type bytes(7) or bytes(some_small_int_value) expecting to create a length-one bytes object using that value (a mistake that also happens if you iterate a bytes object and forget it's an iterable of ints, not an iterable of length-1 bytes objects). It's really easy to forget to make it bytes([7]) or bytes((7,)) or what have you. If you make the same mistake with list, tuple, etc., you get an error, because they only accept iterables. But bytes silently behaves in a way that is inconsistent with the other sequence types.
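Both halves of that argument — the iteration surprise and the constructor inconsistency — can be checked in a few lines (a sketch; note that `str(7)` simply produces `'7'` rather than raising, so the inconsistency is sharpest against `list` and `tuple`):

```python
# Iterating a bytes object yields ints, not length-1 bytes objects:
data = b'\x01\x02'
assert list(data) == [1, 2]
assert isinstance(data[0], int) and not isinstance(data[0], bytes)

# list and tuple reject a bare integer outright...
for ctor in (list, tuple):
    try:
        ctor(7)
    except TypeError as exc:
        print(f"{ctor.__name__}(7) -> TypeError: {exc}")

# ...while bytes silently reinterprets it as a length:
assert bytes(7) == b'\x00' * 7
```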
Given that b'\0' * 7 is usually faster in any event (by avoiding lookup costs to find the bytes constructor) and more intuitive to people familiar with the Python sequence idiom, I could definitely see this as a redundancy that does nothing but confuse.
I agree with Serhiy that the method is not needed in any case.
I was about to post the same missing rationale: people misunderstand 'bytes(7)' and write it expecting to get bytes([7]) == b(\x07'), so it would be better to make bytes(7) raise instead of silently accepting a buggy usage. I was thinking that one rationale for bytes(n) might be that it is faster than b'\0' * n. Since Josh claimed the contrary, I tried to test with timeit.repeat (both console and Idle) and got this error message
TypeError: source code string cannot contain null bytes
Both eval and compile emit this message. So it seems that one justification for bytes(n) is to avoid putting null bytes in source strings.
I think this issue should be closed. Deprecation ideas should really be posted of python-ideas and ultimately pydev for discussion and approval.
If Ethan wants to pursue the idea, he should research the design discussions for bytes() (probably on the py3k list) and whether Guido directly approved of bytes(n) or if someone else 'snuck' it in after the initial approval.
Terry: You forgot to use a raw string for your timeit.repeat check, which is why it blew up. It was evaluating the \0 when you defined the statement string itself, not the contents. If you use r'b"\0" * 7' it works just fine by deferring backslash escape processing until the string is actually eval-ed, rather than when you create the string.
For example, on my (admittedly underpowered) laptop (Win7 x64, Py 3.3.0 64-bit):
>>> min(timeit.repeat(r'b"\0" * 7'))
0.07514287752866267
>>> min(timeit.repeat(r'bytes(7)'))
0.7210309422814021
>>> min(timeit.repeat(r'b"\0" * 7000'))
0.8994351749659302
>>> min(timeit.repeat(r'bytes(7000)'))
2.06750710129117
For a short bytes, the difference is enormous (as I suspected, the lookup of bytes dominates the runtime). For much longer bytes, it's still winning by a lot, because the cost of having the short literal first, then multiplying it, is still trivial next to the lookup cost.
P.S. I made a mistake: str does accept an int argument (obviously), but it has completely different meaning.
I'm inclined to leave it open while I do the suggested research.
Thanks for the tips, Terry, and the numbers, Josh.
AFAIK, bytes(int) is a remnant from times when bytes was mutable. Then bytes was split to non-mutable bytes and mutable bytearray and this constructor was forgotten. I'm +0 for deprecation.
Python 2.7.3 (default, Sep 26 2012, 21:51:14)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
--> bytes(5)
'5'
--> bytearray(5)
bytearray(b'\x00\x00\x00\x00\x00')
----------------------------------------------------------------------
Creating a buffer of null bytes makes sense for bytearray, which is mutable; it does not make sense, and IMHO only causes confusion, to have bytes return an /immutable/ sequence of zero bytes.
Bringing over Barry's suggestion from the current python-ideas thread [1]:
@classmethod
def fill(cls, length, value=0):
# Creates a bytes of given length with given fill value
[1]
Why would we need bytes.fill(length, value)? Is b'\xVV' * length (or if value is a variable containing int, bytes((value,)) * length) unreasonable? Similarly, bytearray(b'\xVV) * length or bytearray((value,)) * length is both Pythonic and performant. Most sequences support multiplication so simple stuff like this can be done easily and consistently; why invent a new approach unique to bytes/bytearrays?
Also, to me 'fill' implies something is being filled, not that something is being created.
The fill() name makes more sense for the bytearray variant, it is just provided on bytes as well for consistency. As Serhiy notes above, the current behaviour is almost certainly just a holdover from the original "mutable bytes" design that didn't survive into the initial 3.0 release.
Under the name "from_len", this is now part of a larger proposal to improve the consistency of the binary APIs:
May we close this as superceded by pep467?
Superseded by PEP467. | http://bugs.python.org/issue20895 | CC-MAIN-2015-22 | refinedweb | 964 | 71.55 |
A long time ago, at SenX, we helped to kickstart The Things Network. We received our starter kit, and left it a few months in the dust, because no one had time to play with it. Friday, ending a big project, I really wanted to power everything and connect it to Warp 10™ in just ONE day.
But... It didn't work as expected! To save you lots of pain, I will explain here all the hardware update steps you need to go through to get everything working. Then I will explain the easy part, the Warp 10™ MQTT connection.
Day 1: The Gateway
First, open it and make sure the LoRa module is correctly inserted in its connector. The PCB must touch the connector in the blue circle below :
Then upgrade the firmware with the latest stable version. The update procedure is here. You need a FAT32 formatted micro SD card. Once done, reboot the system, and register your gateway. Johan Stokking also made a short video on youtube to help you. You should have 4 LEDs turned on, and you can check the gateway status on this local URL:. Once the gateway was working, I spent the remaining hours reading the doc and setting up the first The Things UNO.
Day 2: The Things UNO boards
I have five of these Microchip RN2483 boards:
The RN2483 is a LoRaWAN™ Class A protocol stack. So it has its own firmware. And its own bugs... I spent one more day stuck with joining problems. With exactly the same example program, two boards were sending packets, but not a third one. Both were on my desk, 5m away from the gateway, it could not be a coverage problem. After following this Johan Stokking tutorial on youtube, my Arduino logs looked like:
Sending: mac join otaa Response is not OK: no_free_ch Send join command failed Sending: mac join otaa Join not accepted: denied Check your coverage, keys and backend status. Sending: mac join otaa Join not accepted: denied Check your coverage, keys and backend status.
You need to update the RN2483 on your The Things UNO. To do so, load the PassThrough example from the TheThingNetwork arduino examples. It enables a direct communication between Microchip device and your computer. Then, download the latest RN2483 firmware from here. I took the 1.0.5 firmware and the LoRa Development Suite for Linux.
You don't need a root access to install the LoRa Development Suite for Linux: make the run file executable, then execute it as a normal user. Uncheck the Docker Server Image and Java Redistributable RE8. I assume you already have
openjdk-8-jre-headless installed on your system. This software also has a bug that prevents you from updating the firmware... You need to create three xml files on your system to avoid a Java exception. These xml files must point to an existing directory on your system, for example:
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE map SYSTEM ""> <map MAP_XML_VERSION="1.0"> <entry key="FilePath" value="/home/"/> </map>
Create missing directories:
mkdir -p ~/.java/.userPrefs/dfu mkdir -p ~/.java/.userPrefs/fed mkdir -p ~/.java/.userPrefs/toplevel
Then edit these three files to paste the valid xml described upper:
vim ~/.java/.userPrefs/dfu/prefs.xml vim ~/.java/.userPrefs/fed/prefs.xml vim ~/.java/.userPrefs/toplevel/prefs.xml
Unplug and replug your board, launch the Microchip software:
cd ~/Microchip/LoRaSuite/Applications/LoRaDevUtility java -jar LoRaDevUtility.jar
You can now go to the DFU tab, then press the select file button (the one which won't work if you do not put the xml files before). Select the
combined/RN2483_Parser.production.unified.hex file you extracted from the microchip zip file. Click update, and loop until all your The Things UNO boards are up-to-date!
Device to MQTT
Follow the Johan Stokking tutorial on youtube to get your device ID and send your first packets. My program is the example slightly modified to send a decreasing counter:
#include <TheThingsNetwork.h> // Set your AppEUI and AppKey const char *appEui = "xxxxxxxxxxxxxxxx"; const char *appKey = "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"; (); debugSerial.println("-- JOIN"); ttn.join(appEui, appKey); } unsigned char counter = 255; void loop() { debugSerial.println("-- LOOP"); // Prepare payload of 1 byte byte payload[1]; payload[0] = counter--; // Send it off ttn.sendBytes(payload, sizeof(payload)); delay(10000); }
You should have your data visible in your application, in TheThingsNetwork console. It is time to test MQTT. First, install an MQTT client on your computer:
sudo apt install mosquitto-clients
To read the MQTT stream from TheThingsNetwork, you need to use your application name as user, and your application access key as the password. You can find it in the console too:
You can now test with mosquitto. TheThingNetwork API is documented here. These commands subscribes to the senx-sensor1 uplink topic.
mosquitto_sub -h eu.thethings.network -P 'MYSECRETKEY' -u 'senx-sensors' -d -t 'senx-sensors/devices/senx-sensor1/up'
Every 10s, I receive a json message with a payload:
{ "app_id": "senx-sensors", "dev_id": "senx-sensor1", "hardware_serial": "0004A30B001C4258", "port": 1, "counter": 476, "payload_raw": "3A==", "metadata": { "time": "2019-02-18T16:38:06.456817974Z", "frequency": 868.5, "modulation": "LORA", "data_rate": "SF7BW125", "airtime": 46336000, "coding_rate": "4/5", "gateways": [ { "gtw_id": "lora-senx", "gtw_trusted": true, "timestamp": 3224012587, "time": "2019-02-18T16:38:06Z", "channel": 2, "rssi": -60, "snr": 7.25, "rf_chain": 1, "latitude": 48.442142, "longitude": -4.4144206, "altitude": 2, "location_source": "registry" } ] } }
payload_raw is the base64 encoded payload. Everything is working as expected after two days struggling with hardware... Time for the easy part.
MQTT to Warp 10™ with MQTT plugin
Warp 10™ has an MQTT plugin. We did it last year for a customer. It is time to use it again.
git clone "" cd warp10-plugin-mqtt ./gradlew shadowJar # replace destination by your Warp 10 lib directory cp build/libs/warp10-plugin-mqtt-*.jar ~/warp10/lib/
How does it work? As you know, MQTT is a publish-subscribe based messaging protocol. It works on top of the TCP/IP protocol. There is no real standard for the message content though.
The idea behind the Warp 10™ MQTT plugin is very close to the TCP/UDP/HTTP plugins we already provided with Warp 10™ 2.0. Each packet is crunched by a WarpScript macro, everything configured in a set of warpScripts!
First of all, edit your Warp 10™ configuration file to add the following lines:
warp10.plugin.mqtt = io.warp10.plugins.mqtt.MQTTWarp10Plugin // // mqtt options: the home directory of WarpScript MQTT handlers // mqtt.dir = ${standalone.home}/mqtt // // mqtt options: scan changes in the directory every 10000ms. // mqtt.period = 10000 // // usefull to have the STDOUT function // warpscript.extension.debug=io.warp10.script.ext.debug.DebugWarpScriptExtension
The scripts you will place in the mqtt subdirectory will be reloaded after 10s if their size has changed. You can now restart Warp 10™. In a terminal, watch the Warp 10™ logs, it will be useful to check STDOUT output:
cd myWarp10path/bin ./warp10-standalone.sh restart tail -F ../logs/warp10.log
Time for the easiest part: subscribe to topics, and attach a macro to process the messages. Go into your mqtt directory, and create a new test.mc2 file. Edit it with your favorite WarpScript IDE. Here is an example:
//subscribe to the topics, attach a warpscript macro // callback to each message. the macro reads // TheThingNetwork message to extract the first byte // of payload, the server timestamp, and the device id. 'Loading MQTT TTN Warpscript' STDOUT { 'host' 'eu.thethings.network' 'port' 1883 'user' 'senx-sensors' 'password' 'MYSECRETKEY' 'clientid' 'Warp10' 'topics' [ 'senx-sensors/devices/senx-sensor1/up' 'senx-sensors/devices/senx-sensor2/up' 'senx-sensors/devices/senx-sensor3/up' ] 'timeout' 20000 'parallelism' 1 'autoack' true 'macro' <% 'message' STORE $message MQTTPAYLOAD 'ascii' BYTES-> JSON-> 'TTNmessage' STORE $TTNmessage 'payload_raw' GET OPB64-> 0 GET 'countervalue' STORE $TTNmessage 'metadata' GET 'time' GET TOTIMESTAMP 'ts' STORE $TTNmessage 'dev_id' GET 'sensorID' STORE $message MQTTTOPIC ' ' + $sensorID + ' ' + $ts ISO8601 + ' ' + $countervalue TOSTRING + STDOUT // print to warp10.log %> }
Save it, and wait. Look at the log tail. After a few seconds, the first messages from the two connected boards on my desk appeared:
Loading MQTT TTN Warpscript senx-sensors/devices/senx-sensor1/up senx-sensor1 2019-02-18T21:08:55.118861Z 112 senx-sensors/devices/senx-sensor2/up senx-sensor2 2019-02-18T21:08:59.324649Z 62 senx-sensors/devices/senx-sensor2/up senx-sensor2 2019-02-18T21:09:11.343120Z 63 senx-sensors/devices/senx-sensor1/up senx-sensor1 2019-02-18T21:09:12.343001Z 113 senx-sensors/devices/senx-sensor2/up senx-sensor2 2019-02-18T21:09:23.461037Z 64 senx-sensors/devices/senx-sensor1/up senx-sensor1 2019-02-18T21:09:29.467510Z 114 senx-sensors/devices/senx-sensor2/up senx-sensor2 2019-02-18T21:09:35.577162Z 65 senx-sensors/devices/senx-sensor1/up senx-sensor1 2019-02-18T21:09:46.592010Z 115 senx-sensors/devices/senx-sensor2/up senx-sensor2 2019-02-18T21:09:47.696662Z 66 senx-sensors/devices/senx-sensor2/up senx-sensor2 2019-02-18T21:09:59.913020Z 67
In the Warpscript macro, I can push every datapoints in a Warp 10™ instance, or store them in RAM with the Warp 10™ SHM extension: it is really the easiest part of the job! If you want a custom hosting solution for your IoT data, contact us.
Day 3: Connecting BeerTender...
Two days to deal with the buggy lower layer firmware, 20 minutes to setup the Warp 10 MQTT plugin, it is time to go further. In our office, we have a Krups BeerTender. This is really not an IoT device, but there are two sensors inside. One for temperature, one for the weight of the beer barrel. For fun, it could be nice to be able to detect the barrel change and correlate SenX team beer consumption to... I don't know yet. Moreover, the gauge on our old BeerTender turned completely wrong, it displays full when the barrel is empty. Maybe it is a calibration problem, maybe it is hardware problem. Anyway, I love electronics challenges.
As expected, finding the schematic is impossible. It is not really an open source hardware. Intuitively, the temperature sensor is a low cost variable resistor (an NTC), and the weight sensor a strain gauge.
The beer temperature channel
This is the easiest one. As I expected, the NTC is continuously powered in a simple resistor divider. It is easy to read the voltage with an Arduino.
The barrel weight channel
This strain gauge sensor is rather complex, but keep in mind that a general microcontroller can only manage voltages or digital signals. So, there must be somewhere a rather complex input stage that will ultimately deliver a voltage to the microcontroller. Or maybe a digital signal, SPI or I2C.
Tear Down
Tearing down is easy. Two screws, a few connector to remove. Good surprise: It is a class II power supply, so no ground problem when I will connect an oscilloscope or my computer on the system.
- Good guess, the temperature sensor is a basic NTC. We see the resistor divider and a little filter capacitor. Reading this will be easy.
- Good surprise, the power supply is a good old 7805. No need to recreate +5V for the arduino board, just take it there.
- The strain gauge controller is an FS511. The datasheet is available. I don't know anything about this circuit. It is an 18 bit SPI ADC. With a multimeter and the FS511 pin configuration, it is easy to follow each SPI wire to the microcontroller.
Pick useful connections
It is time to sold wires on the interesting nets:
- Black: Ground
- Red: +5V
- Green: NTC voltage
- Brown: Slave Data In
- Yellow: Chip Select
- Blue: Slave Data Out
- Yellow with black marks: Clock
FS511 is a rather complex piece of silicon. The ADC could be configured to read the bridge sensor, but also several offsets. Spying Data Out and Clock may not be enough, we don't know exactly what is measured. Looking at the oscillosope, there are 10 measures per second, but sometimes another one with a different result, maybe an offset. Clock is either really slow (70µs period, or really fast (12µs period, 1µs true, 11µs false). It seems the microcontroller does some software SPI for the first 4 bits, then regular hardware for the last 3 bytes.
Decoding this will be hard too! 4 bit address then 24 bits data is not a common pattern.
Is the Arduino UNO fast enough?
1µs spikes of the clock are troublesome... To be sure, I made a little loop sketch on The Things arduino UNO:
#define output 0 void setup() { // put your setup code here, to run once: pinMode(output,OUTPUT); } void loop() { digitalWrite(output,LOW); digitalWrite(output,HIGH); }
Result raise a huge red alarm in my brain: period is 7µs... I need a 20 times faster microcontroller to decode the signal! In my mind, there are two possible solutions to get around:
- Develop another program with a faster DSPic33F board I have, put it between the BeerTender board and the UNO LoraWan board.
- Cut everything between the existing microcontroller and the FS511. It means reading the barrel weight information value myself, and perfectly understanding the FS511.
The first one is not feasible by everyone. I'm going to try the second one.
Before going further, it is important to get a glimpse at the used configurations of FS511. There is no magic behind, I used an oscilloscope to capture Clock and Din during the system boot. I do not have any logic analyzer with me.
Time to cut the wires on the board to rewire them on the Arduino. If the BeerTender software is well done, the weight measurement is independent from the temperature control loop.
Little mistake: there is no need to cut the Dout line. Let it as it is, the pullup is already on the board.
Read sensor values
Now it is time to try to read something from the strain gauge. I am going to do software SPI all along, so I can wire everything on The Things UNO, no matter the pin number. Just avoid D0 and D1, used by LoraWan stack.
PRO TIP: The Lora UNO board will be powered by the BeerTender +5V. So... use a special micro USB Cable that will not inject the +5V of your computer inside the board! Voltages will be slightly different, and this is a risk for your beloved computer.
I will do what BeerTender did: measure offset during one second, value during one second. As the convergence time are pretty slow, I will measure every 100ms, ignore the 5 first samples, do a mean with the 5 next ones, store the result.
LoRaWan is an open network. I have to keep bandwidth for everyone. I will send results every 20s, with a 10 byte payload:
struct rawResultType{ unsigned long offset; unsigned long value; unsigned short temperature; };
The payload will be a mean of the 10 previous samples. Here is the full code:
Click to see Arduino Code
#include <TheThingsNetwork.h> // Set your AppEUI and AppKey const char *appEui = "70B3D57ED0017ECA"; const char *appKey = "MY APP KEY"; #define loraSerial Serial1 #define debugSerial Serial // Replace REPLACE_ME with TTN_FP_EU868 or TTN_FP_US915 #define freqPlan TTN_FP_EU868 const int TemperatureAnalog = A0; //yellow with black dots #define SPIclock 7 //yellow #define SPIcs 6 //blue slave data out #define SPImiso 5 //brown slave data in #define SPImosi 4 #define SPIcycleClock digitalWrite(SPIclock, HIGH); digitalWrite(SPIclock, LOW) TheThingsNetwork ttn(loraSerial, debugSerial, freqPlan); void setup() { loraSerial.begin(57600); debugSerial.begin(9600); //setup the SPI bus: pinMode(SPIclock, OUTPUT); digitalWrite(SPIclock, LOW); pinMode(SPIcs, OUTPUT); digitalWrite(SPIcs, HIGH); pinMode(SPImiso, INPUT); pinMode(SPImosi, OUTPUT); digitalWrite(SPImosi, LOW); // Wait a maximum of 10s for Serial Monitor while (!debugSerial && millis() < 10000) continue; debugSerial.println("-- STATUS"); ttn.showStatus(); debugSerial.println("-- JOIN"); ttn.join(appEui, appKey); debugSerial.println("-- init the strain gauge ADC"); sendFS511(0,0xF4); sendFS511(1,0xDC); sendFS511(2,0x93); sendFS511(3,0x55); } void sendFS511(unsigned char address, unsigned char data) { digitalWrite(SPIcs, LOW); digitalWrite(SPImosi, LOW);// first bit is 0; SPIcycleClock; digitalWrite(SPImosi, (address & 2) != 0 ? HIGH : LOW); SPIcycleClock; digitalWrite(SPImosi, (address & 1) != 0 ? HIGH : LOW); SPIcycleClock; digitalWrite(SPImosi, LOW); //fourth bit is 0; SPIcycleClock; digitalWrite(SPImosi, (data & 0x80) != 0 ? HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPImosi, (data & 0x40) != 0 ? HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPImosi, (data & 0x20) != 0 ? HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPImosi, (data & 0x10) != 0 ? HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPImosi, (data & 0x8) != 0 ? 
HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPImosi, (data & 0x4) != 0 ? HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPImosi, (data & 0x2) != 0 ? HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPImosi, (data & 0x1) != 0 ? HIGH : LOW); //faster than a loop... SPIcycleClock; digitalWrite(SPIcs, HIGH); } unsigned long readFS511adc() { int i; unsigned long r = 0; digitalWrite(SPIcs, LOW); digitalWrite(SPImosi, HIGH); // first bit is 1; SPIcycleClock; digitalWrite(SPImosi, LOW); // second bit is 0; SPIcycleClock; digitalWrite(SPImosi, LOW); // third bit is 0; SPIcycleClock; digitalWrite(SPImosi, HIGH); // fourth bit is 1; SPIcycleClock; for (i = 0;i < 24;i++) { //ok, this time, let's do a loop. r |= (digitalRead(SPImiso) ? 1 : 0); SPIcycleClock; r <<= 1; } digitalWrite(SPIcs, HIGH); return r; } unsigned short temperatureSensorValue = 0; unsigned long strainGaugeValue = 0; unsigned long strainGaugeOffset = 0; unsigned long strainGaugeValueSum = 0; unsigned long strainGaugeOffsetSum = 0; unsigned long strainGaugeValueMean = 0; unsigned long strainGaugeOffsetMean = 0; int loopCounter = 0; int loopSendCounter = 0; // 100µs loop. // record offset, then value every 2seconds // send data every 20s. struct rawResultType{ unsigned long offset; unsigned long value; unsigned short temperature; }; //again, payload will be a mean of 20s of data. struct rawResultType payloadMean[10]; struct rawResultType payload; void loop() { loopCounter++; if (loopCounter >= 0 && loopCounter < 10) { //record the offset. convergence of the value is a slow process. //5 first results are ignored, the 5 next one are used to compute a mean. 
sendFS511(0,0xF4); strainGaugeOffset = readFS511adc(); //debugSerial.println("straingaugeoffset=" + String(strainGaugeOffset)); if (loopCounter > 4) { strainGaugeOffsetSum += strainGaugeOffset; } if (loopCounter == 9) { strainGaugeOffsetMean = strainGaugeOffsetSum / 5; debugSerial.println("straingaugeoffsetMEAN=" + String(strainGaugeOffsetMean)); } } if (loopCounter == 10) { //read the temperature temperatureSensorValue = analogRead(TemperatureAnalog); debugSerial.println("temperature=" + String(temperatureSensorValue)); } if (loopCounter >= 10 && loopCounter < 20) { //record the value. convergence of the value is a slow process. //5 first results are ignored, the 5 next one are used to compute a mean. sendFS511(0,0x88); strainGaugeValue = readFS511adc(); //debugSerial.println("straingaugeValue=" + String(strainGaugeValue)); if (loopCounter > 14) { strainGaugeValueSum += strainGaugeValue; } if (loopCounter == 19) { strainGaugeValueMean = strainGaugeValueSum / 5; debugSerial.println("straingaugeValueMEAN=" + String(strainGaugeValueMean)); } } if (loopCounter == 20) { loopCounter = 0; strainGaugeValueSum = 0; strainGaugeOffsetSum = 0; payloadMean[loopSendCounter].temperature = temperatureSensorValue; payloadMean[loopSendCounter].offset = strainGaugeOffsetMean; payloadMean[loopSendCounter].value = strainGaugeValueMean; loopSendCounter++; } if(loopSendCounter == 10) { loopSendCounter = 0; long vs=0; long os=0; long ts=0; //do a mean, again. for( int i = 0 ; i < 10 ; i++){ vs += payloadMean[i].value; os += payloadMean[i].offset; ts += payloadMean[i].temperature; } payload.value = vs / 10; payload.offset = os / 10; payload.temperature = ts / 10; debugSerial.println("Send temp="+String(payload.temperature)+" V="+String(payload.value)+" O="+String(payload.offset)); //send data ! ttn.sendBytes((byte*)&payload, sizeof(payload)); debugSerial.println("sending data :"+String(sizeof(payload))); } delay(100); }
A quick look at the oscilloscope: Everything seems OK, Clock and Data out are alive.
Decode the payload in WarpScript
Now, it is time to write the WarpScript to decode the payload in Warp 10™.
Here the raw payload decoded by Warp 10™: A1FC30006B47D201DD02. Temperature is 733, 0x2DD. The end of my payload is DD02. I have an endianness problem! Note for next Warp 10™ release: add functions to easily reverse bytes arrays. Here is the WarpScript™ that decodes the MQTT messages from our connected BeerTender:
//subscribe to the BeerTender topic, attach a warpscript macro callback to each message // the macro read TheThingNetwork message to extract the payload and process it. 'Loading MQTT TTN BeerTender Warpscript' STDOUT { 'host' 'eu.thethings.network' 'port' 1883 'user' 'senx-sensors' 'password' 'myappkey' 'clientid' 'Warp10' 'topics' [ 'senx-sensors/devices/senx-beertender/up' ] 'timeout' 20000 'parallelism' 1 'autoack' true 'macro' <% //in case of rx timeout, the macro is called with NULL on the stack to flush buffers if any. 'message' STORE <% $message ISNULL ! %> <% $message MQTTPAYLOAD 'ascii' BYTES-> JSON-> 'TTNmessage' STORE $TTNmessage 'payload_raw' GET B64-> ->HEX 'rawpayload' STORE //bff830001d48d201db02 string $TTNmessage 'metadata' GET 'time' GET TOTIMESTAMP 'ts' STORE $TTNmessage 'dev_id' GET 'sensorID' STORE //little helper macro to reverse bytes in the string representation <% 's' STORE '' // push an empty string on the stack $s SIZE 2 - 0 <% 2 - %> //iterate from string size - 2 to 0, step -2 <% 'i' STORE $s $i 2 SUBSTRING + //concatenate with the string on the stack %> FORSTEP %> 'reverseEndianness' STORE $rawpayload 0 8 SUBSTRING @reverseEndianness FROMHEX 'offset' STORE // first 8 characters to long $rawpayload 8 8 SUBSTRING @reverseEndianness FROMHEX 'value' STORE // next 8 characters to long $rawpayload 16 4 SUBSTRING @reverseEndianness FROMHEX 'temperature' STORE // last 4 characters to long $sensorID ' ' + $ts ISO8601 + ' ' + ' temp=' + $temperature TOSTRING + ' value=' + $value TOSTRING + ' offset=' + $offset TOSTRING + ' raw=' + $rawpayload + STDOUT // print to warp10.log %> IFT %> }
Uploading data in Warp 10™ is just a few more lines of WarpScript:
//my local write token "vQbBxsWwjtadYyr0jajvlK5QgdaeRxtbpuo2vsdbOCRXmspZ" 'wt' STORE NEWGTS 'beertender.rawtemperature' RENAME $ts NaN NaN NaN $temperature ADDVALUE $wt UPDATE NEWGTS 'beertender.rawvalue' RENAME $ts NaN NaN NaN $value ADDVALUE $wt UPDATE NEWGTS 'beertender.rawoffset' RENAME $ts NaN NaN NaN $offset ADDVALUE $wt UPDATE
Final Check calibration
On my screen, I have the Arduino output and the Warp 10 logs: perfect match.
12h15 : Lunch, time for a beer! I put an already chilled BeerTender barrel in our newly connected device, and wait a few minutes. Then I served one and another one for a. I wrote a quick WarpScript™ to get data, remove the offset, multiply temperature to have in on the same scale:
// @preview gts //read local data from BeerTender "C5UrCikHoK8axjnweNrA24v1aPSHEn97v0AkTFfnQPW9S1BvFf7FVJC3NHePk" 'rt' STORE [ $rt '~beertender.(rawvalue|rawoffset)' {} NOW MINLONG ] FETCH <% DROP 'g' STORE $g [ $g bucketizer.min 0 0 1 ] BUCKETIZE 0 GET VALUES 0 GET - %> LMAP // [ SWAP [] reducer.sum ] REDUCE // sum value and offset ? maybe ? [ $rt 'beertender.rawtemperature' {} NOW MINLONG ] FETCH 0 GET 200 *
Great, the result is here!
Further work: calibrate the temperature sensor and try to guess the temperature law to correct the strain gauge results.
Conclusion
In a few days, I discovered LoRa, MQTT, I reversed engineered the BeerTender weight sensor, and now I have raw data flowing into the Warp 10™ time series database. Advanced analytics are now possible!
I hope you also learned a lot while reading this article!
We encourage you to check out Warp 10™ and its multiple plugins, enabling you to do advanced IoT applications without having to deploy multiple tools.
If you like electronics, the IoT, code ... and beer, ping us, SenX is hiring!
Related posts:
Build a simple Raspberry 3.5 inch dashboard with Warp 10, controlling the framebuffer from WarpScript instead of using X.Org and heavy stuff!
Manipulation of raw binary payloads in WarpScript could be done easily with existing functions, no need to call external programs!
Electronics engineer, fond of computer science, embedded solution developer. | https://blog.senx.io/connecting-a-beertender-to-warp-10-using-mqtt-on-lorawan-with-thethingsnetwork/ | CC-MAIN-2020-05 | refinedweb | 3,821 | 58.18 |
Typosquatting 154
plashdoy writes: "Oh what a tangled Web we weave: ZDNN article on Typosquatting. Don't you hate it when stuff like this is more profitable than your honest efforts?".
slashdit (Score:2)
Re:Hotmail and Hotmale ain't the same thing (Score:1)
Re:you know... (Score:2)
Why on earth should I bother bookmarking slashdot, when it takes far more effort to use the bookmarks button than to just type 'slashdot.org'?
You only show that you probably have a pathetic 400-keys-per-minute typing speed.
--
Re:OrangoTango (Score:1)
Re:Typos' (Score:1)
Of course, it's college... and everyone had a good laugh...
The best quote from that quarter, from my professor: "I only visit porn sites to make sure that they are within the law." Yeah, right.
I eat dog. Free DVDs [opendvd.org]. Hooray!
Re:The Evil Potential for this (Score:1)
You mean like the scam Rahoule described in Comment 156 [slashdot.org] to this article?
Re:What typo site? (Score:1)
That's a typical example of bailing out at the first sign of danger if I ever saw one...
But still, can't blame them, they probably got all the income they wanted from the site the second it got posted on Slashdot.
Re:New TLDs... (Score:1)
"It is well that war is so terrible, lest we grow too fond of it."
Re:Making bucks off someone else's rep (Score:2)
Do you know exactly what the address of your local McDonalds is? I don't. But I know the sign on the front says 'McDonalds' and generally what its appearance is.
A McDonald's street address would correspond to a website's IP address. NOT its DNS name.
How many people make typos? (Score:1)
For every one who flames, there are probably several who never notice and several more who do but don't get worked up about it.
You can probably get a few hundred hits a day off of the ineptitude of others. Maybe they'll learn to use bookmarks someday (or just make fewer typos).
Re:Making bucks off someone else's rep (Score:2)
most disturbing typosquatting I've stumbled upon.. (Score:1)
I was attempting to search at google.com [google.com] and instead I accidentally typed in gogole.com [gogole.com]. This sends me to Bill Gate's personal website at microsoft.com, framed in an ad. Weird, eh?
Even stranger, I don't seem to remember an ad the first time I went there...
Don't forget whitehouse.com (Score:1)
Re:What typo site? (Score:1)
Re:Typos' (Score:2)
Re:you know... (Score:2)
Pope
Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
The url is a typo... that's a *little* funny??? (Score:1)
OT (Score:1)
Warmth?
-- Dr. Eldarion --
It doesn't bother me. (Score:1)
Now it's time for the obligatory go-at.cx link... (Score:1)
arg (Score:1)
Re:Making bucks off someone else's rep (Score:1)
If they are loading it in a frame, then Taco owes them a portion of the banner proceeds.
:-)
typosquatting... clever...? (Score:1)
Microsfot.com (Score:1)
Sometimes i like typosquatters... [microsfot.com]
Re:Phone companies did this long ago (Score:1) (Score:1)
Bill Gates personally overseeing the vaccination of third world countries to protect them from Linux!
use javascript to prevent that.. (Score:1)
Re:Making bucks off someone else's rep (Score:1)
However, if you were to set up mcdonalsd.com and fill the site with porn banners or something, McDonald's wouldn't have a leg to stand on. There would be no confusing your porn-banner website with McDonalds' real site.
Shaun
Re:slsahdot.org (Score:1)
(I knew someone was gonna catch that
Kills frame-squatting dead. (Score:5)
function changePage()
{
if(self.parent.frames.length != 0)
self.parent.location="/index.html";
}
Remember TCBY? (Score:1)
Re:What typo site? (Score:1)
anyway... i hope nobody is terribly offended by this.
DavesClassics and PayPal (Score:2)
I used to frequently visit an emulation site called Dave's Classics (). I had to be very careful typing in the address because if I typed:
DavesClassics is now known as VintageGaming.com [vintagegaming.com].
I remembered someone on Slashdot mentioning that PayPal had a problem with a site typosquatting on paypaI.com and grabbing people's credit card numbers. (The lowercase 'l' and uppercase 'I' are almost indistinguishable in some fonts...)
Re: (Score:1)
stealing content? wtf are you talking about. Since when is a frameset stealing content. It isn't going to a pretend slashdot server or anything like that.. it just is slashdot for lazy people like me, who still type teh instead of the in word processors...
Doesn't sound so evile (Score:1)
This doesn't sound so bad to me. If the site is particularly misleading so the person doesn't realize he has landed on the wrong site, that might be grounds to sue. Other than that?
I don't see anything wrong with it.
Dlugar
Early typosquatting (Score:1)
The very earliest case I now of is a guy who was jealous of his brother's success. So he set up his own town and put signs on the road pointing the other direction, leading to Reme.
What typo site? (Score:1)
What about parody? (Score:1)
Just had one today (Score:1)
Be sure to check out INVESTER.com (Score:1)
Interesting (Score:1)
Re:Typos' (Score:1)
The typosquatters included typo'd version of washingtonpost.com, yahoo.com, and microsoft.com. (The last one surprised me a bit. You'd think that microsoft would have accidentally acquired all of the typos around their name...)
Like I said, most of these were advertising sites, shilling various banner ads and the like. A couple were porn sites, but not very many compared to the total list.
Kierthos
Re:money. (Score:1)
--
Mews sites (Score:2)
See foxmews.com [foxmews.com], cbsmews.com [cbsmews.com], and nbcmews.com [nbcmews.com].
Re:slsahdot.org (Score:1)
Sometimes typosquatting is good (Score:1)
I for instance miss a search engine on Slashdot which would be able of searching through ***all*** submissions and comments ever posted (I don't even know whether Slashdot keeps all submissions and comments). It would also be cool if the search engine would have multiple options like threshold for comments etc.
Just an idea
the skunk
How to prevent typosquatting (Score:1)
New TLDs... (Score:3)
-Chris
"Hotnail" prompts for login. (Score:1)
location.replace() - Please! (Score:1)
These "frame breaking" scripts have been around for at least the last three years (that's when they became popular, anyway), but most of them cause problems for the poor web surfer. If you simply assign a new value to the location attibute it effectively disables the "back" button on the browser. This is because it creates a new element in the history array, so if you try to go back where you came from, it still remembers the last page you put in the frame and executes its javascript again - sending you forward to the "unframed" state.
Use the location.replace(URL) method instead; it replaces the current history entry, essectially forgetting that you were ever there. The back button still works, and everyone is happy. Unless, of course, they actually wanted to go back to slashd0t.0rg...
Re:slsahdot.org (Score:1)
while serving slsahdot.org
error while executing
deferred
HTML::Mason::Request::__ANON__('deferred at
Carp::confess('deferred') called
Re:Making bucks off someone else's rep (Score:1)
IHMO, here's the ranking from ok to sleezy:
Re:Reminds me... (Score:1)
--
You don't become a failure until you are content with being one.
This reminds me... (Score:2)
And that's why the phone police on the TV ads now tell you to dial 1-800-CALL-ATT.
Re:you know... (Score:1)
.
._ _ .__. ___ ___ ._ _. _.. _. .. .
Re:What typo site? (Score:2)
Meanwhile you have just generated additional revenue for them, as a horde of interested Slashdotters (like me) type in this URL to see what shows up.
Personally, while its kinda sleazy and I sure as hell would not be likely to engage in typosquatting, I do think its inventive. This attitude will survive until such time as someone start typosquatting one of my websites of course
:)
IMHO: Most interesting Slashdot typo site (Score:1)
Re:Kills frame-squatting dead. (Score:5)
<script language="JavaScript"><!--
// Break out of frames! I'm the tops!
if (window.location != window.parent.location) {
window.parent.location = window.location;
}
//--></script>
If you want a nav-bar, that's why your browser has that little toolbar at the top of the screen. Or you can implement a floating bar that sits in it's own trimmed-down browser window. Then you won't run afoul of frame-busting scripts.
-=Julian=-
So how would you fix it? (Score:3)
For example, if I owned frito.com and you owned fritto.com [m-w.com], a perfectly legitimate word (maybe a chef's site, for example), is that a violation?
How would you quantify this in a way NSI and others could enforce? It seems like any solution would require subjective review by a committee, and that means that it would be political, capricious, and subject to manipulation like the WTO.
Personally, I think the internet advertising market will change in coming years, and just serving up a banner won't make you the 5 cents a click that people claim to receive now. This will make running a "typosquatting" site less lucrative. I also see no difference between "typosquatting" and perfume knockoffs, rolex watch knockoffs, kit cars, and other sorts of ways of leeching off a major brand name. It's a healthy part of how capitalism works.
The only big problem I see is intentional deceit, such as the recent problem with bank of america [arbforum.com] where someone was trying to deceive people into sending in personal info. We have existing fraud laws to cover that.
So, unless someone is trying to trick you into thinking that they are really bankofamerica.com or slashdot.org, I don't have a problem with "typosquatting".
Reminds me... (Score:1) [doanload.com]
into the browser.
--
You don't become a failure until you are content with being one.
Re:you know... (Score:1)
Oh wait...that was me.
-A.P.
"One World, one Web, one Program" - Microsoft promotional ad
TO say the least (Score:2)
-------
CAIMLAS
Re:Typos' (Score:1)
Re:Phone companies did this long ago (Score:1)
I think all of this has gone too far, domain names should be first come first served. Not quick enough to get the domain you wanted, too bad, think of a new one or attempt to buy it off the person who did register it for whatever they want to charge you for it.
Re:money. (Score:1)
INCORRECT (Score:1)
My Dad ws one of the owners of TCBY, and I mean the chain, not just a couple of stores. So I've never heard this story and doubt it's validity, particularly considering that TCBY always did stand for "THE COUNTRY'S BEST YOGURT" and has nothing to do with the question of whether or not the product is actually a yogurt-derivative.
So there you have it.
Re:Who cares? (Score:2)
Oh, really? Banner ads are not that different than ads on television and in magazines: mass exposure of a name or brand, grouped around content that hopefully has a higher density of your target audience than average. Name recognition and image building is extremely important to become a successful product. There might be a few exceptions, but you didn't really think that Coca-Cola or Microsoft would be as big as they are without the ads, do you? Billions of dollars a year are put into advertisement, and trust me, most companies would not do that if it didn't boost sales.
Witness the frequent "AOL keyword" phrase in radio ads.
The "AOL keyword" namespace isn't in any way bigger than the DNS namespace. If it becomes popular as an alternative for domains, you get the same problems as you find now with domains. Except that it's all in the hands on one company. You won't get many domainsquatters, no, you have one: AOL, and it's got *everything* squattable. (It would be the same for any other company trying to make such "name spaces"). Oh, and you don't really think that Ford Motors will say, "we already have ford.com, we don't mind if someone else uses 'AOL keyword: Ford'", do you?
Domain names should not be typed in by hand very often. Use bookmarks and search engines.
Well, to put something in a bookmarks file, you first have to find the address somehow. Search engines are nice, but not an alternative. Could you imagine a radio ad for Xyzzy soap saying "Visit out web site, go to your favourite search engine, search for 'xyzzy soap', and find us in the huge list of returned matches". No self respecting marketing person is going to fall for that - nor will the public accept it. Besides, search enignes fall for the "typo trap" as well; and you don't even need different domain names for that. Also, Yahoo was mentioned as one of the companies with typo sites.... you really thing that using a search engine to go to yahoo is going to solve that typo problem?
I forsee a day when there are multiple orthogonal online namespaces akin to Yahoo, and URLs will be passed around as ""
Well, that's how it all started. But nowadays, everything needs to have its own domain name - and it isn't just companies. Just look at the postings with this story, how many people here are saying "I have
.(com|net|org)". Just like big companies, geeks want their own domain too. It's all vanity and the phobie to type punctuation characters.
My favorite bakery has a site at SantaCruz,CA/Buttery (by city)" which would translate to
Cute, but since Henry Ford mass produced cars (and before that, railroads), we no longer live in a society where people spend 364 days a year in their own village. The world, and especially the electronic world is global. Geographic domains don't work in general, and any attempt to do more than two-letter toplevel domains has been a failure. And even two-letter domains don't really work well. Or do you really think all the
.to and .cx domains are located in the Pacific? And then there's the obvious problem of people and companies relocating... Would you want to have your email address change when you move?
-- Abigail
Re:typosquatting... clever...? (Score:1)
Typosquatting rules! (Score:1)
Re:What typo site? (Score:2)
In my opinion, this is plagiarism. They are plagiarising Slashdot to raise money for themselves or a content provider. If someone was doing this to a commercial website I owned, I would be seeing lawyers and issuing "Cease and Desist" notices very quickly.
I think the best way of thwarting it tho is through Javascript. Something like:
if (top.location != self.location) top.location = self.location;
--
my all time fav (Score:2)
...instead of the news service, you get "free web based email" -- just enter your uname and password.
who would have guessed the web could make our lives so simple?
ICANN? (Score:3)
What's the problem? (Score:2)
the alternative would be that any given trademark etc. would include permutated characters and similar words.
Any name hard enough to make it possible to spell wrong by _anyone_ , i.e more than 4 characters, is just a bad name for the net.
Re:Making bucks off someone else's rep (Score:2)
Re:Kills frame-squatting dead. (Score:2)
OrangoTango (Score:2)
at the top of the screen. Or you can implement a floating bar...
I beleive there's a company called [orangatango.com]
OrangoTango that's working ona product to
do some of these things and more. Browser/Location independant
bookmarks, preferences, etc... available anywhere, from any
browser... there must be more to it if you simply
implement these by making floating js bars.
Making bucks off someone else's rep (Score:5)
I have a feeling that if I went out on the street, put up a green sign with silver arches, and called it MacDonalds and started selling chicken sandwiches, that the company that has sold Billions and Billions would have proper recourse to land on me with a ton of lawyers. But here in cyberspace, it's *just* a typo?
I don't think so. And even if it is, when folks like the 800 pound gorilla from Redmond get into the act, it won't stay that way long, DOJ lawsuits aside. And for once, I think that's as it should be. indeed, don't try that not-a-link unless you're 18.....
--
That Isaac Hayes, he's one baaaad mutha...(Hush yo mouth!)
I'm just talkin' 'bout Chef! (We can dig it!)
Re:not totaly realted but (Score:2)
Typos' (Score:2)
slsahdot.org (Score:2)
slsahdot.org [slsahdot.org]
No money in it, of course, I just got tired of getting a DNS lookup error everytime I misspelled it that particular way.
I hate that (Score:3)
What is this world coming to?
Rights in misspellings (Score:2)
As a business, "typosquatting" probably ranks with standing around with a "will work for food" sign, so it's not going to go beyond the joke level.
Re:What typo site? (Score:2)
Ian
Re:Making bucks off someone else's rep (Score:2)
A closer analogy would be if Burger King set up a store at 123 N. Main St., while McDonald's was at 123 S. Main. St.. People know where they are, they just got the address wrong. The misdialed phone number analogy is also good...
Re:Making bucks off someone else's rep (Score:2)
No, your analogy is imperfect...
a real analogy would be someone opening a store that looked exactly (or almost exactly) like McDonalds, and called it MacDonalds (notice the extra 'a'). THAT's a typo analogy.
Just for shits and giggles (Score:2) [widnows.com] [micorsoft.com] [gooogle.com] [exciite.com] [microsofy.com] [hotmaik.com]
Here's a funny one: [netwroksolutions.com]
And [networksloutions.com] - which redirects to, and provides a link to
You can probably come up with a LOT more than this list, just by entering misspelled domain names. Hell, it worked for me, and this is probably just a tiny sample of what's out there.
-- Give him Head? Be a Beacon?
Re:Typos' (Score:2)
The ass that got fired was warned multiple times about his browsing and chose to continue. He had a pretty sizeable collection on his harddrive, looked like the majority came from e-mail attachments he was exchanging with his buddies. I couldn't understand that one. Guess he was pretty compulsive about it.
Right after this clown was fired, a memo went out reminding everyone of the policy and that everything was logged and sniffed. Guess a bunch of people were warned and cleaned up their act since. I'm no prude, like to see some skin myself once in a while but it completely escapes me why someone would need to do this while at work.
Re:Kills frame-squatting dead. (Score:2)
Re:Typos' (Score:2)
Mozilla bug 29346 [mozilla.org] has been futured, so I'm afraid we might have to live with this problem for a while longer (the multiple popup ads, not the lack of porn).
--
Re:What typo site? (Score:3)
==
Re:Kills frame-squatting dead. (Score:3)
I actually have a page on my own web site (not for external browsing, [arbutus.cx]) that has a narrow (75px) left sidebar frame where I can navigate between my favorites. Easier for me that having to drag around my favorites whereever I go.
CNN likes to kill my happy little side frame, which is a reason why I usually go to it last. But I guess CNN would have more "frame linking" problems (especially linking directly to articles). I wouldn't be happy is Slashdot did this (the first link on my side navbar), but I guess I could live with it.
you know... (Score:2)
- A.P.
--
* CmdrTaco is an idiot.
If there was a model of the most vague story... (Score:2)
What's he referring to, sites like Slashgrits?
Re:Typos' (Score:2)
Re:Kills frame-squatting dead. (Score:2)
Another thing I really hate is the opposite of this -- reframing. The site on which it bothers me the most is SecurityFocus. Who in the world thought that reframing a BugTraq post that I link to along with 7 other frames, putting into a tiny little frame and making it unreadable, is convenient? I ended up using other BugTraq archives.
--
Re:Making bucks off someone else's rep (Score:2)
--
Making 1st amendment go bye-bye (Score:2)
Let's not get into the mess that would happen if I run doglovers.com and some other dog lover couldn't get that domain so they made their own legitimate dog site called dogloverz.com. Or maybe he's busy working on his site and has only some opening HTML and a few test ads. I'd rather not have brat netizens calling "typo" site and demaning pulled domains. Outlawing typo sites would be a great way for losers like GW Bush and Jack T. Chick to get rid of parody sites named after them. Go ahead and try to define "typo" site.
Typo sites should be kept alive and well and if you feel they're using your content without permission (framings) that doesn't mean all typo domains should be abolished it means you have a problem with one specific webmaster who is actively trying to fool people.
As far as linking to the "real" site, thats just as much bullshit as the rest. That could fool the user into thinking that slashbot.org has an association with slashdot.org. You're better off without them, eventually they should realize that hey this isn't the place I wanted to go.
What you should be doing is less whining and more hustling, inform the ad providers and the company that they're advertising that you saw their ad in an unfair fashion and will think twice before shopping there and prefer the honest admanship (this isn't a word or is it?) of their competitors.
Then again I don't see most ads, click my homepage to get a small but effective ad blocking hosts file.
Re:you know... (Score:2)
Pope
Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
Re:slsahdot.org (Score:2)
You're using WINDOWS. yuck...
--
Re:What typo site? (Score:2) | https://slashdot.org/story/00/09/23/1733257/typosquatting | CC-MAIN-2017-13 | refinedweb | 3,873 | 75 |
14 November 2012 17:00 [Source: ICIS news]
LONDON (ICIS)--Here is Wednesday’s end of day European oil and chemical market summary from ICIS.
CRUDE: Dec WTI: $85.82/bbl, up 44 cents/bbl. Dec BRENT: $109.30/bbl, up $1.04/bbl
Crude oil futures gained in a short space of time late on Wednesday due to unrest in the ?xml:namespace>
NAPHTHA: $925-930/tonne, down $3/tonne
Two cargo trades were done at $929-930/tonne. December swaps were assessed at $927-929/tonne.
BENZENE: $1,330-1,350/tonne, up $10/tonne on the buy side
November values edged down to $1,315-1,340/tonne in a subdued market but later moved back up as Brent saw gains of over $1/bbl. December bids came down as low as $1,305/tonne, but later closed at $1,325-1,340/tonne.
STYRENE: $1,440-1,480/tonne, down $10-25/tonne
November lost ground this afternoon alongside benzene, as the current month is well supplied and derivative demand remains soft. Bids moved as low as $1,430/tonne but the market later recovered to close at $1,440-1,480/tonne. December offers were at $1,480/tonne but not met with any bids, later edging back up to $1,490/tonne.
TOLUENE: $1,330-1,370/tonne, steady
November was steady in a quiet market. Nevertheless, balanced supply levels mitigated any downward turn on pricing.
MTBE: $1,114-1,117/tonne, down $9-12/tonne
Prices eased off with two afternoon trades. EuroBob gasoline traded at $951-963/tonne | http://www.icis.com/Articles/2012/11/14/9613822/evening-snapshot-europe-markets-summary.html | CC-MAIN-2014-35 | refinedweb | 266 | 67.76 |
We started listening to touch events in Android Lesson Five: An Introduction to Blending, and in that lesson, we listened to touch events and used them to change our OpenGL state.
To listen to touch events, you first need to subclass GLSurfaceView and create your own custom view. In that view, you create a default constructor that calls the superclass, create a new method to take in a specific renderer (LessonFiveRenderer in this case) instead of the generic interface, and override onTouchEvent(). We pass in a concrete renderer class, because we will be calling specific methods on that class in the onTouchEvent() method.
On Android, the OpenGL rendering is done in a separate thread, so we’ll also look at how we can safely dispatch these calls from the main UI thread that is listening to the touch events, over to the separate renderer thread.
public class LessonFiveGLSurfaceView extends GLSurfaceView { private LessonFiveRenderer mRenderer; public LessonFiveGLSurfaceView(Context context) { super(context); } @Override public boolean onTouchEvent(MotionEvent event) { if (event != null) { if (event.getAction() == MotionEvent.ACTION_DOWN) { if (mRenderer != null) { // Ensure we call switchMode() on the OpenGL thread. // queueEvent() is a method of GLSurfaceView that will do this for us. queueEvent(new Runnable() { @Override public void run() { mRenderer.switchMode(); } }); return true; } } } return super.onTouchEvent(event); } // Hides superclass method. public void setRenderer(LessonFiveRenderer renderer) { mRenderer = renderer; super.setRenderer(renderer); } }
And the implementation of switchMode() in LessonFiveRenderer:
public void switchMode() { mBlending = !mBlending; if (mBlending) { //); } else { // Cull back faces GLES20.glEnable(GLES20.GL_CULL_FACE); // Enable depth testing GLES20.glEnable(GLES20.GL_DEPTH_TEST); // Disable blending GLES20.glDisable(GLES20.GL_BLEND); } }
Let’s look a little bit more closely at LessonFiveGLSurfaceView::onTouchEvent(). Something important to remember is that touch events run on the UI thread, while GLSurfaceView creates the OpenGL ES context in a separate thread, which means that our renderer’s callbacks also run in a separate thread. This is an important point to remember, because we can’t call OpenGL from another thread and just expect things to work.
Thankfully, the guys that wrote GLSurfaceView also thought of this, and provided a queueEvent() method that you can use to call stuff on the OpenGL thread. So, when we want to turn blending on and off by tapping the screen, we make sure that we’re calling the OpenGL stuff on the right thread by using queueEvent() in the UI thread.
Further exercises
How would you listen to keyboard events, or other system events, and show an update in the OpenGL “Listening to Android Touch Events, and Acting on Them”
Hai,
I want to know the object coordinates of the cube.This I am using.But giving Null Exception.How can i rectify it???
public void handleActionDown(int eventX, int eventY)
{
if (eventX >= (x – bitmap[numFaces].getWidth()/2) && (eventX = (y – bitmap[numFaces].getHeight()/2) && (y <= (y + bitmap[numFaces].getHeight()/2 )))
{
Log.d("Mytag", "Coords: x=" + eventX + ",y=" + eventY);
setTouched(true);
} else
{
setTouched(false);
}
}
else
{
setTouched(false);
}
// TODO Auto-generated method stub
}
On surface view it is,
public boolean onTouchEvent(MotionEvent event) {
if (event.getAction() == MotionEvent.ACTION_DOWN)
{
mcube1.handleActionDown((int)event.getX(), (int)event.getY());
Log.d("Mytag", "Coords: x=" + event.getX() + ",y=" + event.getY());
}
if (event.getAction() == MotionEvent.ACTION_MOVE) {
if (mcube1.isTouched()) {
mcube1.setX((int)event.getX());
mcube1.setY((int)event.getY());
Log.d("works", "Coords: x=" + event.getX() + ",y=" + event.getY());
}
} if (event.getAction() == MotionEvent.ACTION_UP) {
if (mcube1.isTouched())
{
mcube1.setTouched(false);
Log.d("works", "Coords: x=" + event.getX() + ",y=" + event.getY());
}
}
return true;
How can I find out if the user clicked on a position, such as in the lower right corner of the screen. I know I can do in the game.
Alternatively, could you show me a tutorial on buttons.
Thank you for your response. | http://www.learnopengles.com/listening-to-android-touch-events-and-acting-on-them/ | CC-MAIN-2018-39 | refinedweb | 618 | 50.94 |
A deep dive into some of the parameters of the
read_csvfunction in pandas
Pandas is one of the most widely used libraries in the Data Science ecosystem. This versatile library gives us tools to read, explore and manipulate data in Python. The primary tool used for data import in pandas is
read_csv().This function accepts the file path of a comma-separated value, a.k.a, CSV file as input, and directly returns a panda’s dataframe. A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values.
The
pandas.read_csv()has about 50 optional calling parameters permitting very fine-tuned data import. This article will touch upon some of the lesser-known parameters and their usage in data analysis tasks.
pandas.read_csv() parameters
The syntax for importing a CSV file in pandas using default parameters is as follows:
import pandas as pd
df = pd.read_csv(filepath)
1. verbose
The verbose parameter, when set to
True prints additional information on reading a CSV file like time taken for:
- type conversion,
- memory cleanup, and
- tokenization.
import pandas as pd
df = pd.read_csv('fruits.csv',verbose=True)
2. Prefix
The header is a row in a CSV file containing information about the contents in every column. As the name suggests, it appears at the top of the file.
Sometimes a dataset doesn’t contain a header. To read such files, we have to set the
header parameter to none explicitly; else, the first row will be considered the header.
df = pd.read_csv('fruits.csv',header=none)
df
The resulting dataframe consists of column numbers in place of column names, starting from zero. Alternatively, we can use the
prefix parameter to generate a prefix to be added to the column numbers.
df = pd.read_csv('fruits.csv',header=None, prefix = 'Column')
df
Note that instead of
Column, you can specify any name of your choice.
3. mangle_dupe_cols
If a dataframe consists of duplicate column names — ‘X’,’ X’ etc
mangle_dupe_cols automatically changes the name to ‘X’, ‘X1’ and differentiate between the repeated columns.
df = pd.read_csv('file.csv',mangle_dupe_cols=True)
df
One of the
2015 column in the dataframe get renames as
2015.1.
4. chunksize
The
pandas.read_csv() function comes with a chunksize parameter that controls the size of the chunk. It is helpful in loading out of memory datasets in pandas. To enable chunking, we need to declare the size of the chunk in the beginning. This returns an object we can iterate over.
chunk_size=5000
batch_no=1
for chunk in pd.read_csv('yellow_tripdata_2016-02.csv',chunksize=chunk_size):
chunk.to_csv('chunk'+str(batch_no)+'.csv',index=False)
batch_no+=1
In the example above, we choose a chunk size of 5000, which means at a time, only 5000 rows of data will be imported. We obtain multiple chunks of 5000 rows of data each, and each chunk can easily be loaded as a pandas dataframe.
df1 = pd.read_csv('chunk1.csv')
df1.head()
You can read more about chunking in the article mentioned below:
Loading large datasets in Pandas: Effectively using Chunking and SQL for reading large datasets in pandas. 🐼towardsdatascience.com
5. compression
A lot of times, we receive compressed files. Well,
pandas.read_csv can handle these compressed files easily without the need to uncompress them. The compression parameter by default is set to
infer, which can automatically infer the kind of files i.e
gzip ,
zip ,
bz2 ,
xz from the file extension.
df = pd.read_csv('sample.zip') or the long form:df = pd.read_csv('sample.zip', compression='zip')
6. thousands
Whenever a column in the dataset contains a thousand separator,
pandas.read_csv() reads it as a string rather than an integer. For instance, consider the dataset below where the sales column contains a comma separator.
Now, if we were to read the above dataset into a pandas dataframe, the
Sales column would be considered as a string due to the comma.
df = pd.read_csv('sample.csv')
df.dtypes
To avoid this, we need to explicitly tell the
pandas.read_csv() function that comma is a thousand place indicator with the help of the
thousands parameter.
df = pd.read_csv('sample.csv',thousands=',')
df.dtypes
7. skip_blank_lines
If blank lines are present in a dataset, they are automatically skipped. If you want the blank lines to be interpreted as NaN, set the
skip_blank_lines option to False.
8. Reading multiple CSV files
This is not a parameter but just a helpful tip. To read multiple files using pandas, we generally need separate data frames. For example, in the example below, we call the
pd.read_csv() function twice to read two separate files into two distinct data frames.
df1 = pd.read_csv('dataset1.csv')
df2 = pd.read_csv('dataset2.csv')
One way of reading these multiple files together would be by using a loop. We’ll create a list of the file paths and then iterate through the list using a list comprehension, as follows:
filenames = ['dataset1.csv', 'dataset2,csv']
dataframes = [pd.read_csv(f) for f in filenames]
When many file names have a similar pattern, the glob module from the Python standard library comes in handy. We first need to import the
glob function from the built-in
glob module. We use the pattern
NIFTY*.csv to match any strings that start with the prefix
NIFTY and end with the suffix
.CSV. The ‘
*’(asterisk) is a wild card character. It represents any number of standard characters, including zero.
import glob
filenames = glob.glob('NIFTY*.csv')
filenames
--------------------------------------------------------------------
['NIFTY PHARMA.csv',
'NIFTY IT.csv',
'NIFTY BANK.csv',
'NIFTY_data_2020.csv',
'NIFTY FMCG.csv']
The code above makes it possible to select all CSV filenames beginning with NIFTY. Now, they all can be read at once using the list comprehension or a loop.
dataframes = [pd.read_csv(f) for f in filenames]
Conclusion
In this article, we looked at a few parameters of the pandas.read_csv() function. it is a beneficial function and comes with a lot of inbuilt parameters which we seldom use. One of the primary reasons for not doing so is because we rarely care to read the documentation. It is a great idea to explore the documentation in detail to unearth the vital information that it may contain.
Originally published here | https://parulpandey.com/2021/05/07/there-is-more-to-pandas-read_csv-than-meets-the-eye/?shared=email&msg=fail | CC-MAIN-2021-43 | refinedweb | 1,037 | 57.77 |
Solution for
Programming Exercise 12.3
THIS PAGE DISCUSSES ONE POSSIBLE SOLUTION to the following exercise from this on-line Java textbook.
Exercise 12.3: The fact that Java has a HashMap class means that no Java programmer has to write an implementation of hash tables from scratch -- unless, of course, you are a computer science student.
Write an implementation of hash tables from scratch. Define the following methods: get(key), put(key,value), remove(key), containsKey(key), and size(). Do not use any of Java's generic data structures. Assume that both keys and values are of type Object, just as for HashMaps. Every Object has a hash code, so at least you don't have to define your own hash functions. Also, you do not have to worry about increasing the size of the table when it becomes too full.
You should also write a short program to test your solution.
Discussion
A hash table is just an array of linked lists. Each linked list holds all the items in the table that share the same hash code. Initially, all the lists are empty (represented as null in the array). We need to be able to add and delete items in the lists. Linked lists were covered in Section 9.2, and methods for inserting and deleting items can be found there. In a hash table, the order of items in a particular list doesn't matter, so I simply insert each new item at the beginning of its list. This makes the insert operation fairly simple. Deletion, however, is more complicated, since we need to be able to delete an item no matter where it occurs in a list.
A hash table really contains pairs of items, where each pair consists of a key and an associated value. Since we are working with a generic hash table, both the key and the value are of type Object. Each node in the linked lists contains a key, a value, and a pointer to the next node in the list. The end of a list is marked, as usual, by a null pointer. The nodes are defined by a static nested class:static private class ListNode { Object key; Object value; ListNode next; }
The array of linked lists that stores all the data is of type ListNode[]. Each item in the array is either null to indicate an empty list, or it is a pointer to the first node in a linked list.
Given any key, we can find the linked list that should contain that key by looking at the hash code of the key. The code computed by calling key.hashCode() is of type int. We need a value that is in the range of legal indices for the array. As noted in Section 3, the value can be computed as Math.abs(key.hashCode()) % table.length, where table is the array.
The hash code is used in all the methods that deal with keys to decide which linked list to look at. Once a list has been selected, the operations on the list are pretty straightforward (given all the information in Section 9.2), so I will not discuss them further here. You can look at the solution, below.
Although it is not required by the exercise, I defined a resize() method that is used to increase the size of the table when the table becomes too full. I call this method in the put() method when the table becomes more than 3/4 full.
A major part of developing a class for general use is testing. It's important to design a testing procedure that will test all aspects of the class. For this problem, I wrote a testing program that would allow me to test each of the methods in the class. I also added a method dump() to the class that displays the entire hash table. This method does not really belong in the class, since users of a hash table shouldn't care how the data is stored. But I needed it to make sure that my resize() method is working properly and that I could delete items correctly from all positions in the lists (beginning, middle, and end).
The Solution
The HashTable Class: /* This file defines a HashTable class. Keys and values in the hash table are of type Object. Keys cannot be null. The default constructor creates a table that initially has 64 locations, but a different initial size can be specified as a parameter to the constructor. The table increases in size if it becomes more than 3/4 full. */ public class HashTable { static private class ListNode { // Keys that have the same hash code are placed together // in a linked list. This private nested class is used // internally to implement linked lists. A ListNode // holds a (key,value) pair. Object key; Object value; ListNode next; // Pointer to next node in the list; // A null marks the end of the list. } private ListNode[] table; // The hash table, represented as // an array of linked lists. private int count; // The number of (key,value) pairs in the // hash table. public HashTable() { // Create a hash table with an initial size of 64. table = new ListNode[64]; } public HashTable(int initialSize) { // Create a hash table with a specified initial size. // Precondition: initalSize > 0. table = new ListNode[initialSize]; } void dump() { // This method is NOT part of the usual interface for // a hash table. It is here only to be used for testing // purposes, and should be removed before the class is // released for general use. This lists the (key,value) // pairs in each location of the table. System.out.println(); for (int i = 0; i < table.length; i++) { // Print out the location number and the list of // key/value pairs in this location. System.out.print(i + ":"); ListNode list = table[i]; // For traversing linked list number i. while (list != null) { System.out.print(" (" + list.key + "," + list.value + ")"); list = list.next; } System.out.println(); } } // end dump() public void put(Object key, Object value) { // Associate the specified value with the specified key. // Precondition: The key is not null. 
int bucket = hash(key); // Which location should this key be in? ListNode list = table[bucket]; // For traversing the linked list // at the appropriate location. while (list != null) { // Search the nodes in the list, to see if the key already exists. if (list.key.equals(key)) break; list = list.next; } // At this point, either list is null, or list.key.equals(key). if (list != null) { // Since list is not null, we have found the key. // Just change the associated value. list.value = value; } else { // Since list == null, the key is not already in the list. // Add a new node at the head of the list to contain the // new key and its associated value. if (count >= 0.75*table.length) { // The table is becoming too full. Increase its size // before adding the new node. resize(); } ListNode newNode = new ListNode(); newNode.key = key; newNode.value = value; newNode.next = table[bucket]; table[bucket] = newNode; count++; // Count the newly added key. } } public Object get(Object key) { // Retrieve the value associated with the specified key // in the table, if there is any. If not, the value // null will be returned. int bucket = hash(key); // At what location should the key be? ListNode list = table[bucket]; // For traversing the list. while (list != null) { // Check if the specified key is in the node that // list points to. If so, return the associated value. if (list.key.equals(key)) return list.value; list = list.next; // Move on to next node in the list. } // If we get to this point, then we have looked at every // node in the list without finding the key. Return // the value null to indicate that the key is not in the table. return null; } public void remove(Object key) { // Remove the key and its associated value from the table, // if the key occurs in the table. If it does not occur, // then nothing is done. int bucket = hash(key); // At what location should the key be? 
if (table[bucket] == null) { // There are no keys in that location, so key does not // occur in the table. There is nothing to do, so return. return; } if (table[bucket].key.equals(key)) { // If the key is the first node on the list, then // table[bucket] must be changed to eliminate the // first node from the list. table[bucket] = table[bucket].next; count--; // Remove new number of items in the table. return; } // We have to remove a node from somewhere in the middle // of the list, or at the end. Use a pointer to traverse // the list, looking for a node that contains the // specified key, and remove it if it is found. ListNode prev = table[bucket]; // The node that precedes // curr in the list. Prev.next // is always equal to curr. ListNode curr = prev.next; // For traversing the list, // starting from the second node. while (curr != null && ! curr.key.equals(key)) { curr = curr.next; prev = curr; } // If we get to this point, then either curr is null, // or curr.key is equal to key. In the later case, // we have to remove curr from the list. Do this // by making prev.next point to the node after curr, // instead of to curr. If curr is null, it means that // the key was not found in the table, so there is nothing // to do. if (curr != null) { prev.next = curr.next; count--; // Record new number of items in the table. } } public boolean containsKey(Object key) { // Test whether the specified key has an associated value // in the table. int bucket = hash(key); // In what location should key be? ListNode list = table[bucket]; // For traversing the list. while (list != null) { // If we find the key in this node, return true. if (list.key.equals(key)) return true; list = list.next; } // If we get to this point, we know that the key does // not exist in the table. return false; } public int size() { // Return the number of key/value pairs in the table. return count; } private int hash(Object key) { // Compute a hash code for the key; key cannot be null. 
// The hash code depends on the size of the table as // well as on the value returned by key.hashCode(). return (Math.abs(key.hashCode())) % table.length; } private void resize() { // Double the size of the table, and redistribute the // key/value pairs to their proper locations in the // new table. ListNode[] newtable = new ListNode[table.length*2]; for (int i = 0; i < table.length; i++) { // Move all the nodes in linked list number i // into the new table. No new ListNodes are // created. The existing ListNode for each // key/value pair is moved to the newtable. // This is done by changing the "next" pointer // in the node and by making a pointer in the // new table point to the node. ListNode list = table[i]; // For traversing linked list number i. while (list != null) { // Move the node pointed to by list to the new table. ListNode next = list.next; // The is the next node in the list. // Remember it, before changing // the value of list! int hash = (Math.abs(list.key.hashCode())) % newtable.length; // hash is the hash code of list.key that is // appropriate for the new table size. The // next two lines add the node pointed to by list // onto the head of the linked list in the new table // at position number hash. list.next = newtable[hash]; newtable[hash] = list; list = next; // Move on to the next node in the OLD table. } } table = newtable; // Replace the table with the new table. } // end resize() } // end class HashTable A Program for Testing HashTable: /* A little program to test the HashTable class. Note that I start with a really small table so that I can easily test the resize() method. */ public class TestHashTable { public static void main(String[] args){ HashTable table = new HashTable(2); String key,value; while (true) { System.out.println("\nMenu:"); System.out.println(" 1. test put(key,value)"); System.out.println(" 2. test get(key)"); System.out.println(" 3. test containsKey(key)"); System.out.println(" 4. 
test remove(key)"); System.out.println(" 5. show complete contents of hash table."); System.out.println(" 6. EXIT"); System.out.print("Enter your command: "); switch ( TextIO.getlnInt()) { case 1: System.out.print("\n Key = "); key = TextIO.getln(); System.out.print(" Value = "); value = TextIO.getln(); table.put(key,value); break; case 2: System.out.print("\n Key = "); key = TextIO.getln(); System.out.println(" Value is " + table.get(key)); break; case 3: System.out.print("\n Key = "); key = TextIO.getln(); System.out.println(" containsKey(" + key + ") is " + table.containsKey(key)); break; case 4: System.out.print("\n Key = "); key = TextIO.getln(); table.remove(key); break; case 5: table.dump(); break; case 6: return; // End program by returning from main() default: System.out.println(" Illegal command."); break; } System.out.println("\nHash table size is " + table.size()); } } } // end class TestHashTable
[ Exercises | Chapter Index | Main Index ] | http://www.faqs.org/docs/javap/c12/ex-12-3-answer.html | crawl-002 | refinedweb | 2,159 | 76.42 |
Overview
You can create a digital asset of any type (Object, Geometry node, Dynamics node) by converting a subnetwork of that type into a digital asset.
See how to install and use assets for how users can access the new asset after it is created.
How to create a digital asset
Create a subnetwork containing the nodes that will define the behavior of the asset.
Make sure you use relative references in expressions so they will still be valid inside the asset. For example, use
ch('../copy1')instead of
ch('/obj/geo1/subnet1/copy1').
To quickly put a bunch of nodes inside a subnetwork, select them, right click and choose Actions ▸ Collapse into Subnet.
Right click the subnet node and choose Create digital asset.
(Houdini will warn you if you have references inside the subnet to nodes outside it, since they won’t work in the digital asset.)
A dialog appears that lets you name the asset and decide where to save it.
The Operator label is the human-readable name that shows up in Houdini’s user interface, for example
Three Light Rig.
The Operator name is a unique internal name to distinguish it from other nodes, for example
threelightrig.
You can change the label later, but you can’t change the name without recreating the asset. In a studio environment, you may want to ensure your node names don’t collide with Houdini’s node names, or vendor node names, by using namespaces.
The Save to library path sets the name of a library file to save the asset definition into. See asset libraries below for more information.
Click Accept.
Houdini opens the Type properties window for the new node type to let you edit its options. (To open the type properties window again later, you can right click an instance of the asset and choose Type properties.)
You can use this window to set up the new node type’s parameter interface.
You can write documentation on the asset’s Help tab to explain its purpose and controls to users, using a Wiki markup language.
Serving assets and data over HTTP
Most places you can specify a file in Houdini will also accept a URL. You can use a web server to serve shared
.hda asset libraries as well as shared project data (such as textures).
Specifying files inside an asset using opdef:
You can embed files inside the digital asset, using the Extra files tab in the Type properties window.
In parameters where you can specify a file path, you can use the following syntax to instead specify the content of a section inside an asset:
opdef:/Network_type/asset_name?section_name
You can use this to include all the files an asset needs (such as textures) in the asset itself.
For example:
opdef:/Object/my_asset?Texture.png
You can also use
opdef:.?section (to get a file from the current asset) and
opdef:..?section (to get a file from the parent of the current node). For example, if you have a
File node directly inside an asset, you can load a geometry file from the asset using:
opdef:..?test.bgeo
Note that relative
opdef: may not work in all places. For example, in a shader you can say
texture("opdef:/Shop/foo/texture.rat") but you can’t use
opdef:.. because the compiled VEX has no concept of a parent node.
Tip
You can interactively choose embedded files from inside assets using the Houdini file chooser. Click the
opdef: root on the left side of the chooser window.
Asset libraries
Digital assets are stored in digital asset library files, with a
.hda extension. (Older versions of Houdini used a
.otl extension.) Houdini loads any
.hda (or
.otl) files it finds in
HOUDINIPATH/otls.
The default library save location when you create a new asset is in your user account’s Houdini preferences directory (under
HOUDINIPREFS/hda), so the asset will only be available to you. In a studio environment, you might save the asset in a central, network-accessible directory (that has been added to the Houdini path) so other users can install it.
A library file can contain multiple assets (this means you can keep saving new assets into the library for your account). However, we recommend you store each asset in a separate library file. This makes it straightforward to share some assets and not others, and keep track of which assets are in which files.
Embedding assets in the current scene file
You can embed an asset in the current
.hipfile instead of in an external file. This may be useful for testing or sharing example files. When you create an asset, set the Save to library field to
Embedded.
You can enable an option to save the definitions of any assets used in a scene file with the scene file. This allows you to use the scene file even in environments where the asset libraries are not available, but can cause confusion about which definitions are being used. Choose Windows ▸ Operator type manager, click the Configuration tab, and turn on Save node definitions to HIP file.
When an asset is both embedded in the scene file and available from a library file, by default Houdini will load the asset from the library. To reverse this, choose Windows ▸ Operator type manager, click the Configuration tab, and turn on Prefer definitions saved with HIP file.
Although Houdini lets you embed an asset definition in the scene file, we recommend you store all assets in library files.
One issue is that Houdini only embeds definitions for assets that are used in the scene. If you unknowingly delete the last instance of an asset and save, the definition of the asset will be lost. | http://www.sidefx.com/docs/houdini/assets/create.html | CC-MAIN-2019-43 | refinedweb | 957 | 63.8 |
26 February 2008 10:28 [Source: ICIS news]
(adds details throughout)
SINGAPORE (ICIS news)--Lanxess has selected Singapore's Jurong Island as the site of its largest investment - a €400m ($592m) plant - to cater to strong demand growth in Asia, the world’s second largest butyl rubber maker said on Tuesday.
The project will boost the company’s total capacity to 380,000 tonnes/year in early 2011.
“By setting up this new location in ?xml:namespace>
Demand growth in the greater China region was expected at 6.3% annually while that of India was at 8.7% although from a smaller base, said Ron Commander, head of Lanxess’ butyl rubber unit which has more than €500m of sales a year.
He added that global demand was 900,000 tonnes/year and butyl rubber was in very short supply. Annual global demand growth was expected to stay at around 3% in the next 15 years, the company said.
Construction on the 100,000 tonne/year project in
More than 100 engineers and 1,500 workmen will be employed on the building sites and the project will initially create around 200 new jobs, Lanxess said.
The proposed plant will produce butyl and bromobutyl rubber, synthetic rubbers that are used in the production of tyres.
Shell Eastern Petroleum will supply raffinate 1 from a proposed butadiene extraction unit to Lanxess, which will take isobutene for its rubber production.
Lanxess was looking at increasing the isoprene content of its butyl rubber, allowing it to open up new applications such as tyre treads, Commander said.
Halobutyl rubber demand was also expected to increase with the rise in radial tyres output, especially in
The company was expected to complete an expansion at its
These two plants could be expanded further to react quickly to market needs, Commander | http://www.icis.com/Articles/2008/02/26/9103552/Lanxess-invests-400m-in-Singapore-unit.html | CC-MAIN-2013-48 | refinedweb | 302 | 57.5 |
C# Tutorial
Hello World in C#Hello World in C#
using System; namespace CosmicLearn { class HelloWorld { static void Main(string[] args) { Console.WriteLine("Hello World!"); Console.ReadLine(); } } }
C# - Pronounced as C Sharp was created by a team led by Anders Hejlsberg.
The first version of C# as released in the year 2000.
C# is a general purpose, object oriented programming language.
C# works well with the .NET Framework in Windows. It was introduced with the .NET Framework when it was launched.
C# programs can be developed via IDE like Microsoft Visual Studio, SharpDevelop or MonoDevelop.
However for beginners it is recommended to start programming via a text editor so that you can get a firm grasp of the concepts of the language.
We have started this training module with a hello world program as displayed above.
You can use the left hand navigation to jump directly to a C# module you are interested in.
Alternative you can use previous and next links in a given module to go step by step. Happy learning! | https://www.cosmiclearn.com/csharp/index.php | CC-MAIN-2022-40 | refinedweb | 173 | 66.84 |
Ruby.
Very useful check! But where did you hear that `method_added` wasn’t inherited? I’ve used it before and just double-checked again right now, and it inherits just fine. I also don’t see the need for doing an alias method chain. Just override method_added in TestCase and call super at the end.
class Test::Unit::TestCase
class << self
def known_test_methods
@known_test_methods ||= Array.new
end
def method_added(method)
if method.to_s.starts_with?(“test”)
if known_test_methods.include?(method)
raise “Duplicate test #{self}##{method}”
else
known_test_methods << method
end
end
super
end
end
end
January 14, 2009 at 5:43 am
This is great stuff; the sort of thing that testing frameworks should have built in.
January 14, 2009 at 6:31 am
Great check! I added a similar hook to my [fixjour]() project to make sure people don’t define redundant builder methods: []().
January 14, 2009 at 7:19 am
just switch to rspec/should/anything better then pure test and do yourself+everybody reading your code a favor + no need to worry about such wacky details
January 14, 2009 at 8:47 am
Josh,
You’re absolutely right about method_added inheriting in Ruby. We initially tried your method but something in our code base (Ruby 1.8.6 / Rails 2.0.2) seems to break/override this inheritance for test cases.
The alias method chain was added because we didn’t want to make assumptions about whether or not these classes might already have a method_added, potentially blowing them away.
January 14, 2009 at 5:49 pm
Grosser,
We would love to use alternate frameworks, but we’re working with a large legacy code base and it’s not worth the time right now to introduce a new testing framework.
January 14, 2009 at 5:53 pm
Great piece of engineering work!
January 14, 2009 at 6:46 pm
ruby -w?
rake?
$-w = true?
January 14, 2009 at 8:28 pm | http://pivotallabs.com/duplicate-test-name-detection/?tag=iphone | CC-MAIN-2014-42 | refinedweb | 322 | 73.78 |
vtan posted on 03-31-2011 8:28 PM
Hi John,
I'm using Infer.NET to solve a decoding problem in GF(2). I am trying to do a logical XOR of Variable<bool>[ ] m, since there's no inbuilt logical XOR. I've attached the code to the end of this message. It compiles fine but the inference
results are not symmetric in the arguments of the array m, when I think it should be since GF(2) + (or logical XOR) is commutative. Any help would be appreciated.
Thanks,
Vincent Tan
s1 = gf2sum4vectors(m1);
private Variable<bool> gf2sum4vectors(Variable<bool>[] m)
{
Variable<bool>[] partialSums = new Variable<bool>[m.Length];
for (int i = 0; i < m.Length; i++)
{
if (i == 0)
partialSums[i] = m[0];
else
partialSums[i] = gf2sum(partialSums[i - 1], m[i]);
}
return partialSums[m.Length - 1];
}
private Variable<bool> gf2sum(Variable<bool> a, Variable<bool> b)
{
return ((a & (!b)) | (b & (!a)));
}
vtan replied on 04-01-2011 10:46 AM
Hi John,
(a!=b) is a nice way of implementing XOR. Strangely though, in my application, doing (a & !b) | (b & !a) yields results that are more correct. I can't explain why.
Vincent
John Guiver replied on 04-01-2011 10:18 AM
Hi Vincent
Have you seen the following post:? This discusses how to do XOR for single variables - i.e. your gf2sum method. Try that, and if there are still problems, please repost.
John | https://social.microsoft.com/Forums/en-US/15d4fff6-b61a-43d3-bc5b-4c6bf38830c4/logical-xor-migrated-from-communityresearchmicrosoftcom?forum=infer.net | CC-MAIN-2020-34 | refinedweb | 240 | 69.28 |
/*
 * pprocess.h
 *
 * Operating System Process (running program executable) class.
 *
 * $Log: pprocess.h,v $
 * Revision 1.71  2005/11/30 12:47:38  csoutheren
 * Removed tabs, reformatted some code, and changed tags for Doxygen
 *
 * Revision 1.70  2005/11/25 03:43:47  csoutheren
 * Fixed function argument comments to be compatible with Doxygen
 *
 * Revision 1.69  2005/01/26 05:37:54  csoutheren
 * Added ability to remove config file support
 *
 * Revision 1.68  2004/06/30 12:17:04  rjongbloed
 * Rewrite of plug in system to use single global variable for all factories to avoid all sorts
 * of issues with startup orders and Windows DLL multiple instances.
 *
 * Revision 1.67  2004/05/27 04:46:42  csoutheren
 * Removed vestigal Macintosh code
 *
 * Revision 1.66  2004/05/21 00:28:39  csoutheren
 * Moved PProcessStartup creation to PProcess::Initialise
 * Added PreShutdown function and called it from ~PProcess to handle PProcessStartup removal
 *
 * Revision 1.65  2004/05/19 22:27:19  csoutheren
 * Added fix for gcc 2.95
 *
 * Revision 1.64  2004/05/18 21:49:25  csoutheren
 * Added ability to display trace output from program startup via environment
 * variable or by application creating a PProcessStartup descendant
 *
 * Revision 1.63  2004/05/18 06:01:06  csoutheren
 * Deferred plugin loading until after main has executed by using abstract factory classes
 *
 * Revision 1.62  2004/05/13 14:54:57  csoutheren
 * Implement PProcess startup and shutdown handling using abstract factory classes
 *
 * Revision 1.61  2003/11/25 08:28:13  rjongbloed
 * Removed ability to have platform without threads, win16 finally deprecated
 *
 * Revision 1.60  2003/09/17 05:41:59  csoutheren
 * Removed recursive includes
 *
 * Revision 1.59  2003/09/17 01:18:02  csoutheren
 * Removed recursive include file system and removed all references
 * to deprecated coooperative threading support
 *
 * Revision 1.58  2002/12/11 22:23:59  robertj
 * Added ability to set user identity temporarily and permanently.
 * Added get and set users group functions.
 *
 * Revision 1.57  2002/12/02 03:57:18  robertj
 * More RTEMS support patches, thank you Vladimir Nesic.
 *
 * Revision 1.56  2002/10/17 13:44:27  robertj
 * Port to RTEMS, thanks Vladimir Nesic.
 *
 * Revision 1.55  2002/10/17 07:17:42  robertj
 * Added ability to increase maximum file handles on a process.
 *
 * Revision 1.54  2002/10/10 04:43:43  robertj
 * VxWorks port, thanks Martijn Roest
 *
 * Revision 1.53  2002/09/16 01:08:59  robertj
 * Added #define so can select if #pragma interface/implementation is used on
 * platform basis (eg MacOS) rather than compiler, thanks Robert Monaghan.
 *
 * Revision 1.52  2002/07/30 02:55:48  craigs
 * Added program start time to PProcess
 * Added virtual to GetVersion etc
 *
 * Revision 1.51  2002/02/14 05:13:33  robertj
 * Fixed possible deadlock if a timer is deleted (however indirectly) in the
 * OnTimeout of another timer.
 *
 * Revision 1.50  2001/11/23 06:59:29  robertj
 * Added PProcess::SetUserName() function for effective user changes.
 *
 * Revision 1.49  2001/08/11 07:57:30  rogerh
 * Add Mac OS Carbon changes from John Woods <jfw@jfwhome.funhouse.com>
 *
 * Revision 1.48  2001/05/22 12:49:32  robertj
 * Did some seriously wierd rewrite of platform headers to eliminate the
 * stupid GNU compiler warning about braces not matching.
 *
 * Revision 1.47  2001/03/09 05:50:48  robertj
 * Added ability to set default PConfig file or path to find it.
 *
 * Revision 1.46  2001/01/02 07:47:44  robertj
 * Fixed very narrow race condition in timers (destroyed while in OnTimeout()).
 *
 * Revision 1.45  2000/08/30 03:16:59  robertj
 * Improved multithreaded reliability of the timers under stress.
 *
 * Revision 1.44  2000/04/03 18:42:40  robertj
 * Added function to determine if PProcess instance is initialised.
 *
 * Revision 1.43  2000/02/29 12:26:14  robertj
 * Added named threads to tracing, thanks to Dave Harvey
 *
 * Revision 1.42  1999/03/09 02:59:50  robertj
 * Changed comments to doc++ compatible documentation.
 *
 * Revision 1.41  1999/02/16 08:11:09  robertj
 * MSVC 6.0 compatibility changes.
 *
 * Revision 1.40  1999/01/30 14:28:10  robertj
 * Added GetOSConfigDir() function.
 *
 * Revision 1.39  1999/01/11 11:27:11  robertj
 * Added function to get the hardware process is running on.
 *
 * Revision 1.38  1998/11/30 02:51:00  robertj
 * New directory structure
 *
 * Revision 1.37  1998/10/18 14:28:44  robertj
 * Renamed argv/argc to eliminate accidental usage.
 *
 * Revision 1.36  1998/10/13 14:06:13  robertj
 * Complete rewrite of memory leak detection code.
 *
 * Revision 1.35  1998/09/23 06:21:10  robertj
 * Added open source copyright license.
 *
 * Revision 1.34  1998/09/14 12:30:38  robertj
 * Fixed memory leak dump under windows to not include static globals.
 *
 * Revision 1.33  1998/04/07 13:33:53  robertj
 * Changed startup code to support PApplication class.
 *
 * Revision 1.32  1998/04/01 01:56:21  robertj
 * Fixed standard console mode app main() function generation.
 *
 * Revision 1.31  1998/03/29 06:16:44  robertj
 * Rearranged initialisation sequence so PProcess descendent constructors can do "things".
 *
 * Revision 1.30  1998/03/20 03:16:10  robertj
 * Added special classes for specific sepahores, PMutex and PSyncPoint.
 *
 * Revision 1.29  1997/07/08 13:13:46  robertj
 * DLL support.
 *
 * Revision 1.28  1997/04/27 05:50:13  robertj
 * DLL support.
 *
 * Revision 1.27  1997/02/05 11:51:56  robertj
 * Changed current process function to return reference and validate objects descendancy.
 *
 * Revision 1.26  1996/06/28 13:17:08  robertj
 * Fixed incorrect declaration of internal timer list.
 *
 * Revision 1.25  1996/06/13 13:30:49  robertj
 * Rewrite of auto-delete threads, fixes Windows95 total crash.
 *
 * Revision 1.24  1996/05/23 09:58:47  robertj
 * Changed process.h to pprocess.h to avoid name conflict.
 * Added mutex to timer list.
 *
 * Revision 1.23  1996/05/18 09:18:30  robertj
 * Added mutex to timer list.
 *
 * Revision 1.22  1996/04/29 12:18:48  robertj
 * Added function to return process ID.
 *
 * Revision 1.21  1996/03/12 11:30:21  robertj
 * Moved destructor to platform dependent code.
 *
 * Revision 1.20  1996/02/25 11:15:26  robertj
 * Added platform dependent Construct function to PProcess.
 *
 * Revision 1.19  1996/02/03 11:54:09  robertj
 * Added operating system identification functions.
 *
 * Revision 1.18  1996/01/02 11:57:17  robertj
 * Added thread for timers.
 *
 * Revision 1.17  1995/12/23 03:46:02  robertj
 * Changed version numbers.
 *
 * Revision 1.16  1995/12/10 11:33:36  robertj
 * Added extra user information to processes and applications.
 * Changes to main() startup mechanism to support Mac.
 *
 * Revision 1.15  1995/06/17 11:13:05  robertj
 * Documentation update.
 *
 * Revision 1.14  1995/06/17 00:43:10  robertj
 * Made PreInitialise virtual for NT service support
 *
 * Revision 1.13  1995/03/14 12:42:14  robertj
 * Updated documentation to use HTML codes.
 *
 * Revision 1.12  1995/03/12 04:43:26  robertj
 * Remvoed redundent destructor.
 *
 * Revision 1.11  1995/01/11 09:45:09  robertj
 * Documentation and normalisation.
 *
 * Revision 1.10  1994/08/23 11:32:52  robertj
 * Oops
 *
 * Revision 1.9  1994/08/22 00:46:48  robertj
 * Added pragma fro GNU C++ compiler.
 *
 * Revision 1.8  1994/08/21 23:43:02  robertj
 * Added function to get the user name of the owner of a process.
 *
 * Revision 1.7  1994/08/04 11:51:04  robertj
 * Moved OperatingSystemYield() to protected for Unix.
 *
 * Revision 1.6  1994/08/01 03:42:23  robertj
 * Destructor needed for heap debugging.
 *
 * Revision 1.5  1994/07/27 05:58:07  robertj
 * Synchronisation.
 *
 * Revision 1.4  1994/07/21 12:33:49  robertj
 * Moved cooperative threads to common.
 *
 * Revision 1.3  1994/06/25 11:55:15  robertj
 * Unix version synchronisation.
 */

#ifndef _PPROCESS
#define _PPROCESS

#ifdef P_USE_PRAGMA
#pragma interface
#endif

#include <ptlib/mutex.h>
#include <ptlib/syncpoint.h>
#include <ptlib/pfactory.h>

/**Create a process.
   This macro is used to create the components necessary for a user PWLib
   process. For a PWLib program to work correctly on all platforms the
   #main()# function must be defined in the same module as the
   instance of the application.
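
   For example, a minimal console application might look like the following
   (an illustrative sketch only; the class name and body are hypothetical,
   not part of the library):

      class HelloProcess : public PProcess
      {
        PCLASSINFO(HelloProcess, PProcess);
        public:
          virtual void Main()
            { cout << "Hello, world!" << endl; }
      };

      PCREATE_PROCESS(HelloProcess)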
 */
#ifdef P_VXWORKS
#define PCREATE_PROCESS(cls) \
  PProcess::PreInitialise(0, NULL, NULL); \
  cls instance; \
  instance._main();
#elif defined(P_RTEMS)
#define PCREATE_PROCESS(cls) \
extern "C" {\
  void* POSIX_Init( void* argument) \
    { PProcess::PreInitialise(0, 0, 0); \
      static cls instance; \
      exit( instance._main() ); \
    } \
}
#else
#define PCREATE_PROCESS(cls) \
  int main(int argc, char ** argv, char ** envp) \
    { PProcess::PreInitialise(argc, argv, envp); \
      static cls instance; \
      return instance._main(); \
    }
#endif // P_VXWORKS

/*$MACRO PDECLARE_PROCESS(cls,ancestor,manuf,name,major,minor,status,build)
   This macro is used to declare the components necessary for a user PWLib
   process. This will declare the PProcess descendent class, eg PApplication,
   and create an instance of the class. See the #PCREATE_PROCESS# macro for
   more details.
 */
#define PDECLARE_PROCESS(cls,ancestor,manuf,name,major,minor,status,build) \
  class cls : public ancestor { \
    PCLASSINFO(cls, ancestor); \
    public: \
      cls() : ancestor(manuf, name, major, minor, status, build) { } \
    private: \
      virtual void Main(); \
  };


PLIST(PInternalTimerList, PTimer);

class PTimerList : PInternalTimerList // Want this to be private
/* This class defines a list of #PTimer# objects. It is primarily used
   internally by the library and the user should never create an instance of
   it. The #PProcess# instance for the application maintains an instance of
   all of the timers created so that it may decrement them at regular
   intervals.
 */
{
  PCLASSINFO(PTimerList, PInternalTimerList);

  public:
    PTimerList();
    // Create a new timer list

    PTimeInterval Process();
    /* Decrement all the created timers and dispatch to their callback
       functions if they have expired. The #PTimer::Tick()# function value is
       used to determine the time elapsed since the last call to Process().

       The return value is the number of milliseconds until the next timer
       needs to be despatched.
The function need not be called again for this amount of time, though it can (and usually is). @return maximum time interval before function should be called again. */ private: PMutex listMutex, processingMutex, inTimeoutMutex; // Mutual exclusion for multi tasking PTimeInterval lastSample; // The last system timer tick value that was used to process timers. PTimer * currentTimer; // The timer which is currently being handled friend class PTimer; }; /////////////////////////////////////////////////////////////////////////////// // PProcess /**This class represents an operating system process. This is a running "programme" in the context of the operating system. Note that there can only be one instance of a PProcess class in a given programme. The instance of a PProcess or its GUI descendent #PApplication# is usually a static variable created by the application writer. This is the initial "anchor" point for all data structures in an application. As the application writer never needs to access the standard system #main()# function, it is in the library, the programmes execution begins with the virtual function #PThread::Main()# on a process. */ 00366 class PProcess : public PThread { PCLASSINFO(PProcess, PThread); public: /**@name Construction */ //@{ /// Release status for the program. 00374 enum CodeStatus { /// Code is still very much under construction. 00376 AlphaCode, /// Code is largely complete and is under test. 00378 BetaCode, /// Code has all known bugs removed and is shipping. 00380 ReleaseCode, NumCodeStatuses }; /** Create a new process instance. 
*/ PProcess( const char * manuf = "", ///< Name of manufacturer const char * name = "", ///< Name of product WORD majorVersion = 1, ///< Major version number of the product WORD minorVersion = 0, ///< Minor version number of the product CodeStatus status = ReleaseCode, ///< Development status of the product WORD buildNumber = 1 ///< Build number of the product ); //@} /**@name Overrides from class PObject */ //@{ /**Compare two process instances. This should almost never be called as a programme only has access to a single process, its own. @return #EqualTo# if the two process object have the same name. */ Comparison Compare( const PObject & obj ///< Other process to compare against. ) const; //@} /**@name Overrides from class PThread */ //@{ /**Terminate the process. Usually only used in abnormal abort situation. */ virtual void Terminate(); /** Process information functions */ //@{ /**Get the current processes object instance. The {\it current process} is the one the application is running in. @return pointer to current process instance. */ static PProcess & Current(); /**Determine if the current processes object instance has been initialised. If this returns TRUE it is safe to use the PProcess::Current() function. @return TRUE if process class has been initialised. */ static BOOL IsInitialised(); /**Set the termination value for the process. The termination value is an operating system dependent integer which indicates the processes termiantion value. It can be considered a "return value" for an entire programme. */ void SetTerminationValue( int value ///< Value to return a process termination status. ); /**Get the termination value for the process. The termination value is an operating system dependent integer which indicates the processes termiantion value. It can be considered a "return value" for an entire programme. @return integer termination value. */ int GetTerminationValue() const; /**Get the programme arguments. 
Programme arguments are a set of strings provided to the programme in a platform dependent manner. @return argument handling class instance. */ PArgList & GetArguments(); /**Get the name of the manufacturer of the software. This is used in the default "About" dialog box and for determining the location of the configuration information as used by the #PConfig# class. The default for this information is the empty string. @return string for the manufacturer name eg "Equivalence". */ virtual const PString & GetManufacturer() const; /**Get the name of the process. This is used in the default "About" dialog box and for determining the location of the configuration information as used by the #PConfig# class. The default is the title part of the executable image file. @return string for the process name eg "MyApp". */ virtual const PString & GetName() const; /*". @return string for the version eg "1.0b3". */ virtual PString GetVersion( BOOL full = TRUE ///< TRUE for full version, FALSE for short version. ) const; /**Get the processes executable image file path. @return file path for program. */ const PFilePath & GetFile() const; /**Get the platform dependent process identifier for the process. This is an arbitrary (and unique) integer attached to a process by the operating system. @return Process ID for process. */ DWORD GetProcessID() const; /**Get the effective user name of the owner of the process, eg "root" etc. This is a platform dependent string only provided by platforms that are multi-user. Note that some value may be returned as a "simulated" user. For example, in MS-DOS an environment variable @return user name of processes owner. */ PString GetUserName() const; /**Set the effective owner of the process. This is a platform dependent string only provided by platforms that are multi-user. For unix systems if the username may consist exclusively of digits and there is no actual username consisting of that string then the numeric uid value is used. 
For example "0" is the superuser. For the rare occassions where the users name is the same as their uid, if the username field starts with a '#' then the numeric form is forced. If an empty string is provided then original user that executed the process in the first place (the real user) is set as the effective user. The permanent flag indicates that the user will not be able to simple change back to the original user as indicated above, ie for unix systems setuid() is used instead of seteuid(). This is not necessarily meaningful for all platforms. @return TRUE if processes owner changed. The most common reason for failure is that the process does not have the privilege to change the effective user. */ BOOL SetUserName( const PString & username, ///< New user name or uid BOOL permanent = FALSE ///< Flag for if effective or real user ); /**Get the effective group name of the owner of the process, eg "root" etc. This is a platform dependent string only provided by platforms that are multi-user. Note that some value may be returned as a "simulated" user. For example, in MS-DOS an environment variable @return group name of processes owner. */ PString GetGroupName() const; /**Set the effective group of the process. This is a platform dependent string only provided by platforms that are multi-user. For unix systems if the groupname may consist exclusively of digits and there is no actual groupname consisting of that string then the numeric uid value is used. For example "0" is the superuser. For the rare occassions where the groups name is the same as their uid, if the groupname field starts with a '#' then the numeric form is forced. If an empty string is provided then original group that executed the process in the first place (the real group) is set as the effective group. The permanent flag indicates that the group will not be able to simply change back to the original group as indicated above, ie for unix systems setgid() is used instead of setegid(). 
This is not necessarily meaningful for all platforms. @return TRUE if processes group changed. The most common reason for failure is that the process does not have the privilege to change the effective group. */ BOOL SetGroupName( const PString & groupname, ///< New group name or gid BOOL permanent = FALSE ///< Flag for if effective or real group ); /**Get the maximum file handle value for the process. For some platforms this is meaningless. @return user name of processes owner. */ int GetMaxHandles() const; /**Set the maximum number of file handles for the process. For unix systems the user must be run with the approriate privileges before this function can set the value above the system limit. For some platforms this is meaningless. @return TRUE if successfully set the maximum file hadles. */ BOOL SetMaxHandles( int newLimit ///< New limit on file handles ); #ifdef P_CONFIG_FILE /**Get the default file to use in PConfig instances. */ virtual PString GetConfigurationFile(); #endif /**Set the default file or set of directories to search for use in PConfig. To find the .ini file for use in the default PConfig() instance, this explicit filename is used, or if it is a set of directories separated by either ':' or ';' characters, then the application base name postfixed with ".ini" is searched for through those directories. The search is actually done when the GetConfigurationFile() is called, this function only sets the internal variable. Note for Windows, a path beginning with "HKEY_LOCAL_MACHINE\\" or "HKEY_CURRENT_USER\\" will actually search teh system registry for the application base name only (no ".ini") in that folder of the registry. */ void SetConfigurationPath( const PString & path ///< Explicit file or set of directories ); //@} /**@name Operating System information functions */ //@{ /**Get the class of the operating system the process is running on, eg "unix". @return String for OS class. 
*/ static PString GetOSClass(); /**Get the name of the operating system the process is running on, eg "Linux". @return String for OS name. */ static PString GetOSName(); /**Get the hardware the process is running on, eg "sparc". @return String for OS name. */ static PString GetOSHardware(); /**Get the version of the operating system the process is running on, eg "2.0.33". @return String for OS version. */ static PString GetOSVersion(); /**Get the configuration directory of the operating system the process is running on, eg "/etc" for Unix, "c:\windows" for Win95 or "c:\winnt\system32\drivers\etc" for NT. @return Directory for OS configuration files. */ static PDirectory GetOSConfigDir(); //@} PTimerList * GetTimerList(); /* Get the list of timers handled by the application. This is an internal function and should not need to be called by the user. @return list of timers. */ static void PreInitialise( int argc, // Number of program arguments. char ** argv, // Array of strings for program arguments. char ** envp // Array of string for the system environment ); /* Internal initialisation function called directly from #_main()#. The user should never call this function. */ static void PreShutdown(); /* Internal shutdown function called directly from the ~PProcess #_main()#. The user should never call this function. */ virtual int _main(void * arg = NULL); // Main function for process, called from real main after initialisation PTime GetStartTime() const; /* return the time at which the program was started */ private: void Construct(); // Member variables static int p_argc; static char ** p_argv; static char ** p_envp; // main arguments int terminationValue; // Application return value PString manufacturer; // Application manufacturer name. 
PString productName; // Application executable base name from argv[0] WORD majorVersion; // Major version number of the product WORD minorVersion; // Minor version number of the product CodeStatus status; // Development status of the product WORD buildNumber; // Build number of the product PFilePath executableFile; // Application executable file from argv[0] (not open) PStringList configurationPaths; // Explicit file or set of directories to find default PConfig PArgList arguments; // The list of arguments PTimerList timers; // List of active timers in system PTime programStartTime; // time at which process was intantiated, i.e. started int maxHandles; // Maximum number of file handles process can open. friend class PThread; // Include platform dependent part of class #ifdef _WIN32 #include "msos/ptlib/pprocess.h" #else #include "unix/ptlib/pprocess.h" #endif }; /* * one instance of this class (or any descendants) will be instantiated * via PGenericFactory<PProessStartup> one "main" has been started, and then * the OnStartup() function will be called. The OnShutdown function will * be called after main exits, and the instances will be destroyed if they * are not singletons */ class PProcessStartup : public PObject { PCLASSINFO(PProcessStartup, PObject) public: virtual void OnStartup() { } virtual void OnShutdown() { } }; typedef PFactory<PProcessStartup> PProcessStartupFactory; // using an inline definition rather than a #define crashes gcc 2.95. 
Go figure #define P_DEFAULT_TRACE_OPTIONS ( PTrace::Blocks | PTrace::Timestamp | PTrace::Thread | PTrace::FileAndLine ) template <unsigned _level, unsigned _options = P_DEFAULT_TRACE_OPTIONS > class PTraceLevelSetStartup : public PProcessStartup { public: void OnStartup() { PTrace::Initialise(_level, NULL, _options); } }; #endif // End Of File /////////////////////////////////////////////////////////////// | http://pwlib.sourcearchive.com/documentation/1.10.10-1ubuntu6/pprocess_8h-source.html | CC-MAIN-2018-17 | refinedweb | 3,510 | 57.27 |
[
]
Eric Payne commented on HDFS-2202:
----------------------------------
Hi Nicholas,
Thank you for reviewing this Jira. Your comments were clear, precise, and easily understood.
I appreciate that.
> Hi Eric, sorry that the refactoring breaks your patch. Could you update it?
Yes. It has been updated.
> In TestBalancerBandwidth, you may call MiniDFSCluster.getFileSystem() instead of creating
a DFSClient.
Done.
> We should update ClientProtocol.versionID and DatanodeProtocol.versionID.
> I think the BalancerBandwidthCommand.version is not needed. We have to change the DatanodeProtocol.versionID
in this case.
I did this in the 0.23.0 patch. However, one of the requirements for the 0.20.205.0 patch
was to not modify the DatanodeProtocol.versionID (please see).
The reason is that the operations team does not want to require all clusters in a colo to
be upgraded for 0.20.205, which would have to be done if the DatanodeProtocol.versionID changed.
This is because there are some cross-cluster use cases.
In 0.20.205, I left the BalancerBandwidthCommand.version.
In the case of 0.23, the DatanodeProtocol.versionID has to change anyway, so it makes sense
there.
> You may use for-each statement for the following (... foreach example code here...)
Done
> The initial capacity does not really matter. How about removing it?
Done
> Please add getter/setter and do not use public field DatanodeDescriptor.bandwidth.
Done
> Please add javadoc (or change comments to javadoc) to all new public classes/methods/fields.
Done
> Changes to balancer bandwidth should not require datanode restart.
> ------------------------------------------------------------------
>
> Key: HDFS-2202
> URL:
> Project: Hadoop HDFS
> Issue Type: Bug
>.23.0.v1.patch,
HDFS-2202.patch
>
>
> Currently in order to change the value of the balancer bandwidth (dfs.datanode.balance.bandwidthPerSec),
the datanode daemon must be restarted.
> The optimal value of the bandwidthPerSec parameter is not always (almost never) known
at the time of cluster startup, but only once a new node is placed in the cluster and balancing
is begun. If the balancing is taking too long (bandwidthPerSec is too low) or the balancing
is taking up too much bandwidth (bandwidthPerSec is too high), the cluster must go into a
"maintenance window" where it is unusable while all of the datanodes are bounced. In large
clusters of thousands of nodes, this can be a real maintenance problem because these "mainenance
windows" can take a long time and there may have to be several of them while the bandwidthPerSec
is experimented with and tuned.
> A possible solution to this problem would be to add a -bandwidth parameter to the balancer
tool. If bandwidth is supplied, pass the value to the datanodes via the OP_REPLACE_BLOCK and
OP_COPY_BLOCK DataTransferProtocol requests. This would make it necessary, however, to change
the DataTransferProtocol version.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201107.mbox/%3C718042741.16222.1311877569571.JavaMail.tomcat@hel.zones.apache.org%3E | CC-MAIN-2014-49 | refinedweb | 461 | 61.43 |
for the laws." - William Milonoff ***** TABLE OF CONTENTS 1....Introduction 2....Disclaimer 3....Revision History 4....List of Topics 5....Actual Topics ***** INTRODUCTION The purpose of the Frog Farm is to discuss issues which involve a Free People and their Public Servants, and how to deal with the various problems that can arise between a free person who exercises and demands Rights and errant public servants who exceed the scope of their powers. Topics covered include the rights of Man and subsequent obligations, the nature of the contract for government, the Federal and State Constitutions of the United States and their Amendments, various types of Jurisdiction, and defending rights in the courtroom. The newsgroup alt.society.sovereign has recently (May 1993) become relatively active recently in providing relevant information. Those interested in the topics presented are highly encouraged to thoroughly read this document before posting or requesting subscriptions to the mailing list. The Frog Farm's FAQ is unique among FAQ's in that the answers consist of information derived almost entirely from a single source, that being the courts of the fifty States and the federal Supreme Court (and thus the only authoritative source regarding the subject matter). The information in question is also in the form of legal citations, rather than a question and answer format. The Supreme Court and the lesser appellate courts have repeatedly ruled on many points, and these are rightfully described as "well settled". Unfortunately, most of the time this established law goes unused out of fear or ignorance. The Frog Farm is a clearinghouse for all information regarding defending one's rights in the courtrooms of America. 
With the recent expansions of the Internet's size and scope, and the millions of participants now discovering its vast, untapped potential which is even now struggling to throw off the last vestiges of its governmental umbilical cords, it is hoped that this information will find an appreciative audience. ***** DISCLAIMER The Frog Farm was created to provide participants with a forum with which to share their findings and opinions based on research and analysis of the subject matter covered, drawing from personal experience where applicable.
Information is not provided for the purpose of providing legal or any other professional services, which can only be provided by professionals. The material written by the host and other private participants on this message base is not intended to be construed as legal advice. Information contained herein that may pertain to tax or legal situations is for informational or descriptive purposes only and no attempt to advise is intended or implied. Information relative to such areas may be used in cooperation with competent jurists or otherwise at the discretion of the reader. As there is always an element of ris in exercising and defending one's lawful rights regardless of the country one chooses to live in, neither the moderator, author of any posted message or the administrator of any site involved in the transmission of any messages posted, assumes any responsibility or liability for any loss or damage incurred either directly or indirectly, as a consequence of the use of any information herein provided through the Frog Farm. All information provided is applicable, firstly, only to those living within the geographical boundaries of one of the fifty States of North America. (Those living in other countries would be well advised to educate their friends and neighbors regarding America's unique legal foundation, and perhaps loo into the possibility of moving here.) After that, whether or not you can exercise and defend Rights will depend on whether or not you have the following things: o Pencil and paper. A typewriter helps; a computer may also. o Access to a good law dictionary. (Bouvier's is the best; use Blac 's only if you have no other choice.) o The ability to competently read and write at least 10% of the English language. o The will to learn, change your Status appropriately and defend your position. The first is much more easily acquired than the others. ***** REVISION HISTORY 1.0: Released May 10, 1993. 
Uploaded to uglymouse.css.itd.umich.edu in /pub/Politics/FrogFarm.
1.1: Released July 4th, 1993. Added new information on Jurisdiction and Venue; miscellaneous cleaning up and reorganizing.
1.2: Released September 1st, 1993. Cleaned up and reorganized a bit more; added new citations on the First Amendment and Civil Liability; added first-ever Frequently Asked Question, to wit, "Why the heck is it called the Frog Farm?"
1.21: October 28th, 1993. FTP host is now etext.archive.umich.edu in the directory /pub/Legal/FrogFarm.
1.3: Beta version, November 1st, 1993. Added a second actual traditional question-and-answer, some more citations and a new section on "Judgment Proofing".
1.4: March 25th, 1994. Added over sixty citations on various topics; reorganized and relabeled topic headers; added sections on "Strategy", "Property" and "Books".

***** LIST OF TOPICS

The best way to view this file is to search for a given string. Each entry in the list of topics shows what string you should search for in order to find the beginning of each topic entry. All topics are listed in the order they are presented in.

To find information about                              Search for this
-------------------------------------------------------------------
SO WHY THE HECK IS IT CALLED THE FROG FARM, ANYWAY?    :whyfrog
WHAT SHOULD I DO IF I'M ARRESTED?                      :arrest
JUDGMENT PROOFING                                      :jproof
BASIC OVERALL STRATEGY                                 :strat
PUBLIC SERVANT'S QUESTIONNAIRE                         :psq
ASHWANDER RULES: QUALIFYING FOR THE SUPREME COURT      :ashwand
WHAT IS PROPERTY AND WHY IS IT SO IMPORTANT?           :property
RIGHT TO TRAVEL VS. PRIVILEGE TO DRIVE                 :drive
RIGHTS OF JURIES, AND RIGHT TO JURY TRIAL              :jury
RIGHTS OF INDIVIDUALS VS. RIGHTS OF THE STATE          :state
SOME FOURTH AMENDMENT SPECIFIC CASES                   :4th
SOME FIFTH AMENDMENT CASES                             :5th
HALE VS. HENKEL: INDIVIDUALS ARE SOVEREIGN             :hale
WHAT IS SOVEREIGNTY AND WHO ARE SOVEREIGNS?            :sov
WHAT IS JURISDICTION AND WHAT ARE ITS LIMITS?          :juris
WHAT IS MONEY?                                         :money
SOME QUICK NOTES ON THE 2ND AMENDMENT                  :2nd
INCOME TAXATION AND THE INFERNAL REVENUE SERVICE       :irs
WHAT ABOUT THE FIRST AMENDMENT?                        :1st
THE RIGHT OF PARENTAL EDUCATION OF CHILDREN            :educate
MISCELLANEOUS                                          :misc
FOR FURTHER READING ON THESE AND OTHER TOPICS...       :books
***** TOPICS

:whyfrog

There is a tale, possibly apocryphal or metaphorical, attributed by most to Mark Twain, of how to cook a frog. If you drop a frog in a pot of boiling water, so the story goes, he'll jump right back out just as quickly. But if you put him in a pot of cold water, and slowly heat it up, he'll stay right there...until it's too late, and he's boiled alive. The tale is usually mentioned in the context of gradualism, or the tendency of government to always increase its power at the expense of the governed.

About four years ago, I met someone on an electronic bulletin board who called himself "Frog Farmer", who introduced me to the topics discussed herein. (Although I use the masculine, it should be noted that I never met FF in person, and it is equally likely for FF to be female.) When I asked him why he chose that particular handle, he told me the above story. In honor of his/her tireless sense of humor, and everything s/he introduced
me to, this FAQ is dedicated to him/her.

:arrest

QUESTION: "What does Frog Farmer advise that I should do (or not do) if I am arrested -- i.e., what cases should I cite? What rights should I demand? Basically, what should I say and not say?"

ANSWER: An answer to these questions might be construed as legal advice. The Frog Farm does not provide legal advice, nor do any of its participants, who merely speak from their own experience and/or knowledge. However, it is possible to be less cryptic than this. There is far too much material under this heading to fit here, unless condensed greatly. An example might be:

- Learn how to read and comprehend a percentage of the English language, as described earlier in this FAQ.
- Obtain the relevant law books (your state's Code of Civil Procedure, Penal Code, Rules of Court and Evidence Code, or the appropriate federal books for a federal case) and read the relevant sections.
- Type up and send Constructive Notices and Notice of Dishonor to all concerned parties within 3 days of receiving your ticket or being arrested.
- Type up subpoenas for all arresting officers and the person who signed the complaint and schedule a motion hearing where you will make a Motion To Quash the Summons, at which time you obtain your depositions from the arresting officer. You should prepare your list of questions to ask the witnesses when you have them on the stand at the motion hearing. Their answers will cause the judge to dismiss the charges, if your questions are the right ones. Your questions have to be based upon the codes, and the answers will go to show that the procedures followed by the police were all invalid and that the persons you subpoenaed are all guilty of perjury. The beauty of it is that you don't even have to testify yourself, and the cops are so uneducated in law that they will not even know that they are convicting themselves out of their own mouths.

However, this ONLY WORKS when there is no VICTIM or INJURED PARTY.
It also only works when you UNDERSTAND WHAT YOU ARE DOING, and are prepared to FOLLOW THROUGH to the utmost (see Wisconsin v. Yoder, summarized below). We will assume that these two conditions are true, for the purposes of this answer.

Every arrest, detention, stop and confrontation with government officials is different, and it is impossible to imagine (in order to pretend) all the different things that might be said, and when. Each and every situation is unique, and this is why the question cannot be answered. However, a general consensus has developed, which can be summarized relatively easily as follows:

-:- Never ask, "Am I under arrest?"; rather, always ask, "Am I free to go?". An arrest is a certain procedure that must meet certain criteria in order to be done lawfully. You should not "help out" with your own arrest by merely leaving it up to them to say "Yes", but force those responsible to go through all of the due process you demand.
-:- Don't hassle people about your rights. Respect their point of view, and where they let you, educate them about the law. Make your demands with a quiet voice and a friendly smile. If someone is insensitive about your rights, have the patience to wait until you get to the court room to reclaim them. Always be friendly, regardless of how they treat you... you'll get even in the courtroom. It can be very risky to their financial health to ignore or abuse your rights. But it can just as easily be risky to your physical health if you ignore or abuse their overly inflated sense of importance. There's no sense in taunting bulls; just wait for them to get out of breath charging around before sticking it to 'em.

-:- Before asking if you are free to go, try to ascertain just who it is who is accosting you by asking their identity. If the answer is that it's a government agent of some kind, ask if they will fully identify themselves. The answer is usually "yes", so pull out your PUBLIC SERVANT QUESTIONNAIRE (included elsewhere in this FAQ) and ask them to kindly fill it out. They will usually decline, so offer to fill it out for them if they will answer the questions. If they refuse, inform them that their refusal to fully identify themselves and cooperate in your investigation will be reported to their superior, and suggest at that time that they call for back-up, for several reasons:

1) They have violated your right to the requested information
2) They have proven themselves incompetent to understand the law
3) You now have reason to doubt whatever they say regarding their being an officer of the law (they could be a highway robber, trying to get you to drop your guard). If they can manage to get a few more people in police cars to the scene, you can probably rule out their being a robber in costume.

When the superior officer arrives, go through the whole thing again.
If things get nasty, you could demand their probable cause for believing you guilty of committing a crime, and demand a 4th amendment warrant and counsel present before you answer any in-custody interrogation. This would invoke the exclusionary rule.

-:- Carry copies of Davis v. Mississippi, 394 U.S. 721, to make sure they've all been informed regarding the fact that your fingerprints are private property which cannot be taken over your objection without a valid court order.

-:- Don't refuse their offer of "counsel" straight off. It can be useful to get the counsel to refuse to help you on their own, or you can fire them in open court for refusing to obey your instructions, or for attempting to waive any of your rights, like the right to a speedy trial. Always make it clear that they are not representing you, but merely serving as counsel.

When you finally do go to court, make friends with the clerk by conforming as much as possible to the clerk's demands for format, timeliness, etc. (provided that you don't give up any significant rights in the process). Respect the legal maxim, "The law does not bother with trifles." Don't necessarily exact the Shakespearian pound of flesh. Like the government courts, you can put abusers on probation. They then know that when you could have put them in jail for their "white collar" offenses, you didn't. When dealing with errant public officials, you can refer to those situations, and let them know that although you were easy then, they shouldn't push their luck now. :-)
But the most important thing is attitude. If you're a free person, if you've rescinded all contracts with government, then act like it. Exercise your rights, and when necessary, defend them as passionately as you exercise them.

A relevant case is Wisconsin v. Yoder, 406 U.S. 205 (1972), which established the tests necessary to distinguish a belief based on CONVICTION rather than PREFERENCE. The importance of the distinction is that according to the Court's decision, only CONVICTIONS are protected by the Constitution. The test consists of five major circumstances you must maintain your belief in the face of, which are:

1) Peer pressure
2) Family pressure
3) Threat of lawsuit
4) Threat of jail
5) Threat of death
So one must be smart enough to understand all the responsibilities involved in having sovereign status, and maintain one's beliefs in the face of all opposition. Otherwise the court will view your stance as a convenient excuse (preference) to get out of the "legal duties" incurred by subjects ("income tax", "license fees", etc.) -- i.e., they will assume you are lying, and that you really ARE a sheeple, one who is subject to their jurisdiction.

Finally and most importantly, before engaging in any sort of hazardous conduct such as this, you must become "judgment proof". For more information on this topic, see the heading, ":jproof".

:jproof

[This section by A. Nonnymous]
QUESTION: "I hear a lot about 'judgment proofing'. What the heck does it mean, and why is it so important?"

ANSWER: The topic of judgment proofing practically deserves its own FAQ, but until that day comes, here are the basic elements. Judgment proofing is the process of protecting your property from thieves. It is a fundamental and necessary process that should always be followed before undertaking any potentially hazardous activity (such as defending Rights in the courtroom). To be judgment proof (proof against a legal judgment) means that even if you are fined or subject to a civil judgment, nothing can be collected from you.

One essential aspect is Liquidity. As an example, if you go out and buy ten bags of junk silver coins, divide them up into ten parcels, and bury them in various secret locations in your area, you will have the kind of liquidity which the Jewish people found so helpful in Europe in the 1930's. With their wealth secure and mobile, those with gold and silver were much more able to stay one step ahead of those who would have destroyed their lives and stolen their property.

Aside from the normal privacy protection strategies (you all receive all your mail at mail receiving services, or have a small network of about 10 or 15 friends that can reforward mail for each other, don't you?), there are a number of steps that can be taken to reduce your vulnerability to judgments.

1) Live in rented accommodations. Home ownership is a bad idea. It can be
accomplished if you are very careful to construct a firewall between you and the nominal owner (see clean team-dirty team considerations below), but it is usually much safer to not own anything major or to own it in another jurisdiction. This isn't to say that you can never own a house...but real estate isn't such a hot investment these days in any case. 2) Drive an old car. Better yet, drive somebody else's car. Like a house, a car is a big juicy target to government officials; it's property that they can seize. But they can't seize somebody else's house or car that the owner happens to be allowing you to use. 3) Keep financial accounts, if any, in other countries. Use small domestic accounts for day-to-day activities. Have your important accounts elsewhere. 4) Convert your job to contingent- or self-employment. Most people are controlled by their reluctance to leave their job. Even though garnishment is rare these days because it is expensive, you are still vulnerable to losing your job if you encounter legal difficulties. If you are self-employed you are unlikely to fire yourself, and if you are a contingent worker it is easier for you to change jobs/states/countries when things become hot. Contingent workers can also recover from legal difficulties more easily, since they can disguise prison terms ("I took two years sailing in the Pacific. Something I've always wanted to do.") and employers check up on them less, since they can be fired at will.
"Studies of the effect of criminal conviction on income have shown that the average blue-collar worker regains the same wage earned before imprisonment within one year after release. On the other hand, imprisonment dramatically reduces the wages of white-collar workers, whose jobs are more likely to involve reputation and 'credentials'. If you are self-employed, you will be in better shape, because you are unlikely to fire yourself for 'criminal' activity. The change in employment towards contingent workers should reduce the effects of legal problems on white collar workers as well. Since employers can fire contingent workers at will, these workers undergo less vetting than 'permanent' employees." 5) Use the clean team-dirty team approach. Within a family or affinity group, you can divide the responsibilities. Some group members remain under equity/admiralty/etc jurisdiction and are vested with all the group assets. This is the clean team. The common law folks in the group constitute the "dirty" team and have no assets. Obviously, you want to avoid accusations of fraudulent transfer. If any transfers occur before the dirty team rescinds any contracts binding them to the government, there should be no problem. It helps if it is difficult to prove the existence of a relationship between the clean and dirty teams. If different countries and names are involved, it can be very difficult for investigators to uncover the links. Translation of "judgment proof" for cryptography-knowledgeable types out there: make the keyspace occupied by your assets so large that an exhaustive search is beyond the resources of the investigators. Key elements of judgment proofing are: * Mobility: minimize economic and social dependence on a physical community. A possibility is to move to free virtual communities, untraceable to your physical location.
* Liquidity: Get rid of attachable property, and as Charles Dickens says in Great Expectations, "Get hold of portable property." Most property should either be mobile and concealable (e.g., precious metals), in the name of another entity with better political connections or protection, or offshore. Don't tie up your life savings in vulnerable property like an expensive house; lease or rent, or sell your house to your children or to friends, and write up a contract stipulating that the new owner isn't allowed to kick you off the property as long as you're alive. (Political extortion can take many forms -- don't invest in real estate without local political connections.) As George Gordon puts it, "Get rid of your toys." * "Poison pills" that render attachable property worthless upon confiscation, or trigger harm to the confiscator in some way. These can take a wide variety of physical or legal forms depending on the property in question. * Avoid future blackmail by minimizing potentially harmful information about your personal habits going into the system. Use private means of payment. Aren't you amazed at how many people rent X-rated videos with their credit card, let their bank have their personal budget, etc? * Take steps to change you and your family's physical jurisdiction, and even your "True Name" identity if necessary. (Cf. _How to Disappear Completely and Never Be Found_, available from Loompanics or some survivalist mail-order places.) * Cultivate the loyalty of close friends and relatives, and keep damaging information away from potentially disloyal ones. Most IRS turn-ins are by disgruntled spouses. There are many things that have not been touched on here. At this point, being judgment proof, like claiming sovereign status, is out of reach for most people, both practically and as an intellectual exercise (remember how few people can even program their VCRs?), but as crypto-anarchy matures, it will become both easier and more necessary.
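The keyspace analogy above can be made concrete with a little arithmetic. The sketch below is not part of the original FAQ; the function name and the guessing rates are invented purely for illustration. It shows why exhaustive search becomes hopeless as the space grows, which is the whole point of the analogy:

```python
# Toy arithmetic behind the "keyspace" analogy: worst-case time to
# enumerate a space of 2**bits possibilities at a fixed guessing rate.
# The same reasoning applies whether the searcher is guessing cipher
# keys or canvassing names and jurisdictions looking for assets.

def years_to_exhaust(bits: int, guesses_per_second: float) -> float:
    """Worst-case years needed to try every one of 2**bits possibilities."""
    seconds = 2 ** bits / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

if __name__ == "__main__":
    # A 40-bit space falls in about 18 minutes at a billion guesses/second...
    print(f"40-bit:  {years_to_exhaust(40, 1e9):.6f} years")
    # ...while a 128-bit space outlasts any conceivable investigator.
    print(f"128-bit: {years_to_exhaust(128, 1e9):.3e} years")
```

Note the design point: each added bit doubles the work, so the defender's cost of growing the space is linear while the searcher's cost is exponential.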
The less intelligent who are incapable of protecting themselves will be hit quite hard. The poor are completely judgment proof. It is harder for the rich, but they can be *effectively* judgment proof, in the same way that data encryption can be effectively unbreakable without being absolutely unbreakable. Like other aspects of claiming sovereign status, judgment proofing relies more on our taking the initiative to restructure our own part of the world, rather than on futile attempts to get "society" to do it for us.

:strategy

[ This was originally found on Bill Thornton's now-defunct BBS, Sovereign's Paradise. All additions and editing are by the moderator. Should you decide to claim and exercise your Rights, the following may help you maintain a consistent focus upon your ultimate objective, i.e., minimizing any possible damage to yourself. ]

ELEMENTS OF A PREFERRED COURTROOM STRATEGY

Research all pertinent statutes, rules, regulations, legislative history, court cases, and treatises; in short, become an expert on the narrow points
of law of the case. (Get a copy of "A Guide to Federal Agency Rule Making.") Project the position of underdog, intelligence, honesty, fear, indignation, issue of principle or belief, determination, calm, non-antagonism and, most especially, non-arrogance. Don't do, say or write anything that may be used against you legally or politically (foul language, threats, radical invectiveness, and so on). It doesn't become the master to get angry at the slaves for being disobedient. Supposedly, these people are your public servants. Remind them of it politely. Give them fair warning. Let them make the admissions and confessions that will win your case, preferably in the presence of witnesses; don't open your mouth if you're not sure of anything. And if they continue to violate your rights after you've given them what's called "constructive notice", offensive lawsuits can help you obtain redress of grievances. Don't get personally involved. Put "first offenders" on probation unless they've really, really screwed you over in a big way; this way, they get let off with a warning not to do it again, which still makes an impression when done right, and has the added advantage of not making an unnecessary enemy by sending them to jail or taking their life's savings to pay your damages. When possible, draw battle-lines on important political issues championed by many of the public, but don't push them if they are only of secondary importance to your case. Legally, public opinion doesn't matter; politically, it can be quite valuable, if only in helping to make sure your house and loved ones don't get firebombed in the night. Expose any wrong-doing by the government and court; remind the community that they are the ones in charge. Limit your opponent's options through unilateral discovery, FOIA demands, jurisdictional arguments when possible, impeccable behavior in public and condemnation of your adversaries' acts. Don't limit your options. Don't itemize your defenses.
Don't give information enabling your opponents' preparation to meet a defense, or an amendment of charges to your detriment. Ensure your credibility by only selecting one adversary per battle, if possible. Don't add names or issues to the debate. Keep the focus narrow, the same way the Supreme Court does; otherwise you may end up alienating potential supporters. Don't appear to be a legal know-it-all. It may work when preaching to the converted, but it won't fly with the general public. Ideally, the legal knowledge or other assistance should appear to come from "unknown" supporters; at least, it should appear that way to the Joe Sixpacks of the world, who are naturally suspicious of rationality and reason. If the public perceives that you are capable of handling yourself and are not the underdog, they will likely withdraw their support, especially if you are filing offensive actions as opposed to defensive. And finally: DO NOT SIGN YOUR NAME FRIVOLOUSLY, DON'T SAY ANYTHING, KEEP QUIET AND *SHUT UP!*
:psq

You will probably want to clean this up a little before printing a copy. It can be a very useful tool, regardless of whether people actually fill it out; in fact, if two witnesses besides yourself can testify that someone
refused to provide the requested information, it's probably better for your case than if they had cooperated in the first place.

P U B L I C   S E R V A N T ' S   Q U E S T I O N N A I R E

...make that portion of the law authorizing the questions he will ask? 10. Are the answers to the questions voluntary or mandatory? 11. Are the questions to be asked based upon a specific law or information which you seek? ...

___________  Date: ____/_____  Witness: ____________  Witness: _______________
Authorities for Questions:

1,2,3,4  In order to be sure you know exactly who you are giving the information to. Residence and business addresses are needed in case you need to serve process in a civil or criminal action upon this individual.
5  All public servants have taken a sworn oath to uphold and defend the constitution.
6,7  This is standard procedure by government agents and officers. See Internal Revenue Manual, MT-9900-26, Section 242.133.
8,9,10  Title 5 USC 552a, paragraph (e) (3) (A)
11  Title 5 USC 552a, paragraph (d) (5), (e) (1)
12,13  Title 5 USC 552a, paragraph (e) (3) (B), (e) (3) (C)
14  Title 5 USC 552a, paragraph (e) (3) (D)
15  Public Law 93-579 (b) (1)
16  Title 5 USC 552a, paragraph (e) (3) (A)
17,18  Title 5 USC 552a, paragraph (e) (2)
19  Title 5 USC 552a, paragraph (d) (5)
20,21  Public Law 93-579 (b) (1)
22  Title 5 USC 552a, paragraph (d) (1)
23  Title 5 USC 552a, paragraph (e) (10)
:ashwand

THE ASHWANDER RULES: QUALIFYING YOUR CASE FOR THE SUPREME COURT

The Supreme Court has developed seven rules, called the "Ashwander Rules" (Ashwander v. Tennessee Valley Authority, 297 US 288, 346 (1936)), for qualifying a case to be heard there. According to Justice Brandeis: 1. ...Trunk RR v. Wellman, 143 U.S. 339, 345. 2. The Court will not 'anticipate a question of constitutional law in advance of the necessity of deciding it.' Wilshire Oil Co. v. US, 295 US 188. 'It is not the habit of the Court to decide questions of a constitutional nature unless absolutely necessary to the decision of a case.' Burton v. US, 196 US 283, 295. 3. The Court will not 'formulate a rule of constitutional law broader than is required by the precise facts to which it is to be applied.' Liverpool N.Y. & P.S.S. Co. v. Emigration Commissioners, 113 US 33, 39. 4. ...Light v. US, 220 US 523, 538. 5. The Court will not pass upon the validity of a statute upon complaint of one who fails to show that he is injured by its operation. Tyler v. The Judges, 179 US 405. Among the many applications of this rule, none is more striking than the denial of the right of challenge to one who lacks a personal or property right. 6. The Court will not pass upon the constitutionality of a statute at the instance of one who has availed himself of its benefits. Great Falls Mfg. Co. v. Attorney General, 124 US 581. 7. ...US 22, 62."

:property
PROPERTY, Bouvier's Law Dictionary: "The sole and despotic dominion which one man exercises over the external things of the world to the total exclusion of every other individual in the universe." "The right to property...is not ex gratia from the legislature, but ex debito from the Constitution...It is sometimes characterized judicially as a sacred right, the protection of which is one of the most important objects of government." 16 Am. Jur 2d, Sec 362. "The word 'property'...embraces all valuable interests which a man possesses outside of himself." 16 Am. Jur 2d, Sec 364. "The guaranty [to the right of property] refers to the right to acquire and possess the absolute and unqualified title to every species of property recognized by law, with all the rights incidental thereto. It relates not only to those tangible things of which one may be the owner, but to everything which he may have of an intangible value." 16 Am. Jur. 2d, Sec 167. "...a law is considered as being a deprivation of property [and therefore null and void] if it seriously impairs its value." 16 Am. Jur. 2d, Sec 167.
:drive

RIGHT TO TRAVEL V. PRIVILEGE TO DRIVE

"..." [Northwest Ordinances, Article 4]

"Users of the highway for transportation of persons and property for hire may be subjected to special regulations not applicable to those using the highway for public purposes." Richmond Baking Co. v. Department of Treasury, 18 N.E. 2d 788.

"Constitutionally protected liberty includes...the right to travel..." 13 Cal Jur 3d p. 416

In California, a license is defined as "A permit, granted by an appropriate governmental body, generally for a consideration, to a person or firm, or corporation to pursue some occupation or to carry on some business subject to regulation under the police power." Rosenblatt v. California, 158 P2d 199, 300.

"Operation of a motor vehicle upon public streets and highways is not a mere privilege but is a right or liberty protected by the guarantees of Federal and State constitutions." Adams v. City of Pocatello, 416 P2d 46.
"The license charge imposed by the motor vehicle act is an excise or privilege tax, established for the purpose of revenue in order to provide a fund for roads while under the dominion of the state authorities; it is not a tax imposed as a rental charge or a toll charge for the use of the highways owned and controlled by the state." PG&E v. State Treasurer, 168 Cal 420.

"The same principles of law are applicable to them as to other vehicles upon the highway. It is, therefore, the adaptation and use, rather than the form or kind of conveyance, that concerns the courts." Indiana Springs Co. v. Brown, 74 N.E. 615.

"The automobile is not inherently dangerous." Moore v. Roddie, 180 P. 879; Blair v. Broadmore, 93 S.E. 632.

"The use of the automobile as a necessary adjunct to the earning of a livelihood in modern life requires us in the interest of realism to conclude that the RIGHT to use an automobile on the public highways partakes of the nature of a liberty within the meaning of the Constitutional guarantees..." Berberian v. Lussier (1958) 139 A2d 869, 872

"Truck driver's failure to be licensed as chauffeur does not establish him or his employer as negligent as a matter of law with respect to accident in which driver was involved, in absence of any evidence that lack of such license had any causal connection with the accident." Bryant v. Tulare Ice Co. (1954) 125 CA 2d 566

"...TRAVEL on the public highways is a constitutional right." Teche Lines v. Danforth, Miss. 12 So 2d 784, 787.

"The right to travel is part of the 'liberty' that a citizen cannot be deprived without due process of law." Kent v. Dulles, 357 U.S. 116; U.S. v. Laub, 385 U.S. 475

"..." California Vehicle Code

"..." California Vehicle Code

"When a person applies for and accepts a license or permit, he in effect knows the limitations of it, and takes it at the risk and consequences of transgression." Shevlin-Carpenter Co. v Minnesota, 218 U.S. 57.
:jury

INFORMED JURIES OF BOTH LAW AND FACT

"...law...you have nevertheless a right to take it upon yourselves to judge both, in controversy...both objects are lawfully within your power of decision." Justice John Jay to the jury, Georgia v. Brailsford, 3 Dall 1 (1794)

"The jury has an unreviewable and irreversible power...to acquit in disregard of the instructions on the law given by the trial judge." U.S. v Dougherty, 473 F2d 1113, 1139 (1972). Other info related to Dougherty case: 16 Am Jur 2d, Sec. 177.

"Jury lawlessness is the greatest corrective of law in its actual administration. The will of the state at large imposed on a reluctant community, the will of a majority imposed on a vigorous and determined minority, find the same obstacle in a local jury that formerly confronted kings and ministers." Dougherty, cited above, note 32, at 1130.
prerogative to disregard uncontradicted evidence and instructions of the judge. Most often commended are the 18th century acquittal of Peter Zenger of seditious libel, on the plea of Alexander Hamilton, and the 19th century acquittals in prosecutions under the fugitive slave law." Dougherty, cited above, at 1130.

"The way the jury operates may be radically altered if there is alteration in the way it is told to operate." (Dougherty, cited above, at 1135.)

"...the jury has the power to bring in a verdict in the teeth of both law and facts." Oliver Wendell Holmes, Horning v DC, 254 US 135 (1920)

"We recognize, as appellants urge, the undisputed power of the jury to acquit, even if its verdict is contrary to the law as given by the judge, and contrary to the evidence. This is a power that must exist as long as we adhere to the general verdict in criminal cases, for the courts cannot search the minds of the jurors to find the basis upon which they reached the decision." U.S. vs. Moylan, 417 F2d 1002, 1006 (1969).

"The People themselves have it in their power effectually to resist usurpation, without being driven to an appeal to arms...surely will pronounce him innocent, if the supposed law he resisted was an act of usurpation." 2 Elliot's Debates, 94; 2 Bancroft, History of the Constitution, 297.

Trial by jury is a right: Hill v Philpott, 445 F 2d 144; Juilliard v Greenman, 110 U.S. 421; Kansas v Colorado, 206 U.S. 46 (1907); Reisman v Caplan, 375 U.S. 440 (1964); US v Murdock, 290 U.S. 389 (1933); US v Tarlowski, 305 F. Supp 112 (1969); Dairy Queen v Wood, 369 U.S. 469.
"The common law right of the jury to determine the law as well as the facts remains unimpaired." State v Croteau, 23 Vt. 14, 54 Am. Dec. 90 (1849). "It seems that the court instructs juries, in criminal cases, not to bind their consciences, but to inform their judgments, but they are not duty bound to adopt its opinion as their own." Lynch v State, 9 Ind. 541, 1857 Ind. "An instruction that the jury have no right to determine whether the facts stated in the indictment constitute a public offense is an error." Huddleston v State, 94 Ind. 426, 48 Am. Rep. 171.
"The jury have a right to disregard the opinion of the court, in a criminal case, even on a question of law, if they are fully satisfied that such opinion is wrong." People v Videto, L. Parker Cr. R. 603, NY 125.

"Defendant cannot complain of an instruction that it is the duty of the court to instruct it as to the law of the case, but the instructions are advisory merely, and it has the right to disregard them, and determine the law for itself." Walker v State, 136 Ind. 663, 36 NE 356.

"In criminal cases, the jury are judges of the law as well as of the facts; and it is error in the court to restrict them to 'the law as given in charge by the court.'" McGuthrie v State, 17 Ga. 497 (1855).
:state

INALIENABLE RIGHTS AND STATES' RIGHTS

"There can be no sanction or penalty imposed upon one because of his exercise of Constitutional rights." Sherar v. Cullen, 481 F. 946

"Where rights secured by the Constitution are involved, there can be no rule-making or legislation which would abrogate them." U.S. Supreme Court in Miranda v. Arizona, 384 U.S. 436 (1966)

"Constitutional rights may not be infringed simply because the majority of the people choose that they be." Westbrook

"...takes the form of limiting the ability of a criminal suspect to consult with and be accompanied by a person upon whom he relies for advice and protection, he gravely transgresses..." US v Tarlowski, 69-2 U.S.T.C. & D.C. EA. Dist. N.Y., 305 F. Supp 112 (1969).

"If there is any truth to the old proverb that 'One who is his own lawyer has a fool for a client,' the Court, by its opinion today, now bestows a constitutional right on one to make a fool of himself." Faretta v California, 422 US 806 (1975).

"Lack...asked and submitted to the court." US v Watkins, Fed. Cas. No. 16,649 (3 Cranch, C.C. 441). [So always submit instructions to the jury!]

"A plaintiff who seeks...knowledge of the...agent, which in the judgment of the court would make..."
:4th

SOME FOURTH AMENDMENT SPECIFIC CASES

"...whenever a police officer accosts an individual and restrains his freedom to walk away, he has 'seized' that person." Terry v Ohio, 392 US 1 (1968); also recognized in Brown v Texas, 443 US 47.

Brown v Texas, 443 U.S. 47 (1979): "Two police officers, while cruising near noon in a patrol car, observed appellant and another man walking away from one another in an alley in an area that had a high incidence of drug traffic. They stopped and asked appellant to identify himself and explain what he was doing. One officer testified that he stopped appellant because the situation 'looked...'" ...makes it a criminal act for a person to refuse to give his name and address to an officer 'who has lawfully stopped him and requested the information.' Appellant's motion to set aside...lacked;... Delaware v. Prouse, 440 U.S. 648. Here, the state does not contend that appellant was stopped pursuant to a practice embodying neutral criteria, and the officer's... Pp. 50-53.

Mr. Chief Justice Burger delivered the opinion of the court; "This appeal presents the question whether appellant was validly convicted for refusing to comply with a policeman's demand that he identify himself pursuant to a provision of the Texas Penal Code which makes it a crime to refuse such identification on request."

"Appellant refused to identify himself and angrily asserted that the officers had no right to stop him."

"...walk away, he has 'seized' that person...and the Fourth Amendment requires that the seizure be 'reasonable'." U.S. v. Brignoni-Ponce, 422 U.S. 873, 878 (1975)

"But even assuming that purpose (prevention of crime) is served to some degree by stopping and demanding identification from an individual without any specific basis for believing he is involved in criminal activity, the guarantees of the Fourth Amendment do not allow it..."
"We need not decide whether an individual may be punished for refusing to identify himself in the context of a lawful investigatory stop which satisfies Fourth Amendment requirements. See Dunaway v. New York, 442 U.S. 200, 210 n.12 (1979); Terry v. Ohio...the county judge who convicted appellant was troubled by this question, as shown by the colloquy set out in the appendix to this opinion."

"Accordingly, appellant may not be punished for refusing to identify himself, and the conviction is Reversed."

APPENDIX TO THE OPINION OF THE COURT

"THE COURT:...What do you think about if you stop a person lawfully, and then if he doesn't want to talk to you, you put him in jail for committing a crime?"

"MR. PATTON [prosecutor]: Well first of all, I would question the defendant's statement in his motion that the first amendment gives an individual the right to silence."

"THE COURT:...I'm asking you why should the State put you in jail because you don't want to say anything?"

"MR. PATTON: Well, I think there's certain interests that have to be viewed."
"THE COURT: Okay, I'd like...think these Governmental interests outweigh the individual's interests in this respect, as far as simply asking...think it tends to disrupt the goal of this society to maintain security _over_ its citizens to make sure they are secure in their gains and their homes."

"THE COURT: How does that secure anybody by forcing them, under penalty of being prosecuted, to giving their name and address, even though they are lawfully stopped?"

"MR. PATTON: Well I, you know, under the circumstances in which some individuals would be lawfully stopped, it's presumed that perhaps this individual is up to something, and the officer is doing his duty simply to find out the individual's name and address, and to determine exactly what is going on."

"THE COURT: I'm not questioning, I'm not asking whether the officer shouldn't ask questions. I'm sure they should ask everything they possibly could find out. What I'm asking is what's the State's interest in putting a man in jail because he doesn't want to answer something. I realize lots of times an officer will give a defendant a Miranda warning which means a defendant doesn't have to make a statement. Lots of defendants go ahead and confess, which is fine if they want to do that. But if they don't confess, you can't put them in jail, can you, for refusing to confess to a crime?"

Davis v. Mississippi, 394 U.S. 721 (1969): "Our decisions recognize no exception to the rule that illegally seized evidence is inadmissible at trial, however relevant and trustworthy the seized evidence may be as an item of proof."

"Fingerprint evidence is no exception to the rule that all evidence obtained by searches and seizures in violation of the constitution is inadmissible in a state court. Pp. 723-724. The Fourth Amendment applies to involuntary detention occurring at the investigatory stage as well as at the accusatory stage. Pp. 726-727.
Detentions for the sole purpose of obtaining fingerprints are subject to the constraints of the Fourth Amendment. P. 727.

"Nor can fingerprint detention be employed repeatedly to harass any individual, since the police need only one set of each person's prints...the general requirement that the authorization of a judicial officer be obtained in advance of detention would seem not to admit of any exception in the fingerprinting context."
:5th

SOME FIFTH AMENDMENT SPECIFIC CASES

"You can, and must, keep your mouth shut for protection under the Fifth Amendment." Belnap v US, Et al, District Court (Utah), No C. 149-71.

"The constitutional privilege [of the Fifth Amendment] was intended to shield the guilty and imprudent, as well as the innocent and foresighted." Marchetti v US, 390 U.S. 39, 51.

"Government seeking to punish individual must produce evidence against him by its own independent labors, rather than by the cruel, simple expedient of compelling it from his mouth...Privilege against self-incrimination is fulfilled only when person is guaranteed right to remain silent unless he chooses to speak in unfettered exercise of his own will...Defendant's constitutional rights have been violated if his conviction is based, in whole or in part, on involuntary confession, regardless of its truth or falsity, even if there is ample evidence aside from confession to support conviction...Fifth amendment privilege is available outside of criminal court proceedings and serves to protect persons in all settings in which their freedom of action is curtailed, from being compelled to incriminate themselves...Prosecution may not use at trial fact that defendant stood mute or claimed his privilege in face of accusations...Any statement taken after person invokes fifth amendment privilege cannot be other than product of compulsion...Any evidence that accused was threatened, tricked or cajoled into waiver will show that he did not voluntarily waive privilege to remain silent." Miranda v Arizona, 384 U.S. 436.

"The privilege against self-incrimination is neither accorded to the passive resistant, nor the person who is ignorant of his rights, nor to one indifferent thereto. It is a fighting clause. Its benefits can be retained only by sustained combat. It cannot be claimed by an attorney or solicitor. It is valid only when insisted upon by a belligerent claimant in person." US v Johnson, 76 F. Supp 538.
:hale

HALE VS. HENKEL: INDIVIDUALS EXIST FOR THEIR OWN SAKE AND ARE SOVEREIGNS OVER GOVERNMENT

Hale v. Henkel, 201 U.S. 43: "...we are of the opinion that there is a clear distinction in this particular between an INDIVIDUAL and a CORPORATION, and that the latter has no right to refuse to submit its books...taken from him by due process of law, and in accordance with the Constitution. Among his rights are a refusal to incriminate himself, and the immunity of himself and his property from arrest and seizure except under a warrant of the law. He owes nothing to the public so long as he does not trespass upon their
rights. Upon the other hand, the corporation is a creature of the state..."

And this case also gives us one of the Frog Farm's Golden Rules: "Rights are only afforded the belligerent claimant in person."

Some other lines of defense can be seen in the following cases:

Powell v. Alabama, 287 U.S. 45: "In this court the judgements are assailed upon the grounds that the defendants, and each of them, were denied due process of law and the equal protection of the laws, in contravention of the Fourteenth amendment, specifically as follows...(2) they were denied the right of counsel, with the accustomed incidents of consultation and opportunity of preparation for trial;"

"...make is whether the federal Constitution was contravened...and as to that, we confine ourselves, as already suggested, to the inquiry whether the defendants were in substance denied the right of counsel..."

"...People ex rel. Burgess v. Risley, 66 How. Pr. (N.Y.) 67; Batchelor v. State, 189 Ind. 69, 76; 125 N.E. 733."

"It is not enough to assume that counsel thus precipitated into the case thought there was no defense, and exercised their best judgement."

"It is vain to give the accused a day in court, with no opportunity to prepare for it, or to guarantee him counsel without giving the latter any opportunity to acquaint himself with the facts or law of the case."

"As early as 1798 it was provided by statute, in the very language of the Sixth amendment to the Federal Constitution, that 'In all criminal prosecutions, the accused shall enjoy the right...to have the assistance of
counsel for his defence;'"

"What, then, does a hearing include? Historically and in practice, in our own country at least, it has always included the right to the aid and assistance of counsel when desired and provided by the party asserting the right."

"The United States by statute and every state in the Union by express provision of law, or by the determination of its courts, make it the duty of the trial judge, where the accused is unable to employ counsel, to appoint counsel for him." [Frog Farmer sez: Be careful! Use the due process provisions of the 5th amendment, not the unlawful 14th! Powell claimed 14th amendment citizenship.]

Almeida-Sanchez, 413 U.S. 266: Petitioner, a Mexican citizen and holder of a valid work permit, challenges the constitutionality of the Border Patrol's warrantless search of his automobile 25 air miles north of the Mexican border. The search, made without probable cause or consent, uncovered marihuana, which was used to convict petitioner of a federal crime... Held: The warrantless search of petitioner's automobile, made without probable cause or consent, violated the Fourth Amendment. Pp. 269-275. (a) The search cannot be justified on the basis of any special rules applicable to automobile searches, as probable cause was lacking; nor can it be justified by analogy with administrative inspections, as the officers had no warrant or reason to believe that petitioner had crossed the border or committed an offense, and there was no consent by petitioner. Pp. 269-272.

"The search in the present case was conducted in the unfettered discretion of the members of the Border Patrol, who did not have a warrant, probable cause, or consent. The search thus embodied precisely the evil the court saw in Camara when it insisted that the 'discretion of the official in the field' be circumscribed by obtaining a warrant prior to the inspection."

"Two other administrative inspection cases relied upon by the government are equally inapposite.
Colonnade Catering Corp. v. U.S., 397 U.S. 72, and U.S. v. Biswell, 406 U.S. 311, both approved warrantless inspections of commercial enterprises engaged in businesses closely regulated and licensed by the Government...A central difference between those cases and this one is...petitioner here was not engaged in any regulated or licensed business." Just in case our rights are violated by some well-meaning but errant public servant, we have this handy little law to assist us in obtaining redress of our grievances: Title 42, United States Code, Section 1983: "Every person who, under color of any statute, ordinance, regulation, custom, or usage, of any State or Territory, subjects, or causes to be subjected, any citizen of the United States or other person within the jurisdiction thereof to the deprivation of any rights, privileges, or immunities secured by the Constitution and laws, shall be liable to the party injured in an action at law, suit in equity, or other proper proceeding for redress." Notice that this statute recognises the fact that "statutes, ordinances, and regulations" together with "custom", can be
unconstitutional and violate our rights. Where they do so, it is up to us to challenge their jurisdiction over us. Failure to challenge jurisdiction at the first instance of a rights violation can be fatal to your case, and will be seen as an admission that the law in question does indeed have jurisdiction over you. So you better know your rights, right? "To maintain an action under 42 USC 1983, it is not necessary to allege or prove that the defendants intended to deprive plaintiff of his Constitutional rights or that they acted willfully, purposefully, or in furtherance of a conspiracy. . . it is sufficient to establish that the deprivation. . . was the natural consequence of defendants acting under color of law. . . ." Ethridge v. Rhodes, DC Ohio 268 F Supp 83 (1967), Whirl v. Kern, CA 5 Texas, 407 F 2d 781 (1968). Title 18, United States Code, Section 241, provides that... "any person who goes on the highway in disguise to prevent or hinder the free exercise and enjoyment of any right so secured by law...shall be fined not more than $10,000.00 or imprisoned not more than ten years or both." Further, Title 18, United States Code, Section 242, provides for one or more persons who, under color of law, statute, ordinance, regulation, or custom, willfully subjects any inhabitant of any state, territory, or district to the deprivation of rights, privileges, or immunities secured by the Constitution, or laws of the United States. . . shall be fined not more than $1,000.00 or imprisoned not more than one year or both. Title 18, United States Code, Section 242, with its color of law provision, gives a cause of action to apply Title 18, United States Code, Section 241, because Section 241 needs two persons in disguise and Section 242 provides the second person under color of law, as the "QUASI SUMMONS" mentioned herein implies that a judge in the Municipal Court is acting in concert to commit an overt act of fraud and extortion for conversion.
Usually, it can be phrased something like: "Demand is upon you to withdraw the invalid Notice #_____ within ten (10) days from receipt of this Notice and Demand or Action will commence in the United States District Court pursuant to Rule 7(a) and (c) of the criminal rules of procedure by the jurisdiction provided in Title 42, United States Code, sections 1983 and 1985; Title 28, U.S.C. sections 1331 and 1343 and others with Title 18, U.S.C., sections 241, 242, 872, 1621, 1622, and 1623 providing for the administration of the penalties." "...an...officer who acts in violation of the Constitution ceases to represent the government." Brookfield Co. v. Stuart, (1964) 234 F. Supp 94, 99 (U.S.D.C., Wash.D.C.); Am. Jur. 2nd, Sec. 50, VII Civil Liability.
"Decency, security, and liberty alike demand that government officials be subjected to the same rules of conduct that are commands to the citizen. In a Government of laws, existence of the government will be imperiled if it fails to observe the law scrupulously. Crime is contagious. If government becomes a lawbreaker, it breeds contempt for law; it invites every man to become a law unto himself; it invites anarchy." "Law and court procedures that are 'fair on their faces' but administered 'with an evil eye or a heavy hand' are discriminatory and violate the equal protection clause of the Fourteenth Amendment." Yick Wo v. Hopkins, Sheriff, 118 US 356, (1886). "Judges must maintain a high standard of judicial performance with particular emphasis upon conducting litigation with scrupulous fairness and impartiality." 28 USCA. "Government immunity violates the common law maxim that everyone shall have a remedy for an injury done to his person or property." Firemen's Ins. Co. of Newark, N.J. v. Washburn County, 2 Wisc 2d 214 (1957). "No freeman shall be taken, or imprisoned, or disseised, or outlawed, or exiled, or anywise destroyed...but by lawful judgment of his peers or by the law of the land." Magna Charta, Chapter 39. (Sometimes referred to as Chapter 29?) :sov WHO ARE SOVEREIGN "The words 'sovereign people' are those who form the sovereign, and who hold the power and conduct the government through their representatives. Every citizen is one of these people and a constituent member of this sovereignty." Scott v. Sandford, Mo., 60 US 393, 404, 19 How. 393, 404, 15 L.Ed. 691. "Sovereignty itself is, of course, not subject to the law, for it is the author and source of law, but in our system, while sovereign powers are delegated to the agencies of government, sovereignty itself remains with the people, by whom and for whom all government exists and acts." Yick Wo v. Hopkins, Sheriff, 118 U.S. 356. "'Sovereignty' in government is that public authority which directs or orders
what is to be done by each member associated." 78 P.2d 982, 986, 52 Ariz. 1. "'Sovereignty' is a term used to express a supreme political authority of an independent state or nation. Whatever rights are essential to the existence of this authority are rights of sovereignty. The rights to declare war, to make treaties of peace, to levy taxes, and to take property for public uses, termed the 'right of eminent domain,' are all rights of sovereignty. In this country this authority is vested in the people, and is exercised through the joint action of the federal and state governments. To the federal government is delegated the exercise of certain rights or powers of sovereignty, and with respect to sovereignty, 'rights' and 'powers' are synonymous terms; and the exercise of all other rights of sovereignty, except as expressly prohibited, is reserved to the people of the respective states, or vested by them in their local government. When we say, therefore, that a state of the Union is sovereign, we only mean that she possesses supreme political authority, except as to those matters over which such authority is delegated to the federal government or prohibited to the states." Moore v. Smaw, 17 Cal. 199, 218, 79 Am. Dec. 123. "The 'sovereign powers' of a government include all the powers necessary to accomplish its legitimate ends and purposes. Such powers must exist in all practical governments. They are the incidents of sovereignty, of which a state cannot divest itself." Boggs v. Merced Min. Co., 14 Cal. 279, 309. "In all governments of constitutional limitations 'sovereign power' manifests itself in but three ways: by exercising the right of taxation; by the right of eminent domain; and through its police power." United States v. Douglas-Willan Sartoris Co., 22 P. 92, 96, 3 Wyo. 287.
"The term 'sovereign power' of a state is often used without any very definite idea of its meaning, and it is often misapplied. Prior to the formation of the federal Constitution, the states were sovereign in the absolute sense of the term. They had established a certain agency under the Articles of Confederation... This remark is true, both in reference to the federal and state governments." Spooner v. McConnell, 22 Fed. Cas. 939, 943. "Sovereignty means supremacy in respect of power, domination, or rank; supreme dominion, authority or rule." Brandes v. Mitterling, 196 P.2d 464, 467, 67 Ariz. 349. "'Government' is not 'sovereignty.' 'Government' is the machinery or expedient for expressing the will of the sovereign power." City of Bisbee v. Cochise
County, 78 P.2d 982, 986, 52 Ariz. 1. "The 'sovereignty' of the United States consists of the powers existing in the people as a whole and the persons to whom they have delegated it, and not as a separate personal entity, and as such it does not possess the personal privileges of the sovereign of England; and the government, being restrained by a written Constitution, cannot take property without compensation, as can the English government by act of king, lords, and Parliament." Filbin Corporation v. United States, D.C.S.C., 266 F. 911, 914. "'Sovereignty' is the right to govern... Their princes have personal powers, dignities, and pre-eminences. Our rulers have none but official, nor do they partake in the sovereignty otherwise, or in any other capacity than as private citizens." Chisholm v. State of Georgia, Ga., 2 U.S. (2 Dall.) 419, 471, 1 L. Ed. 440. "States and state officials acting officially are held not to be 'persons' subject to liability under 42 USCS section 1983." Will v. Michigan Dept. of State Police, 105 L.Ed.2d 45 (1989). "Statutes employing the word 'person' are ordinarily construed to exclude the sovereign." 56 L.Ed.2d 895. "A foreign sovereign power must in courts of United States be assumed to be acting lawfully, the meaning of 'sovereignty' being that decree of the sovereign makes law." Eastern States Petroleum Co. v. Asiatic Petroleum Corporation, D.C.N.Y., 28 F.Supp. 279, 281. "The very meaning of 'sovereignty' is that the decree of the sovereign makes law." American Banana Co. v. United Fruit Co., 29 S.Ct. 511, 513, 213 U.S. 347, 53 L.Ed. 826, 19 Ann.Cas. 1047. "'Sovereignty' means that the decree of sovereign makes law, and foreign courts cannot condemn influences persuading sovereign to make the decree." Moscow Fire Ins. Co. of Moscow, Russia v. Bank of New York & Trust Co., 294 N.Y.S. 648, 662, 161 Misc. 903.
:juris JURISDICTION AND LAWS VOID _ab initio_ "When any court violates the clean and unambiguous language of the Constitution, a fraud is perpetrated and no one is bound to obey it." State v. Sutton, 63 Minn. 147, 65 N.W. 262 "No one is bound to obey an unconstitutional law...Since an unconstitutional law is void, the general principles follow that it imposes no duties, confers no rights, creates no offices, bestows no power or authority on anyone, affords no protection, and justifies no acts performed under it...and no courts are bound to enforce it." 16 Am Jur 2d 177. "The general principle that legal effect should not be given to unconstitutional laws has been applied to statutes creating criminal
offenses which are in violation of the Constitution. It has been decided that an offense created by an unconstitutional law is not a crime. A conviction under it is not merely erroneous, but is illegal and void and cannot be a legal cause of imprisonment; the courts must liberate a person imprisoned under it just as if there had never been the form of a trial, conviction and sentence. Thus, one imprisoned by the judgment of a court under an unconstitutional law may be discharged by the writ of habeas corpus." 11 Am. Jur., Sec 150. "...its language is unambiguous, for it is the mandate of the sovereign power." Cooke v. Iverson, 122 N.W. 251. "Under our form of government, the legislature is not supreme. . . like other departments of government, it can only exercise such powers as have been delegated to it, and when it steps beyond that boundary, its acts, like those of the most humble magistrate in the state who transcends his jurisdiction, are utterly void." Billings v. Hall, 7 Cal. 1. "The powers of state government are legislative, executive, and judicial. Persons charged with the exercise of one power may not exercise either of the others except as permitted in this Constitution." Article III, Section 3, Constitution of the State of California. "If the legislature clearly misinterprets a Constitutional provision, the frequent repetition of the wrong will not create a right." Amos v. Mosley, 77 So. 619. Also see Kingsley v. Merrill, 99 NW 1044. "Where the meaning of the Constitution is clear and unambiguous, there can be no resort to construction to attribute to the founders a purpose or intent NOT MANIFEST IN ITS LETTER." Norris v. Baltimore, 192 A 531. 16 Am. Jur. 2d 256 [my emphasis]: "...an unconstitutional law imposes no duties, confers no rights, creates no offices, bestows no power or authority on anyone, affords no protection, and justifies no acts performed under it...and no courts are bound to enforce it."
"To constitute [jurisdiction] there are three essentials: First, the court must have cognizance of the class of cases to which the one to be adjudicated belongs; second, the proper parties must be present; and third, the point decided upon must be in substance and effect within the issue." Reynolds v. Stockton, 140 U.S. 254, 268. "Once jurisdiction is challenged, it must be proven." Hagans v. Lavine, 415 U.S. 533, note 3. "Where jurisdiction is not squarely challenged, [it] is presumed to exist." Burks v. Lasker, 441 U.S. 471. "No sanction can be imposed absent proof of jurisdiction." Standard v. Olsen, 74 S.Ct. 768. "...mere good faith assertions of power and authority (jurisdiction) have been abolished." Owens v. The City of Independence (CITE MISSING! Please send mail if you can track this one down!); Lake County v. Rollins, 130 U.S. 662; Hodges v. United States, 203 U.S. 1; Edwards v. Cuba R. Co., 268 U.S. 628; The Pocket Veto Case. "The necessities which gave birth to the constitution, the controversies which preceded its formation and the conflicts of opinion which were settled by its adoption, may properly be taken into view for the purposes of tracing to its source, any particular provision of the constitution, in order thereby, to be enabled to correctly interpret its meaning." Pollock v. Farmers' Loan & Trust Co., 157 U.S. 429, 558. "The values of the Framers of the Constitution must be applied in any case construing the Constitution. Inferences from the text and history of the Constitution should be given great weight in discerning the original understanding and in determining the intentions of those who ratified the constitution. The precedential value of cases and commentators tends
to increase, therefore, in proportion to their proximity to the adoption of the Constitution, the Bill of Rights, or any other amendments." Powell v. McCormack. "A long and uniform sanction by law revisers and lawmakers..." Okla. 262; 25 P. 2d 666; 79 ALR 1018. "Disobedience or evasion of a constitutional mandate may not be tolerated, even though such disobedience may, at least temporarily, promote in some respects the best interests of the public." Slote v. Board of Examiners,
SEVEN ELEMENTS OF JURISDICTION 1. The accused must be properly identified, identified in such a fashion there is no room for mistaken identity... Superintendent: "....thereof to make a return (other than a return required under authority of 6015)..." Indictment or information is defective unless every fact which is an element in a prima facie case of guilt is stated. Assumption of an element is not lawful. Otherwise, accused will not be thoroughly informed. 26 USC 6012 is a necessary element of the offense. Since 6012 isn't cited, the information is fatally defective. Additionally, the information did not negate the exception (other than required under authority of section 6015). After reading 6012 and 6015, and knowing that the 7203 elements are: A. You were required to perform B. You failed to perform C. Your failure was willful, you may wish to ask, "how often is a valid 7203 or other information or indictment brought? How many citizens have been convicted on a fatally defective process?" The accuser must take responsibility for the making of the accusation, not an agency or an institution. This is the only valid means by which a citizen may begin to face his accuser. Also, the injured party (corpus delicti) must make the accusation at his own risk. "...cause to apprehend danger from a direct answer. The mere assertion of a privilege does not immunize him; the court must determine whether his refusal is justified, and may require that he is mistaken in his refusal." Hoffman v. US, 341 U.S. 486.
7. The court must be one of competent jurisdiction. To have valid process, the tribunal must be a creature of its constitution, in accord with the law of its creation, i.e., Article III judge. Lacking competent jurisdiction, courts often lack elementary knowledge of, and incentive to comply with, the mandates of constitutional due process. They will make mistakes. If one makes this assumption, he may learn, to his detriment, through experience, that certain questions of law, including the question of personal jurisdiction, may never be raised and addressed, especially when the accused is represented by the bar. (Sometimes licensed counsel appears to take...) DIFFERENT KINDS OF JURISDICTION IN PERSONAM: Power which a court has over the defendant's person. It is absolutely required before a court may enter a personal judgment. Jurisdiction over a person may be waived by consent. In Personam jurisdiction may be acquired by an act of the defendant within a jurisdiction under a law or statute by which the defendant implies consent to the jurisdiction of the court over his person. Examples of how a court may acquire personal jurisdiction: Entry of appearance, proper service, or implication (e.g., the operation of a motor vehicle on the highways of a State may confer jurisdiction of the operator and owner on the courts of that State). For more info, see Hess v. Pawloski, 274 US 352. IN REM: Power of a court over a thing, so that its judgment is valid against the rights of every person in the thing. An action in rem is a proceeding that takes no cognizance of the owner, but determines the right in specific property against all of the world, equally binding upon everybody. In this action, the court is required to have control or power over the thing. Examples: A boat or other vehicle inside of which narcotics are discovered; a judgment of registration of title to land. For more info, see Calero Toledo v. Pearson Yacht Leasing Co., 416 US 663.
Also look at any cases which are in the form of "United States v. X", where X is a thing instead of a person, e.g., "$20,000 in United States currency" or "Forty Barrels and Twenty Kegs of Coca-Cola". QUASI IN REM: The power of a court over the defendant's interest in property, real or personal, within the geographical limits of the court. The court's judgment or decree binds only the defendant's interest, and not the whole world, as in the case of in rem. This term is applied to proceedings which are not strictly in rem, but are brought against the defendant personally, though the real object is to deal with particular property, or to subject property to the discharge of asserted claims. Examples: Foreclosure of a mortgage, quieting title, effecting a partition. For more info, see Freeman v. Alderson, 119 US 185. SUBJECT MATTER: The power of a particular court to hear a type of case. Three elements must be present for a court to have proper jurisdiction over the subject matter: 1) The court must have cognizance of the class of cases. 2) The proper parties must be present. 3) The point decided upon must be, in substance and effect, within the issue. See Reynolds v. Stockton, 140 US 254. "The criminal jurisdiction of the United States is wholly statutory." U.S. v. Flores, 289 US 137, 151 (1933).
"The legislative authority of the Union must first make an act a crime, affix a punishment to it, and declare the court that shall have jurisdiction of the offense." U.S. v. Hudson, 7 Cranch 32, 34 (1812). Subject matter jurisdiction, unlike in personam and venue (see below), may NOT be waived or conferred by consent of the parties and the court. VENUE: Venue does not actually refer to jurisdiction at all. "Jurisdiction" means the inherent power of the court to decide a case. "Venue" designates the PARTICULAR GEOGRAPHICAL AREA (county, city, district, state, etc) in which a court with jurisdiction may properly hear a case. In federal cases, the prosecutor's discretion regarding the location of the prosecution is limited by Article III, Section 2 of the federal Constitution, which requires trial in the State where the offense "shall have been committed", and the Sixth Amendment, which guarantees an impartial jury "of the State and district wherein the crime shall have been committed". The addressing of venue in reference to an accusation of failing to "file a document" can be seen in U.S. v. Lombardo, 241 US 73, 76-7. Here, interestingly, the court stated that "filing is not complete until the document is delivered and received...to the office and not sent through the United States mails." A challenge of venue may be waived, so as always, it is crucial that if a challenge is to be made, that it be timely. [Further review of the topics of jurisdiction and venue should be made prior to submitting any Motions. Good sources that will lead to other sources are the law encyclopedias _American Jurisprudence_ and _Corpus Juris Secundum_.] Things to think about and take care of in a typical case: (partial list)
The act or omission in question:
  Is it declared by law to be a crime?
  Research the law/code/ordinance
The victim:
  Who? What Life, Liberty or Property was harmed?
  Is the person Natural or Juristic?
  Is he At Law, or in Equity?
  Is the person competent to testify?
The complaint:
  Verified by affidavit signed by victim?
  If no victim, serve & file constructive notice on gov't agent and judge
  Ten days later, file Suit
Grand jury indictment/information
  Grand Jury represents the People
  District Attorney = The State
  Object to prosecution by information, Demand Grand Jury Indictment.
Warrant -
  Made out for the party arrested?
  Check spelling - Joe Blow is not Jo Bloe!
  Signed by a judge?
  Check "judge's" Oath of Office/compare with required oath in Constitution
Arrest -
  You have the right to remain silent
  You have the right to counsel present
  Not required to give fingerprints [Davis v. Mississippi]
  Give Miranda/Titles 18,42 warning
  Writ of Habeas Corpus
Arraignment -
  Starts calendar for speedy trial
  Appear specially, not generally
  Demand all rights at all times
  Disclaim equity jurisdiction
  Give Miranda/Titles 18,42 warning
  Demand to see a verified complaint -
    Must be sworn to by complainant within 15 days of Notice to Appear
    Must have the seal of the court
  Defendant cannot understand charges without counsel
  Demand counsel of choice
    Object to denial by judge
    Cite cases
    File written Demand for Counsel of Choice
    If judge appoints Public Defender, object!
      You have to talk with Public Defender before you can accept him as counsel.
      You cannot relate to him. You have no confidence in him
      You cannot be forced to employ counsel beholden to your adversary
  Stand "mute"
    Judge will enter "Not guilty" plea
    Object! Let the record show that defendant stands mute
  File "Arraignment & Plea"
  File Demand for Plaintiff to Show Constraining Need or in the Alternative to Dismiss
  File Demand for Jury Trial in which the jury decides both the law and the facts At Law
  File Notice of intention to tape record the proceedings per Rule 980(f) "unless otherwise ordered for cause"
  File Demand for court reporter to take transcripts at all hearings
  File Demand for transcripts of all proceedings
  File Demand for Evidentiary Hearing
  File/serve Declaration-Petition for Redress of Grievances
The Preliminary (Evidentiary) Hearing
  Appear specially, not generally
  Claim all rights at all times
  Challenge jurisdiction
ADMINISTRATIVE AND PROCEDURAL MATTERS
  Demand formal, verified complaint
  You intend to challenge jurisdiction but you need counsel to adequately argue jurisdiction
  Appearing pro per, not pro se
  Get judicial notice of demand for counsel of choice and supporting brief
  Get judicial determination for the record that the court is denying unfettered counsel of choice [final judgement on the matter]
  Demand that hearing be postponed so that denial of counsel may be appealed to higher court
  Does court honor demand for rights sua sponte?
  Demand that the court prove both agency's and court's jurisdiction on the record. "Jurisdiction cannot be assumed & must be decided" Maine v.
Thiboutot, 100 S.Ct. 2502 (1980). "Jurisdiction cannot be presumed" Smith v. McCullough, 46 S.Ct. 338 (1926).
  Examine/cross-examine witnesses
  Discovery: File/serve Demand
  Suppression hearing: file Demand to Suppress Evidence
  Formulate jury instructions
    They must have foundation in the record
      in the Evidence Exhibits
      in the Testimony of Witnesses
  Formulate questions for witnesses
    For Cross-exam
    For Direct exam
  Keep Proposed Jury Instructions in mind
  Subpoena Witnesses
    Expert witnesses
    Gov't agents
    Witnesses at scene of arrest
    Alibi
Motion [Demand] Hearing
  Give equity disclaimer/Demand rights
  Challenge ensign v. flag
  Give Miranda/Title 18 warning
  File Constructive Notice
  Demand Counsel of choice
  File paper Demand Dismissal for Lack of Jurisdiction
  File jurisdiction briefs on Status, Status of Citizens, Merchant At Law, Rights, Memorandum of Law, Equity, The Monetary System
  Demand Rights Sua Sponte
  File paper Demand jury trial w/12 jurors
  File Notice & Demand Jury Selection Questions for Jurors
Prosecution's Opening Statement
Defense Opening Statement (may wait)
Prosecution Examines Witnesses
  Object! Object! Object!
  Defense Cross-examines
Defense may testify
  Not required to take Oath
Prosecution Closing Statement
Prosecution rests
Defense challenges Prima Facie Case
  Code Pleading
Defense moves for directed verdict of acquittal
Defense Opening Statement if delayed
Defense Examines Witnesses
  Prosecution cross-examines
  Object! Object! Object!
Defense Closing Statement
Defense rests
Prosecution 2nd Closing Statement
Judge's Instructions to Jury
  Object! Object! Object!
Jury Deliberations
Jury Verdict
Defense Motion for Verdict of Acquittal Notwithstanding Jury Verdict
Motion for New Trial if appropriate
Notice of Appeal
Demand for Stay of Execution Pending Appeal and Order
  If denied, file Writ of Habeas Corpus
Demand for transcripts at gov't expense
Proposed statement on Appeal
  Use court's form as a cover sheet
  Fill blanks with "see Proposed Settled Statement [Attached]"
  Don't put signature on form
Prosecution's Amendments
Defense Revised Proposed Statement
Settlement conference
Opening Brief on Appeal
Prosecution's Rebuttal to above
Prosecution's Opening Brief
Defense rebuttal
Defense Closing Brief
:money THE FEDERAL RESERVE, MONEY, AND DEBT Current Law: No State Shall Make Any Thing But Gold And Silver Coin A Tender In Payment Of Debt. (U.S. Constitution, Art. 1, sec. 10) Current Law: 31 United States Code 371: "The money of account of the United States shall be expressed in dollars or units, dimes or tenths, cents or hundredths, and mills or thousandths...and all accounts in the public offices and all proceedings in the courts shall be kept and had in conformity to this regulation." The question was put to an attorney: Is Article 1, section 10, of the United States Constitution, particularly the words "No state shall ... make any Thing but gold and silver coin a tender in payment of debt..." still binding on a State? He replied, in writing, "...the only lawful answer is Yes. Meant to 'crush paper money' by unanimous consent of the constitutional Convention of 1787, this section prohibits the States from imposing upon the people a paper currency, paper money, or anything else other than gold or silver coin as a medium of exchange in the discharge of debts. Since the Constitution can be changed by amendment only, and since no amendment has changed this section, no federal action can excuse a State of this prohibition. The effect of this section is thus: If a paper FRN is delivered to, or received from a State-authorized party without particular objection to its being an unlawful tender under Article 1, Section 10, no Constitutional question has arisen, and the payor/payee, in remaining silent, has renounced his individual rights flowing from the Constitutional prohibition. Those rights are the following: A. Discharge of the debt in gold or silver coin, if provided for in the debt; B. Dismissal or forgiveness of the debt altogether, if the debt is not denominated in gold or silver coin, since any rule or judgement that is repugnant to the Constitution is void, invalid, and without effect.
As with other rights, the right to gold and silver coin, and the right to be forgiven of any debt not denominated in same, are considered waived unless properly and timely asserted." Specifically regarding "notes" and such, the courts have had some equally interesting things to say: "They had a certain contingent value, and were used as money in nearly all the business transactions of many millions of people. They
must be regarded therefore, as a currency imposed on the community by irresistible force." 75 U.S. 11. "..." 75 U.S. 13. "One is said to act in a fiduciary capacity when the business that he transacts, or the money or property which he handles, is not his own..." A "fiduciary relation" can include "informal relations which exist whenever one man trusts and relies upon another--it exists where there is special confidence reposed in one who in equity and good conscience is bound to act in good faith and with due regard to interests of one reposing the confidence." Black's Law Dictionary, 4th ed. The Federal Reserve itself tells you that it is "confidence" that is the reason that anyone at all accepts FRNs! By accepting the government's obligations in good faith and confidence, besides becoming a fiduciary (with a corresponding duty, making you "subject" to specific performance), you then become an "accommodation party", in effect becoming like a co-signer for the government's debts. Until the Federal Reserve has been fully paid for use of its special paper, it has a lien upon all that you have acquired with it. Thus that man that passed the FRN to you does not really own your goods - now the Fed owns them, although they do not have possession of them. It is like the plantation owner, who owns the clothes on the backs of his slaves. Don E. Williams Co. v. Commissioner of Internal Revenue, 429 U.S. 569 (1977): Notes cannot pay debt, debt cannot pay debt. "No state shall make any thing but gold and silver coin a tender in payment of debts." U.S. Constitution, Article 1, section 10, never amended. Thus, any other form of promised money is a fraud. "Federal Reserve Notes are not legal money." Justice Martin V. Mahoney, Credit River Township, Dec. 7-9, 1968, in Jerome Daly vs. First National Bank of Montgomery, Minn. :2nd SOME QUICK NOTES ON THE 2ND AMENDMENT In a recently decided U.S. Supreme Court case, United States versus Verdugo-Urquidez, 110 S. Ct.
1056, 1060-61 (1990), the Court referred to the Second Amendment and specifically addressed the meaning of the words "the people" as used in the First, Second, and Fourth Amendments to the U.S. Constitution. While the specific case involved only the protections afforded to individuals under the Fourth Amendment, the Court did clearly state that the words "the people" in the Second Amendment have the same meaning as they do in the First and Fourth Amendments, i.e., the rights of individuals. While the dicta doesn't define how the Supreme Court would rule on
a particular Second Amendment case, it does indicate the Court believes that the "right to keep and bear arms" is an _individual_ right rather than a _collective_ right as the anti-gun movement and the mass media would like everyone to believe. In 1856 the U.S. Supreme Court declared that local law enforcement had no duty to protect a particular person, but only a general duty to enforce the laws. [South v. Maryland, 59 U.S. (18 How.) 396, 15 L.Ed. 433 (1856). See also Reiff v. City of Philadelphia, 477 F.Supp. 1262 (E.D.Pa. 1979)]. There are a few, very narrow exceptions. In 1983, the District of Columbia Court of Appeals remarked that: "In a civilized society, every citizen at least tacitly relies upon the constable for protection from crime. Hence, more than general reliance is needed to require the police to act on behalf of a particular individual. ...Liability is established, therefore, if the police have specifically undertaken to protect a particular individual and the individual has specifically relied upon the undertaking. ..." [... A.2d 1306 (D.C. App. 1983)]. As a result, the government - specifically, police forces - has no legal duty to help any given person, even one whose life is in imminent peril. In a New York case, a Judge Keating dissented, bitterly noting that Linda Riss was victimized not only because she had relied on the police to protect her, but because she obeyed New York laws that forbade her to own a weapon. Judge Keating wrote: "What makes the city's position particularly difficult to understand is that, in conformity to the dictates of the law, Linda did not carry any weapon for self-defense. Thus, by a rather bitter irony she was required to rely for protection on the City of New York, which now denies all responsibility to her." [Riss v. City of New York, 293 N.Y.S.2d 897 (1968)].
The California Court of Appeals held that any claim against the police department: "...is barred by the provisions of the California Tort Claims Act, particularly Section 845, which states: `Neither a public entity nor a public employee is liable for failure to establish a police department or otherwise
provide police protection or, if police protection service is provided, for failure to provide sufficient police protection." [Hartzler v. City of San Jose, App., 120 Cal.Rptr 5 (1975)]. The Superior Court of the District of Columbia held that: "...the fundamental principle (is -ed.) that a government and its agents are under no general duty to provide public services, such as police protection, to any particular individual citizen...The duty to provide public services is owed to the public at large, and, absent a special relationship between the police and an individual, no special legal duty exists.")]. "...individuals from the criminal acts of others; instead their duty is to preserve the peace and arrest law breakers... make the safety of the public their first concern; for permitting dangerous criminals to go unapprehended lest particular individuals be injured or killed would inevitably and necessarily endanger the public at large, a policy that the law cannot tolerate, much less foster." [Lynch v. N.C. Dept. of Justice, 376 S.E. 2nd 247 (N.C. App. 1989)]. "....a distinction must be drawn between a public duty owed by the officials undertook..., 389 S.E.2nd 902 (Va. 1990)]. :irs THE IRS, INCOME TAXATION, AND THE 16TH AMENDMENT "Income is realized gain." Schuster v. Helvering, 121 F 2d. "Reasonable compensation for labor or services rendered is not profit." Laureldale Cemetery Assoc. v. Matthews, 245 Pa. 239. "The general term 'income' is not defined in the Internal Revenue Code." US v. Ballard, 535 F. 2d 400 (1976) "...it becomes essential to distinguish between what is, and what is not 'income'...Congress) "...'income,' as used in the statute should be given so as not to include everything that comes in. The true function of the words 'gains' and 'profits' is to limit the meaning of the word 'income'." So. Pacific v. Lowe, 238 F. 847 (US Dist Ct. S.D.
N.Y., 1917); 247 US 330 (1918) "Income within the meaning of the Sixteenth Amendment and the Revenue Act, means 'gain'... and in such connection 'Gain' means profit...proceeding from property, severed from capital, however invested or employed, and coming in, received, or drawn by the taxpayer, for his separate use, benefit and disposal." Staples v. US, 21 F. Supp 737 (US Dist. Ct. ED PA, 1937) "...the definition of 'income' approved by this court is: The gain derived from capital, from [not by] labor, or from both combined, provided it be understood to include profits gained through sale or conversion of capital assets." Eisner v. Macomber, 252 US 189 (1920) They define the IRS income tax in Title 26 of the US code in Section 1: "there is hereby imposed on the taxable income of every... individual, a tax..." This is clearly a direct tax, even if we knew what they were taxing, in direct violation of the constitution. This is confirmed by the courts: "such a tax would be by nature a capitation rather than excise tax." Peck & Co. v. Lowe, 247 US 165 (1918) "Our tax system is based upon VOLUNTARY assessment and payment, not upon distraint." - U.S. Supreme Court in Flora v. U.S. (1959) ["Voluntary" means "acting or done without any present legal obligation to do the thing done" Webster's Third World International Dictionary] "Statutes levying taxes should be construed, in case of doubt, against the government and in favor of the citizen." Miller v. Gearing 258 F. 225 "The legal right of a taxpayer to decrease the amount of what otherwise would be his taxes, OR ALTOGETHER AVOID THEM, by means which the law permits, cannot be doubted." Gregory vs. Helvering 293 U.S. 465 "The explanations and examples in this publication reflect the official INTERPRETATION by the IRS of tax laws enacted by Congress and...Court decisions...The publication covers some subjects on which CERTAIN COURTS HAVE TAKEN POSITIONS MORE FAVORABLE TO TAXPAYERS THAN THE OFFICIAL POSITION OF THE SERVICE.
Until these interpretations are resolved by higher court decisions, or otherwise [like when there is no higher court, in the case of a Supreme Court decision!--FF], the publication will continue to present
the viewpoint of the Service." IRS, Publication 17 "One does not derive taxable income by rendering services and charging for them. IRS cannot enlarge the scope of the statute." Edwards v. Keith, 231 F 110,113 ." Sims v. Ahrens, 271 SW 720 (1925) "Income is realized gain." Schuster v. Helvering, 121 F 2nd "Decided cases have made the distinction between wages and income and have refused to equate the two." Central Illinois Publishing Service v. U.S., 435 U.S. 31, p.90 "Income, as used in the statute should be given the meaning so as NOT to include everything that comes in. The TRUE function of the words 'gains' and 'profits' is to LIMIT the meaning of the word 'income'." So.Pacific v. Lowe, 238 F. 847 "...the provisions of the Sixteenth Amendment conferred no new power of taxation but simply prohibited the previous complete and plenary power of income taxation possessed by Congress from the beginning from being taken out of the category of indirect taxation to which it inherently belonged and being placed in the category of direct taxation..." Stanton v. Baltic Mining Co., 240 U.S. 103. "A tax laid upon the happening of an event, as distinguished from its tangible fruits, is an indirect tax..." Tyler v. U.S. 281 U.S. 497 "The conclusion reached in the Pollock... "Excises are taxes laid...upon licenses to pursue certain occupations, and upon corporation privileges...The tax under consideration may be described as an excise upon the particular privilege of doing business in a corporate capacity. The requirement to pay such taxes involves the exercise of privileges." Flint v. Stone Tracy Co., 220 U.S. 107. "The individual, unlike the corporation, cannot be taxed for the mere privilege of existing. The corporation is an artificial entity which owes its existence and charter powers to the state; but the individuals' rights to live and own property are natural rights for the enjoyment of which an EXCISE cannot be imposed." Redfield v. Fisher, 292 P. 813.
"The right to labor and to its protection from unlawful interference is a constitutional as well as a common-law right. Every man has a natural right to the fruits of his own industry." 48 Am Jur 2d, section 2, page 80. :1st Well, okay, yes we do have some quick quotes on the First Amendment. Freedom of speech per se doesn't usually come up too often, but note that this amendment also gives us freedom of (OR FROM) religion, the right to speak or not speak (i.e., remain silent), etc. Bear this in mind when reading the following... :educate Pierce v. Society of Sisters of the Holy Names of Jesus and Mary, 268 U.S. 510 (1925): "The child is not the mere creature of the State; those who nurture him and direct his destiny have the right, coupled with the high duty, to recognize and prepare him for additional obligations...We think it entirely plain that the Act of 1922 unreasonably interferes with the liberty of parents and guardians to direct the upbringing and education of children under their control." [Note that this case touched on the "Fourteenth Amendment" as well. Ignore it, just like you ignore all "amendments" past the ten that came first. They're the only ones that count.] Griswold v. Connecticut, 381 U.S. 497 (1965): "The right to educate one's child as one chooses is made applicable to the states by the First and Fourteenth Amendments to the U.S. Constitution." [Again, ignore the 14th Amendment reference.] Perchemlides v. Frizzle, CA-16641 (Massachusetts Superior Court, 1979): "Without doubt, the Massachusetts compulsory attendance statute might well be constitutionally infirm if it did not exempt students whose parents prefer alternative forms of education. Under our system the parents must be allowed to decide whether public education, including its socialization aspects, is desirable or undesirable for their children." :misc
MISCELLANEOUS "The word 'shall' in a statute may be construed to mean 'may', particularly in order to avoid a constitutional doubt." Fort Howard Paper Co. v Fox River Heights Sanitary Dist., 26 NW 2d 661 "If necessary, to avoid unconstitutionality of a statute, 'shall' will be deemed equivalent to 'may.'" Gow v Consolidated Copper Mines Corp., 165 Atl 136. "'Shall' in a statute may be construed to mean 'may' in order to avoid constitutional doubt." George Williams College v Village of Williams Bay, 7 NW 2d 891. "Because of what appears to be a lawful command on the surface, many citizens, because of their respect for what only appears to be law, are cunningly coerced into waiving their rights due to ignorance." U.S. vs. Minker, 350 U.S. 179 at 187
From the Roger Sherman Society: The question is often asked, "How can one individual stand alone against 'City Hall'?" After serious practice combined with continued faith, study, and prayer, our answer came: 1. Obtain, and study carefully, a copy of West's Annotated California Codes, Government Code, Title 2, Div. 3, Ch. 5 Administrative Adjudication sections 11500-11528. If you have difficulty understanding it, ask a lawyer to explain it. If the lawyer discourages you and tells you it does not apply to the letter, bill, ticket, or other accusation you received from the IRS, DMV, FTB, Licensing Agency or other ABC government administrative agency/officer, then find another lawyer, or a paralegal, or even a teacher of the English language. Find someone who can help you UNDERSTAND this legal procedure; not necessarily someone to do it for you. (For those living in other states, see #7 below.) 2. Upon receipt of the accusation, send the Agency Hearing Board a NOTICE OF DEFENSE (sec. 11506) and be sure to ask for a hearing. (Bender form 15.) 3. The Administrative Hearing is the place where you will put ON THE RECORD your Evidence of substantive Rights. This is the place where you enter your Recisions and Waivers and Claims and Declarations, etc., on the RECORD. You may also enter questions of Discovery such as "Where does the Agency have an Interest in Respondent (that's you) to convert his right to travel/contract into the privilege to drive/be employed?" or "What evidence does the Agency depend upon to show that Respondent is subject to the licensing requirements and state administrative police powers in this instant case?" or "Is a Tax Identification Number mandatory or voluntary and what section of the Code says that?" or "As I do not have a license, by what section of the Code does the Licensing Agency claim it may regulate Respondent?" (Do not become angry with any answers you may receive, as all of this information is entered here for the Record.) 4.
If/When the Administrative Hearing Board rules against you, you may take their Decision for a review in the Superior Court of your County by
a Petition for Writ of Mandate (CCP secs. 1085, 1086) to Review Administrative Decision (CCP sec. 1094.5). The cost of bringing this Writ of Mandate is included in the Petition. There is no charge to file it. 5. If you are denied the Administrative Hearing, you have been denied due process of law (Gov't Code Sec. 11506) and you might want to file the Mandate for Review of the Administrative Decision (Bender form 35) and claim some damages.
6. If you followed the above instructions you may have eliminated any or all of the following: going to Justice Court, Municipal Court, Tax Court, losing your property, and even going to jail; AND you may be rewarded for being vigilant and claiming your Rights just by following the Forms. Be sure to read carefully the instructions following each Form, and Govt Code secs. 11500-11528. 7. Every State in the Union must have equivalent statutes and Forms. You legal researchers out there get busy and find your state's codes which are equivalent to Calif. Govt. Code 11500-11528 and the procedural code sections for the Review Mandate--Calif. CCP secs. 1085, 1086, and 1094.5, and the equivalent to Bender Form Numbers 15 and 35. Let the people in your state know the forms they can use to stand up and claim their Rights so that the agencies will get the message to do their job of regulating the business of the state and nothing more. Maybe we should begin to entertain the possibility that we, the individual sovereigns, DID SOMETHING to change our sovereign status to that of a 14th Amendment subject who is in debt (the validity of which cannot be questioned). We do have the right to contract (somehow) out of the jurisdiction of sovereignty secured (though not granted) by the Constitution; and maybe we did exercise that right to contract into a commercial status and abandoned our sovereign status. We submit that we were registered at birth into an eleemosynary corporate estate which made us eligible to apply for benefits and privileges. Did we not make application for the benefits of the social security insurance policy and other benefits which are in the commercial jurisdiction? REMEMBER: Commerce is a subject of the U.S. If you are registered in commerce, you are registered as a subject. Birth Certificates are registered in the U.S. Department of Commerce; ALSO, the commercial jurisdiction is the one that uses NOTES (which are evidences of debt) and not SUBSTANCE to pay debts.
(For purposes of this discussion we will not address the validity of the NOTES or PROMISES TO PAY nor will we address the subject of discharge of debt in contrast to the extinguishment of debt. However, to discourage perpetual debt, always offer to pay debts with unborrowed substantive money, and afford those indebted the opportunity to do likewise.) We at the Judge Roger Sherman Society have concluded that if an individual has sovereign status, he may simply BAR the state/legislative courts-for-subjects (see Art.1, sec.8, cl.9 & Art.3, sec.1, cl.1 U.S. Constitution) from exercising the jurisdiction of THEIR courts ("COURT-the person and suit of the sovereign" Black's Law Dictionary 3rd Ed. pg. 457) against another sovereign. They recognize that the law does not give them jurisdiction over another sovereign. But they HAVE jurisdiction over their subjects (those who signed in and showed a birth
certificate). :books
_Words and Phrases_
?: _The Forgotten Ninth Amendment_ [This book is supposedly long out of print, and the master plates are rumored to be destroyed. Information regarding its existence is most welcome.]
Adams, Charles: _Fight, Flight, Fraud: The Story of Taxation_
Barnett, Randy?: _The Rights Retained by the People_
Beckman, M.J. & Benson, Bill: _The Law That Never Was_
Benson, Bruce: _The Enterprise of Law_
Beckman, M.J.: _Born Again Republic_
Bouvier, John: _Bouvier's Law Dictionary_ (older editions are preferable; try to find an edition prior to 1900)
Browne, Harry: _How I Found Freedom in an Unfree World_
Cook, Peter: _A Historical Landmark Document on the Magic of Reserve Banking and How 'Bank Deposits' Become 80 to 666% Plus Tax-free Profits_ [Library of Congress Catalog Card #72-78516]
Dickstein, Jeffrey: _Judicial Tyranny and the Income Tax_
Jenkins, Merrill: _Everything I Have Is TheIR$_
Nestmann, Mark: _How to Achieve Personal and Financial Privacy in a Public Age_, 2nd edition
Schiff, Irwin: _The Biggest Con_; _The Federal Mafia - How It Illegally Imposes and Unlawfully Collects Income Tax_
Skinner, Otto: _The Best Kept Secret: Taxpayer vs. Non-Taxpayer_; _If You Are the Defendant_
Spooner, Lysander: _No Treason: The Constitution of No Authority_
Thayer, James Bradley: _A Preliminary Treatise on Evidence at the Common Law_
*****
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.3a

mQCNAiuhO1QAAAEEAOuUGP0QKhow6Fao1yAZklOAoU+6sXt8978TaJYQQ+NTHMx7
zlnmG6d6LWarPgwIwyCyygEMU+2zAClde08YHOSI/zH+2rvLSaddgPcGJlf7V7+K
uhu3nBJM6dhEBKY2P3UfO+CmQQemQ3Q8yR4m8HEpno1VRzUIh2QAFfmIg8VVAAUR
tDNJYW4gTSBTY2hpcmFrbyA8aW1zQHRodW5kZXItaXNsYW5kLmthbGFtYXpvby5t
aS51cz4=
=WIMt
-----END PGP PUBLIC KEY BLOCK-----
An optimised vector class of elements from a given ring T. More...
#include <maths/nvector.h>
An optimised vector class of elements from a given ring T.
Various mathematical vector operations are available.
This class is intended for serious computation, and as a result it has a streamlined implementation with no virtual methods. It can be subclassed, but since there are no virtual methods, type information must generally be known at compile time. Nevertheless, in many respects, different subclasses of NVector<T> can happily interact with one another.
This class is written with bulky types in mind (such as arbitrary precision integers), and so creations and operations are kept to a minimum.
Preconditions on the element type T:

If a and b are of type T, then a can be initialised to the value of b using a(b).

Type T supports each of the operators =, ==, +=, -= and *=.

If a is of type T, then a can be initialised to a long integer l using a(l).

An object t of type T can be written to an output stream out using the standard expression out << t.
Creates a new vector.
Its elements will not be initialised.
Creates a new vector and initialises every element to the given value.
Creates a new vector that is a clone of the given vector.
Destroys this vector.
Adds the given multiple of the given vector to this vector.
Negates every element of this vector.
Calculates the dot product of this vector and the given vector.
Multiplies this vector by the given scalar.
Adds the given vector to this vector.
Subtracts the given vector from this vector.
Sets this vector equal to the given vector.
Determines if this vector is equal to the given vector.
Returns true if and only if this and the given vector are equal.
Subtracts the given multiple of the given vector to this vector.
The internal array containing all vector elements.
A pointer just beyond the end of the internal array.
The size of the vector can be computed as (end - elements). | http://regina.sourceforge.net/engine-docs/classregina_1_1NVector.html | CC-MAIN-2015-22 | refinedweb | 315 | 60.21 |
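As an illustration of the semantics documented above, here is a deliberately simplified sketch — NOT the real regina::NVector, just a hypothetical minimal template over a ring type T showing how addCopies(), negate() and the dot product can be written using only the operations the preconditions guarantee (copy construction, =, +=, *=, and the long integer constructor):

```cpp
#include <cstddef>

// Simplified sketch only -- not the real regina::NVector. It mirrors the
// documented members and relies solely on operations that the documented
// preconditions guarantee for the ring type T.
template <class T>
class SimpleVector {
public:
    SimpleVector(std::size_t vectorSize, const T& value)
            : elements(new T[vectorSize]), end(elements + vectorSize) {
        for (T* e = elements; e != end; ++e)
            *e = value;                      // uses operator=
    }
    ~SimpleVector() { delete[] elements; }

    std::size_t size() const { return end - elements; }
    const T& operator[](std::size_t i) const { return elements[i]; }

    // Adds the given multiple of the given vector to this vector.
    void addCopies(const SimpleVector& other, const T& multiple) {
        for (std::size_t i = 0; i < size(); ++i) {
            T term(other.elements[i]);       // copy constructor
            term *= multiple;                // operator*=
            elements[i] += term;             // operator+=
        }
    }

    // Negates every element; only *= and the long integer constructor
    // are needed, exactly as the preconditions promise.
    void negate() {
        for (T* e = elements; e != end; ++e)
            *e *= T(-1);
    }

    // Calculates the dot product of this vector and the given vector.
    T operator*(const SimpleVector& other) const {
        T ans(0);
        for (std::size_t i = 0; i < size(); ++i) {
            T term(elements[i]);
            term *= other.elements[i];
            ans += term;
        }
        return ans;
    }

private:
    T* elements;   // internal array containing all vector elements
    T* end;        // pointer just beyond the end of the internal array

    SimpleVector(const SimpleVector&);             // copying omitted to keep
    SimpleVector& operator=(const SimpleVector&);  // the sketch short
};
```

Note how keeping operations to copy construction plus the compound assignment operators (rather than binary + and *) is what lets the class work with bulky element types without creating extra temporaries.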
C# 2.0 Spec Released 634
An anonymous reader writes "Microsoft released the design specifications document for C# 2.0 (codenamed 'Whidbey') to be released early next year. New features of the language include generics similar to those found in Eiffel and Ada, anonymous methods similar to lambda functions in Lisp, iterators, and partial types."
gc#? (Score:2, Insightful)
Re:gc#? (Score:2, Insightful)
gcc is Free software; so download the source and add c# or visual basic support. Once you get the ball rolling others will join in and help.
Re:gc#? (Score:3, Funny)
actually i would doubt it
Re:gc#? (Score:5, Informative)
Re:gc#? (Score:4, Insightful)
What colleges are teaching C#? At my school we had one Pascal course then went into C followed by C++. I believe we could have taken Assembly right after Pascal, but I'll take that after I finish C++. I've heard of other schools starting with java or even python. I'm not arguing that schools don't teach C#, I just want to know which ones do so I can be sure not to transfer there.
ugh (Score:2)
At our school C# is an elective.
Re:ugh (Score:4, Insightful)
Wrong.
Pascal is not meant for serious programming like C is, but Pascal has sorta grown into this business application language, and is far from obsolete.
You also cannot do anything in C++ that you can in C. You can do this in C, but not C++:
Or...
These examples were shamelessly ripped from Bjarne's FAQ, which is available Here. [att.com]
Code name (Score:5, Informative)
Re:Code name (Score:3, Informative)
still not even a compiler warning.. *sigh*
Re:Code name (Score:2)
Cheap update (Score:2)
The cheap update deal expired at the end of September IIRC, at least here in the UK. It was being featured fairly prominently on the MS web site and doesn't appear to be there now, so presumably it's gone elsewhere as well.
moving towards bloatware or are these important? (Score:2, Insightful)
It's great that they are adding new features. But are they removing anything that was decided to be a bad idea? Now is the time to do it, in the early versions shortly after its birth, before there is too much legacy code...
Will MS begin to use this for its own products like Office in the near future?
Re:moving towards bloatware or are these important (Score:2, Interesting)
Bill
Re:moving towards bloatware or are these important (Score:4, Informative)
The best thing to do is to "phase" out the undesired feature by not recommending it, not featuring it prominently in books, shifting features into optional components that must be installed, etc.
I know this isn't exactly the ideal way to do things but I see no other way. I mean, if I was responsible for Visual Studio (or C# specifications), I would not remove features. Who knows who is using a particular feature?
Sivaram Velauthapillai
Not quite the same thing. (Score:4, Informative)
The quote that the parent AC plagiarized is from Antoine de Saint-Exupery, the French aircraft designer living in the first half of the 20th century. (And author of The Little Prince, if that hasn't been banned in America yet.) He was speaking in the context of original design, not individual features.
While the plane is still on paper, that's the time to remove all the unneccessary cruft. That's de Saint-Exupery's point. Not after the plane has been built; then the dependancy problems you mention arise. That's not the proper time. Certainly not in midflight.
Re:moving towards bloatware or are these important (Score:2)
Yup. Java does this. It is called "deprecated". For instance parts of the Date class have methods which are deprecated. The method's functionality has been moved to the Calendar class.
It still works, but the compiler gives you a warning.
follow the link (Score:2, Funny)
Seems like a pretty limited spec.
All it says is:
Plugger: No approperiate application for type application/msword found!
whatever...
Code Name (Score:3, Insightful)
Why C# doesn't Totally Suck (Score:5, Interesting)
I guess even within these circumstances, I'd have refused to open Visual Studio for this project, if it didn't say ".NET" as well. I mean, think of it: previous versions of VS only supported C++ or VB, with APIs to cry for (admittedly, I don't know about MFC, only about Win32).
I actually happen to dislike C++, but on top of that, it doesn't suit my project, because the low-levelness makes it harder to program without errors (e.g. null pointers, memory leaking). I'd rather have a language at a scripting level -- and NO, that's NOT VB. I hope I don't have to explain why I hate VB if only on very first sight.
So with
There's really no need for anybody to pick on C#, long as it's realized that it's just finally a nice programming environment for Windows, and nothing (well, not much) more. (BTW, it's not much different from NeXT (now Apple)'s use/ takeover of Objective C.)
Re:Why C# doesn't Totally Suck (Score:5, Informative)
come on, where are the real differences
I thought the same thing. It's actually lots of little things that make C# nicer all 'round (in comparison to Java): Most pleasant for me is the fact that I can use enumerations without (a) declaring a new class/interface (b) placing a ridiculously long "public static final int" before EACH member of the enumeration and (c) being able to use the newly declared enumeration's new type name for parameters instead of just "int" - remember semantics?
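The typed-enumeration point can be seen in any language with real enums; a C++ sketch of the same idea (C# enums behave much the same way, written as Severity.Warning there):

```cpp
#include <cstring>

// A named enumeration gives symbolic constants AND a distinct type usable
// in signatures -- the combination the comment above says pre-1.5 Java
// forces you to fake with "public static final int" fields.
enum Severity { Info, Warning, Error };

// The parameter is Severity, not a bare int, so the compiler rejects a
// call like label(42) without an explicit cast: semantics live in the type.
const char* label(Severity s) {
    switch (s) {
        case Info:    return "info";
        case Warning: return "warning";
        case Error:   return "error";
    }
    return "unknown";
}
```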
Integrating legacy shit is also a snap with C#. Sure, managed C++ is better, but have you tried doing the same thing in Java? Yuck.
Lots of little things like this, IMHO, make C# better than Java.
I hate the fact that Microsoft charges an arm and a leg for Windows/MSVS/everything. But I like C#.
If only it was cross platform from the word go. Mono's nice, but the MSVS IDE is what keeps Microsoft/Windows up and above Linux as far as ease of development goes.
Python's better than everything else anyway. *hides*
;)
It's already there in Java 1.5 (Score:5, Informative)
Re:It's already there in Java 1.5 (Score:2, Flamebait)
True, it hasn't been released yet (the first Java 1.5 betas are due next quarter), neither is Whidbey, and the JSRs have been out for some time, and the prototype compiler with generic support has been available for months.
Re:Why C# doesn't Totally Suck (Score:3, Interesting)
For some people, perhaps. I find the MS development tools so nauseatingly bad that they are one of the main reasons that I don't do anything with Windows.
Fortunately, on Linux you get a choice: excellent command line tools and IDEs. On Windows, unfortunately, you don't: Windows command line tools simply are completely useless.
Ask a stupid programmer.... (Score:3, Insightful)
It's not surprising that a poor programmer likes C#--it's designed for people who can't design and code well. It's a continuing trend of giving more band-aid's to a language to compensate for lazy and/or incompetent programmers.
Here's a clue: null pointers and memory leaks are not "low level" problems--they're logic error
Re:Ask a stupid programmer.... (Score:2)
Re:Ask a stupid programmer.... (Score:3, Insightful)
Gee, and I thought my MSDN Enterprise subscription would be sufficient. Why should I plonk down $200 to report a bug? I didn't want an immediate fix, we already had a workaround. I just wanted to report a bug. I had sample data and exact steps to reproduce it.
Instead I simply stopped using C# and went on to be
Re:Ask a stupid programmer.... (Score:3, Insightful)
Um, no. I was responding to the so-called "low-level problems" of C++ (not C). Null pointers are useful in a function to represent an object, or the fact that an object is not available. Dereferencing a null pointer is a logic error, because it means the object isn't available. Der
innovation (Score:3, Insightful)
Re:innovation (Score:2)
more info (Score:4, Informative)
Secondly, generics, partial types, and such are being added to the CLR, as well as Microsoft's "first-class" languages, meaning that yes VB.NET will include them. VB.NET also gets operator overloading, native support for unsigned types, and in-line XML commenting.
You can read it all at the roadmap here:
It tells about some of the changes to the IDE, the CLR, and the languages. One interesting new "feature" is a sort of grammatical analyzer for writing code that will suggest improvements or corrections, similar to the way word underlines misspellings or grammar errors.
Whether it will be a great tool or a bloody nuisance remains to be seen.
Re:more info (Score:2)
If you cannot turn it off choice two is the obvious answer!
the kitchen sink too? (Score:3, Insightful)
The next version will of course have features from Esperanto, Mandarin, and Martian.
I'm all for extending a language, but they haven't had C# around enough to be larding new stuff on. The language already had several ways to do most things, now they're adding more?
If we wanted ten ways to do anything, we'd use perl. If we're not using perl, that usually means we like to be a little more constrained.
-andy
Re:the kitchen sink too? (Score:3, Funny)
booooring (Score:2)
It's good to see commercial competition adding new features to commercial languages, although I hope they don't get so feature bloated they become like Perl.
Re:booooring (Score:4, Interesting)
Java generics are broken because they don't guarantee type safety across compilation units. That requires VM changes, changes that Microsoft was willing to make but Sun wasn't.
Java is more and more turning into an accumulation of evil kludges. Instead of type-safe generics, we got a hack. Instead of lexical closures, we got nested classes. Instead of structs, we got some half-hearted promise of optimization under some nebulous set of circumstances that can't work in general. Instead of multidimensional arrays, we got some classes with a horrendous syntax that, on some theoretical JIT, might actually run faster than a snail.
I don't know whether C# will grow up into a well-designed general purpose programming language, but it is crystal clear that Java has missed the boat.
Re:booooring (Score:4, Informative)
-- kryps
Re:booooring (Score:3, Informative)
You do know that java is faster than C# for non-GUI apps, right? source [dada.perl.it]. I suspect that if you dump swing and go with the eclipse SWT, you probably equalize the GUI speed issue too, which would mean that on windows platforms Java is faster than C#.
The "java is slow" reputation was earned with java 1.1 and was fixed long ago when the JIT VM's came out (they are part of all modern JVM's). Memory use issues might give you a real issue to knock
C# generics on built-in types do not use boxing... (Score:5, Interesting)
Java uses Object boxing for built-in types in their generics implementation.
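C++ templates give a concrete picture of the difference being described here: each instantiation is specialized for its element type, so a vector of int holds raw ints contiguously, with no per-element heap box and no casts — the model the CLR's List&lt;int&gt; follows, whereas erased Java generics store Integer wrapper objects:

```cpp
#include <numeric>
#include <vector>

// With a specialized instantiation, the elements ARE ints: storage is
// sizeof(int) per element (plus the vector's own bookkeeping, not a
// pointer to a boxed object), and arithmetic needs no unboxing step.
int sumInts(const std::vector<int>& values) {
    return std::accumulate(values.begin(), values.end(), 0);
}
```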
Re:C# generics on built-in types do not use boxing (Score:4, Informative)
c# and Stdin/Stdout anyone? (Score:4, Interesting)
StandardInput and StandardOutput, in
Piping binary data from one app to another is a very non-trivial task.
These are the small "features" that make c# unsuitable for anyone "thinking UNIX". Of course piping through stdout/stdin is not needed: you can use remoting, sockets or whatever - but those make easy things hard.
Anyone who has written a c# program that uses stdin/stdout for binary data?
BTW, you definitely do not need Visual Studio to program
'unsuitable for anyone "thinking UNIX"' (Score:4, Interesting)
Re:c# and Stdin/Stdout anyone? (Score:4, Informative)
The Console class has indeed two properties: In and Out that are respectively TextReader and TextWriter objects, but there are also the OpenStandardInput and OpenStandardOutput methods that will return you a nice Stream that you can then write directly to (using byte arrays, for example).
And this is all easily done using command line compilers included with the SDK, or in Mono.
See, that wasn't too hard?
Dave
Re:Whidbey? (Score:2, Informative)
Re:Whidbey? (Score:2)
Re:That's great (Score:2)
Re:That's great (Score:2, Insightful)
Re:That's great (Score:2)
But you don't have to use
Re:That's great (Score:2)
Oh wait... [madtasty.com]
Re:Who gives a shit about C# (Score:4, Insightful)
Perhaps there are potential submarine patents, but Java is absolutely vendor-tied while C# is at least relatively open.
Re:Who gives a shit about C# (Score:3, Funny)
BWhahahaha, you fell for that, huh? Propaganda makes trolls all over the world.
Who gives a shit about the ECMA? (Score:2)
Look at their site.
Apparently people cannot be members, only companies and universities (non-profit companies)
Why not an IETF [ietf.org] standard?
They've served very well so far.
Sun has repeatedly balked at standardizing Java due to the inherent loss of control.
I was pretty sure that Sun had published a Java standard. How is a standards org comprised of Micriosoft and it's vassals any better? I was under the impression that the only company that had problems with the Java standard was
Re:Who gives a shit about the ECMA? (Score:4, Insightful)
Of course not.
But those who don't remember history are doomed to repeat it.
Re:Who gives a shit about the ECMA? (Score:5, Informative)
Who the hell is the ECMA?
"Ecma International is an industry association founded in 1961, dedicated to the standardization of information and communication systems."
Here is a list [ecma-international.org] of their standards. It includes specs related to C, Ada, IDL, ECMAScript (JavaScript), C# and WSDL. Interestingly enough, Sun and Oracle are absent from their membership list.
Why not an IETF standard?
Hint: the "I" stands for Internet. What does C# have to do with the Internet?
Re:Who gives a shit about C# (Score:3, Informative)
Ruby Continuations (Score:3, Informative)
Re:Ruby Continuations (Score:2)
Re:Ruby Continuations (Score:4, Informative)
Okay, no offense, but that's the worst description of continuations I've ever heard. It seems to be giving people ideas that it's like goto, which is a common reaction people have when they first hear about continuations. But it's not accurate. Goto manipulates the instruction pointer alone; continuations manipulate the entire stack in much more interesting ways.
There's some good stuff on continuations out there. They have little use in imperative programming styles like C++ encourages. In functional styles, they're used to implement exceptions, non-determinism, coroutines, generators, and a host of other control features that can open up whole new worlds of programming.
The crack about "ways of confusing people" doesn't mean that continuations tend to make your code unreadable, like goto. It means that continuations are a confusing concept, but if you understand them, you can write much clearer code.
Continuations are very useful in AI (Score:2)
Continuations are very useful in AI; often they are the only clean way of implementing a "hold that thought", which comes up very frequently in (for example) natural language processing. If you are trying to evaluate a sentence "as the words are coming in" by building a semantic structure that represents the meaning so far, you generally will want to be able to rearrange the semantic fragments based on later information. For example, if asked to name "the tallest person you know" vs. "the tallest person
informative (Score:2)
Perl 6 will have continuations (Score:3, Informative).
Re:Does C# have continuations? (Score:5, Informative)
Continuations are, roughly speaking, a generalization of setjmp and longjmp in C. However, to have true "first-class" continuations they need to be objects that you can pass around, store in data structures, etc. In C this isn't true, because if you return from the stack frame that did the setjmp, the continuation is invalidated. Lisp has "call/cc", some implementations of ML have "callcc" (typed), and many scripting languages have it, because it's pretty easy to implement in an interpreted language.
Continuations can be used to implement exceptions, user-level thread packages, "early exits" from recursive code, and other cool stuff.
Re:Does C# have continuations? (Score:4, Informative)
Re:Does C# have continuations? (Score:2)
Re:Does C# have continuations? (Score:2)
When I was writing in Scheme (back in 1986), I found myself using call/cc quite often. In CL, I rarely find myself missing continuations, perhaps because the hairy parts of my code don't use defun, but rather use m
Re:Does adding every ingredient make it better? (Score:5, Insightful)
Anyway, anonymous higher order functions and generics are two really glaring deficiencies in Java, C#, and many other modern OO languages, so adding them is a step in the right direction. It's not as if these are minor, useless features.
> Is this their plan to "lock in" universities to teaching microsoft programing to all levels, because it will take
> 4 years of classes just to cover it all?
That's crazy. Universities don't teach programming languages except as tools to teach more important concepts.
Re:Does adding every ingredient make it better? (Score:2)
Re:Does adding every ingredient make it better? (Score:2)
No, he was right the first time. It's undefined behaviour, which is pretty much always broken.
Sorry, couldn't resist.
Re:Does adding every ingredient make it better? (Score:2)
C++ - C-- == 0
I doubt it. Try running this:
int C;
printf("%d\n", C++ - C--);
I don't have a compiler on this machine, but I'll wager 10 to 1 the above code will print "-1". (At least if you run it on gcc.)
-a
Re:Does adding every ingredient make it better? (Score:4, Insightful)
That's a great idea. Sounds great on paper, sounds great in theory. Sounds great while you're playing around with a bubble sort.
After that, its a load of crap.
Tell you what: you learn your bubble sort however you want. Your assignment is to write a program that uses a row of colored spheres, with numbers texture-mapped to their surfaces, to demonstrate how the bubble sort actually operates.
I learned to do this at my university, and was lucky enough to get a professor that hadn't bought into the Windows Thing, and taught graphics programming with OpenGL (available everywhere) instead of DirectX (available in Windows, and if you're lucky, Wine).
In fact, when you get out of your pretty little university, you can try and get a job on "I know my programming theory". If you don't know the language and APIs that Company X is using, you're sunk. These days they don't settle for learning on the job. I had a wonderful job interview for developing an interesting application, I wowed them all with my knowledge, except for one little thing: I didn't know Perl/GTK which was what they were writing their application in. A few weeks later I got a check in the mail for my flight, car rental, and hotel and a thank you letter for taking the time to interview them in person.
Re:Does adding every ingredient make it better? (Score:5, Insightful)
If you've got a Computer Science degree, and you paid attention, you can pick up the syntax for a new language within an hour. With a good API reference, you can be banging out code like an old pro with a weekend of study. It's not that hard.
What matters far more than how well you know a language is how well you know how to program. Any monkey with a keyboard can whip out a Visual Basic app.
But to write truly masterful code... that transcends skill with a language and approaches art.
That said, I'm going to contradict myself: it's important to know the basic capabilities of the language you're working with. Java would be a shitty language to write, say, a program that computes the sum of the two numbers input to it on the command line, because it takes so long for the VM to load -- far more than the actual execution time of the program.
Fortunately, things like that can be quickly learned.
Re:Does adding every ingredient make it better? (Score:2)
HAH
Re:Does adding every ingredient make it better? (Score:2)
What do you mean by broken? Broken implies to me "not working," and last I checked, both of those languages work predictably.
Complicated is mostly opinion-based, so I'd love you know what "not complicated" languages you're comparing them to.
Re:Does adding every ingredient make it better? (Score:4, Insightful)
A good language should support the development of code that doesn't contain common flaws. In my opinion, C and C++ are directly responsible for security flaws that cost trillions.
Re:Does adding every ingredient make it better? (Score:2, Informative)
Not true. Check out University of Waterloo [uwaterloo.ca] as an example of Microsoft's approach to exposing C# to a new generation of developers. Well, engineers, but close enough.
;-)
Re:Does adding every ingredient make it better? (Score:2)
Seriously, aside from the occasional muttering about "algorithmic complexity," I think most of my courses have focused entirely on the language being used. The system is boke. It would take a few million in grant money just to get the 'r' installed.
Re:Does adding every ingredient make it better? (Score:4, Insightful)
Is such a high-level language, one that is designed to run on top of other protocols, the same?
No.
Re:Does adding every ingredient make it better? (Score:2)
At first glance, this seemed like a troll. But then I got to thinking... you never know. It could be a plot by Microsoft to soak up all the time of programmers, so they don't have time for PHP & MySQL love.
No, but (Score:2)
Re:Does adding every ingredient make it better? (Score:2)
The good: Perl
The bad: C#
The ugly: PL/1
-transiit
Re:Sea Number/Sea Sharp (Score:3, Funny)
Re:Sea Number/Sea Sharp (Score:5, Funny)
Or as Microsoft execs like to pronounce it amongst themselves, cash.
Re:Sea Number/Sea Sharp (Score:4, Interesting)
The octothorpe symbol, '#', has slanted vertical strokes. The "sharp" sign has slanted horizontal strokes.
Re:Sea Number/Sea Sharp (Score:2, Offtopic)
don't you read telephone manuals!? (Score:3, Interesting)
Honestly why should one bother? It's neither portable nor natively executable. It's neither scalable to embedded systems nor to high-end servers. It has neither legacy code nor a bright future.
Mono is a good start, but M$ will fight it when it starts to show results.
Re:Sea Number/Sea Sharp (Score:3, Funny)
Re:More C++ (Score:2)
Neither are C++ templates.
Re:VB Programmers (Score:2)
You're missing the point; the new VB.NET spec will be out just as soon as the search-and-replace has finished.
Re:VB Programmers (Score:2)
Re:VB Programmers (Score:2)
For someone who has absolutely no clue what a programming language is, or how applications are developed, seeking out t
Re:Why should I care? (Score:5, Informative)
I'm not a full-time developer, I usually develop some basic web applications to enhance some of the new solutions I implement for Systems Administration. My experience with it is limited, but I'll give you my pro's and con's:
Pro's
Easier access to IO - just try it in Java and see. It's much faster in C#
Improved XML support - also a lot simpler in c#
Not as many third party specifications to learn. I remember having to learn Struts, Ant, Tomcat, and then Sophia after learning JSP - what a pain in the ass.
MSDN - The help system inside VS.NET is better than most languages' will ever be.
Con's
Not the best IDE in my opinion - IntelliJ smokes Visual Studio.NET in almost every respect (except for the help).
Can't use it on Linux or BSD - my applications are bound to fail more frequently than an equivalent Java/PHP/Perl app running on a secure box.
Most of the places where I used to get support for Java, Python, and other open-source languages don't discuss C#. There just aren't the same number of mailing lists, IRC channels, and forums to throw around C# ideas. The ones that do discuss it tend to cater to the lowest common denominator.
I have to resort to Visual Studio 6 in order to create desktop applications that run on everyone's machine. The .NET framework has been a hard sell for the enterprise I work in.
Re:Why should I care? (Score:2)
Prolog, Pascal, and Pike?
Re:Why should I care? (Score:3, Insightful)
I'm the last man in the world to support Java, but C# is optimized for Windows, and probably matches the OS's file system better. I'm not sure if C# would do as well in a non-MS environment
This is a pet peeve of mine. LANGUAGES SHOULD NOT BE DEBATED BY THEIR STANDARD LIBRARIES. Don't like a library? Download another. Buy one.
Re:Why should I care? (Score:3, Interesting)
You would consider it a feature that there aren't third-party tools to improve development and deployment?
I have no idea what Sophia is, but I used java for a long time before hearing of struts and ant (you know, you can use Make with java). Struts takes the generic specs and makes things a lot better. They're both optional. Like many other misgui
Re:Why should I care? (Score:2)
Re:Why should I care? (Score:2, Informative)
Exception handling is a little looser, without the need to declare thrown exceptions or catch those declarations. There is still an exception-based error handling system; it's just more implicit than explicit.
I really like the properties, an idea they took from Visual Basic. At some point in Java history (birth of EJBs?) it was concluded that public class variables
Re:Why should I care? (Score:3, Informative)
I'm in web development (full Microsoft environment) using C#, SqlServer2000, WinXP
Pros:
Re:this is fantastic (Score:2)
If you were not who you were, you would be modded down faster than you can say goatse. hehe
But seriously, I am installing the C# learning edition now and plan to install your Mono on FreeBSD 4.8 or 4.9, if the rumors are true that FBSD 4.9 will be done this Monday!
I used to hate Microsoft with a passion. Their products used to suck until quite recently. The win32 API is quite nasty compared to gtk+ but with
What worries me is api closeness. Yes Mono the last time I liked supp
Re:this is fantastic (Score:2)
Oh great, so where can I download the specs for the MS document formats, the Windows
Parent is NOT Miguel de Icaza (Score:2)
This is Miguel de Icaza? I don't think so. Look for miguel [slashdot.org].
Re:Summary of changes: not much new (Score:3, Informative)
1 is only in the spec stage here, whereas for Java there is already a technology preview, i.e. a more or less working implementation
There has been a working implementation of generics for over a year now (for rotor). | https://developers.slashdot.org/story/03/10/25/188234/c-20-spec-released | CC-MAIN-2016-40 | refinedweb | 4,723 | 65.22 |
Schulwandkarte Beautiful Old Londonkarte City Map 42 7/8x52in Vintage From 1970
Schulwandkarte Beautiful Old Londonkarte City Map 42 7/8x52in Vintage From 1970:
$416
Schulwandkarte Beautiful Old Londonkarte City Map 42 7/8x52in Vintage from 1970 The description of this item has been automatically translated. If you have any questions, please feel free to contact us.
See it in the list under No. 836 London (WM3); 2 detail photos of this map
This special offer's shipping price applies from Germany to GB only (excluding postcodes KW/IV/PH/PA); shipping costs depend on weight, length, postcode, and country
size YYYxZZZ plus wooden rods
My system: all maps have a type number for easier collecting, with the stock number in brackets. Questions? Mail or call me, or please look at my site dominus.de
1. Australia and Oceania Westermann 15th Edition 1965 1:6 000 000 may 12 Edition if I find my 9.A, weight 3,1 kg 208 x 170
2. West Germany Haack Painke 2.AuFlage 1974 1:200 000 190.5 x 211
3. Northern Germany Westermann 7.AuFlage 1973 1:500 000 top with borders of 1937 239 x 155, metal rods, re Brown in the bar and in the Baltic Sea
4. Eastern Europe peoples and States (language distribution) after the 1st World War Paul list Verlag Wentschow relief map 1:1 000 000 163 x 232 specifically German population 167 x 229
5.. Pen entries and slight damage, 11EUR shipping
7 migration and State formation in the 4.-8.Jahrh. Westermann 1974 red rods stickers 198 x 19 7.5 new price at publishing house 249EUR
8 Europe in the XX.Jahrhundert Flemming Verlag Hamburg 6.AuFlage min.. 1958 4 small crack 20 4.5 times 1:4 500 000 very smoothly, in Türkei1.Karte
12..Karte down
15. four times Germany in the 20th century, Paul list Verlag 4.AuFlage? 4 x 1:500 000, without caption new in the bar attached 236 x 171
16th Asia economic harms Paul list Verlag, 1:8 000 000, 1.AuFlage, veiling, shipping 11EUR 161 x 114. Islands 229Bx183H
19 Middle East and India Westermann 1:4 000 000 5.AuFlage 1965 194Bx155H
20. the first World War 1914-1918 Westermann 1:3 500 000 210Bx180
21 Europe in the 16th century 1580 Westermann 1:2 500 000
22 Europe after the 2.Weltkrieg 1945-1970 Justus Perthes Darmstadt 1:2 000 000 240Bx200.A 206x187H
31 North America VEB Haack lower bracket missing Gotha 6Mio 1954, map battered Center, 159x146H
32. North America Westermann 6Mio 1973 18A. 155x170H
33. South America JPD 6Mio 1976 1A 160x200H
34. South America Westermann 6Mio 1968 12A 148x179H
35. United States of America and South Canada Westermann 3Mio 1965 5A. 201x193H
36. Western Hemisphere Westermann 12Mio 1960 13A. about 165x168H
37. Asia Westermann 6Mio 1968 26A. 206x220H
38. Palestine Westermann 1:250 000 1963 top state secondary map of Jerusalem at the time of Jesus 119x158H
39. Türkiye Dogal Haritasi Wenschow 1.5 million plastic wrap instead of canvas or paper, 1 bar off, 137x97H
40. Australia and Polynesia Diercke 6Mio 1955 8.AuFlage, CA 175x160H
41. Germany Central Europe Westermann 1: 700,000 1974 210x218H, little Metallstäbung, small damage on Kolberg glued,
42. Germany Federal Republic of Germany and the GDR JPD 4 50,000 1982, 3A, 189x218H top
43. Germany Central Europe Westermann 700,000 1977 29A 205x222H
44. Germany and neighbouring countries Westermann 700,000 1960 4.AuFlage 202x189H
45. German central uplands JPD 4 50,000 1956 little cracked down right about 215x161H
46. Lower Saxony Westermann 200,000 1975 196x172H
47. North-West JPD 2A. 200,000 1980 191x223H
48. Alpine and Alpine countries Westermann 600,000 1973 2A. 180x128H
49. Apennine peninsula (Italy) Westermann 900,000 1970 15A metal rods approx. 154x184H
50th Europe Westermann 3Mio 1953 204x157H
51. Europe soil conditions Westermann 3Mio 1971 31A. 196x178H
52. Europe States Westermann 3Mio 1980 16A 196 x 180
53. European economy Westermann 3Mio 1978 196x183H
54. Iberian Peninsula (Spain) Westermann 900,000 1969 5A. about 173x164H
55. Eastern Europe JPD 2 1971 4A 200x210H
56. East Central Europe JPD 7 50,000 1970 190x200H
57. Nordpolargebiet Südpolargebiet Wenschow 2x10Mio water stain 172x121H
58. the countries of the world Westermann 18Mio 13A 202 x 122
59. British Isles Westermann 1:600 000 9A. 183x229H
60. North Africa Westermann 6A. 1: 3.000.000 1973 254 cm wide x 149 cm high
61. the Western continents Justus Perthes of Gotha 1952 1: 10Mio 155x208H good copy
62. Central America and Northern South America Westermann 1977 245 x 154 7A 1: 3.000.000 NK States and transport top
63. United States and Central America Haack Gotha 1977 3.5 Mio 205x160H heading slightly damaged
64. Western Hemisphere Eduard Gabriel Gerog long before Versailles because German colonial possessions before 18.AuFlage 173 x 173 battered repainted 1: 12Mio in Micronesia
65. map to the biblical geography of Diercke Westermann 10A. 1967 good copy 183 x 148 2Nebenkarten
66. Middle East Haack Gotha 1: 2,000.000 pressure for JPD top 1984 227x180H
67. Kontor & traffic map of German Empire 1:700000 Eduard Gaebler broken but complete 8A. giant plate 1cm thick from the time
68. South Germany 1960 Diercke Westermann limes 1:250000 234x188H OK.
69. the flight of the people development of flying Jensen Hamburg 199 x 185 cracked close heading 50s 60s
70. bottom shape of the Earth Westermann 12A 1968 249 x 144 1: 15,000.000 top
71. the climates of the Earth Westermann 6A 1967 1: 18Mio 202 x 126 top
72. the landscape belt of the Earth VEB Haack Gotha 1953 1: 20Mio 1000Stk. Edition 211 x 119 cracked u repaired
73. Earth, vegetation VEB Haack Gotha linen 1975 204 x 138 1: 15Mio top
74. the economic exploitation of the Earth, Flemming 1zu15Millionen, somewhat worse, 224x162cm
75. the world of 1415-1615, Justus Perthes Darmstadt, 12 million, 1973, top, 268x168cm
76. the age of the discoveries, discoveries and travels in the 15.u.16.Jahrh. 18Mio the development of the Inca Empire 12Mio Mexico z: Z: the discovery of 3Mio, coastal discoveries in Caribbean 9Mio, discoveries in the South Pacific in the 17.u.18.Jahrh. 18Mio, Westermann, 204x130cm, top
77. the Pacific-Atlantic space, Justus Perthes Darmstadt, 1966, 1.AuFlage, 269 x 165 huge, top
78. the Earth, Justus Perthes Darmstadt, 7.AuFlage 1977, almost very good, 24 million, 139x82cm,.
79. the States of the Earth, Justus Perthes Darmstadt, 2A 1964, 138cmx82cm glue, otherwise good
80. Germany in the world, 15Mio, Jensen, 234 x 161, something stuck, very large
81. Africa Asia, Sandu publishing, North and South Vietnam prior to 1975, roughly rubberized PVC, writable with washable pens, fine, 158 x 187
82. Africa, Westermann, 8A 1965, top, 162 x 175
83. Africa, Justus Perthes Darmstadt, 6M, 1980, 5A, ca. 158 x 198, top
84. North America, JPD above li easily def. , 2A. In 1963, 10Mio, 100 x 119, otherwise top
85.Nord-Amerika, Westermann, 6Mio, 1957, 8A., ok.
86. North America, Wenschow relief map, 6Mio, pasted response, this red, veiling
87. North America, harms, 5Mio, defective, according to 2CO, 188 x 210,
88. South America, Westermann, 6Mio, 13A, 1969, something stuck.
89. South America, climate and vegetation, 4x9Mio, Haack 1966, Holzstäbung, 182x193hoch
90th Western Hemisphere, Westermann, 1952, 12Mio, 10A, 165 x 171, very well, down slightly defective, li
91. Asia, Westermann, 6Mio, 206 x 222, 25A, very good
92. image map to the old testament, Becker, 1Mio and 300,000, with text, interesting.
93. map to the biblical explore, Westermann, 2.5 million, 1951, 6.A., glue
94. East and South-East Asia, Westermann, 4Mio, 1971, 9A., 186x194cm H, very good,.
95. Eastern hemisphere, Westermann, 1951, 8.A, above li u re holes, 166 x 168
96. Soviet Union, Justus Perthes Darmstadt, 3.5 million, 246 x 180, very good
97. Türkiye Dogal Haritasi, 1.5 million, supporting card 4.5 Mio, plastic, washable writeable, not linen, Holzstäbung, Wenschow, very good
98. Australia and Polynesia, front 5Mio, Arctic Antarctic, back each 16Mio, Sandu publishing, PVC rubber writable wiped off, not linen, 177x158Höhe, top
99. North - and Südpolargebiete, Wenschow relief map, each 1 to 10Mio, something stuck, 172x120Höhe
100. the States of the North Pole, Justus Perthes Darmstadt, 1.AuFlage 1959, 1 10Mio above li def., 144x116cm height
101. Federal Republic than Germany, economic priorities, 1 500,000, Jensen, 232x162cm height, something stuck
102. Germany, 1937, map of 1966, 2A JPD, ink entries in the Ruhr area and map li above 10 cm 128x98Höhe solved, otherwise good, approx.
103. Germany, borders of 1937 entered weak, 1979, 6A. JPD, scale 1 600,000 top, 213x163Höhe
104. I have not written on Germany and neighbouring countries, 2A, probably between 1955 and 1960, here with borders of 1937 and 1919!, Westermann, 1 700,000, glued
105.
106. Brockmann BB´s home map of the land of North Rhine-Westphalia, Bielefeld, 1zu1 40,000, top, top centered somewhat repaired, 201x176cm height
107. the Ruhr area and surroundings, design Störmer, Budde, Publisher 1 50,000, red is the reaction of the nachträgl. Foil repairs, there are large holdings, registered paper booths, etc. According to legend top, topographic map of large, 194 x 160,
108. home map Romerike mountains, wavy, approx. 250 x 200, 1 glued down by painting the Stäbung, 25,000, top,.
109. North Rhine-Westphalia, Westermann, 12A 1963, 50,000 1zu1, very good, 193 x 177
110. NRW Sauerland-Bergisches country winner, 1 200,000, 5 photos, 4 drawings, 3 maps, interesting water stains above, otherwise ok., 140x96cm height
111. NRW Weserland and Teutoburg Forest, Stockmann, 1 200,000, 4 photos, 4 drawings, 5 maps, interesting, slight water stains above, otherwise ok. , Water Street intersection near Minden, etc., 140 x 95
112. the journeys of the Apostle Paul, Becker, small defect in the Carpathians, otherwise top, 215 x 165
113. British Isles, Westermann, 1zu600T, London, Paris, 1965, 7AuFlage, top, 187 x 224
114. the Roman Empire, Westermann, 2.5 Mio, inset maps 5Mio, Caesar Augustus 1.5 Mio Germania, 1966, 210cmx196cm, top
115. democracy against Absolutismus-your impact (of some 1750-1850), the French Revolution, Stockmann, glued, 138x95cm
116. Europe, Sattaten, Westermann, 1964, 8.AuFlage, 197 x 181, 1 3,000. 000, top
117. Europe from 1815 to 1871, 3 x 1 2.5 Mio, Westermann, 1969, 204 x 133 height, top
118. Europe: Expansion Hitler (1937-1945), JPD, 2AuFlage 1975, under the right light incorrectly, 145x133cm height, ok.
119. United States and central america, h 1979, all English, on linen, real 204x162cm, 3.5 million worn, very authentic, really in the teaching profession, Blechstäbung
120. Asia, Haack, 1975, 6Mio, 182x214cm, room Japan cut, E.g. just with a different card from me with order...
121. Federal Republic of Germany get good economy, Westermann, 1991 500,000, 139x202cm,
122. Germany under the Hitler regime 1933-1945, JPD, 1978, 4A., top, and interesting, 241x193cm high
123. British Isles, Haack, 1966, 148x176cm, 1 7 50,000, top bar is missing, Holzstäbung is lower
124. early Christianity Paul 2, spread 4.5 Mio, Palestine land of the Bible of 200 T, JPD 2A 1976, top, 206 x 172
125. Europe, climate and vegetation, 4x6Mio, 245x180cm, Metallstäbung, Haack 1974, whether and below re veiling, linen,.
126. geology of Europe physical wall Atlas Justus Perthes of Gotha, pressure approx. 1941 as Litzmannstadt, instead of Lodz, heading lacks, well,.
127. Italy, Haack, 1959, 750T, Holzstäbung, error: header cracked, the Otler and Algeria paper slightly loose, ungeklebte map, 180x179cm
128. North Sea, 2x2Mio, above re easily compressed, 146x97cm height, Westermann, otherwise top
129. mining and industrial areas of the Earth, Haack, 1967, 241x142cm, ok.
130. Earth, political overview, Haack, 181 x 112, 20Mio, chalk hearts in the Pacific and chalk fog in the Atlantic, unt re somewhat triggered, OVR hole Alaska
131. vegetation of the Earth, Justus Perthes, Gotha CA 1951, 1zu20Mio, top, eye-catcher, 212 x 135, both lines are tree borders, Holzstäbung
132.
133. the Holy Land, Verlag Jensen Jackson, with NK Jerusalem, approx. 122x161cm, religious education, glued, but OK.
134. Germany boots sales Wenschow relief map, paper lift, no lines, State 1991, 99x135cm, top, 1 7 50,000
135. Germany, country groups political, boots sales Wenschow relief map, paper lift, no lines, State 1991, 98x135cm, top, 1 7 50,000
136. Germany, a kind of aerial/bird's-eye view, smallest towns are hinted at, interesting, Mairs Geographical Publisher, designed 1989, around 98x145cm but with no single scale (roughly 1:750,000, distorted toward the far horizon), Scandinavia visible
137. the seaport, clearly Bremen, Verlag Jensen Jackson, with legend, without damage, except anything small above re, top, interesting, partly with Toponomastic, huge, 188 wide, 256 cm high, plus bars!
138. circle Osterholz, Orion card, 1 25,000, PVC-like foil, 200x166cm, OK
139. Lesum Ihlpohl Ritterhude school Environment map geo-Porta Westfalica 1zu4000, photo paper wrapped, top, 192x121cm
140. Lower Saxony, Westermann, good copy, up broken 1zu200 000, 200 x 173, 1966
141. Lower Saxony, boots-Verlag, Wenschow relief map, 1987, 1 300,000, 140cmx105cm, lower staff fully front, back solved, blank map,.
142. North German lowlands, sea dikes Moore-Geest, Stockmann, series Germany, the map has interesting overview, 1zu1Mio, 99x66cm, OK
143. well preserved France and the Benelux States, Haack 1957, 1 7 50,000, all terms of german, Irish Sea, Norman Islands, etc., 155x193cm,
144. the Earth orbit - world map, Westermann, about 1974, 1zu33Mio, 120x83cm, above right 40 cm Holzstäbung, PVC, not linen, resolved from the bar, otherwise OK.
145. the States of the Earth, Justus Perthes Darmstadt JPD, 1965, 3.AuFlage, 1zu24Mio, 139x84cm, wasserfleckig, ajasi WD i.o picked off, otherwise.
146. building map, so far OK 168x113cm plus Holzstäbung
147. Dezimalmaße and weights, Herold Bad Godesberg, 119x78cm, so far OK Holzstäbung
148. development of life, C.Jaeger, small paper tear at the circle time water dinosaur, adhesive, top, for children as a learning tool to study, 161x225cm
149.,.
151. Latin output font and typeface, back simplified output font and typeface, Hubertus Publisher Recklinghausen, 135x106cm, PVC-like with chalk writable, top
152. map of Imperial weights VS overview map length RS, Vitus/Lingen, for the elementary school 109 x 126, beschreib-and washable
153. our planet in space, C.Jaeger, approx. 60s, 180x177cm, good copy
154. numbers space up to 500, teaching aid for primary school or just at home
155. figures room up to 100, hundreds Board, Vogt Publisher, PVC, single sided, writable with circles, 66x79cm, top
156. Africa, Westermann, orbit - wall card, Swedish in German version 1971, top, Metallstäbung, 1zu6Mio, 158x167cm
157. Africa: States, harms ca. 1960, 1A. top broken, NebenK15Mio, HK 7.5 Mio, 152x134cm, ok.
158. Central and South Africa, Westermann, 3Mio!, 1972, 6A. , Metallstäbung, li edge veiling, otherwise top 210x173cm
159. North Africa, Westermann, 3Mio!, 1977, 7A. top, 255x153cm
160. Canada, Westermann, 3Mio, 72, Brown stains, supporting card 1zu36Mio, Metallstäbung, 204x190cm, otherwise top
161. Central America and Nördl. South America, Westermann, 3Mio, 1968, 4A. Secondary card 9Mio States and transport, 245 x 154, top
162. South America, orbit - wall map, Westermann, Swedish Edition in German edition 1971, 6Mio, above li something dirty, Metallstäbung, 136x182cm, otherwise top
163. Australia and Oceania, Westermann, 6Mio, 1965, 15A. , some foil labels, otherwise top 208x170cm
164. Germany, Westermann, 1 700,000, 1964, 210 x 225, top
165. German Aland economy, Westermann, 1976, 9A. 1 600,000, borders of 1937, 244x194cm, top
166. industrial and traffic map of..., 1964, VISBrass Düsseldorf, 1zu50T, below re 40 cm paper, no canvas, 207 x 160, any large industrial place, solved by the bar, Garzweiler, etc, top interesting detail
167. Lower Saxony and the East, map to the reunification of Germany, on behalf of the Lower Saxony State Agency for home service, 50s 60s, 118x79cm
168 South Germany, Westermann, 1zu2 50,000, 1958, 13A. top, 234 x 193
169. South-Western Germany, Westermann, 1zu1 50,000, 1975, 7A. top, 196x221cm
170. the second world war, Westermann, 1959, main map 3.5 Mio, Nebenk. 1 7.5 Mio, 21Mio, 21Mio, interesting and serious, 208x178cm, with labels fixed
171. In 1977, upper bar broken British Isles, Haack Gotha, 750 T, lower bent metal rods, 148x174cm,
172. Iberian Peninsula, Westermann, 1962, 2A. about heading cracked, flyspeck in the Mediterranean, hole just below Word 2.AuFlage, otherwise top, 174x165cm height
173. learning map of Europe, JPD, 1977, 3A. top re 5cm out, otherwise ok., 201 x 185
174. in Central Europe coal..., Westermann, 1959, 1zu1Mio, 11A. top, 183x125cm
175. Mediterranean and Near East, Westermann, 2, 1968, top, 269x150cm
176. Netherlands Belgium Luxembourg, Westermann, 1 300,000, 1963, 4A. to Paris!, top, 135x209cm
177. Northern Europe Baltic Sea countries, Westermann, with Iceland, 1 1,2 Mio, 1974, 18A. NK also 1,2 Mio, no labels, top, 180x212cm
178. North Rhine-Westphalia and the East ChJaeger 1950 and 1960s, 168x110cm, something stuck, ok.
179. North Sea countries, Westermann, 1 900,000, 1962, 15A. , some stickers, otherwise ok. 224x208cm
180. Baltic Sea countries, Westermann, 1 1,2 Mio, 1959, 8A. , small defects between Arendal and Oslo and Hambourg Hamburg, otherwise ok, 180x214cm
181. Spain and Portugal, Haack Gotha, 1960, 1 7 50,000, heading a defective lines i.O., paper gone, sticker JPD, i.e. for West Germany produced, 160x149cm, brilliant picture
182. South-Eastern Europe, Haack Gotha, 1958, left the paper 30 cm from the bar solved, otherwise top 1 7 50,000, 180x179cm
183. North Pole States, JPD, 1zu10Mio, 1959, 1A. Canvas works 144x112cm bent, interesting
184. Nordpolargebiet, Haack, Gotha, 1956, retouched with Kö 6Mio, without limits, with pole trips, slight damage, Kaliningrad, Gdynia, Szczecin, Wroclaw. There. St. br., as well former West delivery, 198 x 200, otherwise top
185. Südpolargebiet, JPD, 6Mio, 1973, 1A. top, 196x220cm
186. the standard of living in the world, Westermann, 1zu25Mio, 1974, 5A. Metallstäbung, 138 x 207, ok.
187. the Earth - physical, Westermann, 1zu15Mio, metal rods, no label, good, 1973, 14A. 246x143cm
188. the States of the Earth, JPD, 1zu24Mio, 1973, 7A. very well maintained, 139x83cm
189. landscape belt of the Earth, JPD, 1zu12Mio. in 1972, easily knittrig, otherwise top 269x168cm
190.
191. world labour card, Publisher of modern equipment, 1zu18Mio, down li resolved, 15 cm foiled unhandled Holzstäbung, brownish to Neptune, probably to writable, 193x134cm, ok.
192. Earth, vegetation, Haack-Gotha, 1zu15Mio, 1960, 240x138cm, nor Holzstäbung, good copy
193. Rheinisch Westfälisches industrial area, Westermann, 1 50,000, up settlement, down mining, 1971 136x206cm, top
194. British Isles, Westermann, 1 600,000, 1975, 12A. Metal rods, top, almost very good, if less damage in the Bristol sound (channel) would not be 182x230cm.
195. Europe after the 2.Weltkrieg 1945-1975, JPD, 1zu2Mio! IN 1975, 3A. top, 245x193cm
196. Italy, Westermann, 1 600,000, 1974, 2A. Secondary card ROM 10 T, top, no canvas, 194 x 208
197. Westermann, 1 40,000, Moscow 1976 5A. 40T10T2T City Centre, Kremlin, paper, no lines, 109x133cm, top
198. History frieze 800-1498, Tellus-Verlag, slightly damaged above right and left, approx. 122x82cm, interesting
199. British Isles, Haack, 1968, 1:750,000, metal rods, linen, hazing, cracked, otherwise OK, 149x174cm
200. British Isles, Haack, 1987, 1:750,000, defective, rodent damage above left, metal rods, paper, no linen, 123x169cm
201. the emergence of the Roman Empire, CataldiVerlag, 1 2.8 Mio, Holzstäbung, pre-1990 good copy, 158x108cm manufactured,
202. the Greek economy and culture in the 5Jh. BC, StiefelVerlag, 1zu750T, NK Athen5T, laminated cardboard, 158x109cm, good ex.
203. Slate-cloth roll-up blackboard map, Lehrmittel-Verlag Leipzig, 1:3 million, writable with chalk and cleanable with a damp cloth, fun!; inner-German border probably permanently retouched, 5 kg!, 180x124cm, top
204.
205. London, Haack, 1974, 1:10,000, cracked, 40 years old, lower rod bent, metal rods, 157x106cm
206.
208. Roman Empire, Haack, 1:2.5 million, inset map 1:5 million, 1976, paper, no linen, 2 holes, without heading, provisional square wooden rod above, original metal rod below, 215x150cm
209. Earth, climates, Haack, 1:20 million, 1982, paper, no linen, metal rods, 183x105cm, top, brilliant colors
210. The Earth, physical, Haack, 1:15 million, 1980, creased, cracked, without borders, torn out 33 cm below right and 10 cm above right, 240x133cm, intense image
211. World work map, Stiefel-Verlag, 1:30 million, wooden rods, paper-like, no linen, 136x95cm, very well preserved
212.
213..
215. Asia, JPD, 1:2.5 million, 1965, 2nd ed., ballpoint strokes at the scale bar and the Sunda Trench, not conspicuous; the not-so-intrusive scale makes it looser yet still as rich in detail as the large maps, 95x107cm, top
216. Apennine peninsula, Westermann, 1zu900T, 1964, 11A. 154x185cm, ok.
217. Africa, 1 6Mio, 1942 in Gotha, Thuringia print without borders, above cracked, 160x196cm
218. Africa, 1 6Mio, printed in 1971 in Darmstadt, Germany, with borders, top, Holzstäbung, 158x202cm
219. Africa/Asia, foil, printed double-sided, by Sandra, PVC, each 1:6 million, handwritten entries at Cologne, Rotterdam and Wilhelmshaven, 157x185cm
220. Africa politically/historically, Westermann, 6Mio, inset maps 1zu15Mio, 1972, Metallstäbung, 7AuFlage, 206x186cm
221. The New World, printed 1942 at Justus Perthes, Gotha, with borders, some stickers, 1:10 million, 153x208cm
222. Central America and nördl. South America, Westermann, 1zu3Mio, 1963 2.AuFlage, top sheet, 245x155cm, print,
223. North America, Westermann, 1 to 6Mio, 1965, 13.A. crack from Baja California, 156x172cm
224. North America, printed 1954 at the expropriated Perthes in Gotha, paper somewhat glued below in the Pacific Ocean, 1:6 million, 159x146cm
225. North America (front) / South America (back), Sandra-Verlag, double-sided, 1:6 million, small damage above left, PVC-like, no linen, 160x183cm
226. North America, climate and vegetation, Haack, 1zu12Mio 4 x, 1981, top, 210x168cm
227. South America, JPD, 1zu6Mio, 1967 1A. or 1977 3A. 160x200cm
228. South America, Gotha/Thuringia, 1954, 1zu6Mio, top glue, 118x186cm
229. South America, Westermann/Braunschweig, 1962, 8A. 1zu6Mio, glued, 148x178cm
230 United States industry, Haack, 1970, with inset maps, Metallstäbung, 203x161cm
231. United States and Central America, Haack 1955, 1:3.5 million, top except the upper rod, broken in half; pencil at Santo Domingo, ballpoint at Cape Canaveral, Florida, 200x143cm
232. United States of America and South Canada, 9th ed., 1973, Westermann, metal rods, torn out 12 cm above right, otherwise top, 200x194cm
233. United States of America: economy, top, Velhagen and Klasing, 1zu3Mio, 231x160cm
234. Western Hemisphere, Westermann, 1958, 12A. top get 165x170cm
235. Asia. Westermann, 6Mio, 1964, 23A. almost very good, 206x220cm
236. Asia orbit wall map, Westermann 1975, original is in 1973 from Sweden Metallstäbung, 6Mio, 183x193cm,
237. the Empire of Alexander the great, JPD, 1 2.5 Mio, about 1956, repairs, 220x111cm
238. the old Oriental, Westermann, 1958, 9Mio 3x3Mio, 210x130cm cards
239. the Eastern continents, JPD, 10Mio, 1961, 3A. CA 223x192cm
240 Indochina and Indonesia, Gotha, 1955, 3Mio, glued, up cracked, down 2 x 194x211cm
241. Japan, printed at Denoyer-Geppert, United States for Westermann, master card 1.5 Mio, 1970, 109x156cm
242. Old Testament, defective map!, heading cut out, Jackson Jensen, ca. 1964, 170x117cm
243. North Asia (USSR), 1zu4Mio, Westermann, Metallstäbung down bent, 1970, 10A. 239x182cm
244. Eastern hemisphere, Westermann, 1953, 1zu12Mio, some defects, Antarctica spotty, 165x159cm
245. Russia's rise as a great power, Flemming's Hamburg, 1zu8Mio, ca 60's, top, 254x189cm
246. People's Republic of China, 1963, still 1:3 million!, made by Haack for JPD, top; under Taiwan the GDR note was probably painted over in blue, 214x160cm
247. On the history of the Ancient Orient, Haack 1985, metal rods, one 12 cm rodent hole!, paper, 121x85cm
248. Federal Republic of Germany and the GDR: mining and industry, JPD, 1:450,000, top, 1981, 187x202cm
249.
251. Germany political outline Keyer wall Atlas with countries in the Soviet occupation zone, ca. 1952-1958, 1 1.2 5Mio, 112x70cm
252. Germany and neighbouring countries, Westermann, 7A. 1964, 1 700,000, get top, 201x189cm
253. Germany: Federal Republic of Germany and German Democratic Republic, JPD, 1zu4 50,000, vastly, 1979, 2A. probably top 188x217cm
254. The Germanic peoples 300 BCE to 200 CE, Westermann, probably 1940s, 1:1 million, stained, inset maps 1:2.25 million and 1:3 million, 188x116cm
255. the Weimar Republic 1918-1933, JPD 2A, not glued, 1 600,000, 239x186cm
256. School wall map of Germany after the Peace of Westphalia 1648, Carl Flemming in Glogau, Silesia, probably 1930s, 1:800,000, damaged, repairs, Dr. Hermann Streich, 193x153cm
257. Germany four times in the 20th century, by Harms, 4th ed., top, 236x176cm
258. migration period after 1945, Stockmann-Verlag, 100x71cm
259. the Rhineland, Westermann, 1zu1 75,000, 1954, 3A. 147x212cm
260 Lower Saxony land between the Weser and Elbe, Stockmann-Verlag, map 1 285,000, pictures, 139x96cm
261. Lower Saxony North Sea coast and Islands, Stockmann-Verlag, pictures, 139x96cm
262. lower Weser, Jackson-Verlag, 1 60,000, around 1957, without title, label, 188x170cm
263. North-Western Germany, JPD, 1960, no header, 1zu4 50,000, 216x153cm
264. Saarland, JPD, 1985, almost very good 1 50,000, 1A. 170x134cm
265. Schleswig-Holstein, Westermann, 7A. in 1966, little knittrig, otherwise good, 1zu1 50,000, 186x202cm
266. Switzerland, made by Haack in Gotha for JPD, 1962; unfortunately the lower rod is off, the map itself is almost OK, the heading above is cracked; on request I will send a rod; 1:185,000, 208x160cm
267 South Germany, Westermann, 1zu2 50,000, 1965, 19A., 234x194cm
268. South-Western Germany, Westermann, 1zu1 50,000, 1965, 4A. , def heading. , otherwise top 196x222cm
269. Alpine countries, Haack Gotha, 1:450,000, 1962, impressive map, without borders, heading slightly defective, 215x160cm
270. Apennine peninsula, Westermann, 10th ed., between 1960-1965, heading defective at top, 155x185cm
271. image map: the journeys of the Apostle Paul, Becker-Verlag, 216x167cm, very good, Bible customer
272. Rise and decay of the medieval Empire, Flemming Hamburg, 4 x 1:4.5 million, 2nd ed., good copy, ca. 205x162cm
273. the classical Greece, JPD, 1 500,000, 3A, 1974, top, NK 1.5 Mio, 198x188cm
274. The First World War 1914-1918, Westermann, main map 1:3.5 million, 3 inset maps and 1:24 million, without heading, slightly damaged, 207x179cm
275. the Reformation, Becker-Hamburg, image map, probably 60's 207x165cm
276. the economic exploitation of Europe's, Flemming's Hamburg, 3,000 1.000, 1954, up stickers, 207x158cm
277. Danube countries, GJP, so probably produced between 1945 and 1952, 1 7 50,000, Gerlach instead of Stalin Peak, Wroclaw Wroclaw both map without borders , top, 261x159cm
278. Danubian and Balkan peninsula, Westermann, 1 900,000, 1958, 7A. OK, 178x216cm
279. Europe, Westermann, produced before 1954, there free Trieste, 1zu3Mio, patina, 192x140cm
280. Europe, JPD, 1979, 7th ed., stained, bent, scribbled on, 1:6 million, rare, great overview including the Sinai Peninsula, 98x90cm
281. Europe 1917/18 until 1939, Haack Gotha, 1980, 1:3 million, paper, no linen, metal rods, slightly torn above, 212x178cm
282. Europe soil conditions, Westermann, 1961, 21A, 3Mio, 196x180cm, some tape, authentic
283. Europe in the 16.Jahrh. Westermann, 2.5 million, showing in 1975, 1580, 2NK, faith divisions in Central Europe 2.5 M and Russia 5 M 1462-1667, 204 x 133, top
284 Europe in the 19th century, Haack, 3Mio, 7Mio, NK in 1956, somewhat damaged large historical wall Atlas, reprint of Haack-Herzberg glued, 190x158cm
285. Europe in the 20.Jahrhundert, Fusbahn, 4.5 million 4 x, ca50er, broken Bay of Biscay, 208x173cm
286. Europe States, Westermann, 1zu3Mio, 1958, 3A. 197x183cm
287. Europe in the 20th century, Fusbahn, only 2 of the 4 maps of the original set; maps 3 and 4 were cut off, 209x85cm, 1 strip off
288. France, Westermann, approximately 50 years, 1 900,000, patina, foil stickers, no header, 158x116cm
289. France and the Benelux States, JPD, 2A. in 1975, top copy, 169x200cm
290. Geology of Europe, Gotha: Justus Perthes, Haack physical wall atlas, Section 2, bedrock and soil; this map, edited by Dr. Haack et al. in 1921, printed 1941, was still approved for publication by the GDR in 1951. Retouched after "Litzmannstadt"; this retouching shows through again; re-glued on the back with guide lines; the defective part of the map is OK again, top, 197x160cm
291. Italy, Haack, 1966, 1 7 50,000, Holzstäbung, smooth lines, good, 180x179cm
292 Mediterranean, Westermann, 1976, 1zu2Mio. , Bay of Biscay glued, impressive, 264x140cm
293. Moscow, Haack Gotha, schematic plan of 1978, everything on the plan in Russian (Cyrillic); worn map, torn above left, at the upper rod right, and below right; metal rods bent, paper, no linen, 156x156cm
294. Northern Europe Baltic Sea countries, Westermann, 1 1,2 Mio, 1974, top, no header, NK also 1,2 Mio, Holzstäbung raw, 180x201cm
295. Eastern Europe, glued JPD, down, torn paper, 1954, 195x210cm
296. Baltic Sea countries, Westermann, 1.2 million, 1962, 11A. top, 180x213cm
297.
299.18Mio,. 1953, 214x113cm, heading half off, the decorations for your home
302. the Earth, JPD, 1zu16Mio, 1982, 9A. top copy, 210x126cm
303. The Earth, JPD, 1:24 million, 1978, 8th ed., really good, 1 entry at Mexico City, 138x84cm; not as large as the other 1:18, 1:16, 1:15 and 1:12 million maps...
304. the Earth, climatic zones, 1zu20Mio, 1982, Metallstäbung, paper, no lines, 2 holes of which 1 x in Australia, 1 x small Indian Ocean, fades, 182x106cm, no limits, no cities
305. The States of the Earth, JPD, 1:24 million, 1975, 8th ed., 139x83cm, relatively small and a super fine top copy.
306. the economy of the Earth (I), Gotha, 1zu12Mio, may 1955, unreadable as legend below damage, wood varnish glued, 224x174cm, huge
307. the economy of the Earth (II), Gotha, 1zu12Mio, 1955, easily glued, trading goods, no header, 217x164cm
308..
310. physical map of the world, Justus Perthes Darmstadt, something rarer, 1zu12Mio!, 1987 made 4
311.
312. vegetation areas of the Earth, Westermann, 1zu18Mio. IN 1970, 7A. top, 202x124cm
313. Map of the world, Flemming's Hamburg, without borders, 1946; many German maps of this time have no borders, since the cartographers did not want to do anything wrong; often the maps were not even titled "Germany" but "Central Europe"; lower rod completely detached, other damage, 122x88cm
314. Nuclear reactors and nuclear power plants (Atomkernreaktoren und Atomkraftanlagen), Klett, or Dr. Elliott, 1957, 83x114cm
315. the Interior of the Earth, Verlag neuzeitl f. Work equipment, 60s, water stains, 80x116cm
316. the English vowels, Tower-Verlag, 119x79cm
317. the major climate zones, Verlag neuzeitl f. Work equipment, 60, 78x115cm
318. The plants fully protected in Germany, Reich Office for Nature Protection, Berlin, 1930s, 133x101cm, slightly damaged, top
319. the water supply, Tellus-Verlag, water in nature, 118x82cm
320. the water storage, tellus-Verlag, defect in the reservoir, 117x81cm
321. third element: f neuzeitl humidity rain, Publisher. Work equipment, 60, 80x116cm
322. domestic songbirds, Graser/Eßlingen, Panel No. 9, 98x65cm, some stickers
323. development of life, Hunter-Hannover, 60s, top card, who talked with 5 year olds, hangs in the Museum of natural history in Berlin 161x225cm
324. development of high Moor in Lower Saxony Ministry of food agriculture, 71x91cm
325. the first element of the weather: temperature, f neuzeitl. Equipment, 80x116cm
326. history of mankind, Hunter-Verlag Hannover, top copy, 159x212cm
327. history of humanity 2.Teil, Hunter publishing, 172x216cm
328. Gothic 2, Westermann style Customer Panel, about 1959, 88x103cm
329. vivid civics T2, Hagemann, 116x81cm, slightly broken
330. vivid civics T7, Hagemann, 117x82cm
331. vivid civics T9, Hagemann, 116x82cm
332. Natural nuclear decay, Klett, 1955, 84x115cm
333. latest story T1: the Empire of up some paper from the Weimar Republic, 116x83cm
334. oil seeds vegetable oil margarine, CA 118x82cm
335. as paper, 118x80cm is created
336. history frieze T1, 3000vor Christi - 711nach Christi, 120x82cm
337. history frieze T3, 1500-1789, 119x81cm
338. history frieze T4, 1800-1950, Tellus, above re damaged, 122x83cm
339. sensation and control by the nervous system, Hagemann, upper bar completely off, 168x114cm
340. our weather, Westermann, 1971, 211x172cm
341. changes of the Earth's surface by external forces, Verlag neuzeitl f. Equipment, 80x117cm
342. weather character, Verlag neuzeitl f. Equipment, 80x116cm
343. numbers space up to 500, Albogast/Ottersbach Rörake, 178x39cm
344. the second element of the weather: the air pressure, Verlag neuzeitl f. Equipment, 80x115cm
345. Norwegian Fjord landscape, Westermann picture cards, 72x52cm
346 East - and South-East Asia, Westermann, 6A. In 1967, Sakhalin fully on it, about 185x195cm, top copy
347. the Empire of Charlemagne, 768-814, Westermann, 1zu2Mio, 3 inset maps 6/2/2 m, without title, 205x124cm
348. Agaricales and other types, people and knowledge 1982, 64x93cm, Metallstäbung
349. Birth process, German Hygiene Institute Dresden, one half-round rod is missing, 82x115cm
350. Earthworm, Volk und Wissen, wooden rods, 80x114cm
351. Rose family (cherry), Volk und Wissen, upper rod missing!, wooden rod below, 82x113cm
352. table of weather observation, PGH Leipzig, ski remote cloth roll map, back blue linen, heavy 5 kg, Holzstäbung, 110x128cm
353. Asia - political/historical, Westermann 1972, 3A, master card 9Mio, all NK 15Mio, Metallstäbung, 209x192cm
354. the Holy country, Jensen Jackson, NK Jerusalem, Biblical map, above re of small error, 122x165cm
355. the Soviet Union the present, Velhagen & Klasing, 1961, 1zu5Mio, 194x133cm
356th Israel and its Arab neighbors, JPD, 1zu1Mio, top, 2A088, 194x134cm
357. the career of the German people, so until 1945 producing Gotha, Justus Perthes, 1 1.5 Mio, glued, physical wall Atlas, 233x165cm
358. Bavaria, Wenschow relief map, 1 2.AuFlage, without title, damage in Ingolstadt and Pilsen, etc., nevertheless mislead or detail on the scale 1zu200T, 220x197cm
359.
361. Apennine peninsula, Westermann, 14A, 1968, top state, 154x185cm
362. Denmark, Westermann, 1 300,000, mistaken, 1964, NK Greenland Fäerör 2 M, 3 M, North Atlantic 9 M, back the lines completely painted, front ok, 210x192cm
363. the Roman Empire, Westermann, 1 2.5 Mio, 1968, 1Loch R, otherwise top, 210x197cm
364. The break-up of Europe in the 20th century, Harms, 2nd ed., 4 x 1:4.25 million; the death crosses alone are impressive and, per the statistics, make one very thoughtful; today one counts 5 million dead German Wehrmacht members alone; torn out over 20 cm top right, 1964, ca. 238x176cm
365. the evangelizing Europe, Becker, top, 196x164cm
366. the unification of Europe, ca. 1992, Westermann, 4 x 6,5 Mio, 138x194cm
367. Europe, relief map, Knoll Vlotho, top, molded soft plastic, so that all mountains are exaggeratedly represented, 1:3 million, all geographical terms in German, 214x153cm
368. Europe in the 14.Jahrhundert, Westermann, 1 1.7 5Mio , NK3M, ok, 1962, 203x132cm
369. Europe during the Thirty Years' War and until 1700, historical wall atlas Spruner-Bretschneider, Gotha, Justus Perthes; this map was a best-seller, sold in the 2nd half of the 19th century and probably until 1945, a 150-year-old wall piece; damage, hole, upper half-rod missing, 156x126cm; edge coloring was still the default then, the later mode was full surface coloring (one country in one color over its entire area).
370 Europe: Alliance systems of Bismarck (1871-1890), JPD, 2A. full ok, 1zu3. 800,000, 1982, 145x134cm
371.
375. Alliance systems in the world, Westermann, 3x1zu25Mio, top, 1976, ca. 130x210cm
376. the world in the 19th century, Velagen & Klasing, 1961, 2 x 1zu2 1.5 million from Putzger page 108/109, top, 137x187cm
377 world resources of coal, iron, oil, gold and uranium, Westermann, 1zu18Mio, 1961, 4A. NK 9M, top, 183x117cm
378. structure of the Federal Republic of Germany, Hagemann, broken up re bracket for the photo, 117x77cm pulled out,
379th our Earth in space, Hunter, no header, ok, 179x169cm
380. female sexual organs, people and knowledge, Metallstäbung, 1988, 80x112cm
381..e. concrete front gradients and positions, no NAZI symbolism, production itself approx. 12/42
384 Central Europe, Wenschow relief map, well, veiling, glued, 4.AuFlage, 190x176cm, 1 7 50,000,.
385th German Democratic Republic, 1zu3
387.
392. Borough Steglitz Höpfel Berlin, 1zu5000, 155x164cm
393rd harms, list, 1 2.8 Mio, no header, probably after 1953, glued, 208x174cm
394 Europe between the world wars (1919-1939), Velhagen and Klasing, about 1961, stickers, 190x147
402. Germany Wenschow, 1zu600T, 8A. NK 3Mio, crazy little paper large map, lübben, 246 x 166!
403. Germany, mining and industry, harms list, 1zu900T, 1.AuFlage, stickers, 153x115cm
404th Germany political overview, Westermann, 1973, 2.AuFlage, top, 200x189cm
405 threatened Environment, Hagemann, 1972, top condition, 160x106cm
406th bomb large and small forms of T3 the sea surface, Tellus-Verlag, lower staff completely loose, 118x81cm
407. Dutch polder landscape, manufactured according to original recordings of KLM the new German Stuttgart, Fricke-Stuttgart, 88x60cm,
408 Mecklenburg Lake District, slightly damaged, approx. 120x76cm
409 Renaissance 1, Westermann style Customer Panel, cracked, about 1959, 104x75cm
410. Romanesque, Westermann style Customer Panel, slightly damaged, about 1959, 104x76cm
411. Africa, Westermann, receive top, 1965, 8A. 6Mio, 162x176cm
412th Africa, JPD, receive top, 1977, 5A. 158x200cm
413th Africa economy, Westermann, 1 6.5 million, 1989, small sticker under the legend, rods are straight!, 116x144cm
416.
420. Germany 1815-1918, in the age of national unification and social movement, JP Darmstadt, 1:750,000, 1961, 1st ed., inset map German Customs Union (Zollverein), 195x207cm; the shadow of a tree was photographed along!
421. Germany: Federal Republic of Germany and East Germany, JP Darmstadt, 1zu830T, 1986, 5A. nearly good ex. down slightly cracked , 99x120cm
422. Brandenburg-Prussia until 1807, JP Darmstadt, 1zu750T, 1961, 1A. mint condition, 191x133cm
423 South Germany, JP Darmstadt, 1zu200T, 1973, 1A., top receive, about 253x195cm
424 Alpine countries, JP Darmstadt, 1zu450T, 1974, at Marseille sticker, otherwise top , 216x162cm
425 alpine panorama..., Mairs, just top, could decorate 80 or 90 years just the apartment, 215 x 59 cm, great detail
426.., 214x180cm
430 Europe to the new stone age, JP Darmstadt, 3Mio, received in 1966, top, 195x135cm
431st Europe: Reordering by the Congress of Vienna (1815-1829), JP Darmstadt, 3.8 million, 1A. in 1970, several stains, 146x134cm
432 North Europe, Justus Perthes Darmstadt, 1zu1Mio, 1969, 1Aufkleber above, some in the Baltic Sea, mad map, on a scale rarely!, CA. 190 cm wide and 224 cm tall!
433rd economy in Europe, Westermann, after 1992, da, no Soviet Union, 1 3.25 Mio, 228x156cm
434. the Pacific-Atlantic space, JP Darmstadt, 1zu12Mio, 1A. 1968, almost full foiling, this writable top, 269x168cm
435. the Earth, JP Darmstadt, 1zu24Mio, 1979, 9A. dusty, stained in the Pacific Atlantic u u South Pole, otherwise ok, not as big as other world maps with me, yet class, fits everywhere, 139x84cm the pure canvas plus Holzstäbung
436. the world 1415-1615, JP Darmstadt, 1zu12Mio, get top, 1A. in 1973, 270x168cm!
437th Moon, Haack, Metallstäbung, front in the word even 1 hole, back, 1zu8Mio, 163x106cm
438. physical world map, JP Darmstadt, 1zu12Mio!, 2A. In 1974, Stäbung wood, 1 defect in Canada, 1 defect in heading 4 entries in the Pacific, good copy, who buys that, should have place also it 269x169cm
439. Tellus history frieze, no. 2, 120x82cm, ok
440. Africa: The new States after 1945, JPD, 7Mio, 1A. in 1969, 137 x 168, sticker
441. the African States, JPD, 10Mio, 1974, 4.a top, 96x119cm
442.. the development of the United States in the 19th and 20.Jh., JPD, 2.5 million, 1A. in 1968, top, 194x134cm
443. learning map of South America, JPD, 6Mio, 1A. in 1966, top, 134x153cm
444. North America, JPD, 6Mio, 1979, 3A. , full of foiling!,
448. the Kingdom of Alexander of the great, JPD, 2.75 million, 1966, existing cracked, otherwise top, small issue, 199x109cm
449. The empire of Alexander the Great and the Diadochi kingdoms, Velhagen & Klasing, 1:3 million, 1961/1967, inset map 1:5.8 million, 194x149cm
450. the Orient and Hindustan, Gotha production 1953, no header, approx. 6 sticker in the Indian Ocean, 3 above, otherwise good 216x148cm
451. the cultural realms of the ancient of Orient, JPD, 2.75 million, 2A. in 1963, top, 199x127cm
452. the Soviet Union after 1939, JPD, 4.8 million, 1A. in 1970, top, 196x120cm
453. the Soviet Union 1917-1939, JPD, 4.8 million, 1A. In 1970, 1 enlightenment. about heading, 195x120cm
454. East Asia, harms list, 2.5 million, 3A. top, 221x191cm
455.. 166cm
458. Germany 1273-1437 in the age of the Houses of Wittelsbach and Luxembourg, JPG Gotha, so designed and printed before 1945, i.e. the 1st version of the 1273-1437 map, which JPD later redid; 1:750,000; sticker above, cut left & right, good copy, 187x199cm
459th Germany 1555-1648 in the age of the counter-reformation and the 30 years war, JPD, 2.Version for the period from the House almost Perthes, top, 194 x 208, 750T, 2A.
460. Germany 1648-1739 in the era of the decline of the Empire, 750 T, 1A. JPD, 1 hole Lake country DK, otherwise top 195x209cm
461. Germany 1740-1801 in the age of Frederick the Great, 1:750,000, 2nd ed., 1972, JPD, kink in Denmark, top, 195x207cm
462. Germany greater card, Publisher Neuzeitl. Work equipment, 1 500,000!, plastic work card, top left veiling, 275x191cm, attention: note size!
463. Germany and Italy 1125-1273 in the age of the Hohenstaufen, JPD, 1zu1Mio, slightly damaged about Italy, 154x212cm
464 Germany and Italy 911-1125 in the age of the Saxon and salian Emperor 1Mio, JPD, 2A. in 1969, almost top, 154x215cm
465th Central Europe 1815-1866 restoration and revolution - German Federal Velhagen & Klasing, 875, 1968, top state, 194x136
469. learning map of North Rhine-Westphalia, JPD, 150 T, ca. 1964, left everything Brown, entries, re top knittrig 191x196cm
470th Lower Saxony, Flemming's Hamburg, 1zu200T, small damage, 190x158cm
471. North Germany, Flemming's, 1zu600T, enlightenment, beschäd. below re. , 228x153cm
472. Northern Germany, JPD, 1:450,000, some stickers in the "water"; hung freely it looks somehow warped, on the wall slightly better, 269x165cm
473. Austria, Alps, Freytag-Berndt, 1:300,000, 1963, 3 stickers below right and 1 left, 1 faded spot at Steyr, 1 faded spot on the heading, 232x190cm
474. Schleswig-Holstein, Flemming's, 1zu150T, 3 large tears, down at least 3 stickers, 165x171cm
475th British Islands, Haack, 750T, 1962, heading cracked, otherwise ok, 149x176cm
476. the classical Greece, JPD, 1zu500T, 2A. in 1970, top, 198x189cm
477. the Empire of Charles of the great (814)-disintegration of the Carolingian Empire, Velhagen & Klasing, 1.6 million, 1966, top, 183x146cm
478. the Roman Empire, JPD, 2, 1A. in 1971, top state, 256x171cm
479. the European revolutions, Flemming's, 3Mio, 28Bilder, top, 187x194cm
480. the Hanse, JPD, 1.2 million, 1A. in 1971, 186x134cm
48
484.
488.
490. Europe in the high middle ages (c. 1000), Velhagen & Klasing, 1 2.5 Mio, 1967, 194x142cm
491. Europe in the 20th century, Flemming's, 4 x 1:4.5 million, stickers above, almost very good copy, 204x161cm
492. Europe in the 20th century, Fusbahn, 4 x 1:4.5 million, from about 1954, no heading, slightly defective above, 207x158cm
493rd Europe in the age of absolutism (c. 1740), Velhagen & Klasing, 2.5 million, 1965, top, painted with pencil in the Adriatic Sea, 193x150cm
494. Europe in the age of Napoleon (1812), Velhagen & Klasing, 2.5 5 Mio, heading def., otherwise top, 192x150cm
495 Europe after the Congress of Vienna in 1815, Velhagen & Klasing, 2.5 5 Mio, 1961, top, 192x148cm
496 Europe before the first World War (1914), Velhagen & Klasing, 2.5 5 M
499th Europe: The July Revolution and consequences (1830-1847), JPD, 1 3.8 Mio, 1A. in 1970, top, 145x134cm
500th Europe: integration policy (1945-1967), JPD, 3.8 million, 1A. in 1968, no stickers, 146x135cm
501 Europe: Crisis of democracy (1919-1937), JPD, 3.8 million, 1A. in 1968, top, 145x134cm
502.
506. Italy and South-Eastern Europe, Wenschow, 1:1 million, without title, more than 11 stickers pasted on, 223x163cm
507.
510. South East Europe, Flemming's, 1.7 million, down almost complete bar , 137x95cm
511 West Europe, Wenschow list, 1zu1Mio, 4A. top, 167x239cm
512. Western Europe, JPD, 1zu2Mio, 3A. In 1967, almost sticker at Heligoland Bight, top, 158x211cm
513 Nordpolargebiet Wenschow, each 10Mio, Südpolargebiet, 3A. some stickers, 168x121cm
514. Population of the Earth, Wenschow/List, 4 x 1:30 million; at the time the Soviet Union had 243 million inhabitants; 4th ed., top, 247x163
515. the components of the Earth, Westermann, 1zu18Mio, 1965, NK 6Mio, top, 202x12cm, meant is: water, volcanoes, foldings revealed the continent structures, carbon panels, embankments, volcanic formations, etc.
516. the Earth orbit wall map, Westermann, 1971, 1zu20Mio, title defect, defect from Ceylon, otherwise good, 196x130cm
517. the States of the Earth, Westermann, 1zu18Mio, 11A. In 1965, 4 enlightenment. , good copy, 202x121cm
518. the world from 1789-1914, Westermann, 1zu18Mio, 1959, top, 202x133cm
519. reorganization of the world in the 20th century, list, 1zu20Mio, half heading, wavy, stickers, 174x186cm
520-physical world, Wenschow, 15Mio, without title, without borders, glued, cracked, i.o., 214x138cm
521. world history of modern times, Flemming's, 1zu35Mio, CA 1955, of 1450 1650 1830 1914 1955, glued without heading, 203x152cm
522. World: Alliances of present and world communism, JPD, 1zu16Mio, 1A. in 1969, top, 198x125cm
523 learning map of Asia, JPD, 1zu6Mio, 1A. in 1965, top, not foiled, 172x185cm
524th Orient and Hindustan, Haack Gotha for JPD, 3Mio, 1967, top, 216x157cm
525th Europe in the 16th century., Westermann, 2.5 million, NK faith Spaltung 2.5 Mio, Russia 5Mio, up some def., 203x135cm
526. Germany, JPD, 1zu1Mio, 1964, above re and miitel li damaged, etc., political disposition, after border location of 1937, beautiful detail, 128x99cm
527. North Rhine-Westphalia in the changing of times, Hunter-Verlag, 1zu450T, 6 panels, from the Roman time of Germans until 1945, bottom right through to paint of the rod Ruhr Pocket glued paper torn up roll, ok, 145x183cm
528. Germanic migration of peoples (Völkerwanderung) 200-600 CE, Westermann, 1:3 million, very good, about 183x112cm, with West Germany in blue
529. Germany in the 19th century (W37), Flemming's, 1:800,000, heading cracked, 194x158cm, top map content!
530. Federal Republic - development & construction (S53), Flemming's, 1zu1Mio, CA 1969, 1x3Mio, good, glued, 139x188cm
531. the old Italy (G64), Westermann, 1 900,000, 1974, Rome towards domination, NK 1.5 M in front of the Celtic invasion, NK4M struggle between Rome and Carthage, NK 900 T the early Rome, 207x128cm, probably top
532nd Central Europe in the 16.Jahrhundert (C13) Westermann, 1965, 3A. 1.5 Mio, NK 750T, 199x135cm, excellent condition
533 German-speaking countries, Europe and the world, Steinberger boots, 35meuro 6Mio mounted 1.7 million, each half rods, this time no hook mounted, 115x152cm, top
534th North hemisphere, Flemming
535 Scandinavia (EM47), Wenschow 3A., 1 1Mio, 2NK, above some more veiling, 166x211cm
536th Africa (67 turquoise) Wenschow, 6Mio, probably before 1962, cracked Zambezi tip and Cape Horn, sticker, 160x153cm
537 South America (S98), harms, 5Mio, glued well, 182x214cm
538. biblical countries (S34), Haack Gotha 1957, 2, 300T, 125T, 8T, 80T, 450t, 217x159cm
539. Eurasia (SC162), Wenschow, 6Mio, veiling, ok, 213x211cm
540. Schropp, expellees, contemporary studies: 1 cross stands for 60,000 dead, 1 figure for 60,000 Germans who arrived in western and central Germany or were deported toward Soviet Russia, most between 1945 and 1950...
541. our home – our neighbours (DE7), country headquarters can be Hessian Bavarian etc for home service, even at times of the Federal Ministry of displaced persons, ok sticker, 1zu700T, approximately 50 years, 168x155cm
542. wall map to the German history of the 17.Jahrhunderts, Germany in the 17.Jahrhundert (HD5), Gabriel Verlag long in Leipzig, 1900, 1zu800T, no header, totally cracked, 5 x secondary map / 5Mio, 4Mio, 8Mio, United States East Coast NeuYork, etc., 188x183cm
543rd Baden-Württemberg physical overview (W75), Westermann, 1983, 2 50,000, 97x134cm
544-German Democratic Republic (109 purple), Haack Gotha, 1962, 1zu2 50,000, much minor damage, 165x230cm
545 geology harms, Southwestern Germany (W72), top, 1 sticker, 1 200,000 across top, 151x176cm
546 Hamburg (HD71) Jensen-Verlag, 1 20,000, almost top 1 Strip 1 tear upper right 199x185cm
547. Hamburg (M14) city 129x156cm
548 (HD68) 1 Hamburg be 25,000, with all incorporations year, NK 1zu100T, top, 198x139cm
549 British Islands (HD60), harms, 1zu700T, above not so good, because cracked and stains, 173x217cm
550. the new time - Germany in the State of the resolution (W63), Stockmann history map No. 4, with stickers, 140x93cm
551. the States of Europe (HD174), Justus Perthes Darmstadt, 1zu6Mio, 1972, 4A. detail considered that after turning requirements , down stains, left and right broken 103x94cm
552. the partition of Poland 1772-1795 Poland in the 20.Jahrhundert (HD128) , V & K, 2 x 1,2 Mio, 1961, top state, 137x215cm
553. the trains and kingdoms of large germ Campana time (RU72), Gabriel long Leipzig, this publishing house went bust, 2.5 million, around 1930 by REDA, 168x120cm + fabric! + Rods
554 Europe (HD9), Haack Gotha, 1979, 1zu3Mio, cracked, defects, metal rods, 213x183cm
555 Europe 1848-1870 (OB48), Stockmann, top right 40 cm down, 140x96cm
556. Europe 1815-1914 (HD38), Westermann, drawn by Dr. Krummeck, who drew maps for Westermann in the 1930s and early 1940s; no date mentioned on the map, 1:3 million, 182x116cm
557 Europe 1918-1945 (OB25), Westermann, 2.5 million, NK 7.5 Mio, 3Mio, 5Mio, 1 sticker above cross, heading cracked, 210x133cm
558. Europe at the time of the Hohenstaufen (NN247), Westermann, 2.5 million, 3 x inset maps, the Empire of Frederick 1, Crusader States, approximately 205x133cm
559. Europe (HD6), manufacturer unknown to me, possibly Perthes; shows Europe politically between 1922 and 1938; no heading, totally cracked, 209x155cm
560. France (OB8), Westermann, 600,000 1, 6A, 1967, several labels among others, Paris and Lille, 207x196cm
561. Central Europe (HD74), Flemming, printed in Bremen under license from Justus Perthes, Gotha, after 1945; between 1945-52 they probably could no longer print the map themselves, but it is their Germany map, see the 1935 catalog; black color below, minor damage, 196x199
562. the history of the United States up 1783 (BW2) JPD 1A 1965, 1 sticker up crosswise and 1 sticker top middle to upper Lake, Randverstärkt top right and left, top, 195 x 132 entered Indianerstämmer of time!
563. South America politically historical (NN38) Westermann 1977, 6Mio, 4 inset maps a 12Mio, 184x199cm
564. Australia: Economic (BW7) harms, 7,5 Mio, 1AuFlage, almost top except 1 crack in economy, 139x97cm
565 not solve Gaul, Germania and Britain to the Roman period (BW6) JPD, 1zu1Mio, 1A 1969, North and Baltic Sea stickers, and further south again 9 transparent sticker! but any loose material cut with cutter, 136x173cm
566 South America (BW1) Wenschow, 2AuFlage, 6Mio, up cracked, Mato Grosso and Maranhao easily damaged detail than most other manufacturers, also the 7.5 million variant of Wenschow is easy kept,.
567. the German law (ME94) lane publishing, series of German East and Europe, easy knittrig, 119x80cm
568. the Cistercian (NF34) lane publishing, in the same series as 567. top, with cord, 117x80cm
569th Europe / Germany (ED10) Störmer-Publisher , probably 50s 60s, formerly quite now perhaps ingenious tablecloth as PVC, foil, something defective, only 1 loose Rod doing, who want cheaper have shipping without Rod so 4EUR, about 150x150cm
570. Germany 1789 and Europe to 1815 (S50), Westermann, 1:750,000, 1953, inset maps 1:2.5 and 1:6 million; taped, etc.; 206x130cm
571. Germany (ED13), Flemming Hamburg under licence from Justus Perthes Gotha, 1:750,000; massive defects top and bottom (hung up for the photo!); approx. 198x206cm; for many years the only 1:750,000 map to show the German East in detail after the war
572. Schwann'sche school wall maps no. 3, Camps at the Time... (ED9), Schwann, Düsseldorf; defective; 199x118cm
573. United States (S78), JPD, 1st edition 1965, 1:3.5 million; taped; 138x95cm
574. Africa: Economy (NF38), Harms, 1:7.5 million, 2nd edition; 3 corners cracked; 97x136cm
575. Europe in the Age of Napoleon I (NF12), Justus Perthes Gotha, so produced before 1945; the lower bar was painted over afterwards and has curled up, so it is glued and the map only rolls out to 128 cm!; stickers throughout, but otherwise great; 1:3 million; 186x128+cm
576. Economy of Europe (NF2), JPD, 1954; title at top slightly damaged, otherwise good; 1:3 million; 212x158cm
577. Europe: Economy (NF8), Harms, probably 1950s, 1:2.5 million, 2nd edition; creased, spotty at top, almost no stickers, otherwise mint; 219x164cm
578. Europe: States, Population, Economy (SB605), Jensen Hamburg; title slightly cracked, 1 sticker at top; 190x194cm; probably 1950s
579. United States (FR18), J. Perthes Darmstadt, 1970, 2nd edition, 1:2.5 million; mint condition; small stickers at Winnipeg and to its right, then in Ontario and New Brunswick; the largest US map I know of; 94x134cm
580. Africa (W17), Harms, 1:5 million!; no title, taped; 1950s or 60s; 185x185cm
581. The Earth - Orbit Wall Map (NN301), GLA Sweden for Westermann, 1:20 million, 1971; mint condition apart from the green edge reinforcements from school use and a lamination strip in the title area, where the typical damage from frequent rolling usually occurs on such maps; approx. 194x133cm, plus wooden rods, which are flawless; only selling because I now have a mint laminated example; the one offered here is not laminated
582. Africa (SB666), Wenschow, 1:7.5 million; cracked, stickers; 108x116cm
583. Africa, Mining & Industry (D8), h; metal rods, linen; somewhat cracked at top, otherwise ok; 1:6 million, 1968; 159x180cm
584. Africa: the Colonial Partition to 1939 (NN7), JPD, 1969; approx. 136x170cm
585. North Africa (SB313), h for JPD, 1:3 million, 1968; title defective over 45 cm at top, plus 5 cm at top right; 1 sticker across the top; 279x170cm
586. North America / South America (SB318), Jensen, 1:5 million each, inset map United States 1:15 million; the Titanic sinking of 12.4.1912 and Columbus xxx1492 marked; 242x147cm
587. Learning Map of North America (SB459), JPD, 1:6 million, 1st edition, 1966; streaks; 133x163cm
588. New York (FG44), Westermann, 1962, 1:40,000; wooden rods; 109x136cm; a great decoration for your home or office
589. North America Orbit Wall Map (SL139), GLA Sweden for Westermann, 1973, 1:6 million; cracked at top, sticker at top, otherwise mint; 140x172cm
590. North America: Economy (NF40), Harms, 1:7.5 million, new edition; very defective at top; 97x136cm
591. South America: Northern Part, Brazil and Neighbouring Countries (SL88), Westermann, 1967, 1st (or later?) edition, 1:3 million!; title slightly cracked at top; laminated, writable; mint; 182x199cm; the detail photo is of this map
592. South America: Southern Part, La Plata Countries (SL60), Westermann, 1967, 1st edition (there was also a 2nd edition in 1975), 1:3 million!; title cracked, otherwise mint; laminated and therefore writable; the detail photo is of a different map; inset map Antarctica 1:9 million; 130x196cm
593. Southeast Asia, China and Japan (SL11), JPD, 1st edition, 1:3.5 million, 1972?; almost mint; 1 sticker below Hong Kong; fly specks; 196x228cm
594. The Atmosphere of the Earth 2: Heat Transport by Ocean Currents (NU193), JPD, 1st edition 1963; top edge creased at left; 1:30 million
595. Christianity in the World (NU4), Becker Hamburg, probably 1960s; near mint; 234x157cm
596. Population Density of the Earth (SB405), Westermann, 1967, 2nd edition, 1:15 million; 1 sticker at Europe, 1 sticker at the equator; inset map 1:45 million; 237x140cm
597. The Earth, Political (14 turquoise), Westermann, 1:15 million, 18th edition; 245x142cm
598. The States of the Earth (SB1), JPD, 1:16 million, 1968; hole; approx. 45 cm tear at top; 1 small defect below; 198x126cm
599. The World 1860-1914 (W1), JPD, 1:12 million, 1st edition, 1983; ok, heavily taped; 269x167cm
600. The Western Hemisphere - Atlantic Continents (TR), List, 2nd edition, 1:20 million; mint; stickers over the North Atlantic and northern Greenland; 210x166cm
601. Plant Distribution and Cultivation Conditions on Earth (SB781), Walter, 1:30 million; mint condition; 124x90cm
602. Political Outline Map of the Earth (RU5), Flemming, 1:17 million, 1950s; torn at top; world population given as 3.3 billion, which dates it!; aluminium rods; 243x145cm
603. North America, Political and Historical (NN30), Westermann, 1:6 million, insets 3 x 1:8 million and 2 x 1:12 million, 1975, 5th edition; 201x197cm
604. The United States Becomes a Great Power (W20), Flemming, 1:3.3 million; no title; 165x199cm
605. Economy of the United States / Economy of the U.S.A. (OB40), JPD, 1:2.5 million, 1st edition 1966; mint; 1 sticker across the top; 195x131cm
606. Asia: Economic Centres, Population Density (SB367), Jensen, 1:5 million, 1950s/early 60s; light damage at top, otherwise good; 186x192cm
607. Pictorial Map of the Holy Land: the Life of Jesus (HD121), Becker; stuck together at bottom, therefore not fully rolled out; 120x161cm
608. The States of Asia (S41), JPD, 1:12.5 million, 1965, 1st edition; top edges slightly bumped and creased; 93x109cm
609. Japan and Korea (BE1), JPD, 1st edition 1982, 1:1 million; approx. 4 stickers and 2 cracks; 200x191cm
610. Eastern Hemisphere, Political (SB104), Gabriel long, Leipzig, c. 1925; stickers, wavy, no title; German East Africa etc. painted over; 1:12 million; 181x170cm
611. The Soviet Superpower in Europe and Asia (NU58), Harms, 2nd edition, 1:7.5 million; 3 cracks at top centre, ok; 241x174cm
612. Soviet Union, Mining & Industry (U135), Haack, 1:4 million, 1965; simpler presentation than the same publisher's 1974 Western-market edition; creased; 234x158cm
613. Soviet Union: Economy (SB284), V & K, 1:4 million; 1 sticker on the lower bar; slightly dirty at bottom, minor damage to the title; good copy; 232x160cm
614. Federal Republic of Germany - Physical (SL143), Westermann, 1993, 1:500,000; wavy at bottom from sticker strips; 138x201cm
615. Federal Republic of Germany - Political (ME76), Westermann, 1990, 1:500,000; no longer smooth, but ok; 137x201cm
616. The Third Reich: National Socialism in Greater Germany (SB582), Becker, 1:1 million; wavy from the lamination, which however makes it writable; 215x151cm
617. Germany 1790-1871: the Road to National Unification (SB105), Perthes JPD, 1:1.2 million, 1984, 1st edition; mint; unfortunately the photo came out blurred; 222x185cm
618. Germany 1802-1814, at the Time of the Napoleonic Wars (SB95), Perthes JPD, 1:750,000, probably 1969?; sticker at top, slightly wavy; 195x207cm
619. Germany since 1871: from Unification to Division (W29), Perthes JPD, 1:1.2 million, 1st edition 1982; mint; 1 typical sticker strip across the top; 222x185cm
620. Germany's Development through its History (BE6), Biner, Aachen, or Fusbahn, Bückeburg; various scales; 9 maps from 925 to about 1933, edited by Dr. Tappe; torn, no bar at the bottom (can send one on request; please repair yourself)
621. School Wall Map of the Deutsches Reich (SB581), Columbus, 1:800,000, printed around 1939 for the teaching profession; retouched back to the pre-1938 state after the war, affecting the Protectorate, Alsace-Lorraine, the Sudeten areas and the Memelland; without the Oder-Neisse line; cracked at top; 201x183cm
622. Wall Map of German History 1125-1273: the Staufen Emperors (SB193), Gabriel long, Leipzig, the publisher that claimed to produce the best school wall maps in the world and went bankrupt in 1935; restored, stickers, no title; 1:1 million, inset map 1:2.5 million; 160x194cm
623. Wall Map of German History 911-1125: the Saxon and Frankish Emperors (SB655), Gabriel long, Leipzig, produced before 1935, 6th edition; with the German Empire, the Kingdom of Italy and Burgundy; inset map of the 1st Crusade; 137x196cm
624. Minerals & Industry in Lower Saxony (Hamburg part) (SB622), Störmer, 1:75,000; wavy from stickers, cracked; a second map glued to the reverse; 162x187cm
625. The Mouth of the Ruhr at Duisburg-Oberhausen-Mülheim (SB652), Bahgat, 1:15,000; mint; 140x182cm
626. The Harz and its Foreland (SB466), Störmer, 1:75,000; PVC, good, creased at top; 185x180cm
627. German Homeland in the East (SB455), Westermann, 1:1 million, 1952; many inset maps, very rich in detail; 131x78cm
628. The German Ostsiedlung (eastward settlement) (EM52), V & K, 1961, 1:900,000; broad stickers at top; 138x190cm
629. Free State of Brunswick (SB608), Westermann, probably 1930s, 1:100,000, undated; restored at top, overpainted in places, no title; 179x136cm
630. Harz Foreland (SB562), Sandre, 1:75,000, 1957; at top centre the bonding of the individual plastic prints is poor; 162x186cm
631. Map of the Vest Recklinghausen (SB407), Mildenberger, 1:25,000; paper, no linen; 196x158cm
632. District of Recklinghausen (SB667), Lindenhof, 1:35,000; rubberized map, mint; 128x124cm
633. Learning Map of Baden-Württemberg (SL142), Perthes JPD, 1963, 1st edition, laminated; 133x179cm
634. Learning Map of Lower Saxony and Bremen (RU75), Perthes JPD, 1964, 1st edition; somewhat darkened; 159x130cm
635. Central and Eastern Germany (HD46), Wenschow, 1:400,000, 3rd edition, c. 1955; inset maps 1:3 million; slightly creased, good; 242x170cm
636. Lower Saxony (SB443), Wenschow, 1:250,000; 1 sticker at top, cracked; 134x132cm
637. Lower Saxony: the Land on the Middle Weser and Leine (SB795), Stockmann, 1:285,000, with 4 pictures, including Max and Moritz; 10 cm defect at top; 139x96cm
638. North Rhine-Westphalia (SB141), Sandre, 1:150,000; laminated and therefore writable, blank (mute) map on the back; mint; 169x172cm
639. Regierungsbezirk Osnabrück (ME62), Flemming, 1:80,000, inset 1:500,000; stickers at top; 168x169cm
640. Suisse Switzerland Svizzera (F13), Kümmerly & Frey / Eidg. Landestopographie, Bern, 11th edition; light damage at left; 1:200,000; 205x143cm
641. Columbus School Wall Map of the Balkan Peninsula (46 red), Columbus, 1:1 million, probably after WWII; 139x164cm
642. The Counter-Reformation (S95), Becker, probably 1960s; 204x164cm
643. The Germanic Migrations (RU51), Flemming, 1:3 million, probably 1950s; wavy, taped, no title; 198x147cm
644. Europa (NU112), Flemming, 1:3 million, probably 1950s; no title, stickers at top and centre, paper lifting over the Ukraine; great detail, not like many other producers; 206x153cm
645. Europe 1815-1870 (SB575), Perthes JPD, 1:2 million; laminated and therefore totally wavy; 201x188cm
646. Europe in the 10th Century (OL49), Haack, 1957, 1:3 million, Haack-Hertzberg series; cracked at top, otherwise mint; 198x160cm
647. Europe in the 15th Century (NU65), Westermann, 1968, 3rd edition, 1:2.5 million; inset maps partly at other scales; small hole at top left, near mint; 204x133cm
648. Europe in the Time of the Ottonian and Salian Dynasties (SB588), Westermann, 1:1.5 million, inset maps 1:4.5 and 1:1.5 million, 4th edition, 1966; defect at top, stickers; 204x134cm
649. Europe: Ground Cover Map (SL56), GLA Sweden for Westermann, 1969, 1:3 million; title slightly cracked; 2 typical sticker strips at top and bottom; 170x176cm
650. Imperium Romanum (SB5), Haack for Perthes JPD, 1960s; ok; 198x170cm; entirely in Latin.
651. Central Europe (SB467), Perthes JPD, 1:750,000; stickers, fogging, no title; a few years ago a wall map of Germany simply looked like this; 195x208cm
652. Central Europe (SB583), Wenschow, 4th edition, 1:1 million; stickers and wavy; 235x175cm
653. Central Europe from 1914 to the Present (SB89), Westermann, 2nd edition, 1967?; no stickers, mint condition; 206x200cm
654. Mediterranean Countries (NU84), Harms, 1:2 million, probably 1960s; mint condition, typical foil strip at top; 224x152cm
655. Paris (SL3), Westermann, 5th edition 1972; main map 1:40,000, inset 1:10,000; 1 typical sticker strip at top and bottom; 109x133cm
656. Rome and Carthage: the Roman Empire at the Time of Caesar (NU67), V & K, 1967, 1:7.5 million and 1:3.285 million; in the Putzger atlas, pages 23-25; sticker strips at top; 167x212cm
657. Spain and Portugal (SL80), Perthes JPD, 1:750,000, 1st edition 1971; originally laminated, near mint; title at top slightly cracked; 195x168cm
658. Wall Map of the History of the Migration Period (SB214), Gabriel long, Leipzig, once producer of the 'best' school wall maps in the world; restored, damage; 1:2.5 million, inset maps 1:5, 1:4.5 and 1:4.5 million; 217x149cm
659. Economic Output and Trade Links in the EEC and EFTA (SB106), Flemming, 1:2.5 million; very good, 2 stickers at top; 8 inset maps; so 1960s; 197x189cm
660. Geology of the Earth (NU40), Perthes JPD, 1:16 million; inset: palaeogeographic map of the Jurassic; very good; 199x132cm
661. Learning Map of Africa (G81), Perthes JPD, 1:6 million; writable thanks to lamination, can take stickers, but also wavy; 1st edition 1962; 155x196cm
662. South America: Economy (NF39), Harms, 1:7.5 million, new edition, mid-1960s; very torn at top; 96x136cm
663. Australia and Oceania (H3), Wenschow, 1:6 million, 3rd edition; apart from the sticker at top everything is good; 239x166cm
664. Rhenish-Westphalian Industrial Region (SL48), Wenschow, 1:75,000; near mint, 2 sticker strips at top and bottom; 244x168cm
665. District of Helmstedt (SB725), no publisher, no copyright notice; probably produced in Helmstedt between 1945 and 1949, as the Russian occupation zone is highlighted; scale approx. 1:2,500; tear at top left; for anyone who knows the Elm, Lappwald, Dorm, Hase angle, Louisville, Heeseberg etc.; marks Reichsautobahn and Reichsstrasse; the authors did not yet know the Basic Law of the Federal Republic of Germany
666. China - from the Manchu Empire to the People's Republic (WU45), Flemming, 4 x 1:6 million, c. 1933; aluminium rods; near mint; the Ryukyu Islands are still shown as US-occupied; 1 hole in the 2nd map, in Manchuria; 194x135cm
667. The Middle East and the Israeli-Arab Conflict since 1948 (WU61), V & K, 1:1.85 million, 1970, inset map 1:12.3 million; somewhat cracked in the title near Israel, 1 sticker strip at top; 196x140cm
668. Turkey (SB906), Alexander map by Klett, 1989, 1:1.3 million; reverse physical, front blank (mute) in b/w; aluminium rods; 139x90cm
669. Türkiye (SB847), Wenschow/Stiefel, 1:1.5 million, inset map 1:4.5 million; PVC film, no linen; defect at bottom left, patched, ok; aluminium rods; 140x98cm
670. Russia's Advance into Asia up to 1914 (WU50), V & K, 1961, 1:5 million; good; Putzger page 124; 193x134cm
671. Wall Map of the Biblical Lands (SB834), Brockhaus Leipzig, made c. 1914; inset maps etc.; cracked, small defects, wavy; main map 1:2 million; 190x147cm
672. Germany and Neighbouring Countries (SB930), Stiefel, produced after 1990; damaged at left, centre and bottom left, ok; German spelling chart on the back; 114x152cm
673. The Germanic Peoples... 1000-300 BCE (SB240), produced at Westermann before the war ended in 1945 under the direction of Dr. Kumsteller, 1:1 million, inset maps 1:3, 1:6 and 1:3 million; no legible title, stickered; 182x105cm
674. Central Europe since the Second World War (HD66), V & K, 1961; Putzger page 141; cracked at top left, centre and right; 1:875,000; 197x142cm
675. Reformation and Catholic Renewal (WU87), V & K, 1961; Putzger page 65; 1:1 million; sticker strips at top and bottom; 136x?cm
676. Wall Map of German History of the 16th Century (WU44), printed by Gabriel of Leipzig for Verlag long of Leipzig, 1:800,000, 1901; totally cracked, but the paper is fixed to the linen extremely well with adhesive, nothing is coming apart
677. Wall Map of German History of the 19th Century in Two Parts, Part 1: Germany and Italy in the Time of Napoleon (1800-1815) (SB823), Gabriel-lang; restored, cracked; 1:800,000, inset maps 1:3 and 1:2 million, 8th edition, c. 1910?; photo blurred at lower right from a dirty lens; 196x191cm
678. Wall Map of the History of the Frankish Empire (481-911) (SB824), Gaebler-long, 1:1 million, 6th edition, c. 1910?; 1 sticker Trier-Mainz; cracked, much taping near the 1:2.5 million inset map; photo blurred at bottom right from a dirty lens; 203x161cm
679. Work Map of the District of Osnabrück (OL19), Jensen, 1:40,000, physical map; laminated, writable; 162x186cm
680. Rhine-Ruhr Relief Map (SB872), Knoll Verlag, Vlotho, 1:100,000; photo blurred at lower right from a dirty lens; 147x190cm
681. North Rhine-Westphalia (S54), Orion Verlag, 1975; two-sided PVC, blank (mute) map on the back, writable with wax crayon or chalk; 188x196cm
682. North Rhine-Westphalia: the Eifel and its Northern Foothills (SB843), Stockmann; at top right 40 cm detached from the bar; lens was dirty when the photo was taken; 140x96cm
683. Apennine and Balkan Peninsulas (SB168), JPD, 1:750,000, 1st edition 1971; only the title area at top defective over 40 cm; 269x192cm
684. Ancient Greece (G70), Westermann, 1975, probably 1st edition; metal rods; 1:600,000; inset maps 1:2, 1:2, 1:6, 1:2 and 1:0.3 million; somewhat taped; 210x131cm
685. Imperialism and its Consequences (NN203), Stockmann, history map 6; defect at top left; approx. 140x96cm
686. The Alps (WU23), Wenschow, no title, 1:400,000, inset map 1:1.6 million; sticker, ballpoint mark in the Gulf of Genoa, ok; lens was dirty, so the photo is blurry; 236x144cm
687. The States of Europe (SB836), Klett-Perthes, 1992, 3rd edition, 1:3 million; title cracked over 40 cm at top, musty, wavy at top; camera lens was dirty; 204x185cm
688. Geology of Central Europe (WU67), Justus Perthes, Gotha, no title, 1:750,000; printed before May 1945, released for sale by the GDR in 1951; the place names (Eastern territories etc.) are kept in German!; stickers at top and at Berlin; the map itself was compiled and printed in 1922; 196x196cm
689. Italia (WU85), 1953; first published long before 1945 for history and Latin teaching by Justus Perthes, Gotha; no title; this is the Haack issue after the Perthes version; fogging above Lake Balaton, and stickers; 1:750,000; inset map; 140x157cm
690. States of Europe (SB923), Stiefel, c. 1992, 1:4.5 million; 155x109cm
691. The Changing Political Picture of the World (SB896), Westermann, 3rd edition 1971; typical sticker strips at top; lens was dirty when the photo was taken, hence blurred; 137x205cm
692. Aid for the World (SB884), Westermann, 1:25 million, 2nd edition 1970; metal rods; mint; lens was dirty; 137x209cm
693. Economy of the Earth (NT263), JPD, 1989, 1st edition, 1:12 million; 270x170cm; without any blemish
694. States of the Earth (red 58), Wenschow, 1:15 million(?), approx. 1960s; shows Stalingrad, so printed before 1962; 245x156cm
695. Political World Map (SB103), Justus Perthes, Gotha (JPG), 1:20 million; 210x107cm; no title, heavily taped and wavy, yellowed
696. Political World Map / Time Zones of the World (DB55), Sandre, 1:18 million; rubbery material or PVC; the older of 2 versions; 1 crease, ok; 189x113cm
697. Crisis Regions of Our Time (OB51), no further information, possibly from Lindenhof; 4 scale bars; 140x96cm
698. Climate of the Earth (NT163), Wenschow, 3rd edition, 4 x 1:30 million; 1 sticker at top, and wavy, ok; 248x163cm
699. The World in the 20th Century (BU41), V & K, 1:18.3 million; 195x193cm; title torn, otherwise mint; seams at top slightly lifted
701. The Earth - Tectonics (NT260), h for Westermann; near mint, produced approx. 1969, no stickers; 1:15 million; 241x168cm
702. The Starry Sky of the Earth, Westermann
703. North Polar Region (G5), JPD, 1962, 1st edition; no title, no stickers; slightly defective at bottom right; with borders; 197x214cm
704. The British Isles (NT352), Stiefel; heavier paper; 1:900,000; dusty at bottom right; 109x154cm
705. Hungarian PR, SR Romania, PR Bulgaria, SFR Yugoslavia, SPR Albania (NE19), Haack, 1:750,000; cracked at top; paper, metal rods; 173x152cm
706. States of Europe (SB844), Stiefel/Wenschow, c. 1990, 1:5 million; crack at top taped over; 140x95cm
707. Central Europe (BG149), JPG, licence print with Flemming's, Hamburg, 1946; without borders; the same map was previously titled 'Germany'; stickers; 124x82cm
708. Flemming's War Wall Map of the Western Front (NT81), Westermann, 1:320,000; damage at Antwerp, otherwise mint; 186x145cm; printed after 11.11.1918
709. Europe: Economy (DB40), V & K, 1:3 million; 1 reinforcement strip at top and bottom
710. Europe, Stiefel Work Map (NT362), Stiefel, 1:5 million; slightly dirty, dusty at top; paper; 137x95cm
711. Europe after the Second World War (NT151), V & K; half the title missing, a few stickers; 187x140cm
712. Europe 1949 to 1961 (DB153), h, 1:3 million, 1983; lower rod bent, metal rods; 213x179cm
713. Europe - Agriculture / Fisheries (NT332), Westermann, 1:6 million, approx. 1978, inset map 1:1 million; slightly water-stained at top, otherwise mint; 135x98cm
714. The Hanseatic League (BG224), lane; cracked, ok; 118x80cm
715. The Roman Empire from Caesar and Augustus - Caesar's Conquest of Gaul (58-51 BC) (NT102); no title, sticker at top; 194x138cm
716. Columbus School Wall Map of Europe (NT232), Columbus, 1:4 million, c. 1945-1955; stickers; 168x118cm
717. Uelzen (NT345), Jensen, 1:8,000; sticker '61'; probably 1950s; 150x137cm
718. Southwest Germany (30 H), List, 1:150,000; reinforcement strips; approx. 168x206cm
719. School Wall Map of the History of the Prussian State, Part 2: Prussia since 1807 (NT311), Gabriel long, 9th edition, c. 1900-1910, 1:800,000, inset map 1:7.5 million; stickers; 166x97cm
720. Schleswig-Holstein Home Map (NT58), Jensen, approx. 1950s-60s; stickers, wavy, ok; 1:100,000; 197x182cm
721. Schleswig-Holstein (NT96), Wenschow, 1:100,000; sticker at top, wavy; 214x183cm
722. Saarland-Lorraine-Luxembourg, Trier / Western Palatinate (BG263), IGN, 1:250,000, 1998; defective, taped on the back by hand; small metal rods; 110x152cm
724. Cologne Bight and the Lower Rhine Region (BG218), Stockmann, 1:200,000; slightly foxed, very good; 140x96cm
725. Münsterland Lowland Bay (257 green), Stockmann, map part 1:200,000, NRW part 2; water-stained; 140x95cm
726. Rhenish-Westphalian Industrial Region (252 green), Stockmann, map part 1:200,000, part 1; ok; 140x95cm
727. North-West... (NT350), Knoll, 1:275,000; true relief map, rubber or soft PVC?; the detail photo is not of NT350; 163x153cm
728. Northwest Germany (DB34), Wenschow, 1:250,000; stickers and slightly cracked; 225x225cm
729. North Rhine-Westphalia (BG187), Wenschow, 1949; no title; about 11-12 boundary changes noted on the western frontier; paper, ok; 147x165cm
730. North Rhine-Westphalia (SB922), Knoll, relief map, 1:150,000; 184x164cm
731. Learning Map of Schleswig-Holstein and Hamburg (NT283), JPD, 1964, 1st edition; laminated and therefore writable; 134x166cm
732. Map of the District of Göttingen (BG85), maker unknown to me, 1:25,000; repaired, ok, laminated; 177x114cm
733. Map of the City of Oldenburg, the District of Oldenburg and the District of Delmenhorst (BG133), peoples, 1:40,000; 1 sticker, approx. 25 cm at left; 159x120cm; 1950s-60s
734. Hesse (NT323), Westermann, 1:150,000; mint, 1 reinforcement strip at top; unfortunately the wooden rods are 176 cm, so shipping is more expensive; linen; 146x199cm
735. Hamburg and Surroundings (NT181), Westermann, c. 1938; Altona is already amalgamated and Reichsautobahnen (RAB) are marked; no title, stickers; 1:50,000; 225x165cm
736. Hamburg City Map... (NT178), by the city administration, 1:10,000; 220x200cm, i.e. 22x20 km; plus inset map Blankensee; some stickers
737. The Lower Elbe Region (NT72), Jensen, 1:60,000, 1950s-60s; stickers and title defective; 235x148cm
738. Free and Hanseatic City of Hamburg and Surroundings (NT10), Westermann, 3rd edition, May 1963; stickers; 1:50,000; 223x168cm
739. Ennepe-Ruhr-Kreis and City of Hagen (SB889), Flemming, 1:25,000; pink chalk circle in the centre, sticker at top; lens was dirty when the photo was taken, so it is blurred; 163x117cm
740. The German West 1300-1815 (BG252), Westermann, 1:1 million, inset maps 2 x 1:2.5 million; sold before 1945; torn at top; 85x117cm
741. The Teutonic Order (ME100), lane; mint; 119x80cm
742. The German East (BG177), Westermann, Kumsteller, from 1945; 1:1 million, inset maps 1:2 and 1:3 million; no title, stickers at top; 141x107cm, 176 cm overall
743. Bergheim, Köln, Leverkusen: Lower Rhine Bay, Centre-East (BG59), Kollbach, 1:25,000; mint but musty; lower rod somewhat glued; 236x210cm
744. District of Altenkirchen, Southern Uplands (BG43), Kollbach, 1:25,000; mint; 1 hole at top left; 240x208cm
745. On German History in the 14th and 15th Centuries (DB139), h; mint; paper, metal rods; 1:750,000, 1988; 221x179cm
746. Wall Map of German History of the 19th Century in Two Parts, Part 2: Germany and Upper Italy since 1815 (DB46), Gabriel long, 1:800,000, inset map 1:2.5 million; stickers; c. 1900; no longer cracked; 200x196cm
747. Divided Germany (HD144), JRO, 1:1.1 million; damaged at top left and right, otherwise ok; 119x85cm
748. The Division of Germany (BG151), Sandre; rubber and canvas, red chalk entries wipeable; c. 1965; with Sielow and the short story; 158x154cm
749. The Romans in Germany (BG166), V & K; paper lifting somewhat at top, ok; 1:550,000, 1961; 136x187cm
750. The Germanic Peoples 100 BC - 200 AD (NT229), Westermann, Dr. Kumsteller, 1:1.3 million, inset map 1:5 million; slightly cracked at top, otherwise mint; 2nd edition, designed and printed before 1945; 119x88cm
751. Germany... (NT397), no details, DETMERS, a drivers' Panalpina map; 98x137cm
752. Germany after the Thirty Years War (NT384), Der neue Schulmann; probably 1:1.85 million, so roughly 1:2 million; linen; mint; 90x59cm
753. Germany in 1789 (BG169), List, 1:1 million, 1952; consolidated, ok; battles marked; 161x132cm
754. Germany in the 17th Century (NT187), Flemming, 1:800,000; sticker at top, ok; 194x161cm
755. Germany / Europe (SB829), Westermann, 1961; washable map; 1:1 million and 1:5 million; 15 cm tear at top right, taped there; stored too warm; 130x106cm
756. Germany (BG196), Sandre, 1:715,000; blank (mute) map on the back, i.e. without the detail of the front; hole at Frankfurt/Oder, crack in the Alsace region; 154x161cm
757. Federal Republic of Germany and GDR (NT228), Lindenhof, 1:450,000; slightly detached at top left and right; rubbery material; 154x185cm
758. Federal Republic of Germany - Physical (DB243), Westermann; paper; damage at top and bottom; 1991, 1:750,000; 100x140cm
759. Animal World (NT329), JRO; legend with 127 entries covering the vegetation and all these animals and plants, Australia included; paper; 122x84cm
760. Middle East (DB79), List, 1:1.5 million, 2nd edition; stickers, ok; without borders; a 25 cm crack at the title; 217x172cm
761. North Asia (USSR) - Economy (BR8), Westermann, 1:6 million, c. 1977-1982; painted over and damaged at left; 164x97cm
762. South America: Economy (NT210), V & K, 1:5 million; 1 sticker strip at the top; 160x208cm
763. The States of South America (NT389), JPD, 1:10 million, 1st edition 1964; slightly stained, faded at top right, has an odour; 94x117cm
764. Africa: Economy (OL133), V & K, 1:5 million; cracked at top; after 1961; 161x211cm
765. School Wall Map of the German Colonies (WI1), Gabriel long, Leipzig, 11th edition; 14 maps, mostly 1:3 million; with New Cameroon and steamship lines, so from 1911 onwards; cracks, patches, stickers, etc.; 181x170cm
766. Pomerania (WI6), Flemming; sticker at top; 2 tears off the coast, glue them yourself; 1:230,000; 195x127cm
767. Africa (WN57), F & B Vienna, 1:6 million, c. before 1960; stickers, otherwise good; very rich in detail; 163x183cm
768. Africa (AB30), VEB education publisher, Leipzig, 1977; 1 hole below the Caspian Sea; slate cloth for writing on with chalk and wiping wet; 131x144cm
769. Africa: Wildlife (LA13), JRO, approx. 1:9.5 million; 88x118cm; with legend
770. Africa: Vegetation (TR500), JRO, approx. 1:9.5 million; 88x118cm, plus legend 60 cm; mint
771. Africa, Industry (AB78), Haack, 1973, 1:6 million; sticker on the bar: 'Mining and Industry'; good, top edge bumped; 160x180cm
772. The Discovery of America and its Consequences (LA127), Stockmann; 25 cm defect at bottom left; 139x96cm
773. The States of North America (S182), JPD, 1:10 million, 1965, 1st edition; good; 98x120cm
774. The United States (LA73), JPD, 1:3.5 million, 1965, 1st edition; 1 sticker strip at top and bottom; mint; with national parks; 138x96cm
775. North and Central America: Land Animals (TR569), JRO, approx. 1:8 million; 87x117cm
776. North and Central America: Vegetation, Wildlife (LA12), JRO, approx. 1:8 million; 88x118cm
777. North America (WM14), F & B, Vienna, 1:6 million, 1962; title slightly cracked at top, stain off California; great detail; 160x188cm
778. South America: Fauna (NE46), JRO, approx. 1:6.5 million; mint; 87x117cm
779. The Americas: Vegetation (TR499), JRO, approx. 1:6.5 million; mint, cord missing; 87x117cm
780. United States: Vegetation and Land Use; reverse: United States Mining and Industry (TR193), CACA/Zierer Munich, an offshoot of Wenschow, 1:3 million; 157x107cm
781. Asia (WM4), F & B Vienna, 1:6 million; 208x190cm; shows West and East Pakistan, so perhaps still 1950s; map edges cracked, some fogging; very rich in detail
782. Asia (TR352), Hölzel, Vienna, 1:10 million, approved 1950; slightly cracked, 6 cm detached at bottom right; 123x108cm
783. Asia: Peoples, Domestic Animals, Crops (WN68), Westermann; 135x154cm; slightly torn at top right and left
784. Asia: Wildlife (TR483), JRO, approx. 1:10 million; 122x84cm
785. Asia: Vegetation (TR411), JRO, approx. 1:10 million; 122x84cm
786. Soviet Union (Z44), Haack, Gotha, 1:4 million, 1981; wooden rods glued; paper; somewhat creased left and right in the Arctic; ok; 228x162cm
787. Soviet Union (AB63), Haack, Gotha, 1:4 million, 1971; cracked, stained in the north; tin rods and linen; 234x160cm
788. Soviet Union and Adjacent States (TR167), Stiefel, 1:6 million, pre-1990; blank (mute) map on the reverse; reinforced paper; 158x107cm
789. Australia (TR357), Hölzel, Vienna, 1:10 million; approved and printed after 1950; 1 sticker strip at top; very good; 119x78cm
790. Australia and South-East Asia (ME88), F & B, Vienna, 1966, 1:10 million; 1 hole in the Tasman Sea, otherwise mint; 123x84cm
791. Work Map of Germany (SB934), Spectra, Dorsten, 1:620,000, 1990; rubber, for chalk etc.; 114x151cm
792. Effects of the Second World War in Central Europe (F14), G & W, Wenschow, Munich, 1:1.1 million; blank (mute) map on the back; cardboard-like film; 158x108cm
793. Federal Republic of Germany - Political (LA111), Westermann, 1:800,000, 1990; 1 hole each in Hesse, Brandenburg and Bavaria; wooden rods, paper; 100x138cm
794. Federal Republic of Germany / GDR Satellite-Image Map (NT292), Westermann, 1981; one reinforcement strip at top and bottom; mint; 136x188cm
795. Divided Germany (LA174), Lindenhof, 1:1.2 million, single map no. 7; 131x112cm; PVC film, reverse white
796. Germany (TR59), JRO, 1:600,000; shows the whole German-speaking area finely on one big detailed map; 1 reinforcement, 2 stickers at Rügen, an inset map of the Niemen; 242x168cm
797. Germany (OL174), Keysers, 1:1.25 million, 1958; blotchy; 113x80cm
798. Germany (U106), Haack, Gotha, 1968, 1:750,000; typical GDR school map; 220x178cm
799. Germany: Administrative Divisions (TR152), List, 1:900,000, 5th edition; lower bar open 5 cm at right, 1 paper tear at bottom, otherwise very good; no linen, PVC?; 154x116cm; printed approx. 1953
800. School Wall Map of Switzerland, Germany and Austria (WN17), Gabriel long, Leipzig, 1:800,000, WWI era, 23rd edition, small issue; paper flaw; worn, cracked; 150x167cm
801. Wall Map of German History 1273-1500 (1519) (SB229), Gabriel long, Leipzig, 1:800,000, 2nd edition c. 1900; abundant stickers, 30 cm detached at bottom; inset maps: London with the Hanseatic Kontor Steelyard 1:1,000, Bergen 1:20,000, Hanse and Teutonic Order 1:1.6 million, the medieval Empire's district organisation 1:2 million; 191x217cm
802. On German History 1648-1789 (NN140), Haack, Gotha, 1975, 1:750,000; crack at bottom left; tin rods and paper; approx. 221x177cm
803. On German History 1815-1871 (AB43), Haack, Gotha, 1979, 1:750,000; corners bumped, good; tin rods and paper; 222x181cm
804. On German History in the 10th & 11th Centuries (AB55), Haack, Gotha, 1969, 1:1 million; tin rods and linen; wavy at top, fogging, edge cracked at top centre; 160x167cm
805. Berlin - Germany's Capital - Outpost of the Free World (TR361); somewhat defective at centre and right; city history up to c. 1960; 118x82cm
806 Burgenland (WN22) F & B, Vienna, 1zu100T, 1968, 4 small stickers, 108x155cm
807. the Rhine and his landscapes (TR376) Stockmann, 1Mio, somewhat stockfleckig, 100x66cm
808. Carinthia (WN27) Leon, Klagenfurt, 1zu100T, CA 1962, enlightenment., wavy, 192x89cm
809. Lernkartr of Rhineland-Palatinate and Saarland (BU100) JPD, 1964, 1.AuFlage, top labels, 138x187cm
810. Central European waterways (TR278) Rhein-Verlag, 1959/60, ca., def above, 1zu1Mio, photos, detailed pictures of TR348, above middle u re. Re u li adhesive strips, 115x80cm
811. lower (WN60) F & B, Vienna, 150T, 1967, something cracked, still ok, without stickers, 164x147cm
812. Oberösterreich (WN36) F & B, Vienna, 100T, 1963, without title, label, 187x149cm
813. East German lowland (S272) Stockmann, 1Mio, pictures, 99x66cm
814. Salzburg (WN44) QHL Quirin Haslinger of Linz, 100T, 1975, above, also left and right cracked, ok, 201x159cm
815 Styria (WN61) F & B, Vienna, 150T, 1968, enlightenment., 159x150cm
816. South of the main (TR228) Stockmann, 1Mio, pictures, well preserved, 99x67cm
817. Tyrol and Vorarlberg (WN32) F & B, Vienna, sticker, was once a dream piece, piece of cake, 150 T, without motorways, 1958, patina, 203x181cm
818. Vienna (WM11) F & B, Vienna, 1 12,500, enlightenment, blue tram and bus lines, 240x154cm
819. economy in southwestern Germany (LA149) Jensen, 150 T, 50s, enlightenment, ok, 156x182cm
820. antiquity Greek world (SB26) without title, probably Gabriel long Leipzig, Greek world, 431-404 V.c.., main map 1 restored 7 50,000, behind sticks,. front foils, 6 cards, also 2.5 million, new Stäbung, 220x147cm
821. the Alps (LA169) list, Wenschow 1 400,000, 5.AuFlage, stickers, small paper error in Basel and Zurich, side map 1.6 Mio, 242x166cm
822. the spread of Christianity (WU54) Flemming's, Hamburg, wavy, above 1 sticker strips, 204x160cm
823rd. the European Union at a glance (LA183) boots, 1999, 157x109cm
824. The French Revolution shattered Europe (LA192) Lindenhof, without scale, PVC film, stable material, whether this is good as a table cloth? wipeable, 132x110cm
825th the supremacy of the German Empire in the middle ages 900-1500 (OL139) Stockmann, 139x98cm
826 Europe (WM10) F & B, Vienna, label, without title, Eastern territories even before the knee / East contracts here in Vienna no more extra red, 3Mio, 1970, 188x166cm
827. Europe (WN138) G & W, 1 4.5 million, after 1992, ok, 138x96cm
828. Europe (SB862) Velcro, Alexander map, up here ALU and down wooden stick, not so original, but original, top right crack and label, left, back blank map, 5.1 million, 1989, lens was dirty so blurred, 137x90cm
829. Europe (AB102) Mairs, Stuttgart, ohne Titel, up scrap, may shorten, 104x144cm
830 Europe in the 6.Jahrhundert (BG68) Haack, Gotha 1957, 5 paper damage, ok, 1 enlightenment tanned,. above, 199x155cm
831. Europe and the Middle East (TR88) boots, after 2008, 4.1 million, 190x132cm
832. Europe before the 1st World War 1871-1914 (SB7) JPD, 1973, 1.AuFlage, cracked up at the heading above 40 cm, 1zu2Mio, 246x193cm
833. Italy (TR337) Stockmann, pictures, edge loose, just cut strips, 98x66cm, for your Italian restaurant
834. Italy and South-Eastern Europe (ME36) Wenschow, list, 1 1Mio, 3.AuFlage, veiling, ok, 221x171cm
835. London (AB4) Haack, Gotha, 1 10,000, 1983, something broken, Blechstäbung, paper, 156x110cm
836. London (WM3) Westermann, 1 40,000, 1970, 4.AuFlage, foiled, without sticker, details on German, 109x132cm
Schulwandkarte Beautiful Old Londonkarte City Map 42 7/8x52in Vintage From 1970:
$416
Pipeline data object that contains multiple vtkArray objects. More...
#include <vtkArrayData.h>
Pipeline data object that contains multiple vtkArray objects.
Because vtkArray cannot be stored as attributes of data objects (yet), a "carrier" object is needed to pass vtkArray through the pipeline. vtkArrayData acts as a container of zero-to-many vtkArray instances, which can be retrieved via a zero-based index. Note that a collection of arrays stored in vtkArrayData may-or-may-not have related types, dimensions, or extents.
Definition at line 49 of file vtkArrayData.h.
Definition at line 53 of file vtkArrayData.h.
Return 1 if this class is the same type of (or a subclass of) the named class.
Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h.
Reimplemented from vtkDataObject.
Clears the contents of the collection.
Returns the array with the given name from the collection.
Return class name of data type (VTK_ARRAY_DATA).
Reimplemented from vtkDataObject.
Definition at line 87 of file vtkArrayData.h.
Shallow and Deep copy.
These copy the data, but not any of the pipeline connections.
Reimplemented from vtkDataObject. | https://vtk.org/doc/nightly/html/classvtkArrayData.html | CC-MAIN-2020-50 | refinedweb | 182 | 53.88 |
I'm trying to finish an assignment for my C++ class, but there must be something wrong with the code...
Hope someone will be able to give me a hand~
So... here's the assignment:
Program generates a random math problem, dealing with the : addition, subtraction, multiplication, or division of integer values.
The value ranges for the operand are:
                operand 1   operand 2
addition         101-200     101-200
subtraction      101-200     101-200
multiplication    11-20       11-20
division         101-200      11-20
The program randomly picks the operator type and operand values, displays the question, waits for the user to input an answer, then either tells the user they got the answer correct (if they did) or asks them if they wish to see the correct answer (to which they can respond Y or N for yes or no). If they do wish to see the correct answer, the program should display both the original problem and the answer. The program should then exit!
And here is my code (although it's just half done...):
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

int main ()
{
    srand(time (0));
    const int PLUS=0;       //const PLUS as 0
    const int MINUS=1;      //const MINUS as 1
    const int DIVIDE=2;     //const DIVIDE as 2
    const int MULTIPLY=3;   //const MULTIPLY as 3
    int a, b, op;
    a = rand() % 200 + 101;     //range 101-200
    b = (rand() % 200) + 101;   //range 101-200
    op = rand() % 4;            //the 0-3 to match an operation
    int answer, userGuess;      //right answer
    switch(op)
    {
    case PLUS:
        answer = a + b;
        cout << "What is " << a << " + " << b << "? ";
        cin >> userGuess;
        break;
    case MINUS:
        answer = a - b;
        cout << "What is " << a << " - " << b << "? ";
        cin >> userGuess;
        break;
    case DIVIDE:
        answer = a / b;
        cout << "what is " << a << " / " << b << "? ";
        cin >> userGuess;
        break;
    case MULTIPLY:
        answer = a * b;
        cout << "what is " << a << " * " << b << "? ";
        cin >> userGuess;
    }
    if(answer == userGuess)
        cout << "you are correct." << endl;
    return;
}
Sorry, I'm a noob... but :'( please help me out if you're there. THX
Welcome to the WinForms feedback portal. We’re happy you’re here! If you have feedback on how to improve the WinForms, we’d love to hear it!>
Thanks for joining our community and helping improve Syncfusion products!
When the ribbon contains too many items, some tool strips are collapsed. In such cases, ToolStripEx.Items returns a collection containing a single item (the drop-down button of the collapsed strip) instead of the real items. I was able to work around this by using the method below; however, I'm not sure whether that will always be valid and work as intended. I think the Items property should always return the original item collection.
private ToolStripItemCollection GetItemCollection(ToolStripEx strip)
{
    if (strip == null) return null;
    if (strip.State == ToolStripEx.ToolStripExState.Collapsed)
        return ((ToolStripPanelItem)((CustomDropDownButtonBase)strip.Items[0]).DropDownItems[0]).Items;
    else
        return strip.Items;
}
#ifdef INCLUDE_NAME
#error // or whatever stops the compile
#endif
#define INCLUDE_NAME

Seeing as the redundant guard method is so commonly used (and therefore proven :), what am I missing? -- SvenDowideit

Back in the old days, a compiler would open an #include file *from disk* every time it saw the #include directive mentioned. The redundant include guards of course prevented this. xlc6 is the only compiler left breathing that I know of that still reads from disk, rather than just keeping the whole file in memory. I used to use Redundant Include Guards, now I don't. -- LanceDiduck?

In C++ it is usually necessary to mix interface and implementation within a header file (i.e. private declarations and inline methods). If a header file were pure interface, then it might be reasonable to require a client to include the declarations of the types that the interface depends on. But it is not reasonable to have a client include declarations for items used solely in the implementation of the class it uses. Therefore it is necessary to have include files including other include files, and therefore it is prudent to guard all include files (to minimize effort I have editor macros set up to do it for me - using the directory and file names, it is easy to create skeletons for the classes). -- DaveWhipp

I wondered about this for a bit ... when you put an include into a header file you are forcing a client to include declarations used solely in the implementation of the class it is using - you are just using the pre-processor to do it, rather than forcing the client to be aware of the added dependencies. This is a place where you are obscuring a mess made because you are choosing to make methods inline before you need to ... (oops, that's Heretical, isn't it?).
The way that I am heading is that either you want to hide the implementation for real (and therefore you make a pure interface class), or you realize that you are exposing the implementation to the client, and so you make them aware of the mess they are inheriting (by making them put the includes needed into their implementation file). -- SvenDowideit (I hope that this can be refactored by someone who can communicate clearly :)

A way to visualize the mess is DoxyGen, which generates include graphs. The reason to have SelfContainedHeaders is that if you add one that is not, and later remove it, you are often left with totally useless include artifacts. They tend to assemble in heaps, too. Also, if I want to use some class interface, I'm not interested in the ImplementationDetail? of what it needs to implement itself. Finally, when that implementation changes, with "externalized" include dependencies, the chance is much higher that such a change breaks massive amounts of client code. -- JuergenHermann

The chance of continued development including many more definitions than necessary also goes up, and then your compiler mistakenly recompiles many more files than necessary (the guard blocks do not help here). From this, I think that for a large project you are considerably better off forcing the user of the class to know what dependencies they are adding. Disallowing recurring #includes also encourages header files to be an interface containing as little implementation as possible. -- SvenDowideit

I'm not sure I understand - you have foo.h which uses string.h and bar.h which also uses string.h. Later, some program.cpp needs to use foo.h and bar.h. These are recurring includes. Thanks :) (I'm very poor at starting examples :) I feel that program.cpp needs string.h too, so why put it into foo.h and bar.h?
You may be saving typing, but in reality the dependencies exist either way; only if you don't allow #includes in header files are they explicit (and the client will probably think twice about using a bloated component..) Imagine if the class defined in foo.h uses the string class only for private functions. Those private functions will require foo.h to include string.h. program.cpp may make no use of strings, so it would seem odd to include string.h in program.cpp just because class foo needs strings.
#include "foo.h"then you know that foo.h doesn't have any implied include dependencies. program.cpp can
#include "foo.h"and know that it will compile. This is covered in the book. -- DirckBlaskey This does lead to a problem: If program.cpp needs foo.h and bar.h, but foo.h already includes bar.h, your compile will work. If, at some point in the future, program.cpp stops requiring foo.h (so you take it out), all of the bar-related stuff will stop compiling. This is moderately confusing (but not enough to stop using this idiom). -- RogerLipscombe It's a good idea to try to remember to include bar.h upfront, to avoid the mystery later, even if it compiles ok without it. No way to detect or enforce it, though. Lakos enforces this through the rule that bar.cpp must include bar.h first. | http://c2.com/cgi/wiki?LargeScaleCppSoftwareDesign | CC-MAIN-2015-22 | refinedweb | 856 | 66.64 |
Using javaxdelta 2.0.0 with trove 2.0.4, I have two files that exhibit a problem. Running Delta on the two files, then running GDiffPatcher succeeds, but produces a file that is not identical to the original.
> java -cp lib/javaxdelta-2.0.0.jar;lib/trove-2.0.4.jar com.nothome.delta.Delta source target patch
> java -cp lib/javaxdelta-2.0.0.jar;lib/trove-2.0.4.jar...
08:25PM UTC on Jan 08 2009 in javaxdelta
I am also seeing this problem. It seems that FindBugs only reports this when the program is quite large, and perhaps it cannot do as complete a data flow analysis as it might on a smaller program? The detailed warning message lists the concrete implementations of Map, noting that a null key is valid for HashMap and many other implementations, but is not valid for Hashtable. In my example, the...
01:50AM UTC on Mar 01 2007 in FindBugs
I am running the latest version I can see on Sourceforge, or the Eclipse plug-in update site - that is, 1.1.3.20070105. The bug is still present in that release. Where is the 1.1.4 version you refer to?.
04:10AM UTC on Feb 14 2007 in FindBugs
The duplicate branch detector reports a false
positive with the following code:
public class TestCase
{
public static void main(String... args)
{
loop: for (int i=0; i<5; i++)
{
switch (args.length)
{
case 0:
case 1:
System.out.println("some thing");
break loop;...
11:46PM UTC on Oct 24 2006 in FindBugs
In the C++ programming language, the catch keyword can be used to encase a code block to handle various exceptions thrown by a try block. If you are not already familiar with the concepts of a try block, I encourage you to refer to that node first before continuing here. With that settled, let us evaluate the catch block in better detail:
Once a try block throws an exception, the program's execution unwinds, or "back-steps", through the code until it finds an appropriate catch block to handle that exception. If no catch statement/block is provided, the program defaults to whatever your compiler uses as its primary exception handler. More often than not, compilers allow the operating system to deal with the exception. This can result in (multiple) errors and other dangerous side effects (e.g., memory leaks).
Multiple catch blocks can be supplied for a single try block. This means that you are allowed to make multiple catch blocks to handle multiple exceptions that may be thrown from one try block. A simple example of this is demonstrated using a string input system. This system checks for multiple conditions in which to throw an exception:
#include <iostream>
#include <string>
using std::cout;
using std::cin;
using std::string;
// Empty class to throw if string is empty
class XNullString{ };
// Empty class to throw if string is too long
class XStringTooLong{ };
int main()
{
    string strInput;
    try {
        cout << "Type in a string: ";
        std::getline(cin, strInput);
        // Throw if nothing was typed in
        if ( strInput.empty() )
            throw XNullString();
        // Throw if the string is too long
        // (the 20-character limit is arbitrary, for illustration)
        if ( strInput.length() > 20 )
            throw XStringTooLong();
    }
    // Catch the empty string exception
    catch ( XNullString ) {
        cout << "You didn't type anything in!\n";
        // Exit with error status ( return(1) )
        return(1);
    }
    // Catch the string length exception
    catch ( XStringTooLong ) {
        cout << "The string is too long!\n";
        // Exit with error status ( return(1) )
        return(1);
    }
    // Nothing was thrown! Good!
    cout << "You typed in: " << strInput << std::endl;
    // Return with successful status ( return(0) )
    return(0);
}
You are allowed to create a "catch everything" catch block. Inside the parentheses, instead of specifying an object, class, or other exception item, you can supply an ellipsis (...). To demonstrate this technique, let's use the same code from before, but create only a single catch block:
#include <iostream>
#include <string>
using std::cout;
using std::cin;
using std::string;
// Empty class to throw if string is empty
class XNullString{ };
// Empty class to throw if string is too long
class XStringTooLong{ };
int main()
{
    string strInput;
    try {
        cout << "Type in a string: ";
        std::getline(cin, strInput);
        if ( strInput.empty() )
            throw XNullString();
        if ( strInput.length() > 20 )   // arbitrary illustrative limit
            throw XStringTooLong();
    }
    // Catch EVERYTHING
    catch (...) {
        cout << "Error Caught!\n";
        // Return with error status ( return(1) )
        return(1);
    }
    // Nothing was thrown! Good!
    cout << "You typed in: " << strInput << std::endl;
    // Return with successful status ( return(0) )
    return(0);
}
If the try block in this program throws an exception, regardless of which exception it is, the single catch block will be executed. This is generally NOT a good habit to become accustomed to. If you know beforehand what possible exceptions may be thrown in your program, you should take the extra time to create the appropriate catch blocks to facilitate them. The "catch everything" block can always be used after all 'specific' catch blocks have been written, allowing the program to catch those specific exceptions...and if any unknown (or unexpected) exception occurs, catch it with the "catch everything" block.
Do NOT attempt to do anything in a catch block that resulted in that exception being thrown (e.g., if you caught a failed allocation, do not try to allocate anything). This is emphasized in many C++ programming books, but often overlooked!
Just as you can nest try blocks (within other try blocks), you can nest catch blocks within other try/catch blocks.
Catch
My father came home with a new glove,
all tight stitches and unscuffed gold,
its deep pocket exhaling baseball,
signed by Mays or Mantle or “The Man,”
or some lesser god I’ve since forgotten.
He took off his tie and dark jacket
and we went outside to break it in,
throwing the ball back and forth
in the dusk, the big man sweating
already, grunting as he tried
to fire it at his son, who saw now,
for the first time, that his father,
who loved to talk baseball at dinner
and let him stay up late to watch the fights
unfold like grainy nightmares
on Gillette’s Cavalcade of Sports,
the massive father who could lift him
high in the air with one hand,
threw like a girl — far and away
the worst he could say of anyone —
his off-kilter wind-up and release
like a raw confession, so naked
and helpless in the failing light
that thirty years later, still
feeling the ball’s soft kiss in my glove,
I’m afraid to throw it back.
- - -
Evening Mr. Bilgere. Or...morning. The terms are debatable.
It's so difficult to not try too hard when writing to poets. I try to force every word into the shape of a sledgehammer and give it the appropriate heft with alliteration and whatnot and it just...it doesn't work. Plainspokenness is dreary and verbosity carries the possibility of being hackneyed and forced, so I'm caught. Fly-papered.
Damnit.
- - -
I came across "Catch" when I was in High School - it was published in an issue of American Scholar, I believe, and my father used to slip the quarterly into my bag (or, on one occasion, into my box of Lucky Charms) when I wasn't looking. It's been at the back of my head since then for a number of reasons that I couldn't quite place until last night. It's T.S. Eliot's fault; it usually is.
There were people in my apartment last night, along with a copy of the Norton Anthology of American Literature. I work in a used book store, a very, very large one, and am surrounded by words and people and word-people all night long. One of my guests was (is) a poet, and flipped the thing open to find Prufrock staring him in the face. He started reading, and off we went.
We started debating forms and words and meters and whatnot because, well, because there was beer involved and that's what happens when book store clerks take their work home with them. One of the people involved, it turned out, had an interesting disability, though intellectual or educational I'm not sure - she is incapable of hearing the difference between masculine and feminine syllables which, to be fair, makes it rather difficult to explain anything more poetically complicated. Because of this disability she has a hard time finding poems interesting for anything other than pure emotional response - she can't analyze and therefore doesn't see the point.
In a way, that's refreshing - it brings poetic plot to the forefront in a way that is usually hidden. We (at least, I) spend so much time searching for lyrical tricks and wit that meaning tends to become less important. In an attempt to help her, I went looking for works whose plot, like Prufrock's, is bare and yet whose metrical nature is still intrinsic to what we were looking at. And I remembered "Catch." And I Googled it - couldn't for the life of me remember the title, but searching for ("threw like a girl" poetry) worked wonders. In attempting to explain to this girl why poetry is poetry and prose is not, I realized that the way you tell stories on paper is the way I tell them to others - the stresses, the emphasis, the dip of intoned speech, all of it.
I'm not a disciple and, while I write poetry, I'm certainly no poet, caught somewhere between student and impostor with delusions of grandeur. But in some small way, "Catch" was an integral part of my lyrical upbringing.
Thank you, is what I'm saying. I don't know how often you hear that, but you've heard it from me.
Take care.
Jack Thompson
This Writeup conforms to current E2 Copyright policies. Just so you know.
Catch (?), v. t. [imp. & p. p. Caught (?) or Catched; p. pr. & vb. n. Catching.]
© Webster 1913.
Catch (?), v. i.
To attain possession.
Have is have, however men do catch.
Shak.
To be held or impeded by entanglement or a light obstruction; as, a kite catches in a tree; a door catches so as not to open.
To take hold; as, the bolt does not catch.
Catch, n.
Act of seizing; a grasp.
Sir P. Sidney.
That by which anything is caught or temporarily fastened; as, the catch of a gate.
The posture of seizing; a state of preparation to lay hold of, or of watching the opportunity to seize; as, to lie on the catch.
Addison.
The common and the canon law . . . lie at catch, and wait advantages one against another.
T. Fuller.
That which is caught or taken; profit; gain; especially, the whole quantity caught or taken at one time; as, a good catch of fish.
Hector shall have a great catch if he knock out either of your brains.
Shak.
Something desirable to be caught, esp. a husband or wife in matrimony.
Marryat.
6. pl.
Passing opportunities seized; snatches.
It has been writ by catches with many intervals.
Locke.
A slight remembrance; a trace.
We retain a catch of those pretty stories.
Glanvill.
8. Mus.
A humorous canon or round, so contrived that the singers catch up each other's words.
© Webster 1913.
#include <DOP_Engine.h>
This subclass of SIM_Engine is the one used to contain simulations controlled by DOP_Node networks. It serves as the glue between the pure simulation library and the DOP_Node interface given to simulations in Houdini.
Definition at line 36 of file DOP_Engine.h.
Constructor is given a pointer to the network that owns the simulation.
Destructor destroys all data for the simulation.
Uses the object and data arguments to put the error message on an appropriate DOP_Node. First priority is given to the node that created the data, then the node that created the object, then the node with the display flag.
Reimplemented from SIM_Engine.
Adds the specified root data (cast to an object) as a sort of "extra object" processed by a DOP node. These extra objects don't affect cooking at all, but show up in the list of objects processed by that node (in the MMB info and dopnodeobjs expression).
When the most recent simulation state is being deleted or removed from memory, we have to clear all our references to it. This basically means our lists of output objects to process.
Reimplemented from SIM_Engine.
Gets a pointer to the node that is currently being processed by the DOP_Engine. The return value will be null if the engine is not currently in the preSimulationStep and inside a call to DOP_Node::processObjects(). This function is used by subnet DOPs to evaluate standard DOP local variables in the context of the node that is being processed (since subnets have no context of their own).
Returns the simulation time that corresponds to the given global time.
Reimplemented from SIM_Engine.
Returns the global time that corresponds to the given simulation time.
Reimplemented from SIM_Engine.
Groups objects passed to a DOP_Node by the input they arrived in. Each entry in the returned UT_ValArray represents an input to the DOP_Node. If a particular input is not connected, the corresponding entry in the array will by null. Generally this function will be called from DOP_Node::processObjects() by nodes which need to have objects coming through one input interact in some way with objects coming through another input.
Gets the objects in the stream associated with the supplied node. For nodes with multiple outputs, the result is the combination of all objects on any output. This function does not do any cooking of the networks, or call partitionObjects on any nodes. It only pulls existing data from our set of outputs to process. This function is for use by expression functions that want to know what objects most recently passed through a particular node.
Returns whether resimulation is globally disabled.
Definition at line 87 of file DOP_Engine.h.
Override this method to flag ourselves as requiring a resimulation when any of our external referenced nodes changes in a significant way.
Reimplemented from SIM_Engine.
Pass through to our parent's notification.
Reimplemented from SIM_Engine.
Sets up custom data on a new object. This implementation calls the base class implementation, then it stores information about the DOP_Node that created the object.
Reimplemented from SIM_Engine.
Overrides the default behavior when removing a simulation object. This implementation eliminates all references to this object stored in our internal data then calls the base class implementation.
Reimplemented from SIM_Engine.
Overrides the actions that occur after a simulation step.
Reimplemented from SIM_Engine.
Overrides the actions that occur before doing a simulation step. This is where the network of DOP_Nodes is parsed. Each DOP_Node has a chance to alter the simulation objects in the DOP_Node::processObjects() function. Nodes can also specify which objects are sent to each output of the node.
Reimplemented from SIM_Engine.
Explicitly dirty dependents of this simulation.
Overrides the base class method to make sure that all our internal data related to the last timestep cook has been cleared away.
Reimplemented from SIM_Engine.
Alerts our owner that we are simulating due to an internally generated need. Returns the previous state.
Reimplemented from SIM_Engine.
Globally disables resimulations caused by external node changes. This is meant for internal use.
Definition at line 92 of file DOP_Engine.h. | http://www.sidefx.com/docs/hdk/class_d_o_p___engine.html | CC-MAIN-2018-05 | refinedweb | 677 | 59.4 |
Breaking out of the Middle of a C++ while Loop
Sometimes the condition that causes you to terminate a while loop doesn’t occur until somewhere in the middle of the loop. This is especially true when testing user input for some termination character. C++ provides these two control commands to handle this case:
break exits the inner most loop immediately.
continue passes control back to the top of the loop.
The following Product program demonstrates both break and continue. This program multiplies positive values entered by the user until the user enters a negative number. The program ignores zero.
//
//  Product - demonstrate the use of break and continue.
//
#include <cstdio>
#include <cstdlib>
#include <iostream>
using namespace std;

int main(int nNumberofArgs, char* pszArgs[])
{
    // enter the number to calculate the factorial of
    cout << "This program multiplies the numbers\n"
         << "entered by the user. Enter a negative\n"
         << "number to exit. Zeroes are ignored.\n"
         << endl;

    int nProduct = 1;

    while (true)
    {
        int nValue;
        cout << "Enter a number to multiply: ";
        cin >> nValue;
        if (nValue < 0)
        {
            cout << "Exiting." << endl;
            break;
        }
        if (nValue == 0)
        {
            cout << "Ignoring zero." << endl;
            continue;
        }

        // multiply accumulator by this value and
        // output the result
        cout << nProduct << " * " << nValue;
        nProduct *= nValue;
        cout << " is " << nProduct << endl;
    }

    // wait until user is ready before terminating program
    // to allow the user to see the program results
    cout << "Press Enter to continue..." << endl;
    cin.ignore(10, '\n');
    cin.get();
    return 0;
}
The program starts out with an initial value of nProduct of 1. The program then evaluates the logical expression true to see if it’s true. It is.
There aren’t too many rules that hold in C++ without exception, but here’s one: true is always true.
The program then enters the loop to prompt the user for another value to multiply times nProduct, the accumulated product of all numbers entered so far. If the value entered is negative, then the program outputs the phrase “Exiting.” before executing the break, which passes control out of the loop.
If the value entered is not negative, control passes to the second if statement. If nValue is equal to zero, then the program outputs the messages “Ignoring zero.” before executing the continue statement which passes control back to the top of the loop to allow the user to enter another value.
If nValue is neither less than zero nor zero, then control flows down to where nValue is multiplied by nProduct using the special assignment operator:
nProduct *= nValue;
This expression is the same as:
nProduct = nProduct * nValue;
The output from a sample run of this program appears as follows:
This program multiplies the numbers
entered by the user. Enter a negative
number to exit. Zeroes are ignored.

Enter a number to multiply: 2
1 * 2 is 2
Enter a number to multiply: 5
2 * 5 is 10
Enter a number to multiply: 0
Ignoring zero.
Enter a number to multiply: 3
10 * 3 is 30
Enter a number to multiply: -1
Exiting.
Press Enter to continue . . .
MVVM is the critical design pattern for front-end engineers. There are so many ways that objects can talk to each other in an iOS App: delegates, callbacks, notification. In this Swift Language User Group talk, Max Alexander shows you how to streamline your development process in 3 easy patterns with RxSwift. He’ll go over the MVVM basics, creating custom observers, wrangling disparate APIs, and manipulating calls using concurrency and dispatch queues. Your code will be so neat that you could quit your job and leave it for the next guy in impeccable condition.
Introduction (00:00)
I am Max, and I am going to talk to you about MVVM with RxSwift. The first part of my talk is going to be about MVVM, and we will get to some code.
Massive View Controller (00:51)
MVVM addresses the problems of the MVC pattern (everyone jokes about the “Massive View Controller”). It seems to be the case that many engineers start with a view controller (it feels good: it's already bootstrapped for you when you start a new project). You shove everything in there. And you don't stop. You don't stop for years, and then you end up with legacy apps.
It’s hard to bring in new engineers because everyone is confused as to what functions are calling what, where the data is supposed to be, where the service classes are supposed to be, and where the UI is even supposed to be.
Instead of shoving everything into your view controller, we’re going to do a Model, View, and ViewModel.
View (01:37)
- UIButton
- UITextField
- UITableView
- UICollectionView
- NSLayoutConstraints
This is a way to split things out of your view controller into two other classes. The view is everything that's normal, probably Interface Builder for you. You probably shove in UITextFields and UITableViews, and you make constraints around them. Everyone who's ever touched iOS or Mac development has done this quite often with the respective classes. If you have a UIViewController, it looks something like this. This is how you start.
class LoginViewController : UIViewController {
    @IBOutlet weak var usernameTextField: UITextField!
    @IBOutlet weak var passwordTextField: UITextField!
    @IBOutlet weak var confirmButton: UIButton!
}
Then you have the model.
Model (02:10)
“Model” is an abstract term. It’s like data coming from disparate classes.
- CLLocationManager
- Alamofire
- Facebook SDK
- Database like Realm
- Service Classes
They could be Facebook API with Twitter API, different shapes, different formats; databases like Realm; and generic service classes. Many people say it's a POJO (“Plain Old Java Object”) or a POCO (“Plain Old Class Object”). But it doesn't matter: it means there's data coming from somewhere. And if you have model classes, you probably will not find import UIKit.
ViewModel (02:55)
The ViewModel:
- Prepares data
- Manipulates data
It’s sitting right between two things. If you have a login view controller, you have a username and a password member field. Then you have some function that you might call like Alamofire or Facebook API client. You can see that it’s preparing some data and then you can call it.
I did not show the full implementation, but you can understand where this is going. Now you have a LoginViewController, you set up your data members, and then you have a LoginViewModel.
struct LoginViewModel {
    var username: String = ""
    var password: String = ""

    func attemptToLogin() {
        let params = [
            "username": username,
            "password": password
        ]
        ApiClient.shared.login(email: username, password: password) { (response, error) in
        }
    }
}
We have the UIViewController and UIViews (whatever you have that relates to visual presentation). It's going to have member variables of subviews. You are going to create the struct (usually it's a struct, because it is data at a certain point in time; don't use it as a class).
The viewModel is a struct that has member variables; it has however you are going to talk with your service classes. The model is your API; you could have a wrapper around CLLocationManager or the contact store. The UI interactions can call functions within the ViewModel to get data through. In the last bit of code I showed you how the ViewModel exists and how you can call some of these other functions.
Hooking up the UI and the ViewModel is not simple, because everyone is going to do it differently.

class LoginViewController : UIViewController {
    @IBOutlet weak var usernameTextField: UITextField!
    @IBOutlet weak var passwordTextField: UITextField!
    @IBOutlet weak var confirmButton: UIButton!
    var viewModel = LoginViewModel()

    override func viewDidLoad() {
        super.viewDidLoad()
        usernameTextField.addTarget(self,
            action: #selector(LoginViewController.usernameTextFieldDidChange(_:)),
            forControlEvents: UIControlEvents.EditingChanged)
        passwordTextField.addTarget(self,
            action: #selector(LoginViewController.passwordTextFieldDidChange(_:)),
            forControlEvents: UIControlEvents.EditingChanged)
    }

    func usernameTextFieldDidChange(textField: UITextField) {
        viewModel.username = textField.text ?? ""
    }

    func passwordTextFieldDidChange(textField: UITextField) {
        viewModel.password = textField.text ?? ""
    }

    func confirmButtonTapped(sender: UIButton) {
        viewModel.attemptToLogin()
    }
}
You have your IBOutlets, classes from UIKit, and the ViewModel. In viewDidLoad, you can use Interface Builder to jerry-rig your event handlers here. Then, when the username text field changes, we're going to mutate viewModel.username with the text. And then the password, the same thing. When the confirm button is tapped, you can hook it up and call the attempt-login on the ViewModel.
That was less fun because I missed one big part of it: UI that reacts to changing events. I showed you how the view can go to the ViewModel. But the view itself has to get updated: loading indicators, making sure the confirm button is disabled or enabled.
For example, form validation is going to the data and then back. This part is hard. This part where the ViewModel talks back to the UI is something everyone is going to do differently. Here’s one way to do it.
struct LoginViewModel {
    var username: String = "" {
        didSet { evaluateValidity() }
    }
    var password: String = "" {
        didSet { evaluateValidity() }
    }
    var isValid: Bool = false

    func attemptToLogin() {
        let params = [
            "username": username,
            "password": password
        ]
        ApiClient.shared.login(email: username, password: password) { (response, error) in
        }
    }

    private mutating func evaluateValidity() {
        isValid = username.characters.count > 0 && password.characters.count > 0
    }
}
People use didSet as a reactionary handler. Whenever you use the UI to update username or password, you're going to call and evaluate the validity of the form to make sure username has some value in it, or password; you can get as robust as you want.
You can mutate the isValid flag. But how do you bring this back to the form's isDisabled or isEnabled state?
Make sure you never have the ViewModel file that you are editing ever have UIKit imported. It should never have a reference to UIViewController. Because all it's supposed to do is prepare data; it's not supposed to mutate data at all. The ViewController owns the ViewModel. In the long run, if you have enough time and resources, also make these ViewModels testable.
Never ever say (see slides for code), "isValid, I'm going to mutate the state of the confirm button to isDisabled whenever that is didSet. We created a weak LoginViewController and we're going to jerry-rig it in the viewDidLoad and set it; later, when these two things fire evaluate validity, we are going to mutate this."
This is not a good scenario. This means that you’re not getting any of the benefits of MVVM, because you’re saying, “I am going to shove references back and forth.” Plus you can get into weird situations where you will have a retain cycle because one object will be owning the other.
I have to make a side note because we have UI updating the username and password text fields, both of which are calling the same thing. You can see how many forms, especially very large forms like credit card forms, can get into the hundreds of lines by calling the same evaluation cycle.
We have two streams of data with didSets, calling evaluate validity. That's going to call an isValid, which is going to call another mutation. We're taking two different events, putting them into one, and then doing something with that data. This can be simplified vastly in the future. This is all synchronous code; what if we had this as asynchronous in the future as well?
View Model Worst Practices (09:06)
- Never reference the view controller
- Do not import UIKit. Make it a different file.
- Do not reference anything from UIKit. (You might think, “I need a button reference, I’m going to shove it in there.” Don’t do that.)
- It should only be data, i.e., strings, structs, JSON; classes that do not have much functionality.
Here's a pattern I've seen: isValid now needs to update the UI button's state.
struct LoginViewModel {
    var username: String = "" {
        didSet { evaluateValidity() }
    }
    var password: String = "" {
        didSet { evaluateValidity() }
    }
    var isValid: Bool = false {
        didSet { isValidCallback?(isValid: isValid) }
    }
    var isValidCallback: ((isValid: Bool) -> Void)?

    func attemptToLogin() {
        // truncated for space
    }

    private mutating func evaluateValidity() {
        isValid = username.characters.count > 0 && password.characters.count > 0
    }
}
You have this closure, this little callback that you create.
class LoginViewController : UIViewController {
    @IBOutlet var confirmButton: UIButton!
    var loginViewModel = LoginViewModel()

    override func viewDidLoad() {
        super.viewDidLoad()
        loginViewModel.isValidCallback = { [weak self] (isValid) in
            self?.confirmButton.isEnabled = isValid
        }
    }
}
The ViewModel can listen to it. This is going to call this callback and return its value to some listener, and this listener is going to be the view controller. The LoginViewModel will have an isValid callback and you will set that value. This is nice and testable because you can put it into the run-time and test for booleans instead of the entire state of the view controller.
That problem with unidirectional data flow, where the UI talks to the ViewModel and the ViewModel talks to the model, is a hard problem. You saw how easy this was, going from UIKit to ViewModel to Model, back to ViewModel.
This part, going from ViewModel back to UIKit, everyone is going to do differently. This is what RxSwift will solve for you.
RxSwift (11:38)
How can RxSwift help with this? RxSwift is Lodash for events, or Underscore for events, if you're coming from the JavaScript world. It allows you to operate on evented data as if you were manipulating arrays or collections. Something changing over time is similar to something changing in an array.
I have a little playground, RxSwift (see video). An observable is a collection type, you can say. It’s going to emit events, some source that emits events, you can create observables that represent almost everything. An easy one to do is: you’re creating something like a stock ticker helper to tell you if you should buy something or not from a web socket. We’re going to fake it and create something.
You do observable float from an array, and these are stock prices that come up; 1, 2, 35, 90 are floats. In this playground, it has already run. You can subscribe to the source of events, and the source will keep on emitting, and then you're going to get the values back. And remember, pretend this is not literal: the stock prices are not updating every minute; they could be updating at varying intervals.
We want to tell the UI what to do; for now, we'll put the UI code in the subscribe block. We want to buy when the price is higher than 30. We will print the prices as they come along. But we want to create another stream from it.

The second stream of data, this new observable, we're going to filter, and then it will only run this subscribe block if the filter passes. RxSwift allows you to filter and to map them. For example, we could do something like this: map. We can do an exchange rate; for example, you're trying to buy in a different country and you have an exchange rate. And then you'll print out these new rates. We removed the filter, so it's going to do it for every single event.
There's an easy subclass, almost like a substruct, of an observable. An observable is something that emits events, and luckily with generics we always have types for them. There is a thing called a variable; you can always create them statically; they're representations around very primitive types that you can use. And username and password are observables, because you can always get their values by username.value, without subscribing to it.
This will be mutated from the UI, but we also want to listen to the isValid state. We've created two streams, self.username and self.password, and we want to evaluate when any of them changes. That's what we call the combineLatest: it's an operator, similar to Lodash operators like groupBy.
If any of these change, we'll run this reducer block, and it will return to us a boolean: whether the username character count is greater than zero and the password count is greater than zero. This got rid of all the didSets and any private methods, and created this stream of data. You go from the UI, and the isValid will return to the UI.
I typically write into my code fromUI or toUI. Or you could say fromViewModel or toViewModel. You still understand that you have two sections of code: one going in this direction and one going in the other direction.
The view controller will have your regular ViewModels, but you have some extension methods within the RxCocoa library. They're extension methods on top of UIKit, which allow you to get changes in values without creating delegates around them.
I hate delegates and I think they are the worst things in the entire world, because they make you scroll, especially with UITableView and UICollectionView. And for simple things like having two different text views, you have to check the references from each one; it's rather annoying. Here you can directly get the reference in viewDidLoad, usernameTextField.rx.text, and you are going to bindTo the ViewModel. bindTo is the same thing as subscribe for all intents and purposes.
You've probably used some APIs where you get indefinite events, but then you have to stop the token handler. A "dispose bag" is a collection of those. Whenever the dispose bag is deinitialized, it will dispose of any of these streams. It's good for cleaning things up. And you can manually dispose of them as well by calling disposeBag.dispose. This is going to the ViewModel, and then the ViewModel now has the isValid. And we are going to map out the isEnabled value.
I have a little example that does that. I truncated this for you; you probably noticed that rx.text from the UITextField gives you an optional string. I'm going to give it a default value of an empty string, to make things easier for you.
import RxSwift

struct LoginViewModel {
    var username = Variable<String>("")
    var password = Variable<String>("")

    var isValid: Observable<Bool> {
        return Observable.combineLatest(
            self.username.asObservable(),
            self.password.asObservable()) { (username, password) in
                return username.characters.count > 0 && password.characters.count > 0
        }
    }
}
We’re going to get a simple login form.
import RxSwift
import RxCocoa

class LoginViewController : UIViewController {
    var usernameTextField: UITextField!
    var passwordTextField: UITextField!
    var confirmButton: UIButton!
    var viewModel = LoginViewModel()
    var disposeBag = DisposeBag()

    override func viewDidLoad() {
        super.viewDidLoad()

        // from the UI to the view model
        usernameTextField.rx.text.orEmpty.bindTo(viewModel.username).addDisposableTo(disposeBag)
        passwordTextField.rx.text.orEmpty.bindTo(viewModel.password).addDisposableTo(disposeBag)

        // from the view model back to the UI
        viewModel.isValid
            .bindTo(confirmButton.rx.isEnabled)
            .addDisposableTo(disposeBag)
    }
}
As you can see, there is a login button, but it’s disabled. And then you are going to say password. All of a sudden login is now valid. That is a ton of code gone. If I remove it, it goes back.
If we put a breakpoint in LoginViewModel, this will run if any of them changes; that's what the combineLatest allows you to do. It gets the state each time you get the values back. If you look over username, you'll get its value, and then password, you'll get its value. And then it'll return a boolean. This allows you to always mutate values as well as combine them to get new values out.
Make sure that the UIBindings do not talk to each other. I know many people were excited with RxCocoa because it got rid of many of their delegate talks. But make sure you do not have your UI streams talk to each other. It gets messy easily.
For example, you might decide, "I am not going to do this ViewModel thing." And you create the combineLatest by binding to your username text field and password text field. And then your result selector returns you this. And you create this isValid.bindTo.
class ViewController : UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let isValid = Observable.combineLatest(
            username.rx.text.orEmpty,
            password.rx.text.orEmpty,
            resultSelector: { (username, password) -> Bool in
                return username.characters.count > 0 && password.characters.count > 0
            })

        isValid.bindTo(confirmButton.rx.isEnabled)
            .addDisposableTo(disposeBag)
    }
}
This is completely untestable because isValid is this ambient thing that stays in the view controller; you don't get the reference to it, it just lives there. And you're not getting any of the benefit of MVVM; it's still a Massive View Controller.
RxCocoa does not cover the entire world of UI binding. It has lots of stuff that helps, like UITextField, UIButtonTaps, MapKit, and UITableView (you can create an entire career out of UITableView).
class MyCustomView : UIView {
    var sink: AnyObserver<SomeComplexStructure> {
        return AnyObserver { [weak self] event in
            switch event {
            case .next(let data):
                self?.something.text = data.property.text
            case .error:
                self?.backgroundColor = .red
            case .completed:
                self?.alpha = 0
            }
        }
    }
}
UITableView and UICollectionView are expensive because there are all these ways you want to do it, and getting rid of a row is a Stack Overflow lookup. There’s a good chance that you’ve probably created subclasses of UIViews for yourself. You may have views like Mapbox, or you’re using something off of CocoaPods that you like. It does not have to be a subclass of your own but you can create an extension method.
The idea is that you would be able to have this “Rx-able” custom view.
class ViewController : UIViewController {
    let myCustomView : MyCustomView

    override func viewDidLoad() {
        super.viewDidLoad()
        viewModel.dataStream
            .bindTo(myCustomView.sink)
            .addDisposableTo(disposeBag)
    }
}
You create an observer. An observer is a block of code that takes in the event. The event is an enum that returns to you its type, and then an error, and then a completion. You probably will shove a whole bunch of these "sinks" in there, and then do something by setting its text property, and then in case of an error maybe turn it red. If it's completed, maybe make it disappear.
If you have some custom stock label or ticker price, you can create Rxable elements. This is nice because it doesn’t know about any data, or about async; it just accepts the data in that format. And it has nice generics with it as well. Make sure it is always weak within that block, because you are referencing to itself.
So you have some custom view. In your ViewModel, there’s some data stream that comes in. And then you’re going to bind to the sink. Therefore you have the ability to get a UIView to accept Reactive streams.
And the way to get it is, using CocoaPods or Carthage, RxSwift and RxCocoa. RxCocoa is the UIKit extension methods, and it has them for Mac development as well. This is the big one, because I told you how big UITableView and UICollectionView are.
Here is an example app (see video) that shows you the power of RxDataSources, which is a separate open-source library that is built on top of RxSwift. Here is a customization using UITableView with different sections.
I’ll show you how small the code is. You’re not even going to implement UITableView data source. You’re going to have sections. Sections are interesting. It’s just a protocol. A data source that accepts a protocol. And it’s going to have an ID on it. You can tell Rx data sources what the ID is, what is mutated, when it is different, and it will call the respective UITableView.animate row or delete row, without you having to always mutate the data source manually. You create two different sections, the first section and the second section. And then you have a data source.
The data source is going to be typed with this generic of my section. This is your custom table view implementation. You're going to call your dequeue reusable identifier, tell it what table view cell you want. For the ViewModel for data source, you can have the data source but tell it what section. For the static sections that we made, we bindTo(tableView.rx.items(...)), with the data source that it wants. Now you set the delegate to itself and you have multiple sections.
You’re not creating member variables with different values, you’re not managing indexes, index sections, and paths. You’re handing it over to RxSwift and RxDataSources. And this can be as async as you want.
The randomized example is quite powerful. There are all these sections in UICollectionView. And there is a fake async stream that is constantly running. It's much smaller than if you wrote it by hand. And all it's doing is creating different numbers, removing values differently. And you never have to call the insertRow, indexPath and jerry-rig all that nonsense. And now you have something that is quite impressive. This would be great for a stock ticker application.
I did not talk a lot about operators; I didn't want to overwhelm a lot of people. The next talk will be about operators, using complex things like sort, groupBys, combining, merging, zipping, and what they all mean; that will be much more of an algorithms and data structures talk about RxSwift. But this is how you get the data structure to talk to your UI in a very nice, conformant way.
Q & A (27:55)
Q: Is it difficult for a shop or a company to do this? And is it harder at larger or smaller companies? How do you convince people? Max: A great part of RxSwift is that it is not very opinionated, unlike all of a sudden introducing Lodash. You can use your own native events if you want to, and you can use it in an isolated way. For example, if you have many tickets that start saying, "creating new view controllers," you can use it. And the old code does not need to even know. You are going to have to maintain the two things in tandem, but it's a utility library; it's not like a framework.
Q: When you added a breakpoint - I wanted to ask what performance penalty does RxSwift have? When the app gets larger and larger and you have network code, speed is very important. Max: Yes, that is a large topic in general. In terms of doing things on the main thread, there’s no issue. I have not seen the issue, there are benchmarks on RxSwift’s repo page, you can check them out.
The real trick with iOS development is if you can handle async and concurrency, and that is where RxSwift shines. It’s probably for another talk, but I can quickly go over what the benefits might actually be.
For example (see video), you can create a scheduler like concurrent. Which one do we want? You can create a background scheduler. Stock prices. And then observe on the scheduler. And then do next. Maybe map. This is going to be on that background QoS. This will be on the background QoS. Say you want to return it back to the main thread. This will run on the background, that is where you are going to probably see your performance benefits.
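Written out, the live-coded scheduler example is roughly the following sketch. stockPrices, exchangeRate, and priceLabel are stand-ins of mine, and the API names match the RxSwift 3 era of the talk:

```swift
let background = ConcurrentDispatchQueueScheduler(qos: .background)

stockPrices
    .observeOn(background)                 // everything below runs on the background QoS
    .map { $0 * exchangeRate }             // heavy work stays off the main thread
    .observeOn(MainScheduler.instance)     // hand the results back to the main thread
    .subscribe(onNext: { price in
        priceLabel.text = "\(price)"
    })
    .addDisposableTo(disposeBag)
```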
There are other things, for example, you probably want to do something with throttling. If you’re doing something like a search, you said you’re from Yelp? Many people do not want to click the search button in apps; they want to type and then you want to send these things off to the backend. You’re going to probably not send every single character that streams in.
Let’s say this is a stream of text that is coming in. You can do a throttle. And then do 0.25 seconds. Main scheduler. Instance. Subscribe on. And then run your async code here. Or if we are going to do it the MVVM way, you are going to bindTo. If someone is a fast typer, you do not want to search for, for example if you want to search pizza or something, you do not want to search P, that is irrelevant. You may want to throttle but not only throttle but actually run this block if he has only entered three bits of data. This helps you out in terms of maintaining your main thread to be as fast as possible.
But if you are having performance problems on the main thread of an iPhone 7, then you have serious problems; this is a fast machine. Do everything on other threads, and deliver the results on the right one. And you can always test this, by the way; you can check out in the repo's tests how people throw things onto one thread, then do it on another one, then hand it back to the main thread. But in your ViewModel, you usually want to almost always give it back to the main thread.
About the content
This content has been published here with the express permission of the author. | https://academy.realm.io/posts/slug-max-alexander-mvvm-rxswift/ | CC-MAIN-2018-22 | refinedweb | 4,300 | 66.74 |
Merge Sort
Sorting has always been a popular subject in Computer Science. Back in 1945 Mr. John von Neumann came up with Merge Sort. It's an efficient divide and conquer algorithm, and we'll dive right into it.
The general flow of the algorithm comes as follows: (1) divide the list into n lists of size 1 (being n the size of the original list), (2) recursively merge them back together to produce one sorted list.
As usual, I understand things better with an example. Let's transform this (kind of) abstract explanation into a concrete one. We'll use a simple, yet descriptive, example: sorting lists of integers. Let's start with (1).
def merge_sort(list):
    if len(list) <= 1:
        return list
    return merge(merge_sort(list[:len(list)/2]),
                 merge_sort(list[len(list)/2:]))
Here we have part 1. Basically what we're doing here is to check at each step if our list is already of size 1 and ff it is we return it. If not, we split it in half, call merge_sort on each of them and call a function merge with both halves. How many times will this merge function be called? log n because we're splitting the list in half at each time.
Next, phase number (2). We need to merge everything back together.
def merge(l1, l2):
    result = []
    i, j = 0, 0
    while i < len(l1) and j < len(l2):
        if l1[i] > l2[j]:
            result.append(l2[j])
            j += 1
        else:
            result.append(l1[i])
            i += 1
    result += l1[i:]
    result += l2[j:]
    return result
So, what's going on here? We know that we start with lists of size 1. That means that at each step, each of the 2 lists will be sorted on it's own. We just need to stitch them together. That means that we go through the lists (until we reach the end of at least one) and we get the smallest element at each step. When one of them ends, we just need to add the remaining elements of the other to the result.
We already know that this merge will be called log n times. But at each call merge does n comparisons, because it needs to figure out where all the elements fit together. So Merge Sort is an O(n log n) comparison sorting algorithm.
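A quick sanity check of the two functions in this post. They are repeated here, with floor division (//) so the snippet also runs under Python 3, to keep it self-contained:

```python
def merge_sort(lst):
    # Divide until lists are of size 1, then merge them back together.
    if len(lst) <= 1:
        return lst
    return merge(merge_sort(lst[:len(lst)//2]),
                 merge_sort(lst[len(lst)//2:]))

def merge(l1, l2):
    # Stitch two already-sorted lists into one sorted list.
    result = []
    i, j = 0, 0
    while i < len(l1) and j < len(l2):
        if l1[i] > l2[j]:
            result.append(l2[j])
            j += 1
        else:
            result.append(l1[i])
            i += 1
    result += l1[i:]
    result += l2[j:]
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```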
I am a beginner in programming and have been trying to learn programming in Java. Friends, please explain how I can create a class in Java.
Hi friend, as you said that you are learning Java programming and want to create a class in Java, let's first understand what a class in Java is.

A class in Java is a blueprint that lets you declare data members and methods, as well as define methods.
Here sample code is being given which demonstrates how to create a class in Java.
public class JavaSampleClass {
    int i = 5;
    String str = "How to create a class in Java";

    public void intDisplay() {
        System.out.println(i);
    }

    public void strDisplay() {
        System.out.println(str);
    }
}
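To actually run this, you can add a second class with a main method that creates an object with new and calls its methods. Here is a complete, runnable sketch; both classes sit in one hypothetical file named Main.java, so the sample class is repeated without the public modifier:

```java
class JavaSampleClass {
    int i = 5;
    String str = "How to create a class in Java";

    public void intDisplay() {
        System.out.println(i);
    }

    public void strDisplay() {
        System.out.println(str);
    }
}

public class Main {
    public static void main(String[] args) {
        JavaSampleClass obj = new JavaSampleClass();  // create an instance with 'new'
        obj.intDisplay();   // prints: 5
        obj.strDisplay();   // prints: How to create a class in Java
    }
}
```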
Hello
I have a question about the variable declaration
do I have to write the type of the variable in Python as we do in Java ?
ex: String name , int num and char x?
Thank you.
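For reference, the short answer is no: Python attaches types to values rather than to variables, so nothing like String name or int num is written. A small illustration, using names similar to the question's:

```python
# Unlike Java, Python needs no type keyword in an assignment;
# the type lives on the value, not on the variable name.
name = "John"   # a str -- no 'String name' declaration needed
num = 7         # an int
x = "a"         # Python has no separate char type; use a 1-character str

print(type(name).__name__)  # str
print(type(num).__name__)   # int

# The same name may even be rebound to a value of another type:
num = "seven"
print(type(num).__name__)   # str
```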
Hello.
I know this is not the right place to see post this but I do not know how to open a thread and neither the bug report allows you to share a screenshot; which makes it quite difficult to explain.
Is this a bug? I know that \ makes the character next to it not be treated as special. I'm just typing the current date…
\makes the value next to it not convert it special. I’m just typing the current date…
Also I was wondering if the error is because having introduced numbers in a string. I’ve done a very quick search and haven’t encountered anything. I suppose I will have to Google better next time…
Which lesson is this?
It's not an error to use a string. It makes / a printable character; otherwise Python would evaluate it as division.
Is it possible this is a lesson on the
datetime class?
from datetime import datetime

now = datetime.now()
todays_date = f"{now.month}/{now.day}/{now.year}"
No. This is the lesson from the Python 2 lecture 5: Variables, from the first module.
Edit: I'm so dumbbbbb. I have to use print(); that code only assigns the value to the variable x, so the interpreter doesn't output anything.
| https://discuss.codecademy.com/t/do-i-have-to-set-variable-types/328129/1 | CC-MAIN-2020-34 | refinedweb | 230 | 82.95 |
This document describes how to enable retrying for background functions. Automatic retrying is not available for HTTP functions.
Semantics of retry
By default, without retries enabled, the semantics of executing a background function are "best effort." This means that while the goal is to execute the function exactly once, this is not guaranteed.
When you enable retries on a background function, however, the semantics change to "at-least-once" delivery (with the caveat that a function that keeps failing for days may eventually time out).
To enable or disable retries, you can use either the gcloud command-line tool or the GCP Console. By default, retries are disabled.
Using the gcloud command-line tool
To enable retries via the gcloud command-line tool, include the --retry flag when deploying your function:
gcloud functions deploy FUNCTION_NAME --retry FLAGS...
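One way to check whether retries are currently enabled on a deployed function is to inspect its trigger; the failurePolicy field is present when retries are on (this assumes the v1 gcloud CLI and its standard --format projections):

```
gcloud functions describe FUNCTION_NAME --format="yaml(eventTrigger.failurePolicy)"
```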
To disable retries, re-deploy the function without the --retry flag:
gcloud functions deploy FUNCTION_NAME FLAGS...

Set an end condition to avoid infinite retry loops

When you use retries, give your function a well-defined end condition; otherwise a failing function can retry indefinitely. One simple, thorough approach, shown in the samples below, is to discard events that are older than a certain maximum age:
Node.js 6
/**
 * Background Cloud Function that only executes within
 * a certain time period after the triggering event
 *
 * @param {object} event The Cloud Functions event.
 * @param {function} callback The callback function.
 */
exports.avoidInfiniteRetries = (event, callback) => {
  const eventAge = Date.now() - Date.parse(event.timestamp);
  const eventMaxAge = 10000;

  // Ignore events that are too old
  if (eventAge > eventMaxAge) {
    console.log(`Dropping event ${event} with age ${eventAge} ms.`);
    callback();
    return;
  }

  // Do what the function is supposed to do
  console.log(`Processing event ${event} with age ${eventAge} ms.`);
  callback();
};
Node.js 8 (Beta)
/**
 * Background Cloud Function that only executes within a certain time
 * period after the triggering event to avoid infinite retry loops.
 *
 * @param {object} data The event payload.
 * @param {object} context The event metadata.
 */
exports.avoidInfiniteRetries = (data, context) => {
  const eventAge = Date.now() - Date.parse(context.timestamp);
  const eventMaxAge = 10000;

  // Ignore events that are too old
  if (eventAge > eventMaxAge) {
    console.log(`Dropping event ${context.eventId} with age ${eventAge} ms.`);
    return;
  }

  // Do what the function is supposed to do
  console.log(`Processing event ${context.eventId} with age ${eventAge} ms.`);
};
Python (Beta)
from datetime import datetime, timezone

# The 'python-dateutil' package must be included in requirements.txt.
from dateutil import parser


def avoid_infinite_retries(data, context):
    """Background Cloud Function that only executes within a certain
    time period after the triggering event.

    Args:
        data (dict): The event payload.
        context (google.cloud.functions.Context): The event metadata.
    Returns:
        None; output is written to Stackdriver Logging
    """
    timestamp = context.timestamp

    event_time = parser.parse(timestamp)
    event_age = (datetime.now(timezone.utc) - event_time).total_seconds()
    event_age_ms = event_age * 1000

    # Ignore events that are too old
    max_age_ms = 10000
    if event_age_ms > max_age_ms:
        print('Dropped {} (age {}ms)'.format(context.event_id, event_age_ms))
        return 'Timeout'

    # Do what the function is supposed to do
    print('Processed {} (age {}ms)'.format(context.event_id, event_age_ms))
    return
Distinguish between retriable and fatal errors
If your function has retries enabled, any unhandled error will trigger a retry. Make sure that your code captures any errors that should not result in a retry.
Node.js 6
/** * Background Cloud Function that demonstrates * how to toggle retries using a promise * * @param {object} event The Cloud Functions event. * @param {object} event.data Data included with the event. * @param {object} event.data.retry Whether or not to retry the function. */ exports.retryPromise = (event) => { const tryAgain = !!event.data.retry; if (tryAgain) { throw new Error(`Retrying...`); } else { return Promise.reject(new Error('Not retrying...')); } }; /** * Background Cloud Function that demonstrates * how to toggle retries using a callback * * @param {object} event The Cloud Functions event. * @param {object} event.data Data included with the event. * @param {object} event.data.retry Whether or not to retry the function. * @param {function} callback The callback function. */ exports.retryCallback = (event, callback) => { const tryAgain = !!event.data.retry; const err = new Error('Error!'); if (tryAgain) { console.error('Retrying:', err); callback(err); } else { console.error('Not retrying:', err); callback(); } };
Node.js 8 (Beta)
/** * Background Cloud Function that demonstrates * how to toggle retries using a promise * * @param {object} data The event payload. * @param {object} context The event metadata. */ exports.retryPromise = (data, context) => { const tryAgain = !!data.retry; if (tryAgain) { throw new Error(`Retrying...`); } else { return new Error('Not retrying...'); } };
Python (Beta)
from google.cloud import error_reporting error_client = error_reporting.Client() def retry_or_not(data, context): """Background Cloud Function that demonstrates how to toggle retries. Args: data (dict): The event payload. context (google.cloud.functions.Context): The event metadata. Returns: None; output is written to Stackdriver Logging """ if data.data.get('retry'): try_again = True else: try_again = False try: raise RuntimeError('I failed you') except RuntimeError: error_client.report_exception() if try_again: raise # Raise the exception and try again else: pass # Swallow the exception and don't retry. | https://cloud.google.com/functions/docs/bestpractices/retries | CC-MAIN-2018-51 | refinedweb | 749 | 53.47 |
Switching visibility of a Rectangle via button
Hi all.
I have a rectangle that I need to switch between visible / not visible via "myButton". So the button should change alternately the rectangle visibility. (with two states I think)...
I've tried to make the button "checkable" in order to apply the condition below but doesn't work at all. What am I doing wrong?
my main.qml:
Button { id: myButton width: 80 height: 20 text: "click me" checkable: true onClicked: if (checked){ myRectangle.visible = true }else{ myRectangle.visible = false } }
P.S.: Transparency is not ok for this case. So Opacity is not applicable as myRectangle content will keep active and I need to avoid any user interaction when the rectangle content is "invisible". that's why I need to make it not visible.
Can you please help me? Thanks.
Try this;
main.qml
import QtQuick 2.12 import QtQuick.Window 2.12 import QtQuick.Controls 2.12 Window { width: 200 height: 200 visible: true title: qsTr("Hello World") Rectangle { id: myRectangle width: 100 height: width color: "red" anchors.centerIn: parent } Button { id: myButton width: 80 height: 20 text: "click me" checked: true onClicked: { if (myRectangle.visible === false) myRectangle.visible = !false else myRectangle.visible = false } anchors { bottom: myRectangle.top bottomMargin: 10 horizontalCenter: parent.horizontalCenter } } }
Also, this may help too;
Rectangle { id: myRectangle visible: myButton.checked // other values } Button { id: myButton width: 80 height: 20 text: "click me" checkable: true }
Thanks gents,
@eyllanesc is it possible to bind the myRectangle visibility to 2 buttons? Something like this:
Rectangle { id: myRectangle visible: myButton.checked : myButton2.checked
no idea of the correct syntax... | https://forum.qt.io/topic/126045/switching-visibility-of-a-rectangle-via-button | CC-MAIN-2022-27 | refinedweb | 269 | 53.78 |
Automatic Update Field, with SQLObject
SQLObject 0.8 (which is currently still in SVN) will add a feature for capturing events. But those of us using the current release (0.7.1) will need to do a little hack to do something I find fairly common.
An example: a comment system, where user's can edit their comments later on. You want to track when the comment was last modified, so you create a Modified field in your Comment class/table:
class Comment(SQLObject): User = ForeignKey('TG_User') Created = DateTimeCol(notNone=True, default=datetime.now()) Modified = DateTimeCol(notNone=True, default=datetime.now()) Subject = StringCol(length=200) Body = StringCol()
Obviously this is a very simple example, and you could just do c.Modified = datetime.now() on the target of your Edit Comment form. But, think about a case where there are many more fields, and several different places where your record could get modified (not just the Edit Comment form). Then it would be nice to have the Modified field updated automatically every time the record is changed.
Take the following:
class Comment(SQLObject): User = ForeignKey('TG_User') Created = DateTimeCol(notNone=True, default=datetime.now()) Modified = DateTimeCol(notNone=True, default=datetime.now()) Subject = StringCol(length=200) Body = StringCol() def __setattr__( self, name, value ): super( Comment, self ).__setattr__( name, value ) if name in self.sqlmeta.columns.keys(): super( Comment, self ).__setattr__( 'Modified', datetime.now() )
This will update the Modified field every time an assignment is made on any of the other fields (actually, including the Modified field, but doing that would be silly). In place of self.sqlmeta.columns.keys(), you could use a list of the field names you want to catch and update the Modified field for.
-Sean Jamieson (AcidReign?)
This is a good recipe, exactly what I needed... but I had to modify is a bit to get it to work properly. I don't know if this is because of changes to SQLObject or what. What I found was that, in the case of the example above, when setattr is called, for example, for the 'Body' column, what is actually passed as the name to setattr is '_SO_val_Body'. However, what's in the list of keys is 'Body'. So my fix was as follows:
def __setattr__( self, name, value ): super( Comment, self ).__setattr__( name, value ) if name.startswith("_SO_val_"): if name[8:] in self.sqlmeta.columns.keys(): super( Comment, self ).__setattr__( '_SO_val_Modified', datetime.now() )
Mike Kent
Careful though, this won't be atomic.
Danny W. Adair
Can someone expand on the problem with atomicity? Could this kind of thing be made atomic (at least as far as the DB is concerned) by turning off autocommits in SQLObject? What might some problems be with this technique not being atomic? Is there an alternative technique which might solve those problems?
Kevin Horn
Yes, it could be made atomic by turning off autocommits. The problem is that, when autocommits are on, you have a chance for the system to go down after a field is updated in the database but before the "date modified" attribute can also be changed, which would leave the data out of sync. The alternative would be to run the entire update in a transaction to make sure that it either is fully committed or is not written at all.
Adam Jones | http://trac.turbogears.org/wiki/SQLObjectAutoUpdateField | CC-MAIN-2016-40 | refinedweb | 553 | 66.84 |
I have moved this page to its new home here:
I wrote this article about adding a host.json file to an Azure Function “How to add a HOST.JSON file to an Azure Function” and while I was breaking it, I saw a behavior which I didn’t expect.
The behavior was that my code within the try…catch… block continued to execute after the timeout, the code in my catch did not execute and a 500 was rendered to my consumer. I experienced what I see in Figure 1.
Figure 1, how to handle a timeout in Azure Function
Adding the log.Info() method into your Azure Function is helpful to see what code is executed and when. in Figure 1 I see that the log.Info() method triggered even after the Timeout happened eventhough a 500 was returned to the consumer.
I partially resolved this by adding a cancellation token to my Azure Function. See here for more on the structure of System.Threading.CancellationToken.
Compare this code with my code I originally wrote here “How to add a HOST.JSON file to an Azure Function”.
using System.Net; using System.Threading; public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, CancellationToken cancellationToken, TraceWriter log) { log.Info("C# TimeOutHttpTrigger entered Run() method..."); try { log.Info("C# TimeOutHttpTrigger entered try..."); await Task.Delay(8000, cancellationToken); log.Info("C# TimeOutHttpTrigger success delay of 8 seconds..."); return req.CreateResponse(HttpStatusCode.OK, "Success"); } catch (System.Exception ex) { log.Info("C# TimeOutHttpTrigger entered catch..."); return req.CreateResponse(HttpStatusCode.OK, $"Error: {ex.Message}"); } }
Notice the following:
- I added the using directive “System.Threading”
- I added a CancellationToken to the paramters of my Run() method
- I added the cancellationToken to my Task.Delay() method
Then when I ran my Azure Function, the code within the try… did stop executing and proceeded into the catch…, as seen in Figure 2.
Figure 2, how to handle a timeout in Azure Function
As you can see that the log.Info() method which came after the Task.Dely() method did not run, instead, the log.Info() method within the catch… was run.
The interesting point is that I still received a 500 when I consumed it eventhough in my catch… I wanted it to return an HttpStatusCode.OK which would have been a 200. I can work with this for now, if someone knows how to get my desired response back, please leave a comment. | https://blogs.msdn.microsoft.com/benjaminperkins/2018/06/12/how-i-would-handle-a-timeout-in-azure-function/ | CC-MAIN-2019-39 | refinedweb | 406 | 68.57 |
view raw
I'm writing a small Python program that will simulate a certain TV show (it's for fun guys ok please don't judge me).
In this program, I am trying to randomly sort the contestants into a list of lists (to emulate a Team Challenge, as you may say). I wrote a function that (is intended) to take in an unsorted list of Contestant objects (because I want to access the name attribute of each Contestant object) and a list that contains the information about the sizes of each individual list in the 2D list I intend to return in the end.
To further explain the second parameter, I'll give an example. Let's say the Queens need to be divided into 2 teams of 5 and 6. The numTeams will then be [5,6].
This is full function that I have written so far:
def sortIntoTeams(contest_obj, numTeams):
# where I will eventually store all the names of the individuals
queenList = []
# creates 2D list, however, I just initialized the first subscript
# part, so to speak
teamShuffledList = len(numTeams) * [[None]]
# this was just a test, but I made another for loop to
# fill the second subscript part of the 2D list, so to speak too
for i in range(0, len(numTeams)):
count = numTeams[i]
teamShuffledList[i] = count * [0]
# for loop to fill queenList with all the names of the Queen
# objects in the contest_obj
for i in range(0, countRemaining(contest_obj)):
queenList.append(contest_obj[i].name)
# from random import shuffle, to shuffle queenList
shuffle(queenList)
def sortIntoTeams(contest_obj, numTeams): # Create a randomly-ordered copy of the contestants' names random_contestants = [x.name for x in contest_obj] random.shuffle(random_contestants) result = [] for teamsize in numTeams: # Take the first <teamsize> contestants from the list result.append(random_contestans[:teamsize]) # Remove those contestants, since they now have a team random_contestants = random_contestants[teamsize:] return result
There is no need to "initialize" anything. | https://codedump.io/share/NNp7aKYPjBuo/1/how-do-i-initialize-and-fill-in-a-list-of-lists-of-different-lengths | CC-MAIN-2017-22 | refinedweb | 320 | 53.04 |
Re: beginner with programming, how to learn to debug and few C general questions
- From: Andrew Poelstra <asp11@xxxxxx>
- Date: Tue, 09 Jun 2009 16:36:09 -0700
bpascal123@xxxxxxxxxxxxxx wrote:
Hi,
I am first an accountant and decided to take on programming a few
years ago...
I have been first studying simple algorithms...
Now I have decided to first learn C (i'll give it 600 effective hours
- so btw 1 and 3 years) before going to c++ or perl, or python, java
or php or ruby on rails or vba, i'll see at that time what answers
most my needs either as a professionnal accountant or as a personal
project
i'd like to take on... Is there anything specific I could do with a
good knowledge of C only ?
So I have been having a hard time with the basics of pointers (20
hours). I really want to feel ok with this before getting any further.
I haven't looked at functions yet.
Here is a programm I know by heart - from a book - but in this code it
doesn't work althought it is very similar to other versions I have
been "coding" over and over more than 30 times ... and 10 % of the
time, there is a bug very similar to the one below I can solve by
looking carefully at the code or cannot solve if I have looked
carefully at the code. That's why I'd like to learn how to debug.
So far or so early with learning C, I'd like to use this opportunity
to learn how to debug and know about the tools that can help.
I have mainly been programming under Windows Xp DJGPP Dolorie... and I
have found a command : simify (link : ) to help debug.
===O===
FIRST, here is the culprit : (this prg inserts a symbol every 3 digits
if the number includes 4 or more digits)
===O===
#include <stdio.h> /* printf, putchar, gets, puts */
#include <conio.h> /* getche */
<conio.h> is part of your Windows library, not of C. Therefore, your
code is not only outside the scope of this group, but more importantly,
many regulars here will be unable to compile your code. (I for one have
no <conio.h> header.)
#include <stdlib.h> /* atoi */
#include <ctype.h> /* isdigit */
#include <string.h> /* strcpy, strlen */
#define BOOL int
/**********/
main()
This should be
int main(void)
Which is clearer and works with the latest standardized version of C.
{
char buffer[128] ;
char clearbuf[128] ;
char output[172] ;
char * pb = buffer, * pc = clearbuf, * po = output ;
int symbol ;
int i, k ;
int cnt ;
BOOL ok ;
/**********/
printf("\033[2J");
printf("\nCe prog separe...\n");
You can use puts() instead of printf(), which saves a fair bit of
overhead and automatically adds the \n to the end of every line.
/**********/
do
{
printf("\nSeparateur :");
symbol = getche() ;
What does this function do? If you need to get a single character, use
getc() or fgetc().
if ( symbol < 32 )
printf("Pas de caractere de control comme separateur !");
if ( isdigit(symbol) )
printf("Pas de chiffre comme separateur !" ) ;
} while ( symbol < 32 || isdigit(symbol) ) ;
/**********/
Please use proper indentation when posting on Usenet.
do
{
ok = 1 ;
printf("\nEntrez un nombre entier positif ");
gets(buffer) ;
Stop right here. NEVER ever EVER use gets(). It is impossible to use
safely, since it does no buffer size checking and therefore is inviting
users to overrun your buffer and run over who-knows-what memory.
Use fgets() instead, which will accept sizeof buffer as an additional
argument, and will check the size of the input.
cnt = 0 ;
while ( *(buffer + cnt) && ok )
{
if ( !isdigit(*(buffer + cnt) ) )
{
printf("Pas d'entier positif!") ;
ok = 0 ;
}
cnt++;
}
} while ( !ok ) ;
/**********/
if ( cnt > 1 && atoi(buffer) )
atoi() is also not recommended since it doesn't fail well if you pass it
invalid input. Use strtol() or strtoul() instead.
{
while ( *pb == '0' )
pb++ ;
while ( *pc++ = *pb++ )
;
strcpy(buffer, clearbuf);
}
/**********/
if ( (cnt = strlen(buffer)) > 3)
{
for ( i = cnt - 1, k = 0, pb = buffer ; i >= 0 ; i--, k++ )
Your spacing is abysmal. Please correct it.
{
po[k] = pb[i] ;
if ( ( (cnt - i) % 3 ) == 0 && i != 0 )
po[++k] = symbol ;
}
po[k] = '\0' ;
for ( i = k - 1 ; i >= 0 ; i-- )
putchar(*(output + i));
}
else
puts(buffer) ;
}
If you fix all of the above problems, it might fix whatever the issue
with your code is. But as it stands, I can neither compile nor read the
posted code. So please correct these things, fix the formatting and
repost if you are still having issues.
--
Andrew Poelstra <>
.
- References:
- beginner with programming, how to learn to debug and few C general questions
- From: bpascal123@xxxxxxxxxxxxxx
- Prev by Date: Re: beginner with programming, how to learn to debug and few C general questions
- Next by Date: Re: beginner with programming, how to learn to debug and few C general questions
- Previous by thread: Re: beginner with programming, how to learn to debug and few C general questions
- Next by thread: Re: beginner with programming, how to learn to debug and few C general questions
- Index(es): | http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2009-06/msg01002.html | CC-MAIN-2016-26 | refinedweb | 841 | 70.53 |
In this tutorial we will check how to get the running time of the micro:bit, since the board was turned on or restarted.
Introduction
In this tutorial we will check how to get the running time of the micro:bit, since the board was turned on or restarted.
In order to get this value, we will use a function from the microbit module. The function we will use is called running_time.
The code
The code for this tutorial will be really simple and short, as we will see below. First, we will start by importing the microbit module which, as already mentioned, contains the function we need to get the running time.
import microbit
Then, to obtain the running time, we simply need to call the running_time function. This function takes no arguments and returns the device running time, in milliseconds [1].
We can directly print the value returned by the running_time function, as shown below.
print(running_time())
The complete code can be seen below.
import microbit print(running_time())
Testing the code
To test the code, simply run it on your micro:bit board using a tool of your choice. In my case, I’ll be using uPyCraft, a MicroPython IDE.
You should get an output similar to figure 1. As can be seen, the number of milliseconds elapsed since my device started was printed to the MicroPython console. Naturally, your values will most likely differ from mine, depending on the time that has passed between your device start and running the command.
Figure 1 – Printing the running time of the micro:bit board.
References
[1] | https://techtutorialsx.com/2019/03/03/microbit-micropython-getting-the-device-running-time/?shared=email&msg=fail | CC-MAIN-2020-34 | refinedweb | 267 | 63.39 |
Initializes a new instance of the ODataCollectionView class.
URL of the OData service (for example '').
Name of the table (entity) to retrieve from the service. If not provided, a list of the tables (entities) available is retrieved.
JavaScript object containing initialization data (property values and event handlers) for the ODataCollectionView.
Gets a value that indicates whether a new item can be added to the collection.
Gets a value that indicates whether the collection view can discard pending changes and restore the original values of an edited object.
Gets a value that indicates whether this view supports grouping via the groupDescriptions property.
Gets a value that indicates whether items can be removed from the collection.
Gets a value that indicates whether this view supports sorting via the sortDescriptions property.
Gets the item that is being added during the current add transaction.
Gets the item that is being edited during the current edit transaction.
Gets or sets the current item in the view.
Gets the ordinal position of the current item in the view.
Gets or sets a JavaScript object to be used as a map for coercing data types when loading the data.
The object keys represent the field names and the values are DataType values that indicate how the data should be coerced.
For example, the code below creates an ODataCollectionView and specifies that 'Freight' values, which are stored as strings in the database, should be converted into numbers; and that three date fields should be converted into dates:
import { ODataCollectionView } from '@grapecity/wijmo.odata';

var orders = new ODataCollectionView(url, 'Orders', {
    dataTypes: {
        Freight: wijmo.DataType.Number,
        OrderDate: wijmo.DataType.Date,
        RequiredDate: wijmo.DataType.Date,
        ShippedDate: wijmo.DataType.Date
    }
});
This property is useful when the database contains data stored in formats that do not conform to common usage.
In most cases you don't have to provide information about the data types, because the inferDataTypes property handles the conversion of Date values automatically.
If you do provide explicit type information, the inferDataTypes property is not applied. Because of this, any data type information that is provided should be complete, including all fields of type Date.
Gets or sets a string that specifies whether related entities should be included in the return data.
This property maps directly to OData's $expand option.
For example, the code below retrieves all the customers and their orders from the database. Each customer entity has an "Orders" field that contains an array of order objects:
var url = '';
var customersOrders = new wijmo.odata.ODataCollectionView(url, 'Customers', {
    expand: 'Orders'
});
Gets or sets an array containing the names of the fields to retrieve from the data source.
If this property is set to null or to an empty array, all fields are retrieved.
For example, the code below creates an ODataCollectionView that gets only three fields from the 'Categories' table in the database:
import { ODataCollectionView } from '@grapecity/wijmo.odata';

var categories = new ODataCollectionView(url, 'Categories', {
    fields: ['CategoryID', 'CategoryName', 'Description']
});
Gets or sets a callback used to determine if an item is suitable for inclusion in the view.
The callback function should return true if the item passed in as a parameter should be included in the view.
NOTE: If the filter function needs a scope (i.e. a meaningful 'this' value) remember to set the filter using the 'bind' function to specify the 'this' object. For example:
collectionView.filter = this._filter.bind(this);
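When no scope is needed, a standalone predicate can be assigned directly. The sketch below assumes the items have a numeric UnitPrice field; the field name and threshold are illustrative assumptions:

```javascript
// Hypothetical predicate: include only items whose UnitPrice
// (an assumed property name) exceeds a threshold.
var minPrice = 20;
function priceFilter(item) {
    return item.UnitPrice > minPrice;
}
// view.filter = priceFilter; // assign to an existing view
```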
Gets or sets a string containing an OData filter specification to be used for filtering the data on the server.
The filter definition syntax is described in the OData documentation.
For example, the code below causes the server to return records where the 'CompanyName' field starts with 'A' and ends with 'S':
view.filterDefinition = "startswith(CompanyName, 'A') and endswith(CompanyName, 'S')";
Filter definitions can be generated automatically. For example, the FlexGridFilter component detects whether its data source is an ODataCollectionView and automatically updates both the ODataCollectionView.filter and ODataCollectionView.filterDefinition properties.
Note that the ODataCollectionView.filterDefinition property is applied even if the ODataCollectionView.filterOnServer property is set to false. This allows you to apply server and client filters to the same collection, which can be useful in many scenarios.
For example, the code below uses the ODataCollectionView.filterDefinition property to filter on the server and the ODataCollectionView.filter property to further filter on the client. The collection will show items with names that start with 'C' and have unit prices greater than 20:
var url = '';
var data = new wijmo.odata.ODataCollectionView(url, 'Products', {
    oDataVersion: 4,
    filterDefinition: 'startswith(ProductName, \'C\')', // server filter
    filterOnServer: false, // client filter
    filter: function (product) {
        return product.UnitPrice > 20;
    }
});
Gets or sets a value that determines whether filtering should be performed on the server or on the client.
Use the filter property to perform filtering on the client, and use the filterDefinition property to perform filtering on the server.
In some cases it may be desirable to apply independent filters on the client and on the server.
You can achieve this by setting (1) the filterOnServer property to false and the filter property to a filter function (to enable client-side filtering) and (2) the filterDefinition property to a filter string (to enable server-side filtering).
The default value for this property is true.
Gets or sets a callback that determines whether a specific property of an item contains validation errors.
If provided, the callback should take two parameters containing the item and the property to validate, and should return a string describing the error (or null if there are no errors).
For example:
import { CollectionView } from '@grapecity/wijmo';

var view = new CollectionView(data, {
    getError: function (item, property) {
        switch (property) {
            case 'country':
                return countries.indexOf(item.country) < 0
                    ? 'Invalid Country'
                    : null;
            case 'downloads':
            case 'sales':
            case 'expenses':
                return item[property] < 0
                    ? 'Cannot be negative!'
                    : null;
            case 'active':
                return item.active && item.country.match(/US|UK/)
                    ? 'No active items allowed in the US or UK!'
                    : null;
        }
        return null;
    }
});
Gets a collection of GroupDescription objects that describe how the items in the collection are grouped in the view.
Gets an array of CollectionViewGroup objects that represents the top-level groups.
Gets or sets a value that determines whether fields that contain strings that look like standard date representations should be converted to dates automatically.
This property is set to true by default, because the ODataCollectionView class uses JSON and that format does not support Date objects.
This property has no effect if specific type information is provided using the dataTypes property.
Gets a value that indicates whether an add transaction is in progress.
Gets a value that indicates whether an edit transaction is in progress.
Gets a value that indicates whether this view contains no items.
Gets a value that indicates the ODataCollectionView is currently loading data.
This property can be used to provide progress indicators.
Gets a value that indicates whether the page index is changing.
Gets a value that indicates whether notifications are currently suspended (see beginUpdate and endUpdate).
Gets the total number of items in the view taking paging into account.
Gets items in the view.
Gets an ObservableArray containing the records that were added to the collection since trackChanges was enabled.
Gets an ObservableArray containing the records that were edited in the collection since trackChanges was enabled.
Gets an ObservableArray containing the records that were removed from the collection since trackChanges was enabled.
Gets or sets a custom reviver function to use when parsing JSON values returned from the server.
If provided, the function must take two parameters (key and value), and must return the parsed value (which can be the same as the original value).
For details about reviver functions, please refer to the documentation for the JSON.parse method.
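For instance, a reviver can convert date strings into Date objects as the response is parsed. This is a sketch; the regex below is a simplified assumption about the server's date format:

```javascript
// Sketch of a reviver that turns ISO-like date strings into Date
// objects; any other value is returned unchanged.
var isoDate = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/;
function dateReviver(key, value) {
    return (typeof value === 'string' && isoDate.test(value))
        ? new Date(value)
        : value;
}
// view.jsonReviver = dateReviver;
```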
Gets or sets an array containing the names of the key fields.
Key fields are required for update operations (add/remove/delete).
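For example (the table and key names below are assumptions about the service's schema):

```javascript
// Without the key fields, the view cannot identify individual
// records when sending update operations to the service.
var products = new wijmo.odata.ODataCollectionView(url, 'Products', {
    keys: ['ProductID']
});
```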
Gets or sets a function that creates new items for the collection.
If the creator function is not supplied, the CollectionView will try to create an uninitialized item of the appropriate type.
If the creator function is supplied, it should be a function that takes no parameters and returns an initialized object of the proper type for the collection.
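A sketch of such a creator function; the field names are assumptions about the entity's shape:

```javascript
// Returns a fully-initialized item each time the view adds a new one.
function createOrder() {
    return {
        OrderDate: new Date(), // default the date to "now"
        Freight: 0,
        ShipCountry: ''
    };
}
// view.newItemCreator = createOrder;
```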
Gets or sets the OData version used by the server.
There are currently four versions of OData services, 1.0 through 4.0. Version 4.0 is used by the latest services, but there are many legacy services still in operation.
If you know what version of OData your service implements, set the oDataVersion property to the appropriate value (1 through 4) when creating the ODataCollectionView (see example below).
var url = '';
var categories = new wijmo.odata.ODataCollectionView(url, 'Categories', {
    oDataVersion: 1.0, // legacy OData source
    fields: ['CategoryID', 'CategoryName', 'Description'],
    sortOnServer: false
});
If you do not know what version of OData your service implements (perhaps you are writing an OData explorer application), then do not specify the version. In this case, the ODataCollectionView will get this information from the server. This operation requires an extra request, but only once per service URL, so the overhead is small.
Gets the total number of pages.
Gets the zero-based index of the current page.
Gets or sets the number of items to display on a page.
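The page count exposed by the view follows from the total item count and the page size; a minimal sketch of that relationship:

```javascript
// Mirrors how a paged view derives its page count; when paging is
// disabled (pageSize <= 0), everything fits on a single page.
function pageCount(totalItemCount, pageSize) {
    return pageSize > 0 ? Math.ceil(totalItemCount / pageSize) : 1;
}
```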
Gets or sets an object containing request headers to be used when sending or requesting data.
The most typical use for this property is in scenarios where authentication is required. For example:
import { ODataCollectionView } from '@grapecity/wijmo.odata';

var categories = new ODataCollectionView(serviceUrl, 'Categories', {
    fields: ['Category_ID', 'Category_Name'],
    requestHeaders: { Authorization: db.token }
});
Gets or sets a value that determines whether dates should be adjusted to look like GMT rather than local dates.
Gets or sets a function used to compare values when sorting.
If provided, the sort comparer function should take as parameters two values of any type, and should return -1, 0, or +1 to indicate whether the first value is smaller than, equal to, or greater than the second. If the sort comparer returns null, the standard built-in comparer is used.
This sortComparer property allows you to use custom comparison algorithms that in some cases result in sorting sequences that are more consistent with user's expectations than plain string comparisons.
For example, see Dave Koele's Alphanum algorithm. It breaks up strings into chunks composed of strings or numbers, then sorts number chunks in value order and string chunks in ASCII order. Dave calls the result a "natural sorting order".
The example below shows a typical use for the sortComparer property:
// create a CollectionView with a custom sort comparer
var dataCustomSort = new wijmo.collections.CollectionView(data, {
    sortComparer: function (a, b) {
        return wijmo.isString(a) && wijmo.isString(b)
            ? alphanum(a, b) // use custom comparer for strings
            : null; // use default comparer for everything else
    }
});
The example below shows how you can use an Intl.Collator to control the sort order:
// create a CollectionView that uses an Intl.Collator to sort
var collator = window.Intl ? new Intl.Collator() : null;
var dataCollator = new wijmo.collections.CollectionView(data, {
    sortComparer: function (a, b) {
        return wijmo.isString(a) && wijmo.isString(b) && collator
            ? collator.compare(a, b) // use collator for strings
            : null; // use default comparer for everything else
    }
});
Gets or sets a function used to convert values when sorting.
If provided, the function should take as parameters a SortDescription, a data item, and a value to convert, and should return the converted value.
This property provides a way to customize sorting. For example, the FlexGrid control uses it to sort mapped columns by display value instead of by raw value.
For example, the code below causes a CollectionView to sort the 'country' property, which contains country code integers, using the corresponding country names:
var countries = 'US,Germany,UK,Japan,Italy,Greece'.split(',');
collectionView.sortConverter = function (sd, item, value) {
    if (sd.property == 'country') {
        value = countries[value]; // convert country id into name
    }
    return value;
};
Gets an array of SortDescription objects that describe how the items in the collection are sorted in the view.
Gets or sets a value that determines whether null values should appear first or last when the collection is sorted (regardless of sort direction).
This property is set to false by default, which causes null values to appear last on the sorted collection. This is also the default behavior in Excel.
Gets or sets a value that determines whether sort operations should be performed on the server or on the client.
Use the sortDescriptions property to specify how the data should be sorted.
The default value for this property is true.
Gets or sets the underlying (unfiltered and unsorted) collection.
Gets the name of the table (entity) that this collection is bound to.
Gets the total number of items in the view before paging is applied.
Gets or sets a value that determines whether the control should track changes to the data.
If trackChanges is set to true, the CollectionView keeps track of changes to the data and exposes them through the itemsAdded, itemsRemoved, and itemsEdited collections.
Tracking changes is useful in situations where you need to update the server after the user has confirmed that the modifications are valid.
After committing or cancelling changes, use the clearChanges method to clear the itemsAdded, itemsRemoved, and itemsEdited collections.
The CollectionView only tracks changes made when the proper CollectionView methods are used (editItem/commitEdit, addNew/commitNew, and remove). Changes made directly to the data are not tracked.
Gets or sets whether to use a stable sort algorithm.
Stable sorting algorithms maintain the relative order of records with equal keys. For example, consider a collection of objects with an "Amount" field. If you sort the collection by "Amount", a stable sort will keep the original order of records with the same Amount value.
This property is set to false by default, which causes the CollectionView to use JavaScript's built-in sort method, which is very fast but not stable. Setting the useStableSort property to true increases sort times by 30% to 50%, which can be significant for large collections.
Creates a new item and adds it to the collection.
This method takes no parameters. It creates a new item, adds it to the collection, and defers refresh operations until the new item is committed using the commitNew method or canceled using the cancelNew method.
The code below shows how the addNew method is typically used:
// create the new item, add it to the collection var newItem = view.addNew(); // initialize the new item newItem.id = getFreshId(); newItem.name = 'New Customer'; // commit the new item so the view can be refreshed view.commitNew();
You can also add new items by pushing them into the sourceCollection and then calling the refresh method. The main advantage of addNew is in user-interactive scenarios (like adding new items in a data grid), because it gives users the ability to cancel the add operation. It also prevents the new item from being sorted or filtered out of view until the add operation is committed.
The item that was added to the collection.
Ends the current edit transaction and, if possible, restores the original value to the item.
Ends the current add transaction and discards the pending new item.
Clears all changes by removing all items in the itemsAdded, itemsRemoved, and itemsEdited collections.
Call this method after committing changes to the server or after refreshing the data from the server.
Override commitEdit to modify the item in the database.
Returns a value indicating whether a given item belongs to this view.
Item to seek.
Executes a function within a beginUpdate/endUpdate block.
The collection will not be refreshed until the function finishes. This method ensures endUpdate is called even if the function throws an exception.
Function to be executed without updates.
Begins an edit transaction of the specified item.
Item to be edited.
Resume refreshes suspended by a call to beginUpdate.
Calculates an aggregate value for the items in this collection.
Type of aggregate to calculate.
Property to aggregate on.
Whether to include only items on the current page.
The aggregate value.
Returns true if the caller queries for a supported interface.
Name of the interface to look for.
Loads or re-loads the data from the OData source.
Sets the specified item to be the current item in the view.
Item that will become current.
Sets the first item in the view as the current item.
Sets the last item in the view as the current item.
Sets the item after the current item in the view as the current item.
Sets the item at the specified index in the view as the current item.
Index of the item that will become current.
Sets the item before the current item in the view as the current item.
Sets the first page as the current page.
True if the page index was changed successfully.
Sets the last page as the current page.
True if the page index was changed successfully.
Moves to the page after the current page.
True if the page index was changed successfully.
Moves to the page at the specified index.
Index of the page to move to.
True if the page index was changed successfully.
Moves to the page before the current page.
True if the page index was changed successfully.
Raises the collectionChanged event.
Contains a description of the change.
Raises the currentChanged event.
Raises the currentChanging event.
CancelEventArgs that contains the event data.
By default, errors throw exceptions and trigger a data refresh. If you want to prevent this behavior, set the RequestErrorEventArgs.cancel parameter to true in the event handler.
RequestErrorEventArgs that contains information about the error.
Raises the pageChanged event.
Raises the pageChanging event.
PageChangingEventArgs that contains the event data.
Raises the sourceCollectionChanged event.
Raises the sourceCollectionChanging event.
CancelEventArgs that contains the event data.
Re-creates the view using the current sort, filter, and group parameters.
Item to be removed from the database.
Removes the item at the specified index from the collection.
Index of the item to be removed from the collection. The index is relative to the view, not to the source collection.
Updates the filter definition based on a known filter provider such as the FlexGridFilter.
Known filter provider, typically an instance of a FlexGridFilter.
Occurs when the collection changes.
Occurs after the current item changes.
Occurs before the current item changes.
Occurs when there is an error reading or writing data.
Occurs when the ODataCollectionView finishes loading data.
Occurs when the ODataCollectionView starts loading data.
Occurs after the page index changes.
Occurs before the page index changes.
Occurs after the value of the sourceCollection property changes.
Occurs before the value of the sourceCollection property changes.
Extends the CollectionView class to support loading and saving data to and from OData sources.
You can use the ODataCollectionView class to load data from OData services and use it as a data source for any Wijmo controls.
In addition to full CRUD support you get all the CollectionView features including sorting, filtering, paging, and grouping. The sorting, filtering, and paging functions may be performed on the server or on the client.
The code below shows how you can instantiate an ODataCollectionView that selects some fields from the data source and provides sorting on the client. Notice how the 'options' parameter is used to pass in initialization data, which is the same approach used when initializing controls:
The example below uses an ODataCollectionView to load data from a NorthWind OData provider service, and shows the result in a FlexGrid control:
Example | https://www.grapecity.com/wijmo/api/classes/wijmo_odata.odatacollectionview.html | CC-MAIN-2019-47 | refinedweb | 3,239 | 58.18 |
Suppose there is a tunnel whose height is 41 and width is very large. We also have a list of boxes with length, width and height. A box can pass through the tunnel if its height is exactly less than tunnel height. We shall have to find amount of volume that are passed through the tunnel. The volume is length * width * height. So we have a number N, an 2D array with N rows and three columns.
So, if the input is like N = 4 boxes = [[9,5,20],[3,7,15],[8,15,41],[6,3,42]], then the output will be 900 and 315 we can pass first two boxes, the volumes are 9 * 5 * 20 = 900 and 3 * 7 * 15 = 315. The height of other boxes are not supported.
To solve this, we will follow these steps −
Define Box data type with length, width and height
Define a function volume(), this will take box,
return box.length * box.width * box.height
Define a function lower(), this will take box,
return true if box.height < 41 otherwise false
From the main method, do the following:,
for initialize i := 0, when i < N, update (increase i by 1), do:
if lower(boxes[i]) is true, then:
display volume(boxes[i])
Let us see the following implementation to get better understanding −
#include <stdio.h> #define N 4 struct Box{ int length, width, height; }; int volume(struct Box box){ return box.length*box.width*box.height; } int lower(struct Box box){ return box.height < 41; } int solve(struct Box boxes[]){ for (int i = 0; i < N; i++) if (lower(boxes[i])) printf("%d\n", volume(boxes[i])); } int main(){ struct Box boxes[N] = {{9,5,20},{3,7,15},{8,15,41},{6,3,42}}; solve(boxes); }
4, {{9,5,20},{3,7,15},{8,15,41},{6,3,42}}
900 315 | https://www.tutorialspoint.com/c-program-to-find-amount-of-volume-passed-through-a-tunnel | CC-MAIN-2022-21 | refinedweb | 311 | 78.59 |
I'm wondering how to call an external program in such a way that allows the user to continue to interact with my program's UI (built using tkinter, if it matters) while the Python program is running. The program waits for the user to select files to copy, so they should still be able to select and copy files while the external program is running. The external program is Adobe Flash Player.
Perhaps some of the difficulty is due to the fact that I have a threaded "worker" class? It updates the progress bar while it does the copying. I would like the progress bars to update even if the Flash Player is open.
I tried the
subprocess module. The program runs, however it prevents the user from using the UI until the Flash Player is closed. Also, the copying still seems to occur in the background, it's just that the progress bar does not update until the Flash Player is closed.
def run_clip(): flash_filepath = "C:\\path\\to\\file.exe" # halts UI until flash player is closed... subprocess.call([flash_filepath])
Next, I tried using the
concurrent.futures module (I was using Python 3 anyway). Since I'm still using
subprocess to call the application, it's not surprising that this code behaves exactly like the above example.
def run_clip(): with futures.ProcessPoolExecutor() as executor: flash_filepath = "C:\\path\\to\\file.exe" executor.submit(subprocess.call(animate_filepath))
Does the problem lie in using
subprocess? If so, is there a better way to call the external program? Thanks in advance.
You just need to keep reading about the subprocess module, specifically about
Popen.
To run a background process concurrently, you need to use
subprocess.Popen:
import subprocess child = subprocess.Popen([flash_filepath]) # At this point, the child process runs concurrently with the current process # Do other stuff # And later on, when you need the subprocess to finish or whatever result = child.wait()
You can also interact with the subprocess' input and output streams via members of the
Popen-object (in this case
child).
Similar Questions | http://ebanshi.cc/questions/2148317/run-external-program-concurrently-in-python | CC-MAIN-2017-22 | refinedweb | 343 | 66.54 |
ECE497 Project: Multiple Partitions via U-boot
Team members: Michael J. Yuhas, Jack Ma
Contents
- 1 Executive Summary
- 2 Installation Instructions
- 3 User Instructions
- 4 Highlights
- 5 Theory of Operation
- 6 Work Breakdown
- 7 Conclusions Usage: partitioning.sh <drive>
Now plug the mmc card into your host machine and determine which name the host computer assigns to it.
host$ dmesg | tail
There should be a few lines talking about mounting a disk and giving it the name sdx where x is any letter for the drive. Now that we know the name of the disk, we can run our script:
host$ sudo ./partitioning.sh /dev/sdx
Your drive now has four partitions on it, three of which can serve as a root file system for linux. The first partition is FAT32 and will serve as the boot sector where U-boot lives. Now, copy your MLO, u-boot.bin, u-boot.img, and uimage to the boot sector of your mmc card. Also, obtain a copy of the root file system for Angstrom. Due to the the size limitations of uploads, we cannot provide this, but if you follow the instruction here to build a normal Angstrom MMC card and then sudo copy all the files in Narcissus-rootfs to your computer, you will have a bootable root file system.
I Have Multiple Partitions, Now What?
Now put the sd card in your beagle board and fire it up. Be sure to plug a cable from the serial port on the beagle board to some kind of connection on the computer. Instructions to do this can be found here.
Once byobu is opened, and you have U-boot running, enter the following command:
U-boot # jack
The command with no parameters should display a list of partitions on the mmc card. Now choose some partition x and boot.
U-boot # jack x
The Beagle Board should now boot to whichever file system you have selected. section will describe how we accomplished our project and detail some of the challenges we faced while we worked.
Partition Script
Initially we tried to manually create a multi-partitioned sd card from which to boot, however, we soon ran into many problems. At first, we manually partitioned the disk into a 128MB boot sector and divided the rest of the space evenly between two ext2 partitions. We then copied files from a working sd card to populate the partitions. The initial problem was that we performed did not perform a sudo copy and that some files were left behind, but when we tried this strategy on previously unformatted cards, we discovered that there was something else wrong with this method.
Before we could continue, we had to learn little bit about how static memory is addressed in a hard disk or any other form of longterm storage. It turns out that data is still allocated into sectors, cylinders and heads, just like in the early days of computing. An i depth description can be found [ here]. We learned that for a FAT32 file system, there are 512 bytes in each sector, and 63 sectors in each head. There is a maximum number of 255 permitted heads for one device. This means that to deal with larger file systems, the number of cylinders must change. We played around with the command line tool fdisk to get an idea of partition a sd card with this format. Because the process was fairly complex, we decided to write a script to do our job for us.
Rather than reinventing the wheel, we found a website who had already written a script to partition an sd card for a single partition of Angstrom Linux. We used this script as the basis for our creation which will create four partitions on a single sd card. The first partition is used for boot, meaning the remaining three partitions can be used as the root file system in an OS.
U-boot Modifications
Initially, we wanted to prove that we could multi-boot in U-boot just from the command line as a proof of concept. After creating a multi-partition sd card (see above section), we discovered a method of booting to separate partitions. This involves starting U-boot, and first, listing all the available partitions on the detected mmc device. Next, we had to determine which partitions were bootable and then pass that partition to the ext2load command. Analysis of the ext2load command showed us that this command calls the bootm command, which intern calls the boot command. Each of these commands generates a set of parameters for the next one down the chain allowing the user to go from a high level of abstraction to a very low level of operations. We played with the booting commands and environmental variables and determined the best way to select a boot partition was to follow the following commands:
U-boot # mmc part U-boot # setenv pnum X U-boot # setenv mmcroot /dev/mmcblk0pX rw U-boot # setenv loaduimage ext2load mmc 0:X ${loadaddr} /boot/uImage U-boot # boot
In the above commands, 'X' is an arbitrary ext2 partition number that exists on our mmc card. Basically, we are resetting the location that U-boot chooses to look for the uImage file (kernel), and the partition that the kernel will mount as root.
This process takes long time to perform by hand, so we thought it would be neat to create a user command that would add another layer of abstraction.
After some research, we found instructions on how to create a command in U-boot, but the article was written entirely in Chinese. Luckily we had Jack in our group. The following is a high level description of the process of adding a user defined command to U-boot.
First, you must add your command.c file to the common folder using the naming scheme, cmd_<your command name>.c. You must also include the following headers in your file:
#include <common.h> #include <config.h> //CONFIG_CMD_JACK #include <command.h> //U_BOOT_CMD
The common.h loads libraries native to all U-boot source code. The command.h and config.h load globals that commands will need to be recognized by U-boot. It is important to note that stdio.h and stdlib.h are not supported by the tool chain we will be using.
In addition, we must define our command in the Makefile, the header for the Omap3 Beagle Board, and the config_cmd_all.h that contains a list of all commands.
./common/Makefile:COBJS-$(CONFIG_CMD_JACK) += cmd_jack.o ./include/configs/omap3_beagle.h:#define CONFIG_CMD_JACK ./include/config_cmd_all.h:#define CONFIG_CMD_JACK
We named our command jack and created a cmd_jack.c file in the common directory. In the cmd_jack.c file, we needed to create some functions to execute when the command is called. The do_jack() function is the main function that will be executed every time the command is called. We also needed to add a call to U_BOOT_CMD(jack,0,1,do_jack,"Test Program.\n"); that would run every time somebody ran help on the jack command.
To call other commands from the U-boot, we discovered the run_command("string to run", flag); command to let us access pre-written commands. We also discovered that although stdio.c was not in the toolchain, we could use sprintf to parse variables into strings. Theses commands gave us the tools we needed to take the above command list and automate them under a single unified user command.
Difficulties Using the Git Repository
The installation section describes some steps that must be taken if the git repository that someone pulls is bad. We discovered that a number of .depend files are created every time a user runs make on U-boot, and if this user pushes there directory to git, the .depends will cause problems for users with other machine configurations. To solve this, we discovered an includes file in git that would allow us to include and exclude certain files while do a git push. We believe we have solved this problem, but just in case, the work-around is located above.
Work Breakdown
Conclusions
We found this project very interesting because it gave us an in depth look into the way a boot-loader works. We saw how to add commands, and learned how the boot-loader actually determines what to do when a user enters a command. We also learned about the way memory is addressed in long term storage and learned about how it affects different file system schemes.
We feel that future additions to this project could include the following:
1.) The ability to boot from USB (We tried to look into this, but did not make much progress)
2.) The usage of the user button to select a boot partition at start-up
3.) The ability to boot into different operating systems. (Our code should support this, but we did not have time to test it). | http://elinux.org/index.php?title=ECE497_Project:_Multiple_Partitions_via_U-boot&oldid=100922 | CC-MAIN-2015-48 | refinedweb | 1,498 | 71.34 |
Registered users can ask their own questions, contribute to discussions, and be part of the Community!
Registered users can ask their own questions, contribute to discussions, and be part of the Community!
I want to write to Google Bigquery via the entries fed into webapp created in Dataiku.
Is there some help available with that ?
I don't know which version of DSS you are using. However, it appears that Google Bigquery is natively supported by DSS only in the paid version. If you are using the community edition this feature is not directly available. (That said you might be able to use a Python, or R library to "roll your own".)
--Tom
Tom, might one possibility for users of the community edition be to pull BigQuery data (for example, their Google Analytics dataset) into AWS, and then pull the data from AWS into Dataiku?
Yes, that might be possible.
What I was thinking of was using a python library like the one described here:
With python code somewhat like this.
from google.cloud import bigquery client = bigquery.Client() # Perform a query. QUERY = ( 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` ' 'WHERE state = "TX" ' 'LIMIT 100') query_job = client.query(QUERY) # API request rows = query_job.result() # Waits for query to finish for row in rows: print(row.name)
Here is some further documentation
Hi,
Writing into Bigquery, however, is a complicated topic. You cannot simply write one record after another. BigQuery is an analytical database designed for very-large-scale analytics workloads, not at all for online transaction processing (i.e. modifying records one by one).
The only way to add data to BigQuery is to add said data to a "Google Cloud Storage" kind of dataset and then to sync this GCS dataset to BigQuery, which will use fast load capabilities of BigQuery.
Writing to a GCS dataset is covered by the regular dataset write APIs of Dataiku. | https://community.dataiku.com/t5/Using-Dataiku/Google-Big-Query-in-DataIku/m-p/692/highlight/true | CC-MAIN-2022-33 | refinedweb | 318 | 66.54 |
Opened 4 years ago
Closed 2 years ago
#15745 closed Bug (needsinfo)
Description of DEBUG setting in email is misleading if DEBUG == False.
Description
If DEBUG == False, Django sends emails to those listed in the ADMIN setting with the exact text of the debug message.
At the bottom of this message is the following:
"You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard 500 page."
In the case that the message is displayed in an email, this content is misleading. Of course in order for the email to have been sent, we know that the user has DEBUG = False.
Thus, we need a short branch divorcing these two cases and displaying slightly different content.
Also (and please let me know if the convention is to separate these into two tickets) - there are some grammar issues with the docstrings - many need to be migrated to the present recurring tense / indicative mood if in fact that is going to remain the guideline.
Attachments (1)
Change History (6)
Changed 4 years ago by jMyles
comment:1 Changed 4 years ago by jMyles
- Needs documentation unset
- Needs tests unset
- Owner changed from nobody to jMyles
- Patch needs improvement unset
comment:2 Changed 4 years ago by anonymous
- Resolution set to needsinfo
- Status changed from new to closed
comment:3 Changed 4 years ago by jMyles
Yep - you're right. I was still using 1.3 Beta 1 in that env. My bad!
In any case, I still like the idea of delivering an email to those listed as ADMIN. Even though it's true that knowing the full set of effects of DEBUG is indeed a reasonable expectation, which you point out, I also think that in reality some people find themselves in that tuple without being as familiar with django as we'd hope. If they are talented admins who come from another framework, this is exactly the kind of message that will help them in their transition. I think it's worth keeping.
comment:5 Changed 2 years ago by nagasthya@…
- Resolution needsinfo deleted
- Status changed from closed to reopened
comment:6 Changed 2 years ago by ramiro
- Resolution set to needsinfo
- Status changed from reopened to closed
I'm confused. The problem with the email containing that incorrect "you're seeing this because" note was reported in #15597 and fixed in r15802. The attached patch shows that the the fix is in the version of the source you are using -- that block is bracketed by an {% if not is_email %}/{% endif %} (as are other portions of the template -- what gets sent in html email is not, in fact, exactly what would be sent to the browser). I tested the change then and again now, and when I receive HTML email from 1.3 it does NOT include the "You're seeing this because" paragraph. Are you really seeing this paragraph in HTML email with 1.3? Do you also get the other bits bracketed by {% if not is_email %} in your error emails?
(Also since r15850 html email is not the default it has to be explictly configured...I also checked the non-HTML email and it does not contain anything like that paragraph either.)
I don't believe it is necessary to say something else about why you are receiving this email. Anyone listed as an ADMIN in the settings file ought to already know why they are receiving this email. | https://code.djangoproject.com/ticket/15745 | CC-MAIN-2015-11 | refinedweb | 585 | 66.27 |
1m - Change text to hicolour (bold) mode
4m - " " " Underline (doesn't seem to work)
5m - " " " BLINK!!
8m - " " " Hidden (same colour as bg)
30m - " " " Black
31m - " " " Red
32m - " " " Green
33m - " " " Yellow
34m - " " " Blue
35m - " " " Magenta
36m - " " " Cyan
37m - " " " White
40m - Change Background to Black
41m - " " " Red
42m - " " " Green
43m - " " " Yellow
44m - " " " Blue
45m - " " " Magenta
46m - " " " Cyan
47m - " " " White
7m - Change to Black text on a White bg
0m - Turn off all attributes.
Many terminals and terminal emulators display the bold variant as a different colour, rather than (or in addition to) actually thickening the characters. This provides increased flexibility when, say, customising your shell prompt, or configuring coloured ls output. However, it can also lead to problems on terminals that don't exhibit this behaviour if you are accustomed to having 16 distinct colours to work with.
The following is a quick C program I wrote that showcases the 8 ANSI
colours and their bold variants. It is useful for quickly seeing what colours your terminal provides, and how, if at all, the bold variants differ from the regular colours.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main (void) {
int c, p;
char * colours;
colours = malloc (64);
memset (colours, 0, 64);
strcpy (&colours[0*8], "black");
strcpy (&colours[1*8], "red");
strcpy (&colours[2*8], "green");
strcpy (&colours[3*8], "yellow");
strcpy (&colours[4*8], "blue");
strcpy (&colours[5*8], "magenta");
strcpy (&colours[6*8], "cyan");
strcpy (&colours[7*8], "white");
for (c=30; c<=37; c++) {
p = (c-30)*8;
printf ("\033[%im%s, \033[1mbold %s\033[0m\n",
c, &colours[p], &colours[p]);
}
free (colours);
exit (0);
}
Or in Perl:
#!/usr/bin/perl -w
use strict;
my @colours = ("black", "red", "green", "yellow", "blue", "magenta", "cyan",
"white");
do {
my $n = 37 - $#colours;
my $c = shift @colours;
print "\033[${n}m$c, \033[1mbold $c\033[0m\n";
} while (@colours);
Log in or registerto write something here or to contact authors.
Need help? accounthelp@everything2.com | http://everything2.com/title/ANSI+color+codes | CC-MAIN-2016-36 | refinedweb | 329 | 62.11 |
I have had a longstanding itch - I finally scratched it. This small utility is the result, and I'm posting it here for a couple of reasons. First, maybe you have the same itch, and this is a solution. And second, while writing it, I used several simple utility classes I keep in my toolbox and you might find them useful too.
What is this nagging itch? It is the darn password edit controls that show asterisks or circles instead of the password you're typing. Oh yes, this is a "security" feature. Some bad guy might be looking over your shoulder to steal your passwords. Except, that of the 4 computers I use on a daily basis, nobody is ever, I mean, ever looking over my shoulder. They're on my desk in my home! But, I use strong passwords with punctuation marks and digits and all that good stuff—sometimes I can't really tell if I typed the password correctly.
So, this simple utility lives in the tray - oops, I mean the Taskbar Notification Area, and whenever I have a password field blocking my view, I can simply click on its icon and the password field will be changed to a normal text field. Problem solved!
This is really a quite vanilla old-style Win32 program. Still, I have used a few utility classes you might find useful yourself. I have a class that encapsulates sending a message (buffer of bytes) from one Windows program to another via WM_COPYDATA, a general message pump, and a nice little work item dispatcher suitable for running work items in your message loop's idle time. All of them are small and simple, and you can read the code in a few minutes to figure out what they do and if they're suitable for your needs. If you find one useful, go ahead and use it!
The PasswordUnhider uses two different techniques to unhide (unmask) password fields.
First, some password fields are standard edit controls in an ordinary Win32 program that have a password mask character set—either the control was created with the ES_PASSWORD style, or it was sent an EM_SETPASSWORDCHAR message with a non-zero password character. The utility searches for these edit controls and sends them an EM_SETPASSWORDCHAR message with a zero password character.
// Set the password character on an edit control - which is in another process
bool SetPasswordChar(HWND hWnd, wchar_t c)
{
// Try SendMessage first
::SendMessage(hWnd, EM_SETPASSWORDCHAR, c, 0);
DWORD err = ::GetLastError();
if (err != 0)
{
// PostMessage can succeed where SendMessage failed.
// (I'm not sure what the security model is that explains this.)
BOOL b = ::PostMessage(hWnd, EM_SETPASSWORDCHAR, c, 0);
if (!b)
{
err = ::GetLastError();
// Well, it didn't work
return false;
}
}
::InvalidateRect(hWnd, NULL, TRUE);
return true;
}
To find these controls, the utility enumerates the desktop windows (::EnumDesktopWindows API), and then for each desktop window which is an application, it enumerates all child windows looking for password controls (using the ::EnumChildWindows API).
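The two enumeration passes can be sketched roughly as follows. Note that the callback wiring here is my own reconstruction, not the author's actual code; IsPasswordEditControl and SetPasswordChar are the helper functions shown in this article.

#include <windows.h>

bool IsPasswordEditControl(HWND hWnd);          // shown below
bool SetPasswordChar(HWND hWnd, wchar_t c);     // shown above

static BOOL CALLBACK UnhideChildProc(HWND hWnd, LPARAM)
{
    if (IsPasswordEditControl(hWnd))
        SetPasswordChar(hWnd, 0);   // zero password char => plain text
    return TRUE;                    // TRUE = keep enumerating
}

static BOOL CALLBACK UnhideTopLevelProc(HWND hWnd, LPARAM lParam)
{
    // Walk every child of each top-level desktop window
    ::EnumChildWindows(hWnd, UnhideChildProc, lParam);
    return TRUE;
}

void UnhideAllPasswordEdits()
{
    ::EnumDesktopWindows(NULL, UnhideTopLevelProc, 0);
}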
To determine if a window is a password control—that is, an edit control with a password hiding character set—the utility sends the window an EM_GETPASSWORDCHAR message and sees what kind of answer it gets. Only edit controls—or things that act like edit controls—will answer this message (with maybe some exceptions). I can't look at the class name or window procedure because the application may be using edit controls that are superclassed or subclassed but still behave like password fields.
// Determine if a window is an edit control of some kind
bool IsPasswordEditControl(HWND hWnd)
{
    if (!::IsWindow(hWnd))
        return false;
    // Make sure the window class isn't on our list of non-edit controls
    // (that answer message EM_GETPASSWORDCHAR)
    wchar_t szClass[128];
    int n = ::GetClassName(hWnd, szClass, ARRAYSIZE(szClass));
    for (unsigned int i = 0; i < s_notEditControlClasses.size(); i++)
        if (s_notEditControlClasses[i] == szClass)
            return false;
    // Check for edit control by checking to see if it understands
    // EM_GETPASSWORDCHAR (and if the password char is set!)
    DWORD_PTR msgResult = 0;   // SendMessageTimeout wants a DWORD_PTR
    LRESULT r = ::SendMessageTimeout(hWnd, EM_GETPASSWORDCHAR, 0, 0,
        SMTO_ABORTIFHUNG | SMTO_NORMAL, MSG_TIMEOUT, &msgResult);
    return (r != 0) && (msgResult != 0);
}
(I did find that a window with the class name "Acrobat IEHelper Object" returns a non-zero answer for EM_GETPASSWORDCHAR—the exception mentioned above—so I also use a little array of exceptional class names to filter out these false positives. Right now, "Acrobat IEHelper Object" is the only class name in that array.)
Second, some password fields are INPUT elements in web pages. In IE, these are not implemented with Win32 edit controls, so a different technique is used.
First, I find all of the browser (IE) windows by using the ShellWindows COM object, which is a collection of all the browser windows. From each of those windows (represented as a COM object, not a window handle), the HTML document can be retrieved, and from each HTML document, the collection of all INPUT elements can be retrieved. Then, each INPUT element can be checked to see if it has an attribute "type" with the value "password".
Once a password type INPUT element is found, it is necessary to change the type to "text". Unfortunately, for security reasons, you can't just change the attribute value. Instead, the utility gets the outer HTML for the INPUT element, does a text substitution of "type=text" for "type=password", creates a new INPUT element that is the same as the original except for this attribute, and patches it into the tree in place of the original INPUT element.
// Begin the replacement by getting the outer HTML.
if (SUCCEEDED(hr))
{
hr = pElement->get_outerHTML(&bstrOuterHTML);
}
// The string "type=password" ought to be in there somewhere ...
if (SUCCEEDED(hr))
{
outer = bstrOuterHTML;
wstring outerLower = MakeLower(outer);
iFind = Find(outerLower, wstring(L"type=password"));
hr = BOOL_TO_HR(iFind >= 0);
}
// ... so remove it.
if (SUCCEEDED(hr))
{
  outer.erase(iFind, wcslen(L"type=password"));
outer.insert(iFind, L"type=text");
bstrNewElement = CComBSTR(outer.c_str());
hr = PTR_NON_NULL(pDoc2 = pDoc3_);
}
// Create a new HTML element from the modified outer HTML
if (SUCCEEDED(hr))
{
hr = SUCCEEDED_AND_PTR_NON_NULL(pDoc2->createElement(bstrNewElement,
&pNewElement), pNewElement);
}
// And now hook up our new HTML (INPUT) element replacing the original
// INPUT element.
if (SUCCEEDED(hr))
{
hr = SUCCEEDED_AND_PTR_NON_NULL(pElement->get_parentElement(&pParent), pParent);
}
hr = SUCCEEDED_AND_PTR_NON_NULL(hr, pParentDOM = pParent);
hr = SUCCEEDED_AND_PTR_NON_NULL(hr, pElementDOM = pElement);
hr = SUCCEEDED_AND_PTR_NON_NULL(hr, pNewElementDOM = pNewElement);
if (SUCCEEDED(hr))
{
hr = pParentDOM->replaceChild(pNewElementDOM, pElementDOM, &pReplaceDOM);
}
(Unfortunately, when you do this, whatever text has been typed into that input element is lost. So: click the icon before typing your password into an HTML field.)
The original program did everything in one process: it ran the tray icon and, when commanded, searched for and changed password fields. I knew, even before I wrote the program, that it would take some time to search all the windows on the desktop looking for password fields, and also some time to search all browser windows looking for INPUT elements. So, I wanted the user to have some feedback while this search was taking place. I wanted to blink the tray icon while this process was happening.
I didn't want to mess with multiple threads for such a simple utility, so I found my trusty idle-time work-item dispatcher and used it to split the password unhiding process into a lot of little work items. Looking at each window would be its own work item, as would looking at each INPUT element. In this way, I thought I could run the work items during the application's idle time, and the system tray class would then be able to nicely and smoothly animate the tray icon via Windows timer messages.
But, it turned out the icon animation wasn't smooth at all. Some of the operations took quite a bit of time, for example, getting the ShellWindows collection, and then processing each INPUT element. It was understandable, because those operations were manipulating the web page's DOM cross-process.
So, I restructured the program so it runs in two processes now. The original process that the user starts puts up the tray icon and manages it, and that's it. When the user clicks to unhide password fields, the program launches a copy of itself in a new process, with command line parameters to tell it what to do, and to tell it how to communicate back to the original process.
The second process sends messages back to the original process for each password field that it unmasks. That way, the original process can put up notification balloons as feedback to the user.
(The work item stuff is all still there, even though the second process doesn't really need it, because it has no UI that it needs to keep responsive. It was working, and there was no reason to rip it out.)
(By the way, I know that it is a waste of resources to keep a process running a system tray icon, consuming a couple of megabytes of private data, and so on, but I don't care. I want those password fields unmasked on demand, and I'm willing to pay the price. It's not like this is some random utility some device manufacturer (mouse, or trackball, or video card) installed on my machine and decided to keep running all the time just in case some day I wanted to change the resolution on my screen—which I never do.)
CMessagePump is a class that implements an application's main message pump. It runs until WM_QUIT is received.
It is parameterizable: you can add accelerator tables, and you can hook an idle-work test and an idle-work function into its idle time.
Here is an example showing how to create a message pump, parameterize it, and then set it into operation:
// Create the message pump
auto_ptr<CMessagePump> s_msgPump(new CMessagePump(hInstance));
// Set up accelerator table, and hook up
// the work item dispatcher to run in the idle time
s_msgPump->AddAccelerators(::LoadAccelerators(hInstance, MAKEINTRESOURCE(IDC_PASSWORD)))
.SetIdleWorkTest(pdispatcher, &CWorkItemDispatcher::HaveWork)
.SetIdleWorkFunction(pdispatcher, &CWorkItemDispatcher::DoWork);
// Pump messages, don't return until WM_QUIT
ret = s_msgPump->DoMessagePump();
CWorkItemDispatcher implements a queue of work items, executing one when requested.
Before getting into how to use it, one question is surely in your mind, so I'll answer it first: Eh, why the heck does the world need another work item dispatcher?
Well, it doesn't really. But, nevertheless, I like this one. It is very simple, so there is very little chance it can get in your way and misbehave. It is very easy to use, so work items can be built as classes if you need state, or you can just use any object you have lying around by implementing a suitable 0- or 1-parameter method on it, or you can use anything you can stuff into a std::tr1::function—which gives you a lot of choice, and maximizes the chance that you don't have to go out of your way to create work items. Yet, simple as it is, it does implement priorities, and it also implements delayed work items that fire at a future time.
And, it integrates smoothly with CMessagePump (or with any other message pump you have) to make a smoothly running Win32 application.
So, how to use it? First, let's talk about work items. There is an abstract class, IWorkItem, from which all work items are derived.
class IWorkItem
{
public:
IWorkItem() {}
virtual ~IWorkItem() {}
// This is where you actually do the work of your work item.
// The dispatcher is provided so that you can enqueue further
// work items. You can even enqueue yourself again.
virtual void DoWork(CWorkItemDispatcher* dispatcher) = 0;
};
Well, that's pretty standard. You derive from IWorkItem, implement DoWork() (and a constructor and destructor if you need to), and you're ready to go.
Alternatively, you can just provide a function to run. A class CWorkItemFactory provides a bunch of static functions that create work items from your function or function-like thing.
class CWorkItemFactory
{
public:
// Work functions that take no parameters
static std::auto_ptr<IWorkItem> New(void (*)());
static std::auto_ptr<IWorkItem> New(std::tr1::function<void()>);
template<class T> static std::auto_ptr<IWorkItem> New(T&, void(T::*)());
// Work functions that take 1 parameter (typically, some kind of cookie)
template<class TP> static std::auto_ptr<IWorkItem> New(void (*)(TP), TP);
template<class TP> static std::auto_ptr<IWorkItem> New(std::tr1::function<void(TP)>, TP);
template<class T, class TP> static std::auto_ptr<IWorkItem> New(T&, void(T::*)(TP), TP);
...
The six New functions come in two flavors: those which take a 0-parameter function, and those which take a 1-parameter function, and a parameter object to supply to it (when the work item is run).
Each of the two flavors has three varieties: one which takes a simple function, one which takes a std::tr1::function, and one which takes an object and a method on that object.
So that covers a lot of possibilities on how to easily create a work item from some function or method you have lying around.
Enqueuing a work item is very easy. The dispatcher provides an AddWork() method which takes a work item and a priority. When you enqueue the work item, the dispatcher takes over ownership of it. There is also an AddDelayedWork() method which takes, in addition, a delay in milliseconds. The work item will be held in a separate queue until (at least) that many milliseconds have elapsed, and then it will be transferred to the regular queue (where it will be executed in priority order).
Finally, how is the work item dispatcher created and used? A factory method, CWorkItemDispatcher::StandardWorkItemDispatcherFactory(), will return a dispatcher. The dispatcher has two useful methods for running work items:
// Returns true if there are any work items in the queue, or if
// there are any delayed work items even if their timers haven't
// expired.
virtual bool HaveWork() = 0;
// Performs (at most) one unit of work (and moves delayed work items
// to the work queue if they're ready to go)
virtual void DoWork() = 0;
These simple methods are just right for hooking up to a message pump, or in the main loop of a thread.
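To make the shape concrete, here is a tiny, self-contained model of the same idea — a priority queue of function objects drained one unit at a time. This is my own illustration (modern C++, std::function instead of std::tr1::function), not the CWorkItemDispatcher source:

```cpp
#include <functional>
#include <queue>
#include <vector>

// Lower priority value = more urgent; FIFO within equal priorities.
struct Item {
    int priority;
    long seq;
    std::function<void()> work;
};

struct LaterThan {
    bool operator()(const Item& a, const Item& b) const {
        if (a.priority != b.priority) return a.priority > b.priority;
        return a.seq > b.seq;
    }
};

class MiniDispatcher {
public:
    void AddWork(std::function<void()> f, int priority) {
        q_.push(Item{priority, seq_++, std::move(f)});
    }
    bool HaveWork() const { return !q_.empty(); }
    void DoWork() {                        // performs at most one unit of work
        if (q_.empty()) return;
        std::function<void()> f = q_.top().work;
        q_.pop();
        f();                               // run outside the queue, so the work
    }                                      // item may safely enqueue more work
private:
    std::priority_queue<Item, std::vector<Item>, LaterThan> q_;
    long seq_ = 0;
};
```

A message pump would call HaveWork() during idle time and DoWork() once per idle slice, exactly as in the CMessagePump example above.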
The work item dispatcher can be parameterized in two ways. First, you can choose whether to protect its work item queues with a lock. If you have a single-threaded program, you don't need a lock. If you are going to enqueue work items from different threads, you do. (All work items must be consumed from a single thread; that is, DoWork() should always be called from the same thread.)
The other way to parameterize the work item dispatcher is to control how it can "kick" its controller—typically, the message pump—when it has new work items ready to run. The issue is that the message pump might want to go into a wait state when there is no work for it to do. Then, some other thread might add a work item, or the time might have arrived for a delayed message to run. The message pump must be awakened. You can control what Windows message is used to kick the message pump, and what window or thread the message should be sent to.
CCopyIPC will copy a message—a buffer of bytes—from one process to another, via the Windows WM_COPYDATA facility. Obviously, this means that the receiving process must be running a message loop.
The sender and the receiver both instantiate a CCopyIPC object.
All the sender needs to do is call CCopyIPC::Send with a buffer of bytes, or an object, and the handle of the window to which to send the message. The window handle is passed out-of-band. For the PasswordUnhider, I put it on the command line of the process that is launched to process the password fields.
class CCopyIPC
{
public:
...
// Send a buffer of bytes to the receiver
result_t Send(HWND hWndReceiver, uint nBytes, const char* bytes);
// Send an object to the receiver
template<class T>
result_t Send(HWND hWndReceiver, const T& data);
...
The receiver has a little more work to do. It has to handle the WM_COPYDATA message in the message loop of the target window. CCopyIPC supplies a suitable procedure that can be called with the wParam and lParam when the WM_COPYDATA message is received.
// Message handler for receiver
LRESULT DoWMCopyData(WPARAM wParam, LPARAM lParam);
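For reference, here is roughly what the raw WM_COPYDATA exchange looks like — a sketch of the mechanism CCopyIPC presumably wraps, with my own illustrative function names:

```cpp
// Sender: the buffer must stay valid for the duration of the call,
// which is why WM_COPYDATA must be sent with SendMessage, not PostMessage.
bool SendBytes(HWND hWndSender, HWND hWndReceiver, DWORD nBytes, const char* bytes)
{
    COPYDATASTRUCT cds = {};
    cds.dwData = 0;                 // app-defined identifier
    cds.cbData = nBytes;
    cds.lpData = (PVOID)bytes;
    return ::SendMessage(hWndReceiver, WM_COPYDATA,
                         (WPARAM)hWndSender, (LPARAM)&cds) != 0;
}

// Receiver, inside the target window's message handler:
LRESULT OnCopyData(WPARAM wParam, LPARAM lParam)
{
    HWND hWndSender = (HWND)wParam;
    const COPYDATASTRUCT* p = (const COPYDATASTRUCT*)lParam;
    // p->lpData is only valid until we return - copy the bytes out
    // before handing them to the data callback.
    std::vector<char> data((const char*)p->lpData,
                           (const char*)p->lpData + p->cbData);
    // ... dispatch 'data' (and hWndSender) to the registered callback ...
    return TRUE;                    // TRUE = data was processed
}
```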
The receiver also has to supply a callback function to be called when the data comes in. The nice thing is that CCopyIPC lets you supply the callback function in several forms.
First, the callback can get either a buffer of bytes, or an object. Second, the callback can be either a simple function, a function object, or a method.
...
// Callbacks for received data - as a buffer of bytes
// (the first parameter is the HWND of the sender)
CCopyIPC& SetDataCallback(void (*)(HWND, uint, const char*));
CCopyIPC& SetDataCallback(std::tr1::function<void(HWND, uint, const char*)>);
template<class T>
CCopyIPC& SetDataCallback(T& t, void (T::*)(HWND, uint, const char*));
// Callbacks for received data - as a particular self-contained struct
template<class TD>
CCopyIPC& SetDataCallback(void (*)(HWND, const TD&));
template<class TD>
CCopyIPC& SetDataCallback(std::tr1::function<void(HWND, const TD&)>);
template<class T, class TD>
CCopyIPC& SetDataCallback(T& t, void (T::*)(HWND, const TD&));
...
};
I am currently satisfied with my click-to-unhide approach. Another option would be to continually monitor the system and go ahead and unhide password fields as they are created. It would seem more convenient for the user.
This approach would use Windows hooks for ordinary Win32 applications, for example, the CBT hook. Or a DLL injection approach could be used, for example, with Detours. For IE windows, you could use a BHO.
Unfortunately, solutions like this could create stability problems, and possibly performance issues as well. I decided that, for now, I didn't need the convenience of a 0-click solution.
One issue that came up is that in most cases, when sending EM_SETPASSWORDCHAR to a window (in another process), SendMessage would fail with ERROR_ACCESS_DENIED, but PostMessage would work. I'm not sure why this happens. If anyone could enlighten me in the comments, I would appreciate it.
This is just a simple utility that I wrote to meet my needs. I hope it meets your needs, but if it doesn't, ... I'm sorry.
In particular:
In all cases, I'd be interested to hear about the problem, and really interested to hear how you fixed it!
Those of you wishing to compile the utility yourself, or to use the utility classes I've described: I make use of TR1 facilities like function, mem_fn, bind, and tuple. This means, you need, at a minimum, Visual Studio 2008 SP1. Alternatively, you could use Boost's TR1 implementation (though I haven't tried it).
The system tray icon is managed by the excellent CSystemTray class, written by Chris Maunder and described in his excellent article Adding Icons to the System Tray.
The Lagom Framework
Radically different, but nonetheless easy — that is the dichotomy the new open source microservice framework Lagom is trying to create. What are the features that differentiate it from other frameworks? How easy is it to handle? What does the name actually mean?
The question regarding the meaning of the name is not easy to answer, since the Swedish idiom lagom cannot be translated literally. According to Wikipedia, it means: "enough, sufficient, adequate, just right." In our case, this is not meant as self-praise but as a critical statement about the microservices concept. Instead of focusing on "micro" and stubbornly following a "the less code, the better" mantra, Lagom suggests using the concept of a "Bounded Context" from Domain-Driven Design to find the boundaries of a service. This conceptual proximity of Domain-Driven Design and microservices shows up in several places in the Lagom framework.
Getting started with Lagom
The easiest way to develop an application with Lagom is with the help of a Maven project template.
mvn archetype:generate -DarchetypeGroupId=com.lightbend.lagom \
  -DarchetypeArtifactId=maven-archetype-lagom-java \
  -DarchetypeVersion=1.1.0
After the questions regarding names have been answered and you switch into the newly-created directory, you will find the directory structure as displayed here.
hello-api
hello-impl
integration-tests
pom.xml
stream-api
stream-impl
As it should be for microservices, not one but two services were generated. After all, the interaction and communication between services are at least as important as the implementation of a single one (and frequently the bigger challenge). Here the services are "hello" and "stream"; each is divided into two subprojects ("api" and "impl").
To launch the application, a simple mvn lagom:runAll is all it takes.
After a few downloads, it should be running on port 9000. This can easily be checked with a command-line tool like HTTPie:
HTTP/1.1 200 OK
Content-Type: text/plain

Hello, Lagom!
One particularity: all components needed during development — the project's services, a service registry, an API gateway, and even the Cassandra database (in its embedded version) — are launched through the Maven plug-in. It is not necessary to set up services or a database outside of the project. Lagom stresses the importance of offering the developer an environment that feels interactive – check out the project and get going. This includes the fact that code changes take effect right after a reload, without the need for a build/deploy/restart cycle.
The services API — typesafe and asynchronous
As can be seen from the folder structure, every service is divided into an implementation ("-impl") and an API definition ("-api"). The latter defines the HTTP interface of the service programmatically.
public interface HelloService extends Service {

  ServiceCall<NotUsed, String> hello(String id);

  @Override
  default Descriptor descriptor() {
    return named("hello").withCalls(
        pathCall("/api/hello/:id", this::hello)
    );
  }
}
With the help of a builder, the service description is created, mapping the requested path onto a method call.
This interface is not only the template for the implementation; Lagom also generates an appropriate client library from it. In other Lagom services, this can be injected via dependency injection with Google's Guice. This way, a type-safe interface is provided for calling the respective service. The manual construction of an HTTP request and the direct use of a generic HTTP client can be omitted.
Still, it is not mandatory to use the client library because the framework maps the method calls on HTTP calls, which may also be called directly, especially by non-Lagom services.
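A consumer of the generated client might look like this. This is a hypothetical sketch — everything except the HelloService interface from above is invented for illustration:

```java
import javax.inject.Inject;
import java.util.concurrent.CompletionStage;

public class GreetingConsumer {

    private final HelloService helloService;

    @Inject
    public GreetingConsumer(HelloService helloService) {
        this.helloService = helloService;   // injected client stub
    }

    public CompletionStage<String> greet(String id) {
        // A type-safe call; Lagom maps it to GET /api/hello/:id
        return helloService.hello(id).invoke();
    }
}
```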
By the way, our little "hello" method doesn't deliver the response directly, but a ServiceCall. This is a functional interface; that is to say, we do not create a simple object but a function – the function which shall be executed for the corresponding request. We supply the types as type parameters for the request (since a GET call doesn't submit any data, in this case NotUsed) and the response (in our case a simple String). The processing of the request is always asynchronous – the result of our function must be a CompletionStage. Lagom makes extensive use of Java 8 features. A simple implementation would look like this:
public class HelloServiceImpl implements HelloService {

  @Override
  public ServiceCall<NotUsed, String> hello(String id) {
    return request -> CompletableFuture.completedFuture("Hello, " + id);
  }
}
For a simple GET request, the gain from the service descriptors is limited. It gets more interesting when we want to send events between services asynchronously. We can achieve this in Lagom by choosing different type parameters for the ServiceCall. If our request and response types are defined as Source (a type from the Akka Streams library), the framework will initialize a WebSocket connection. Here the service abstraction can score points, since it simplifies working with WebSockets. For future versions, there are plans to additionally support the "publish/subscribe" pattern, so that messages can be placed on a bus and other services can subscribe to them.
public interface StreamService extends Service {

  ServiceCall<Source<String, NotUsed>, Source<String, NotUsed>> stream();

  @Override
  default Descriptor descriptor() {
    return named("stream").withCalls(namedCall("stream", this::stream));
  }
}
Circuit breaker built-in
Let us assume that our service requests information from another service per HTTP. If that service doesn't respond within the expected timeframe, there will be a timeout. Requests to this server shouldn't be repeated constantly, because that would impose unnecessary waiting time on our application: if it's likely that we won't get a response, why wait for a timeout? Furthermore, requests to the service would pile up. As soon as it becomes available again, it would be bombarded with pending requests to such an extent that it would immediately be brought to its knees again.
A reliable solution for this problem is the circuit breaker pattern [6]. A circuit breaker knows three states:
- As long as everything is running without errors, it is closed
- If a defined limit of errors (timeouts, exceptions) is reached, it will be open for a defined period of time. Additional requests will fail with a “CircuitBreakerException”. For the client there won’t be additional waiting time and the external service won’t even notice the request.
- As soon as the set time period runs out, the circuit breaker will switch into the state “half open”. Now there will be one request passed through. If it is successful, the circuit breaker will be closed- the external system seems to be available again. If it fails, the next round with the state “open” begins.
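The three states can be illustrated with a minimal model. This is not Lagom's actual implementation — its breaker is built in and merely configured — and the thresholds and API here are invented for the example:

```java
import java.util.function.Supplier;

// Minimal circuit breaker: closed -> open after maxFailures errors,
// open -> half-open after openMillis, half-open -> closed on one success.
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int maxFailures;
    private final long openMillis;
    private int failures = 0;
    private long openedAt = 0;
    private State state = State.CLOSED;

    CircuitBreaker(int maxFailures, long openMillis) {
        this.maxFailures = maxFailures;
        this.openMillis = openMillis;
    }

    State state() { return state; }

    <T> T call(Supplier<T> service) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < openMillis)
                throw new IllegalStateException("CircuitBreakerOpen"); // fail fast
            state = State.HALF_OPEN;               // let one probe through
        }
        try {
            T result = service.get();
            failures = 0;
            state = State.CLOSED;                  // success closes the breaker
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= maxFailures) {
                state = State.OPEN;                // trip the breaker
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }
}
```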
Such circuit breakers are already integrated into the Lagom service client. The parameters are adjustable via the configuration file.
Lagom persistence
One aspect which proves that Lagom is very different from other micro frameworks is the integration of a framework for Event Sourcing and CQRS.
For many developers, working with a relational database is still the default, possibly in combination with an ORM tool. This, too, can be implemented in Lagom, but the user is steered in another direction. The standard in Lagom is the use of "Persistent Entities" (corresponding to "Aggregate Roots" in Domain-Driven Design). These Persistent Entities receive messages (commands).
public class HelloEntity extends PersistentEntity<HelloCommand, HelloEvent, HelloState> {

  @Override
  public Behavior initialBehavior(Optional<HelloState> snapshotState) {

    /*
     * The behavior defines how the entity reacts to commands.
     */
    BehaviorBuilder b = newBehaviorBuilder(
        snapshotState.orElse(new HelloState("Hello", LocalDateTime.now().toString())));

    /*
     * Command handler for UseGreetingMessage.
     */
    b.setCommandHandler(UseGreetingMessage.class, (cmd, ctx) ->
        ctx.thenPersist(new GreetingMessageChanged(cmd.message),
            evt -> ctx.reply(Done.getInstance())));

    /*
     * Event handler for GreetingMessageChanged.
     */
    b.setEventHandler(GreetingMessageChanged.class,
        evt -> new HelloState(evt.message, LocalDateTime.now().toString()));

    return b.build();
  }
}
This is best seen directly in the code. Our quite simple entity allows us to change the welcome text for our service. We extend the superclass PersistentEntity, which expects three type parameters: the command type, the event type, and the type of the state. In our case, we define the command as a class UseGreetingMessage, which implements the interface HelloCommand and whose instances are immutable. To save yourself some keystrokes, you can leverage a library such as Immutables for your commands, events and states.
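The message types referenced above could be sketched as plain immutable classes like these (simplified — the real project would generate them with Immutables and add serialization markers):

```java
// Marker interfaces for the entity's protocol.
interface HelloCommand { }
interface HelloEvent { }

// Command: "please change the greeting".
final class UseGreetingMessage implements HelloCommand {
    final String message;
    UseGreetingMessage(String message) { this.message = message; }
}

// Event: "the greeting was changed".
final class GreetingMessageChanged implements HelloEvent {
    final String message;
    GreetingMessageChanged(String message) { this.message = message; }
}

// State: what the entity currently holds in memory.
final class HelloState {
    final String message;
    final String timestamp;
    HelloState(String message, String timestamp) {
        this.message = message;
        this.timestamp = timestamp;
    }
}
```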
The way our entity responds to commands is defined by a behavior. This can change at runtime. This way, entities can implement finite-state machines – the replacement of one behavior with another at runtime corresponds to the machine's transition into another state.
The framework obtains the initial behavior via initialBehavior. To construct it, we make use of the builder pattern.
First, we register a command handler for our command. If a command is valid and demands a change to the entity – for example, setting an attribute to a new value – the change won't occur immediately. Instead, an event is created, persisted and emitted. The event handler of the persistent entity, which we also added to the behavior with the builder, reacts to the event and executes the actual change.
A significant difference to an "update" in a relational database is that the current state of the persistent entity does not necessarily have to be saved. It is merely held in memory (a memory image). If it becomes necessary to restore the state, e.g. after a restart of the application, it is reconstructed by replaying the events. The optional saving of the current state is called a "snapshot" in this model; it does not replace the event history, but merely represents a pre-processing step. If an entity experienced thousands of state changes during its lifetime, there is no need to replay all the events from the very beginning. It is possible to take a shortcut by starting with the latest snapshot and replaying only the subsequent events.
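The replay idea can be boiled down to a fold over the event history. This is my own miniature, with a String standing in for the event type:

```java
import java.util.List;

final class Greeting {
    final String message;
    Greeting(String message) { this.message = message; }

    // The event handler: produces a new state, never mutates the old one.
    Greeting apply(String greetingMessageChanged) {
        return new Greeting(greetingMessageChanged);
    }

    // Restore state by replaying events on top of a snapshot
    // (or the initial state, if no snapshot exists).
    static Greeting replay(Greeting snapshot, List<String> events) {
        Greeting state = snapshot;
        for (String evt : events) {
            state = state.apply(evt);
        }
        return state;
    }
}
```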
The strict specifications Lagom makes for the types and the structure of the behavior are meant to ease developers into this principle, called Event Sourcing. The idea is that I am forced to specify a clear protocol for each entity: which commands can be processed, which events can be triggered, and which values define the state of my class?
Clustering included
The number of Persistent Entities that I can use is not limited by the main memory of a single server. Rather, every Lagom application can be run as a distributed application. When starting an additional instance, I only have to supply the address of an already running instance; it will register there and form a cluster with the existing instances. The Persistent Entities are administered by the framework and distributed automatically within the cluster (cluster sharding). If nodes are added to or removed from the cluster, the framework redistributes the instances. Likewise, it can restore instances that were removed from memory (passivation).
By the way, the built-in ability to keep the application state in memory this way, and to scale it, wasn't originally developed for Lagom. Lagom relies on Akka for this, which has long been used in mission-critical applications – so any concerns regarding the reliability of the young framework are not well-founded.
Separate writing and reading
While it is easy in SQL databases to request any information from the data model, this is impossible with Event Sourcing. We can only access our entity and request its state via the primary key. Since we only have an event log and not a relational data model, queries through secondary indices are impossible.
To enable such queries, the CQRS architecture (Command Query Responsibility Segregation; for further reading: A CQRS Journey) is applied. The basic principle is that different data models are used for reading and writing. In our case, this means that our event log is the write side. It can be used to reconstruct our entities, but we won't perform any queries on it. Instead, we also generate a read side from the events. Lagom already offers a ReadSideProcessor for this: every event that occurs for a class of Persistent Entities is also processed and used to maintain the read side. This side is optimized for reading and doesn't allow direct writing.
This architectural approach does not only offer technical advantages – in many use cases the read and write frequencies are very different, and with this method they can be scaled independently. It also opens up new possibilities. Since the saved events are never deleted, it is possible to add new structures on the read side, so-called projections. These can be filled with the historical events and thus provide information not only about the future but also about the past.
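A projection is, in essence, an event consumer that maintains a query-friendly structure. In miniature (plain Java, in-memory — a real ReadSideProcessor would typically write to Cassandra):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

final class GreetingReadSide {
    private final Map<String, String> latestGreeting = new HashMap<>();

    // Called once for every GreetingMessageChanged event from the write side.
    void onGreetingMessageChanged(String entityId, String newMessage) {
        latestGreeting.put(entityId, newMessage);
    }

    // The query the write side cannot answer: a lookup without replaying events.
    Optional<String> greetingFor(String entityId) {
        return Optional.ofNullable(latestGreeting.get(entityId));
    }
}
```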
CQRS allows the use of different technologies on the read side, adjusted to the use case. It is conceivable – while not yet supported by Lagom – to build an SQL read side and continue using the available tooling, while simultaneously feeding an Elasticsearch database for fast search and sending the events to Spark Streaming for analysis.
It is important to keep in mind that the read side is refreshed asynchronously, with latency ("eventual consistency" between the write and the read side). Strong consistency is only available in this model at the level of a single Persistent Entity.
Finally, it is also possible to use Lagom without Lagom Persistence. It is not mandatory to use Event Sourcing; the development of stateless services, or CRUD applications (create, read, update, delete) with a SQL database in the backend, is also possible. But for anyone interested in Event Sourcing and CQRS and in scalable, distributed systems, Lagom can ease the way in.
Immutable values — Immutables
As mentioned earlier, the individual commands, events and state instances must be immutable. Immutable data structures are an important concept from functional programming, especially in the area of concurrency. Let us assume a method gets passed a list of numbers. The result is a value calculated from the list (say, the median of the numbers). By reasoning about this – or in some cases even through mathematical proof – you may convince yourself that the function is correct and will always deliver the same output for the same input.
But what if the list passed in is, say, an ArrayList – how can we be sure? Only the reference that is passed is fixed. What if another part of the program, executed in parallel, holds the same reference and adds some values to the list? In asynchronous systems based on message passing, it is essential that a command must not be changed after it has been sent. Relying on the developer to be careful would be negligent.
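The list example looks like this in code; with a mutable ArrayList, only a defensive copy makes the result independent of concurrent modification (my own illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class Stats {
    // Pure apart from the defensive copy: the caller's list is never
    // modified, and a snapshot is taken before sorting so another
    // thread cannot change the data under our feet mid-computation.
    static double median(List<Integer> xs) {
        List<Integer> copy = new ArrayList<>(xs);
        Collections.sort(copy);
        int n = copy.size();
        return (n % 2 == 1)
                ? copy.get(n / 2)
                : (copy.get(n / 2 - 1) + copy.get(n / 2)) / 2.0;
    }
}
```

With immutable lists, the copy would be unnecessary — the reference alone would guarantee the data cannot change.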
Lagom uses third-party libraries for this. For commands it bundles Immutables, for collections pCollections. If I add a value to a collection from this library, the original collection remains unchanged and I receive a new instance with the additional value.
Deployment
Microservices are a challenge not just for the developer but also for operations. In many companies, deployment processes are still set up for the installation of .war or .ear files on application servers. But microservices run standalone and are often packed into (Docker) containers and administered by service orchestration tools like Kubernetes or Docker Swarm.
Lagom requires such an environment, too, but it does not depend on a particular container standard (like Docker). It requires the runtime environment to have a registry which can be queried by other services; to make services discoverable, the environment must provide an implementation of the Lagom ServiceLocator API.
Unfortunately, at the moment it is only available for the commercial closed-source product ConductR. The open source community is working on the implementation for Kubernetes and Consul. Alternatively, a ServiceLocator based on static configuration can be used, but this is not recommended for production use.
Conclusion
Lagom follows an interesting path and is a remarkable framework. It’s fundamentally different in its technical base: Everything is asynchronous, it is based on sending commands and persisting is done per Event Sourcing. This brings tremendous advantages for the scalability of services – but for most developers (including everybody from the Java EE area), this means rethinking. With the change of a programming language always comes the fear of a temporary decrease in productivity because developers cannot revert to familiar practices and resources. It is the same in our case.
Lagom tries to prevent this by giving the developer a clear path. If I follow the documented textbook approach for service implementation and persistence in Lagom, I will be able to build a reactive system, completely based on messaging and able to cluster, maybe even without realizing it.
In the relatively new area of microservices, standards are yet to be established, and we will have to see which frameworks stand the test of time. In contrast to the old acquaintances from Java EE and Spring, Lagom breathes new life into this space and puts a whole different architecture on the table. Those who wish to try something new and are interested in scalable distributed systems will find Lagom helpful.
Variables and Scope
Variable Scope
Scope is a characteristic of a variable that defines from which functions that variable may be accessed. There are two primary scopes in C:
- Local Variables can only be accessed within the functions in which they are created.
- Global Variables can be accessed by any function in the program. These are declared outside of all functions.
So far in the CS50 course, we have almost always been working with local variables.
    int main(void)
    {
        int result = triple(5);
    }

    int triple(int x)
    {
        return x * 3;
    }
- `x` is local to the function `triple()`. No other function can refer to that variable, not even `main()`.
- `result` is local to `main()`.
Global variables exist too. If a variable is declared outside of all functions, any function may refer to it.
    #include <stdio.h>

    float global = 0.5050;  // variable is named global for ease of explanation

    int main(void)
    {
        triple();
        printf("%f\n", global);  // global is referred to here inside a function
    }

    void triple(void)
    {
        global *= 3;
    }
Why do local and global distinctions matter?
For the most part, local variables in C are passed by value in function calls.
When a variable is passed by value, the callee (the function receiving the variable) receives a copy of the passed variable, not the variable itself.
That means that the variable in the caller (the function making the function call) is unchanged unless overwritten.
For example, the following has no effect on `foo`:

    int main(void)
    {
        int foo = 4;
        triple(foo);
    }

    int triple(int x)
    {
        return x *= 3;
    }
This, however, changes `foo` by overwriting it:

    int main(void)
    {
        int foo = 4;
        foo = triple(foo);  // the call to triple here overwrites foo after the function call
    }

    int triple(int x)
    {
        return x *= 3;
    }
    #include <stdio.h>

    int increment(int x);

    int main(void)
    {
        int x = 1;                           // x(m) - m is local to main
        int y;
        y = increment(x);                    // x(m)
        printf("x is %i, y is %i\n", x, y);  // x(m)
    }

    int increment(int x)                     // x(i) - i is local to increment
    {
        x++;                                 // x(i)
        return x;                            // x(i)
    }
The above has the variable `x` stored locally in both `int main(void)` and `int increment(int x)`.
The output of the program above would be "x is 1, y is 2". | https://docs.nicklyss.com/c-variable-scope/ | CC-MAIN-2022-40 | refinedweb | 371 | 67.99 |
Question: DEXSeq install problem
Asked 4.4 years ago by bartosovic.marek, who wrote:
Hi,
My server was recently updated to R 3.3 and I have a problem installing and running DEXSeq. Can someone help? Bioconductor installer states I am using bioc, version 3.2. Maybe I am doing some obvious mistake. Thanks !
Here is the Error:
source("")
biocLite("DEXSeq", lib="~/R/x86_64-pc-linux-gnu-library/")

BioC_mirror:
Using Bioconductor 3.2 (BiocInstaller 1.20.3), R 3.3.0 (2016-05-03).
Installing package(s) ‘DEXSeq’
trying URL ''
Content type 'unknown' length 370326 bytes (361 KB)
==================================================
downloaded 361 KB

* installing *source* package ‘DEXSeq’ ...
** R
** inst
** preparing package for lazy loading
Error in makePrototypeFromClassDef(properties, ClassDef, immediate, where) :
  in making the prototype for class “DEXSeqDataSet” elements of the prototype
  failed to match the corresponding slot class: design (class "formula" )
Error : unable to load R code in package ‘DEXSeq’
ERROR: lazy loading failed for package ‘DEXSeq’
* removing ‘/homes2/marek/R/x86_64-pc-linux-gnu-library/DEXSeq’
sessionInfo()
R version 3.3.0 (2016-05-03)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 14

other attached packages:
[1] BiocInstaller_1.20.3

loaded via a namespace (and not attached):
[1] tools_3.3.0
Never mind: updating BiocInstaller to a newer version solved the problem.
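For completeness, the update that fixed it looks roughly like this (the usual pre-BiocManager steps; exact versions on your system will differ):

```r
# Remove the stale installer, then re-source biocLite.R, which installs
# the BiocInstaller release matching the running R version.
remove.packages("BiocInstaller")
source("https://bioconductor.org/biocLite.R")
biocLite("BiocInstaller")  # make sure the installer itself is current
biocLite("DEXSeq")         # retry the DEXSeq install
```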
I am trying to update with Dapper.Contrib this table:
public class MyTable { public int ID { get; set; } public int SomeColumn1 { get; set; } public int SomeColumn2 { get; set; } public int CreateUserID { get; set; } public int UpdateUserID { get; set; } }
I don't want to update the CreateUserID column because it is an update method so that I want to ignore this column while calling the Dapper - Update.Async(entity) method.
I tried using [NotMapped] and [UpdateIgnore] attributes but no help.
Note: I still want this column to be passed on insert operations, therefore, [Computed] and [Write(false)] is not appropriate.
Can someone help me figure out how to ignore this column when updating the table in the database?
As @Evk already mentioned in his answer, there is no solution implemented yet. He have also mentioned the workarounds.
Apart from that, you can choose to use Dapper (
IDbConnection.Execute(...)) directly bypassing Dapper.Contrib for this particular case.
I had a similar problem (with DapperExtensions, though) with a particular column update and other complex queries that DapperExtensions either cannot generate at all or needs much work to produce.
I used Dapper directly instead of DapperExtensions for that particular case; other parts of the project still benefit from DapperExtensions. This is a trade-off, and such cases are very limited. I found this a better solution than tweaking or forcing DapperExtensions to handle it. It also saved me time and effort.
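For that particular case, the direct-Dapper update can look like the following sketch (the repository class and method names are mine, not from any library; the hand-written column list simply never mentions CreateUserID, so that column is left untouched):

```csharp
using System.Data;
using System.Threading.Tasks;
using Dapper;

public static class MyTableRepository
{
    // Bypasses Dapper.Contrib's UpdateAsync: the SQL lists only the
    // columns we actually want to change, so CreateUserID is ignored.
    public static Task<int> UpdateAsync(IDbConnection connection, MyTable entity)
    {
        const string sql = @"
            UPDATE MyTable
            SET SomeColumn1 = @SomeColumn1,
                SomeColumn2 = @SomeColumn2,
                UpdateUserID = @UpdateUserID
            WHERE ID = @ID;";

        // Dapper maps the entity's properties to the @-parameters by name.
        return connection.ExecuteAsync(sql, entity);
    }
}
```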
This article shows you how to exploit the benefits of the Web 2.0 tooling support in IBM® Rational® Application Developer V7.5 for portlet and portal applications running on IBM® WebSphere® Portal V6.1. In a typical portlet application, each request to the server causes a complete refresh of the browser page, which leads to page flickering and performance gaps. Web 2.0 technology enables you to create more dynamic and responsive applications with a superior user experience. The technology aims to turn Web browsers into semi-rich clients by projecting user interface logic for page rendering, navigation, aggregation, and cross-portlet interaction into user's browsers.
The Web 2.0 tooling support provided with Rational Application Developer aids and expedites the process of creating such dynamic and responsive portlet and portal applications. Tooling support has been provided so that you can:
- Design portlets that interact using Client-Side Click-to-Action, which is a new event paradigm introduced in WebSphere Portal V6.1 for cooperation between portlets.
- Insert Person Menu and the capability to extend the Person Menu.
- Use an Ajax proxy for portlet applications.
- Use a client-side programming model to retrieve portlet preferences efficiently, performing portlet state changes on the client side.
This article discusses the above four topics in order. For each, it begins by describing a particular Web 2.0-based technology, followed with simple examples that illustrate the tooling support for the same.
Who should read this: Developers of portlet and portal-based applications.
Goals of this article: Describe how to develop rich, interactive, and responsive portlet and portal-based applications, exploiting the benefits of the tooling support for semantic tagging, Ajax proxy, and the Client-side programming model.
Designing portlets that interact and co-operate using Client-Side Click-to-Action
A click-to-action (C2A) event is one of the ways in which portlets can interact with each other and share information.
Click-to-action (C2A)
Using the C2A delivery method, you can transfer data from a source portlet to one or more target portlets with a simple click. When you click the source element, a pop-up menu displays a list of target actions that match the selected source. When you select a menu item, the corresponding target is invoked, and the source data is transferred to it. After the source data is processed, the target portlet triggers an action and displays the results.
Client-side click-to-action
Client-Side Click-to-Action is the new implementation of the C2A framework introduced in WebSphere Portal V6.1. It is based on Web 2.0 technology, and uses semantic tagging to define sources and targets.
The main purpose of semantic tags is to re-use the normal content of an HTML document, and to annotate it with meta-information that is evaluated during Document Object Model (DOM) parsing.
Semantic tagging and the Live Object Framework
Client-Side Click-to-Action is built on the Live Object Framework (LOF), which defines C2A sources as live objects in the system. The root element of a semantic tag is marked by adding a specific class to the element, which is specified by the live object framework.
The LOF also provides the basic DOM parsing and menu management services.
Advantages over the earlier C2A technique
The new Client-Side Click-to-Action technique introduced in WebSphere Portal has many advantages over the earlier C2A technique.
- The newly introduced event-creation paradigm is available for both IBM and Java™ Specification Request (JSR) portlets, whereas the earlier C2A technique was only available to portlets that adhered to the IBM Portlet API.
- It supports client-side JavaScript C2A target actions in addition to the server-side actions. For example, when you select a menu item on the source portlet and the source data is passed to the corresponding target, the action that is triggered at the target portlet can be a server-side action or a JavaScript action.
- With the new Client-Side Click-to-Action technique, evaluation and execution of C2A sources and targets occurs in a browser. Source and target matching no longer requires processing on the server, and the technique also removes menu generation code from the server. This results in reduced server load.
- The menu markup is generated only when the C2A source menu icon is clicked.
All of these advantages lead to a highly reactive and responsive UI, without server round trips and page flickering. All in all, they produce a superior user experience.
Terminology
Figure 1 illustrates C2A components. Table 1 lists the terminology that will be used in this article for Client-Side Click-to-Action.
Figure 1. Client-side click-to-action
Table 1. Terminology
Rational Application Developer tooling support for Client-Side Click-to-Action
The tooling support for Client-Side Click-to-Action in Rational Application Developer provides capabilities like intuitive wizards, palette drawer and menu bar items, automatic code generation, and potential data type matches. These functions help you create Client-Side Click-to-Action-enabled portlet applications with as much ease as possible.
The tools enable you to create a portlet that can:
- Send data to other portlets (source portlet).
- Receive data from other portlets, and then update its own view accordingly (target portlet).
A portlet can be both a source and target, sending data to one portlet and receiving data from another portlet.
Sample applications
This article discusses two example applications to demonstrate the features of Client-Side Click-to-Action:
- The first application shows you how to send data from a source portlet to a target portlet, and then invoke a simple JavaScript action on the target portlet after it receives the data.
- The second example is a portlet application used by a shipping company to maintain the details of its orders and customers. Five portlets in this application will be discussed here.
- The Orders portlet, which maintains the monthly details of the orders
- The Order Details portlet, which shows the details of an order
- The Account Details portlet, which shows the account details for an order
- The Customer Details portlet, which shows the details of a particular customer
- The Tracking Details portlet, which shows the tracking details of an order
In the shipping application, you will see how the Order Details and Tracking Details portlets can exchange data using Client-Side Click-to-Action. In response to that action, a server-side action can be invoked on the Tracking Details portlet.
If you use the tools provided by Rational Application Developer, the above tasks are simplified to a great extent.
Sample application 1
In this simple application, you can send your name from the source portlet to the target portlet. Upon receiving the value, the target calls a JavaScript that prints "Hello [your name]" on the target portlet.
Figure 2. The Display Myname sample
To achieve the above result, you will execute the following steps:
- Create a new portlet project (C2ASource).
- Create a new portlet (C2ATarget portlet) in the C2ASource portlet project.
- Enable the C2ASource portlet to send data to C2ATarget portlet using Client-Side Click-to-Action.
- Insert a Client-Side Click-to-Action menu header onto the source pop-up menu.
- Enable the C2ATarget portlet to receive data from the C2ASource portlet using a Client-Side Click-to-Action.
- Publish the portlet application on WebSphere Portal.
Create a C2A Source Portlet project
- Select File > New > Project > Portlet Project and click Next. The New Portlet Project wizard launches. Enter the following information, as shown in Figure 3.
- Enter C2aSource as the Project name of the portlet.
- Select WebSphere Portal 6.1 as the Target Runtime.
- Select JSR 168 Portlet (in this example), JSR 286 Portlet, or IBM Portlet as the Portlet API.
- Select Faces portlet (in this example) or Basic portlet as the Portlet type.
- Click Next, and then click Finish.
Figure 3. Specify a name and location for the new portlet project
Create the C2aTarget portlet
Create a new portlet (C2aTarget) in the C2aSource portlet project that you created previously.
- Right-click the C2ASource project in the Project Explorer.
- Select New > Portlet. The New Portlet wizard launches. Enter the following information, as shown in Figure 4.
- Enter
C2ATargetas the Name of the portlet.
- Keep the Portlet type for the C2ATarget portlet the same as for the C2ASource portlet.
- Click Next, and then click Finish.
Figure 4. Select a portlet project for the new portlet
Enable the C2ASource portlet to send data
Enable the C2ASource portlet to send data to the C2ATarget portlet using Client-Side Click-to-Action
- Double-click the C2ASourceView.jsp to open it in the Page Designer.
- Select the Portlet drawer in the Palette view.
- Drag the Client-Side Click-to-Action Output Property menu item from the palette onto the C2ASourcePortletView.jsp, as shown in Figure 5.
Figure 5. Drag palette item to design tab
- Alternatively, select Insert > Portlet > Client-Side Click-to-Action Output Property from the menu bar, as shown in Figure 6.
Figure 6. Select menu command
The Insert Client-Side Click-to-Action Output Property wizard is displayed.
You need to input the following two fields to enable a portlet to send data:
- Data Type URI: Type name describing the format and semantics of the data
- Value: Actual data that is passed to the target operation
To do so, follow these steps, as shown in Figure 7:
- Specify the Data Type URI field.
- Specify your name (or the value that you want to send to the target portlet) in the Value field, and then click Finish.
Figure 7. Describe the type of data the portlet can transfer
- Save the C2ASourceView.jsp.
- To see the inserted code, click the Source tab.
The Client-Side Click-to-Action source object code shown in Listing 1 is auto generated and inserted in the C2ASourceView.jsp.
Listing 1. Source object code
<div class="c2a:source">
    <span class="c2a:typename" style="display: none"></span>
    <span class="c2a:value" style="display: none">Charu</span>
    <span class="c2a:anchor">Anchor Data</span>
</div>
As you can observe in this code, the <div> and <span> tags have been semantically tagged: their class attributes carry special meanings that are evaluated during DOM parsing by the LOF.
The Anchor Data is the default value inserted for the C2A hover UI String (that is, the value that shows up in the browser), indicating a C2ASource. When the portlet application is published on WebSphere Portal, hovering over the Anchor Data shows you the source portlet menu. The source menu has one entry for each target portlet that can consume the value sent by the source portlet.
You can change this value from
Anchor Data to the
value that you want as the C2A hover UI String (preferably your name in this
scenario).
Figure 8 shows the Design view of the C2ASourceView.jsp.
Figure 8. C2aAnchor
Insert menu header
To insert the Client-Side Click-to-Action menu header to the source pop-up menu, follow these steps.
- Double-click the C2ASourceView.jsp to open it in the Page Designer.
- Select the Portlet drawer in the Palette view.
- Drag the Client Side click-to-action Menu Header menu item from the palette onto the C2ASourcePortletView.jsp, as shown in Figure 9.
Figure 9. Drag menu item to Design tab
- Alternatively you can select Insert > Portlet > Client-Side Click-to-Action Menu Header Property from the menu bar.
The Insert Menu Header wizard is displayed.
- Specify Display Myname as the header contents, as shown in Figure 10, and click Finish.
Figure 10. Specify the menu header contents of the source portlet
- Save the C2ASourceView.jsp.
- To see the inserted code, click the Source tab.
The Client-Side Click-to-Action menu header code shown in Listing 2 is auto generated and inserted in C2ASourceView.jsp.
Listing 2. Menu header code
<p class="c2a:display" style="display: none"> Display Myname </p>
The Client-Side Click-to-Action source object code is updated, as shown in Listing 3.
Listing 3. Source object code
<div class="c2a:source"> <span class="c2a:typename" style="display: none"></span> <span class="c2a:value" style="display: none">Charu</span> <span class="c2a:anchor">Charu Malhotra</span> <p class="c2a:display" style="display: none"> Display My name </p> </div>
Enable the C2ATarget portlet to receive data
Enable the C2ATarget portlet to receive data from the C2ASource portlet using Client-Side Click-to-Action.
Design the C2ATarget portlet
(You can ignore the following steps if you have selected Basic portlet as the Portlet type in step 1 of the section "Create a C2A Source Portlet project", because basic portlet already comes with a default form, text field, and submit button.)
- Double-click the C2ATargetView.jsp to open it.
- Switch to the Design view.
- Select the Form Tags drawer in the Palette view.
- Drag the Form object from the palette onto the C2ATargetView.jsp.
- Now, drag the Text Field object from the palette onto this form.
- The Insert Text Field dialog is displayed. Specify C2AInput in the Name field, and click OK.
- Again, drag the Submit button object from the palette onto the previous form and next to the text field that you inserted, as shown in Figure 11.
- The Insert Submit Button dialog is displayed. Specify mysubmit in the Name field and Submit in the Label field, and then click OK.
Figure 11. Design view of C2aTarget port let before enabling it as a C2a target
Enable the C2ATarget port let to receive data
Enable the C2ATarget portlet to receive data from the C2ASource Portlet.
- Select the Portlet drawer in the Palette view.
- Drag the Client-Side Click-to-Action Input Property menu item from the palette onto the Submit button that you created using the previous steps (or the default Submit button for basic portlets), as shown in Figure 12.
Figure 12. Drag the Input Property palette item
- Alternatively, you can select Insert > Portlet > Client-Side Click-to-Action Input Property from the menu bar.
The Insert Client-Side Click-to-Action Input Property wizard is displayed.
Three inputs are required from the user to enable a portlet to receive data
- Data Type URI: Type a name describing the format and semantics of the data. This should exactly match the Data Type URI field value for the C2A source portlet.
- Action Value: The action to be executed when the value sent by the C2A source portlet reaches the target portlet. This can be a server-side action or a JavaScript action.
- Action Label: The label to be displayed as the C2A pop-up menu for this target. This label corresponds to an entry into the C2A source portlet pop-up menu, as shown in Figure 1 previously.
The Data Type URI field shows potential data type matches in the drop-down list, as shown in Figure 13.
- Select from the Data Type URI field drop-down menu, as shown in Figure 13, and then click Next.
Figure 13. Describe the type of data that your portlet can accept
- For this scenario, specify javascript:void(0); in the Action Value field to avoid a round trip to the server. In the next section you will see how a JavaScript action can be invoked on the submit event of the form.
- Specify Send My name to Target in the Action Label field, as shown in Figure 14, and then click Finish.
Figure 14. Specify action details
- Save the C2ATargetView.jsp.
- To see the inserted code, click the Source tab.
The Client-Side Click-to-Action target object code shown in Listing 4 is auto generated and inserted in the C2ATargetView.jsp.
Listing 4. Target object code
<form class="c2a:target" action="javascript:void(0);"> <input type="text" name="c2aInput" size="20" class="c2a:action-param"><br> <input type="submit" name="mysubmit" value="Submit"> <span class="c2a:typename" style="display: none"></span> <span class="c2a:action-label" style="display: none"> Send My name to Target</span> </form>
The c2aInput text field, which has been tagged as c2a:action-param, receives the value sent by the C2A source.
Figure 15 shows the Design view of the C2ATargetView.jsp.
Figure 15. Design view of C2aTarget portlet after enabling it as a C2a target
- Switch to the Source view of C2ATargetView.jsp.
- Add the following code and JavaScript action in the C2ATargetView.jsp above the form tag, as shown in Listing 5.
Listing 5. JavaScript action above the form tag
<b><div id="mydiv"></div></b> <br><br> <script type="text/javascript"> function displayName() { var name = window.ibm.portal.c2a.event.value; var myname=document.getElementById("mydiv"); myname.innerHTML= "Hello " +name +"!!"; myname.value="Hello " +name + "!!"; } </script>
This JavaScript prints "Hello [yourname]" when invoked by the target portlet.
- Next, in the form tag, add the onsubmit="displayName();return false;" attribute.
This invokes the above JavaScript when the value from the C2A source portlet reaches the target portlet, as shown in Listing 6.
Listing 6. Value from source to target
<form class="c2a:target" onsubmit="displayName();return false;"
    action="javascript:void(0);">
    <input type="text" name="c2aInput" size="20" class="c2a:action-param" id="mytext"><br>
    <input type="submit" name="mysubmit" value="Submit">
    <span class="c2a:typename" style="display: none"></span>
    <span class="c2a:action-label" style="display: none">Send Myname to Target</span>
</form>
- Save the C2ATargetView.jsp.
Publish the portlet application on WebSphere Portal
Now the portlet application is ready to be published on WebSphere Portal.
Right-click the C2ASource portlet project and select Run on Server. Figure 16 shows what the sample looks like when it is published.
Figure 16. Display Myname sample published on WebSphere Portal V6.1
Sample application 2
You have the following portlets in the Shipping Details portlet application:
- Orders: This portlet displays the summary of orders placed in a particular month. It serves as a C2A source, sending:
- Order_ID to Order Details and Account Details.
- Customer_ID to Customer Details.
- Order Details: This portlet displays the details for a particular order. It serves as a C2A target, receiving Order_ID from the Orders portlet. This article demonstrates how to enable this portlet to send Tracking _ID to Tracking Details portlet using the C2A source option.
- Account Details: This portlet displays the account details for a particular order. It serves as a C2A target, receiving Order_ID from the Orders portlet.
- Customer Details: This portlet displays the details for a particular customer. It serves as a C2A target, receiving Customer_ID from Orders portlet.
- Tracking Details: This portlet displays the details for a particular shipment. This article demonstrates how to enable it to receive Tracking_ID from the Order Details portlet using the C2A target option.
To begin, download the sample available at the end of this article and run it on WebSphere Portal.
The current scenario is that OrdersPortlet sends:
- Order_ID to Order Details and Account Details
- Customer_ID to Customer Details
When the sample is published on the server, perform the following steps:
- Enter any month (for example, September) in the text box displayed in OrdersPortlet, and then click Submit. The orders for that particular month are displayed.
- When the order details display, hover over any Order_ID. A menu is displayed.
- Click the Show Order Details menu item. The details of the particular order are displayed in the Order Details portlet.
- Click the Show Account Details menu item. The account details of the particular order are displayed in the Account Details portlet.
- Now hover over any Customer_ID, and you can see a menu.
- Click the Show Customer Details menu item. The details of the particular customer are displayed in the Customer Details portlet.
Figure 17 shows what the sample looks like when it is published (before enabling Order Details as a C2aSource that sends Tracking_ID to Tracking Details).
Figure 17. Shipping Details sample
As you can see from the published sample, currently there is no communication between the Order Details portlet and the Tracking Details portlet.
Now you will learn how to enable the Order Details portlet as a C2A source that sends Tracking_ID to the Tracking Details portlet. The Tracking Details portlet in turn will be enabled as the C2A target that receives Tracking_ID from the Order Details portlet. After it receives Tracking_ID, a server-side action is invoked on the Tracking Details portlet that displays the details of the particular shipment tracking, as shown in Figure 18.
Figure 18. Shipping Details sample objective
To achieve the above result, execute the following steps:
- Enable OrderDetail as a C2A source
- Enable TrackingDetail as a C2A target
- Publish the portlet application on WebSphere Portal.
Enable OrderDetail as a C2A source
- Double-click OrderDetailsView.jsp to open it in the Page Designer.
- Drag Client side Click-to-Action Output Property from the palette onto the Tracking_ID column in the Design view of OrderDetailsView.jsp.
Figure 19. Drag property to the Tracking_ID column
- The Insert Client-Side Click-to-Action Output Property wizard is displayed. Fill in the values for the Data Type URI and Value fields, as shown in Figure 20, and then click Finish.
Figure 20. Enable OrderDetail as a C2A source
- Open OrderDetailsView.jsp in the Design view.
- Change the Anchor value from Anchor Data to the value you want as the C2A Hover UI string. Preferably, keep the C2A Hover UI string the same as the C2A value being passed. For example, in the current scenario, the Tracking_ID being passed by the Order Details portlet should be displayed to the user.
Figure 21. C2aAnchor2
- Open OrderDetailsView.jsp in the Source view. The code shown in Listing 7 is inserted.
Listing 7. Tracking ID code
<td> <div class="c2a:source"> <span class="c2a:typename" style="display: none"></span> <span class="c2a:value" style="display: none"><%=od.getTrackingId() %></span> <span class="c2a:anchor"><%=od.getTrackingId()%></span> </div> </td>
- Similar to the Display Myname application, you can also insert a menu header for the source. Specify Send Tracking Id to Tracking Details as the header contents.
- Save OrderDetailsView.jsp. The Client-Side Click-to-Action source object code is updated as shown in Listing 8.
Listing 8. Updated source object code
<td> <div class="c2a:source"> <span class="c2a:typename" style="display: none"></span> <span class="c2a:value" style="display: none"> <%=od.getTrackingId() %></span> <span class="c2a:anchor"><%=od.getTrackingId()%></span> <p class="c2a:display" style="display: none"> Send Tracking Id to Tracking Details</p> </div> </td>
Enable TrackingDetail as a C2A target
- Double-click and open TrackingDetailEntry.jsp in the Page Designer
- Drag the Client-Side Click-to-Action Input Property menu item from the palette onto the Submit button in the Design view of TrackingDetailEntry.jsp file, as shown in Figure 22.
Figure 22. Palette DND on TrackingDetail
The Insert Client-Side Click-to-Action Input Property wizard is displayed.
- Select from the Data Type URI drop-down, as shown in Figure 23, and then press Next.
Figure 23. Enable TrackingDetail as C2A target 1
- For this scenario, leave the default value in the Action Value field.
- Specify Show Tracking Details in the Action Label field, as shown in Figure 24, and click Finish.
Figure 24. Enable TrackingDetail as C2A target 2
- Open TrackingDetailEntry.jsp in the Source view to see the inserted code, as shown in Listing 9.
Listing 9. Updated TrackingDetail source code
<FORM method="POST" enctype="application/x-www-form-urlencoded" name="TrackingDetails" class="c2a:target" action="<%= tdb.getActionURL() %>"> <LABEL class="wpsLabelText" for="<%= TrackingPortlet.TRACKING_ID %>"> Enter tracking id to display details:</LABEL><BR/> <INPUT name="<%= TrackingPortlet.TRACKING_ID %>" type="text" class="wpsEditField c2a:action-param"/><BR/> <INPUT class="wpsButtonText" name="tracking details" type="submit" value="Submit"/> <span class="c2a:typename" style="display: none"></span> <span class="c2a:action-label" style="display: none">Show Tracking Details</span> </FORM>
Note: Change the name of the submit button from submit to tracking details and save TrackingDetailEntry.jsp.
Currently in WebSphere Portal, the submit button cannot be named "Submit"; otherwise, the Client-Side Click-to-Action target code does not work properly.
- Repeat these steps for TrackingView.jsp.
Publish the portlet application on WebSphere Portal
Now the portlet application is ready to be published on WebSphere Portal.
- Right-click the portlet project and select Run on Server. Figure 25 shows what the sample looks like when it is published.
Figure 25. Shipping Details sample published on WebSphere Portal
Larger view of Figure 25.
Inserting a Person Menu and extending the Person Menu in a portlet application
Following is some basic information about the components of the Person Menu in Rational Application Developer and WebSphere Portal.
The Person Java™Server Page (JSP™) tag provides contextual collaboration functionality related to a named person. It generates the HTML that renders the specific set of actions to display on the Person Menu. It was originally implemented as a server-side JSP tag that cannot be called from JavaScript code. In an attempt to minimize the server load for better performance and scalability, and in order to support Ajax clients, the Person JSP tag has been updated in WebSphere Portal to have a JavaScript API that can be called in the client.
The updated Person Menu displays a set of contact information about the selected person by specifying hCard attributes (hCard is the HTML representation of vCard).
The Person Menu extension allows you to extend the Person Menu, by enabling you to write JavaScript for actions that can be executed, or actions that are targeted. Rational Application Developer provides the necessary tooling support for the same.
Live Object Framework (LOF) and Semantic Tagging
The person names are semantically tagged using the standard hCard micro format, and hence behave as live objects in the system. The Person service that enables the Person Menu for hCards in the page is plugged in to the Live Object Framework (LOF).
Advantages over the original Person JSP tag
- The new semantic tagging-based Person Menu provides the same functionality as the traditional JSP Person tag, but with a better user experience:
- It shows hover and pop-up on each hCard.
- It provides access to actionable menu items like Show Profile, Send E-mail, and so on.
- The person information is retrieved only when the pop-up for the person is shown on the portal, and there is minimal server load until the person's name is clicked. Due to this on-demand data retrieval, there is no wasteful server work for person names that never get clicked.
- The Person Menu extension allows you to extend the Person Menu. You can add more than one extension. It also allows you to customize the look and feel and user registry.
Sample application
Now you will create a simple application to demonstrate how Rational Application Developer simplifies the task of inserting the Person Menu, and then extending it using the Person Menu extension in a portlet application.
Adding actions to the Person Menu
To design this application you will execute the following steps:
- Create a WebSphere Portal targeted portlet project.
- Insert a Person Menu in the portlet .jsp file.
- Extend the Person Menu by inserting Person Menu extension code in the portlet .jsp file.
- Provide the JavaScript for the Person Menu extension.
After designing the portlet application, publish it on WebSphere Portal.
Portlet project creation
- Select File > New > Project > Portlet Project, and then press Next. The New Portlet Project wizard is displayed.
- Enter
PersonMenuExampleas the Name of the portlet.
- Select the WebSphere Portal 6.1 as the Target Runtime.
- Select JSR 168 Portlet, JSR 286 Portlet, or IBM Portlet as the Portlet API.
- Select Faces or Basic as the Portlet type.
- Press Next, and then press Finish.
Insert a Person Menu
- Double-click to open PersonMenuExampleView.jsp in the Page Designer.
- Select the Portlet drawer in the Palette view.
- Drag the Person Menu object from the Palette view onto the PersonMenuExampleView.jsp, as shown in Figure 26.
Figure 26. Person Menu palette item
- Alternatively you can select Insert > Portlet > Person Menu from the menu bar. The intuitive Insert Person Menu wizard is displayed.
- Specify the following hCard attributes, as shown in Figure 27.
- Name
- Address (optional)
- Phone number (optional)
Figure 27. Specify person menu attributes
- Click Finish and save PersonMenuExampleView.jsp.
- Open PersonMenuExampleView.jsp in the Source view. The code shown in Listing 10 has been inserted.
Listing 10. Inserted source code
<div class="vcard"> <span class="fn">Charu Malhotra</span> <span style="display: none"class="email">charumalhotra@in.ibm.com</span> </div>
Insert the Person Menu Extension
Continue with the above scenario.
- Again, select the Portlet drawer in the Palette view.
- Drag the Person Menu Extension object from the Palette view onto the PersonMenuExampleView.jsp, as shown in Figure 28.
Figure 28. Person Menu Extension palette item
- Alternatively you can select Insert > Portlet > Person Menu Extension from the menu bar. The intuitive "Insert Person Menu Extension" wizard is displayed (see Figure 29).
Figure 29. Specify person menu extension attributes
The following inputs are required from you to insert a Person Menu Extension:
- Action details ID: The tooling support auto generates the action id for you. It must be unique for a particular Person Menu Extension.
- JavaScript: The JavaScript name that has the action that will be invoked when the menu item is selected. This should be available at the following directory:
[WebSphere Portal Server Home]\ui\wp.tagging.liveobject\semTagEar\Live_Object_Framework.ear\liveobjects.war\javascript.
- Label: A label for the Person Menu Extension.
- Description: A description for the Person Menu Extension.
- Specify ShowIf: This function decides the visibility of the Person Menu Extension.
- Specify Action: This is the function that is executed when the Person Menu Extension is clicked. The argument of the function should be
@@@ARGS@@@.
- Click Finish and save PersonMenuExampleView.jsp.
- Open PersonMenuExampleView.jsp in the Source view. The code shown in Listing 11 is auto generated.
Listing 11. Auto-generated source code
<div class="com.ibm.portal.action" style="display: none"> <span class="action-id">action_0</span> <span class="action-impl">/javascript/TestAction.js</span> <span class="action-context">person</span> <span class="action-label">Test Action</span> <span class="action-description">This is a test action for adding a Person Menu Extension </span> <span class="action-showif">javascript:TestAction.showif</span> <span class="action-url">javascript:TestAction.execute(@@@ARGS@@@)</span> </div>
The JavaScript for the Person Menu Extension (TestAction.js, as specified previously) should be available at the following directory:
[WebSphere Portal Server Home]
\ui\wp.tagging.liveobject\semTagEar\Live_Object_Framework.ear\liveobjects.war\javascript.
The sample contents for TestAction.js will be like that shown in Listing 12.
Listing 12. TestAction.js sample contents
var TestAction = { showif: function(person) {return true; }, execute: function(person) { alert("TestAction executed for: " + person.fn); } }
This will simply generate an alert when the new Test Action menu item is selected.
Publish the portlet application on WebSphere Portal
Now the portlet application is ready to be published on WebSphere Portal.
- Right-click the PersonMenuExample portlet project and select Run on Server. Figure 30 shows what the sample looks like when it is published.
Figure 30. Person Menu sample published on WebSphere Portal
Using Ajax proxy in portlet applications
Ajax allows Web pages to load data or markup fragments from a server using asynchronous requests that are processed in the background. Therefore, the requests do not interfere with the Web page that is currently displayed in the browser.
Ajax applications
You can use Ajax to increase the responsiveness and usability of a portlet application significantly. You do this by exchanging small amounts of data with the server, and consequently refreshing only small parts of the markup.
Same-origin policy
Ajax-based Web applications sometimes want to do Ajax requests to servers different from the server that served the HTML document.
For example, suppose you are designing a Web application that you want to:
- Use an external representational state transfer (REST) service, such as Google suggestions, Yahoo spell-checking, and so on.
- Use some remote corporate REST service available on the intranet.
- Include news feeds from an external server (like CNN).
Restriction on XMP HTTP requests
In order to prevent malicious Ajax code served from one server from using your browser as the basis for attacking other servers, requests are only allowed to the server that served the current document, as shown in Figure 31. This same-origin policy prevents client-side scripts (in particular, JavaScript) from loading content from a different origin comprising protocol, domain name, and port.
Figure 31. Same-origin policy
Ajax proxy
To overcome the same-origin policy restriction, WebSphere Portal offers a solution that is based on a server-side HTTP proxy, the Ajax proxy layer. The Ajax proxy layer intercepts the calls, and retrieves the content from the remote site, as shown in Figure 32. It also allows for these resources to be cached in a central server. This security model allows administrators to restrict access to trusted origins in a very flexible way.
Figure 32. Ajax proxy layer
Sample application
The AjaxProxyPortletSample application used in this article consists of a portlet that accesses foreign domains using XMLHttpRequests. In order to overcome the same-origin policy, the portlet uses the Ajax proxy layer to access these domains.
- Select the category (Rational or Lotus) from the drop-down box, as shown in Figure 33.
- Next, click the Get the articles button. The portlet gets the feeds of developerWorks articles (Rational or Lotus) from the IBM Website. It then displays the links to the topics.
Figure 33. Sample Ajax proxy
The complete sample is available for download at the end of this article. This article covers the Ajax proxy tooling support in Rational Application Developer. The implementation perspective is discussed only in brief.
The following sections discuss how you can:
- Enable Ajax proxy support for a new portlet project.
- Register the proxy servlet in the Web deployment descriptor.
- Specify Ajax proxy configuration parameters.
- Send an XMP HTTP request through an Ajax proxy.
- Publish the AjaxProxyPortletSample application on WebSphere Portal.
- Enable Ajax proxy support for an existing portlet project.
- Disable Ajax proxy support for an existing portlet project.
Enable Ajax proxy support for a new portlet project
- Select File > New > Project > Portlet Project and then press Next. The New Portlet Project wizard is displayed, as shown in Figure 34.
- Enter
AjaxProxyPortletSampleas the Name of the portlet.
- Select WebSphere Portal 6.1 as the Target Runtime.
- Select JSR 168 Portlet, JSR 286 Portlet, or IBM Portlet as the Portlet API.
- Select Faces, Basic, Empty, or Struts as the Portlet type.
Figure 34. New portlet project with Ajax proxy enabled
- Click Show Advanced Settings. The Project Facet page is displayed.
- Expand Web 2.0 facet and select the Ajax proxy check box, as shown in Figure 35. Click OK.
Figure 35. Project Facets page
- Press Next, and then press Finish.
- Proxy servlet is registered in the Web deployment descriptor. Expand the Web Content > WEB-INF folders, and double-click Web.xml.
- The Web deployment Descriptor is displayed. Click the Servlet tab, as shown in Figure 36.
Figure 36. Web deployment descriptor showing servlets and JavaServer Pages (JSPs)
Larger view of Figure 36.
- The Ajax Proxy servlet with the class name com.ibm.wps.proxy.servlet.ProxyServlet gets registered in the Web.xml.
(Note: If you need to access resources that require authentication, you can specify a second servlet mapping that is associated with a security constraint. To specify a new servlet mapping, click the Add button in the URL Mappings section shown above in Figure 36. This invokes the Add Servlet Mapping dialog.)
- Click the Source tab in Web Deployment Descriptor view to see the code added in Web.xml, as shown in Listing 13.
Listing 13. Added code in Web.xml
<servlet> <servlet-name>ProxyServlet</servlet-name> <servlet-class>com.ibm.wps.proxy.servlet.ProxyServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>ProxyServlet</servlet-name> <url-pattern>/proxy/*</url-pattern> </servlet-mapping>
- The Ajax proxy configuration file is created. Expand the Web Content > WEB-INF folders, as shown in Figure 37.
Figure 37. Ajax proxy configuration file
- Double click proxy-config.xml, which opens the Ajax Proxy Configuration Editor window shown in Figure 38.
Figure 38. Specify paths that map to a URL on a remote domain
- Using the proxy-config.xml, you can specify the Ajax proxy configuration parameters. First, you can add an item to the proxy rules, as shown in Figure 39.
Figure 39. Add proxy rules
Context path mappings
A mapping element is used to map incoming requests to a target URL based on
their context path. Therefore, each mapping element needs to specify a
contextpath attribute (and optionally a URL
attribute), as shown in Figure 40.
Figure 40. Specify context path mapping
Larger view of Figure 40.
Access policy
The policy element is used to define an access policy for a specific URL pattern, as shown in Figure 41. It includes the following sub-elements, as shown in Figure 42:
- Actions
- Headers
- Mime-types
- Cookies
- Users
Figure 41. Specify an access policy
Larger view of Figure 41.
Figure 42. Specify policy elements
General configuration parameters
You can specify the HTTP-related parameters, such as:
- Socket-timeout
- Retries
- Max-total-connections
- Max-connections-per-host
Sending an XML HTTP request
Suppose that you have created a context mapping in the proxy-config.xml file, such as that shown in Listing 14.
Listing 14. Context mapping
<proxy:mapping
Context path:
/proxy
URL:
*
Now, if you want to get the response from the site through the proxy servlet, then you have to create the XHR request by constructing its URL as follows:.
This complete URL contains the encoded path of the URL(that is, the http:// part has been replaced with http/)
You may achieve this encoding in your JavaScript file by constructing the function as shown in Listing 15.
Listing 15. Replace characters in the URL
encodeURL: function(url) { return url.replace(/:\/\//, "/"); }
This produces a context mapping, like that shown in Listing 16.
Listing 16. Context mapping produced by the specified function
<proxy:mapping
Context path:
/dw
URL:*
In this case, if you want to get the response from the site through a proxy servlet, then you have to create the XHR request by constructing its URL as follows:
The proxy.js file in the AjaxProxyPortletSample contains the code that performs the following operations:
- Encoding the URL
- Constructing the XHR object for this URL
- Sending the XHR request to the server, and loading the response
- Parsing the response to display it in the AjaxProxyPortletSampleView.jsp file
To have a complete look at the JavaScript file, expand the Web Content > JS folders in the AjaxProxyPortletSample, and then open proxy.js.
Publish the portlet application on WebSphere Portal
- Right-click the AjaxProxyPortletSample and select Run on Server.
- When the project is published, select the category (Rational or Lotus) from the drop-down list.
- Next, click Get the articles. The sample shows the links to the latest topics in the selected category in response.
When you click the Get the articles button, the XMLHttpRequest object is created using a URL that corresponds to the selected topic (Rational or Lotus). This request is then passed to the ProxyServlet, which in turn delegates it to the target server, retrieves the response and, finally, returns the XML response back to the client browser. The JavaScript then parses and displays this response in a user friendly manner, as shown in Figure 43.
Figure 43. Ajax proxy portlet sample published on WebSphere Portal
Enabling Ajax proxy support for an existing portlet project
- Right-click the project and select Properties > Project Facets. The Project Facet window appears.
- Expand Web 2.0 and select the Ajax proxy check box, and then click OK.
Disabling Ajax proxy support for an existing portlet project
- Right-click the project and select Properties > Project Facets. The Project Facet window is displayed.
- On the Project Facets page, clear Ajax proxy (as shown in Figure 44), and then press OK.
- Observe that the proxy-config.xml is deleted from the WEB-INF folder, and the ProxyServlet entry is deleted from Web.xml.
Figure 44. Disabling Ajax support
Using client-side programming model support in WebSphere Portal
Previous versions of WebSphere Portal required that a request be sent to the server for each portlet operation. For example, if you changed the portlet window state to minimize or maximize, a submit request had to be sent to the server and the response sent back to your browser. This would cause a page refresh, and only subsequently was the portlet shown in the maximized or minimized state. Typically these types of server-side operations require repeated round trips to the server.
To reduce this type of server traffic, WebSphere Portal now supports a client-side programming model in which portlet state changes can be performed more efficiently on the client side.
To achieve this, Rational Application Developer provides the necessary tooling support for a client-side programming model.
The following sections discuss these topics:
- Create a client side programming model support-enabled portlet project.
- Use client-side programming model support to retrieve portlet preferences in basic portlet projects.
- Publish the portlet application on WebSphere Portal.
Client-side programming model support for a new portlet project
Follow these steps to create a client-side programming model support-enabled portlet project.
- Select File > New > Project > Portlet Project and press Next. The New Portlet Project wizard is displayed.
- Enter
ClientSideSampleas the Name of the portlet.
- Select WebSphere Portal 6.1 as the Target Runtime.
- Select JSR 168 Portlet or JSR 286 Portlet as the Portlet API.
- Select Basic as the Portlet type.
- Press Next three times. The Advanced Settings page is displayed, as shown in Figure 45.
Figure 45. Client-side support-enabled portlet project creation wizard
- The Client Side Capabilities check box under Web2.0 Portlet Feature group is selected by default. Selecting this enables the client-side programming model support for a new portlet project. Click Finish.
The ClientSideSample portlet project is created, and ClientSideSampleView.jsp opens in the Page Designer.
To see the code that is inserted, click the Source tab, as shown in Listing 17.
Listing 17. Auto-generated client-side code
<%@ taglib uri= <portlet-client-model:require </portlet-client-model:init>
The taglib shown in Listing 18 is added in the ClientSideSampleView.jsp file.
Listing 18. The taglib added to ClientSideSampleView.jsp
<%@taglib uri="" prefix="portlet-client-model" %>
The tag shown in Listing 19 is also added in the Java™Server Page (JSP) file
Listing 19. Tag added to JSP file
<portlet-client-model:init> <portlet-client-model:require <portlet-client-model:require </portlet-client-model:init>
This means that:
- The taglib includes the necessary markup and artifacts needed to use the required module on the client.
- The
ibm.portal.portlet.*module enables you to use PortletWindow, PortletPreference, PortletState and XMLPortletRequest on the client.
- The
ibm.portal.xml.*module allows the use of XSLT and XPath on the client.
Using client-side programming model support to retrieve portlet preferences in basic portlet projects
Now you will see how to use client-side programming model support to retrieve portlet preferences on the client side, as shown in Figure 46. This action used to happen on the server side. Continue with the ClientSideSample project created previously.
- Open the Portlet Deployment Descriptor.
- Click the Portlets tab and select the portlet (ClientSideSample).
- Move to the Persistent Preference Store section and add a preference (name it
MyPreference).
- Give a value to this preference.
Figure 46. Add a preference in the Portlet Deployment Descriptor
- Open the ClientSideSampleView.jsp file.
- Open the Page Data view.
- Expand the Page Data view and select the PortletPreferences node.
- Right-click the Preferences node and select New > PortletPreferences, as shown in Figure 47.
Figure 47. Add a preference in the Page Data view
- Create a new preference variable in the Add Attribute dialog, as shown in Figure 48. Give this preference the same name that you specified previously for the preference created in the Portlet Deployment Descriptor (that is, name it
MyPreference).
Figure 48. Add an attribute to portletPreferences
- Open ClientSideSampleView.jsp.
- Drag MyPreference from under the PortletPreferences node in the Page Data view¸ as shown in Figure 49, and drop it on ClientSideSampleView.jsp.
Figure 49. Drag a preference from the Page Data view
The Insert Java Bean dialog is displayed, as shown in Figure 50.
Figure 50. Configure data controls
- Press Finish.
- To see the code that is inserted click the Source tab, as shown in Listing 20.
Listing 20. Inserted code for MyPreference
<script type="text/javascript"> var preferenceJSONObject= {"bindings": [{ "pref":"MyPreference","id":"ibm__pref_MyPreference_uq_1"} ] }; function <portlet:namespace/>_getPref(portletWindow, status, portletPrefs) { if (status==ibm.portal.portlet.PortletWindow.STATUS_OK) { portletWindow.setAttribute("preferences", portletPrefs); var portletPref_ =portletPrefs; var len = preferenceJSONObject.bindings.length; for(var i=0; i<len ; i++) { var pref = preferenceJSONObject.bindings[i].pref; var pref_val = portletPref_.getValue(pref,""); document.getElementById(preferenceJSONObject.bindings[i].id).innerHTML=pref_val; } } else { alert("error loading feed"); } } function callOnLoad(){ <portlet:namespace/>_portletWindow =new ibm.portal.portlet.PortletWindow ("<%=portletWindowID%>"); <portlet:namespace/>_portletWindow.getPortletPreferences(<portlet:namespace/>_getPref); } dojo.addOnLoad(callOnLoad); </script>
The above auto-generated code retrieves portlet preferences on the client side.
Note: The auto-generated code may incorrectly generate
dojo_101.addOnLoad(callOnLoad); in place of the
correct
dojo.addOnLoad(callOnLoad);
If the incorrect code is generated in the JSP, you have to manually correct it so that when the application is published it runs properly on WebSphere Portal.
The
preferenceJSONObject contents are shown in
Listing 21.
Listing 21. Contents of preferenceJSONObject
var preferenceJSONObject= {"bindings": [ {"pref":"MyPreference","id":"ibm__pref_MyPreference_uq_1"} ] };
The HTML code shown in Listing 22 is also added in the source view.
Listing 22. Generated HTML code
<table> <tbody> <tr> <td align="left">MyPreference:</td> <td> <div id="ibm__pref_MyPreference_uq_1"></div> </td> </tr> </tbody> </table>
- Similarly, you can add another preference (MyPreference2) in the Portlet Deployment Descriptor (specifying a value for it) and also add the same preference in the Page Data view (in the same way as you did previously for MyPreference.
- Open the Design view. Drag this preference into the JSP as you did previously.
- Now open the Source view.
You can see that the preferenceJSONObject contents are now updated, as shown in Listing 23.
Listing 23. Updated JSONObject contents
var preferenceJSONObject= {"bindings": [ {"pref":"MyPreference2","id":"ibm__pref_MyPreference2_uq_1"}, {"pref":"MyPreference","id":"ibm__pref_MyPreference_uq_1"} ] };
Publish the portlet application on WebSphere Portal
- Right-click the ClientSideSample Portlet project and select Run on Server.
- You can observe that the preference values that you specified in the Portlet Deployment Descriptor Preference Store are displayed correctly, as shown in Figure 51.
Figure 51. ClientSideSample published on WebSphere Portal
What you have learned
The tools provided by Rational Application Developer simplify the development of rich, interactive, and responsive portlet and portal-based applications. They exploit the benefits of the client-side programming model, Client-Side Click-to-Action, Person Menu, Person Menu Extension, and Ajax proxy. You need to customize only the code generated according to the requirements of your application.
Downloads
Resources
Learn
- Learn more in the IBM WebSphere Portal Version 6.1 Information Center where you can find information about planning, installing, configuring, administering, developing, and troubleshooting.
- Explore the IBM Rational Application Developer Version 7.5 Information Center to learn more..
- Visit the Rational software area on developerWorks for technical resources and best practices for Rational Software Delivery Platform products.
- Explore the Rational Application Developer Version 7.5 Information Center to find in-depth information.
-.
Get products and technologies
- Download trial versions of IBM Rational software.
Discuss
- Participate in the discussion forum.
- Join the developerWorks Community in forums, blogs, podcasts, wikis, and more.. | http://www.ibm.com/developerworks/rational/library/09/rationalapplicationdeveloperportaltoolkit3/index.html?S_TACT=105AGX15&S_CMP=EDU | CC-MAIN-2014-23 | refinedweb | 7,906 | 56.15 |
Problem with mainwindow (the buttons desapear)
- BrunoVinicius
Hello, i am new in Qt and i having a problem with the application that i am doing.
This application has the mainwindow and a second window that open with a button in the mainwindow. When i run the program it compiles, but when i pass the mouse over a button, the window become white and the buttons desapear.
I am using Qt creator 3.5.1.
Based on Qt 5.5.1(MSVC 2013, 32 bits)
My windows is 7 - 64 bits
code of the mainwindow (posicionador_de_antenas)
#include "posicionador_de_antenas.h"
#include "ui_posicionador_de_antenas.h"
#include "posicionador_manual_1.h"
Posicionador_de_Antenas::Posicionador_de_Antenas(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::Posicionador_de_Antenas)
{
ui->setupUi(this);
}
Posicionador_de_Antenas::~Posicionador_de_Antenas()
{
delete ui;
}
void Posicionador_de_Antenas::on_pushButton_clicked()
{
Posicionador_Manual_1 pos_man;
pos_man.setModal(true);
pos_man.exec();
}
Figure of the problem
error
Thanks!!
Hi
what does Posicionador_Manual_1 window look like?
Its a qdialog type window?
please show source for this class also.
Hi,
Posicionador_Manual_1 have two buttons, one label and one doublespin box. I am setting the valour read in the box to a variable float when i press the button ok. The other button close this window. Here is the code:
#include "posicionador_manual_1.h"
#include "ui_posicionador_manual_1.h"
#include "posicionador_de_antenas.h"
Posicionador_Manual_1::Posicionador_Manual_1(QWidget *parent) :
QDialog(parent),
ui(new Ui::Posicionador_Manual_1)
{
ui->setupUi(this);
}
Posicionador_Manual_1::~Posicionador_Manual_1()
{
delete ui;
}
void Posicionador_Manual_1::on_pushButton_clicked()
{
float posicao = ui->doubleSpinBox->value();
}
Thanks!
Hi
I don't see anything wrong with code.
The picture you shown.
Seems just to be Mainwindow.
Not the Posicionador_Manual_1 dialog
So the dialog do show?
That's the issue.
The mainwindow turns white even when i do not click in any button, just pass the mouse over it.
This is the two windows after the problem.
I am suspecting of a problem with compatibility in my windows or something like that. i don't know for sure.
Hi,
i tried to do a simple window, with just one button to close this mainwindow and i am geting the same problem, the window turns white. I will reinstall my Qt with a an older version that have 64 bits version and see what happend.
thank you anyway!!
@BrunoVinicius
Ok it does seem a bit strange.
Just moving mouse over window should not make it white/non responsive.
If you open any of the many examples. do they run?
I did not try exemples, but i followed some videos in youtube, and did just the same program the video shows, and give me the same error. Like you said, it does seem a bit strange.
when i reinstall the Qt, i will post the result.
Thanks!
@BrunoVinicius
ok before u reinstall,maybe try example just to see?
hi again,
like you said i tried some exemples and the same error. I tried childwidget, most basic impossible!!!
I try to install some older version, but i had problem with the compiler.
i don't know what to do anymore.
- SGaist Lifetime Qt Champion
Hi,
What if you use the 64 bit version of Qt ?
Also, make sure to try to disable your virus scanner for a moment to test.
Hi,
SGaist, i downloaded a Qt 5.0.2-msvc2012_64-x64-offline, but i had some problems with the compiler, because its not integrate, i assume. Do i need to desinstall my current version to install this one? And how i can configure the compiler?
mrjj, i do not have a antivirus runing in the moment, so that's is not the problem.
thanks a lot guys!!
- SGaist Lifetime Qt Champion
Because that's a version for Visual Studio 2012 and you have 2013. You can't mix and match compilers on Windows, they are not compatible one with the other.
Just download the 64 bit Qt 5.5.1 for Visual Studio 2013
Hello,
i know it is not the topic problem but,
i installed Qt 5.5.1 64 (MSVC2013) version, and know in Build and Run, Qt versions and Kits configurations have problem. In compilers, the program just detect Micrsoft Visual C++ Compiler 14.0. How i configure and do i have to install another program?
Thanks
Hello again,
i reinstall Qt, using "qt-opensource-windows-x86-mingw491_opengl-5.4.0" and works properly.
Thanks anyaway guys.
- SGaist Lifetime Qt Champion
Out of curiosity, did you by any chance change from Visual Studio 2013 to Visual Studio 2015 ?
Also, why install an older version of Qt and not the current ? | https://forum.qt.io/topic/60780/problem-with-mainwindow-the-buttons-desapear | CC-MAIN-2018-13 | refinedweb | 742 | 69.07 |
NT.
Hmm. In Unix-based systems such as OS X, this is done by creating a file in /tmp, opening the file (usually create and open are combined into one atomic step) and then deleting the file. The file will remain in memory until it is closed.
So the *nix filesystems and cache manager treat the /tmp portion of the namespace specially?
Or is because the inode’s been unlinked that makes it behave specially? What happens if the system comes under memory pressure and the file metadata needs to be flushed to disk?
what chris is talking about is different.
first, /tmp isn’t special.
if you unlink an open file, the file isn’t deleted from disk until the file is closed. basically all it does is remove the file from the directory’s file, so it only exists in the inode allocation tables.
it doesn’t cause the file to live in ram, etc.
basically there’s no need to clean up your temp files.
Thanks Byron, I appreciate it.
So the unlink does the equivilant of the FILE_FLAG_DELETE_ON_CLOSE, but not the combination of the two (which is what gets you the "never flushed to disk unless necessary")?
yup, that’s correct.
unlink is superior to FILE_FLAG_DELETE_ON_CLOSE because:
1. The name is gone from the directory immediately. With the NT delete model, the deletion doesn’t occur until the last handle to the file object is closed.
2. Someone can’t go and change the delete-on-close attibute on the handle behind your back (in case you didn’t know, if someone else has a handle on your delete-on-close file, they can set it back to not be delete-on-close.
The two of these together make life a living hell for installation programs.
As usual, insiteful comment, Mike.
Hm, "inciteful" or "insightful" 😉 | https://blogs.msdn.microsoft.com/larryosterman/2004/04/19/its-only-temporary/ | CC-MAIN-2017-26 | refinedweb | 309 | 74.29 |
Tutorial: Building a RESTful API with Flask
In this tutorial, we'll learn about and build RESTful APIs with Flask. To follow along with this tutorial, you should already have a good grasp of Python, Flask, and SQLAlchemy.
Since the application we’re going to build in this article is an extension of the one we built earlier in the Flask SQLAlchemy Tutorial , make sure you’ve already read that post and have the code available for our API additions!
What is an API?
API is one of those technical terms that gets thrown around a lot in the programming world. We hear about people creating applications using Yelp APIs or Google Map APIs. For example, I created a job search application using Twitter’s API. But what exactly is an API, and why is it so important?
API stands for Application Programming Interface, and it refers to the mode of communication between any two software applications. An API is just a medium that lets two entities of code talk to each other.
Have you ever implemented Google Maps in your application or have seen an app that makes use of Google Maps? That’s the Google Maps API.
Companies like Google and Facebook, among many others, have APIs that allow external applications to use their functionalities without exposing their codebase to the world. There’s a high chance that an organization you want to work with already has an API in place — both for developers and end users.
But why do companies allow us to use their content via APIs? By allowing users access to their content, businesses add value for developers and users alike. Instead of building a new functionality from scratch and re-inventing the wheel, developers can use existing APIs and focus on their primary objectives. This practice actually helps organizations by building relationships with developers and growing their user base.
Now that we have a grasp on APIs, let’s talk about REST.
What is REST?
Like API, REST is an acronym, and it stands of Representational State Transfer. It’s an architectural style for designing standards between computers, making it easier for systems to communicate with each other. In simpler terms, REST is a set of rules developers follow when they create APIs. A system is called when it adheres to these constraints.
To better understand RESTful APIs, we need to define what the terms “client” and the “resource” mean.
Client: A client can refer to either a developer or software application which uses the API. When you are implementing the Google Maps API in your application, you are accessing resources via the API, which makes you a client. Similarly, a web browser can also be a client.
Resource: A resource describes an object, data, or piece of information that you may need to store or send to other services. For example, the location coordinates you receive when you work with Google Maps API are a resource.
So, when a client sends a request to the server, it receives access to a resource. But what language do clients and servers use?
For humans to speak to each other, we have proper syntax and grammar. Without them, it’s impossible to understand what’s being communicated. Similarly, APIs have a set of rules for machines to communicate with each other that are called Protocols.
HTTP and requests
HTTP is one of the protocols that allows you to fetch resources. It is the basis of any data transfer on the Web and a client-server protocol. RESTful APIs almost always rely on HTTP.
When we are working with RESTful APIs, a client will send an HTTP request, and the server will respond with the HTTP response. Let’s dig into what HTTP requests and HTTP responses entail.
When an HTTP request is sent to the server, it usually contains the following:
- A header
- A blank line that separates the header with the body
- An optional body
The header consists of an HTTP verb, URI and an HTTP version number which is collectively called a request line.
GET /home.html HTTP/1.1
In the above example,
GET is an HTTP verb,
home.html is a URI where we want to get the data from, and
HTTP/1.1 refers to the HTTP version.
GET isn’t the only HTTP verb out there, so let’s look at some of the other HTTP verbs commonly used.
- GET: The GET method is only used to retrieve information from the given server. Requests using this method should only recover data and should have no other effect on the data.
- POST: A POST request is used to send data back to the server using HTML forms.
- PUT: A PUT request replaces all the current representations of the target resource with the uploaded content.
- DELETE: A DELETE request removes all the current representations of the target resource given by URI.
When a server receives the request, it sends a message back to the client. If the requests are successful, it returns the data requested else it will return the error.
When an HTTP response is sent back to the client, it usually contains the following:
- A header
- A blank line that separates the header from the body
- An optional body
This time, the header contains the HTTP version, status code, and reason phrase that explains the status code in the plain language.
Have you ever seen an error 404 Not Found? That’s one of the status codes where 404 is a status code followed by the reason phrase.
There are many codes sent between the server and the client. Some of the common ones are as follows:
- 200 OK: This means the request was successful
- 201 Created: This means the resource has been created
- 400 Bad Request: The request cannot be processed because of bad request syntax
- 404 Not Found: This says the server wasn’t able to find the requested page
Luckily, Flask’s implementation takes care of most of this for us on its own, but it’s still useful to know about response codes in order to get the most from API responses.
Creating the API with Flask
As a standalone application, our books database is helpful, but we’ve now realized we want to allow an online book rating service to access our library. Also, we’d like for our online flashcards to be automatically tagged with books, instead of entering book details manually.
As our library grows, our developer followers may be interested in seeing our list, or adding new suggested books. An API with Flask is just the thing.
Let’s create some endpoints for the books database. You can think of an endpoint as the location where we access a specific API resource, and it is usually associated with a specific URL string. But before we start creating endpoints, we need to make a change in our
database_setup.py file.
Where we created our
Book table, we need to add some code that returns the object data in an easily serializable format. Serialization will turn an entry into a string format that can be passed around via HTTP.
Our new code should look like this:
class Book(Base):
__tablename__ = 'book'
id = Column(Integer, primary_key=True)
title = Column(String(250), nullable=False)
author = Column(String(250), nullable=False)
genre = Column(String(250))
@property
def serialize(self):
return {
'title': self.title,
'author': self.author,
'genre': self.genre,
'id': self.id,
}
#we will save the changes and execute this script again.
In the
app.py file, we'll add some endpoints using the
@app decorator. It's important to note that by default,
@app.route has a GET method. If we want to use any other HTTP verbs, we have to specify them by passing them via the
methods parameter as a list.
@app.route("/")
@app.route("/booksApi", methods = ['GET', 'POST'])
def booksFunction():
if request.method == 'GET':
return get_books()
elif request.method == 'POST':
title = request.args.get('title', '')
author = request.args.get('author', '')
genre = request.args.get('genre', '')
return makeANewBook(title, author, genre)
@app.route("/booksApi/", methods = ['GET', 'PUT', 'DELETE'])
def bookFunctionId(id):
if request.method == 'GET':
return get_book(id)
elif request.method == 'PUT':
title = request.args.get('title', '')
author = request.args.get('author', '')
genre = request.args.get('genre', '')
return updateBook(id,title, author,genre)
elif request.method == 'DELETE':
return deleteABook(id)
We created two functions
booksFunction and
bookFunctionId(id). Our first function evaluates whether the request method is GET or POST. If it's the former, it will return the
get_books method. Otherwise, it will return the
makeANewBook method.
The
makeANewBook() function takes in three parameters. These are the values we need to create a row in our database table.
Our second function,
bookFunctionId(), also checks for a GET request. There is a subtle difference between the GET request in
booksFunction and
bookFunctionId. The GET request in our first function returns all the books in our database, while the GET request in our second function only returns the filtered book.
The
bookFunctionId() function also evaluates for PUT and DELETE methods and returns
updateBook() and
deleteABook(), respectively.
from Flask import jsonify
def get_books():
books = session.query(Book).all()
return jsonify(books= [b.serialize for b in books])
def get_book(book_id):
books = session.query(Book).filter_by(id = book_id).one()
return jsonify(books= books.serialize)
def makeANewBook(title,author, genre):
addedbook = Book(title=title, author=author,genre=genre)
session.add(addedbook)
session.commit()
return jsonify(Book=addedbook.serialize)
def updateBook(id,title,author, genre):
updatedBook = session.query(Book).filter_by(id = id).one()
if not title:
updatedBook.title = title
if not author:
updatedBook.author = author
if not genre:
updatedBook.genre = genre
session.add(updatedBook)
session.commit()
return "Updated a Book with id %s" % id
def deleteABook(id):
bookToDelete = session.query(Book).filter_by(id = id).one()
session.delete(bookToDelete)
session.commit()
return "Removed Book with id %s" % id
At the top, we import
jsonify from Flask, a function that serializes the data you pass it to JSON . Data serialization converts the structured data to a format that allows sharing or storage of the data in its original structure.
Before JSON became popular, XML was widely used for open data interchange. JSON involves less overhead when parsing, so you’re more likely to see it when interacting with APIs via Python.
Here we create five different functions that execute CRUD operations. To create a new book, we insert new values in our Book table. To read the existing books from our database, we use
all(). To update a book in our database, we first find the book, update the values and add them. And lastly, to delete a book, we first find the book, and then simply call
delete() and commit the change.
Verifying endpoints with Postman
To check our endpoints, we can use Postman. Postman is an application for testing APIs that works by sending requests to the web server and getting the responses back. We can test our endpoints via Python as well, but it’s nice to have a sleek user interface to make requests with without the hassle of writing a bunch of code just to test them out.
Once we have Postman installed, let’s start testing our endpoints. In this article, we’ll only test our GET and POST requests.
First let’s execute our
app.py file. To check if everything is working, we'll try a GET request. From the dropdown menu, we select GET and send a request to . You should see something like the following image:
In order to test our POST request, we’ll select POST from the dropdown menu. We then update our values using the key value forms provided. As you’re typing in the updated values, notice how our URL updates automatically.
Once we have updated the value, we will hit send again — and voila! We have successfully added a new Book. You can check this by sending a GET request again, and your new book should be in the list.
Conclusion
We just created a Flask web application that provides REST APIs for our books tracker application. As you can see, writing RESTful APIs isn’t hard. Now you have an idea on how to write a RESTful API using Flask.
Because it’s so easy to implement, at least with Flask, you might start thinking more about how you could “API-ify” other web applications. Think about how to determine which resources an online service makes available, how to know who will be accessing the resources, and how to authenticate users and systems which request access to these resources. Further, what’s the best way for your application to pass parameters to your endpoints, and what happens when there are multiple versions of your API?
Python and Flask — optionally using SQLAlchemy to handle the database — are excellent tools to help answer these questions and more, along with the Python and Open Source communities.
Originally published at | https://medium.com/kitepython/building-a-restful-api-with-flask-b689dbaff74?source=collection_home---4------4----------------------- | CC-MAIN-2019-39 | refinedweb | 2,161 | 65.42 |
How to use TypeScript with Node Will You Build?
We will be building a simple
Hello world application, which shows the current date and time using TypeScript and Node.js.
Below is a picture of what you will build:
Why Use TypeScript?
While there are lots of reasons why you should use TypeScript, here are three which I think are convincing enough.
- Optional typing: In JavaScript, a variable is defined thus:
var me = "myname"
In the code above, we leave it to the interpreter to guess which data type the variable is. In TypeScript, we can declare the same variable as:
var me: string = "my name"
This gives the option of declaring what data type we want, which also helps while debugging.
- Interfaces: TypeScript allows you to define complex type definitions in the form of interfaces. This is helpful when you have a complex type that you want to use in your application, such as an object which contains other properties.
Let’s take a look at this example:
interface Usercheck{ name: string; age: int; } class User implements Usercheck { constructor (public name: string, public age: int) { } }
The User class obeys, and must have data-types and definitions to, the interface
Usercheck because it implements
Usercheck. This is a function that is usually appreciated more by developers who use static languages.
- TypeScript makes code easier to read and understand: The key to TypeScript is that it’s a statically typed script. Programming languages can either be statically or dynamically typed; the difference is when type checking occurs. Static languages’ variables are type checked which helps make the code more readable. The availability of classes also makes the code look more readable. For example:
//Typescript class Test { //field name:string; //constructor constructor(name:string) { this.name = name } //function display_name():void { console.log("name is : "+this.name) } } //javascript var Test = (function () { //constructor function Test(name) { this.name = name; } //function Test.prototype.display_name = function () { console.log("name is : " + this.name); }; return Test; }());
In the snippet above, notice that it is easier to read the TypeScript version of the code than the JavaScript Version. In the TypeScript version, we can immediately see that the public variable called
name is of the string data-type. In the JavaScript version, we cannot immediately figure out the data-type of the variable. Also, in the JavaScript Version, we had to use the
prototype function to declare our
display_name function, while in TypeScript we did not have to do that.
Getting Started
To install TypeScript globally so you can call it when needed, run:
npm install -g typescript
For verification of the installed TypeScript, use the
tsc --v command:
tsc --v //Version 2.1.6
Next, install
gulp for the build process. To do that, run:
npm install -g gulp
Now, create a folder that will hold the application. To do this, run
npm init and follow the instructions below:
//create a new directory mkdir type-node //change directory to the new folder cd type-node //run npm init npm init
You should now have all the basic libraries to start configuring for use.
Configuring TypeScript
Given no arguments,
tsc will first check
tsconfig.json for instructions. When it finds the config, it uses those settings to build the project.
Create a new file called
tsconfig.json in the root folder and add the following:
{ "compilerOptions": { "target": "es6", "module": "commonjs" }, "include": [ "**/*.ts" ], "exclude": [ "node_modules" ] }
This defines the three major sections, which include
compilerOptions,
include and
exclude parameters:
- In the compiler options, a target of es6 has been set. This means that the JavaScript engine target will be set to es6, and the module will be set to CommonJS.
- In the include block, a path has been set that scans for all TypeScript files and compile them.
- In the exclude block, node_modules is being defined for it. TypeScript will not scan the node_modules folder for any TypeScript file. If you create a dummy
index.tsfile in the root, and run tsc on the command line, we will see the compiled
**index.js**file.
Configuring Gulp
Although we have installed Gulp globally, we still need to install Gulp in our local project as well as Gulp-TypeScript. To do that, we run:
npm install gulp gulp-typescript typescript --save
Doing just this setup done might be enough, but you would have to go back to the terminal to run the
tsc command every time a change occurs. Instead, you can use Gulp to automate the process and run the compilation each time you change the file. Here’s how.
Add a
gulpfile.js to the root of the directory. This enables automatic compilation of the source files. Now, put the following contents into the
gulpfile.js:
const gulp = require('gulp'); const ts = require('gulp-typescript'); // pull in the project Typescript config const tsProject = ts.createProject('tsconfig.json'); //task to be run when the watcher detects changes gulp.task('scripts', () => { const tsResult = tsProject.src() .pipe(tsProject()); return tsResult.js.pipe(gulp.dest('')); }); //set up a watcher to watch over changes gulp.task('watch', ['scripts'], () => { gulp.watch('**/*.ts', ['scripts']); }); gulp.task('default', ['watch']);
This creates a Gulp task which fires once a watcher triggers. A Gulp watcher watches for changes in the ts files and then compiles to JavaScript.
Now, delete the
index.js file the
tsc command created earlier. If you run Gulp, you will notice that the index.js file is re-created and that Gulp is now watching for changes.
Now that the Gulp watcher is running on one terminal, move to another terminal and continue the configuration.
Creating The App.ts File
First, we need to install the
Express library by running the following:
npm install express --save
The next step is to create the app.ts file that holds the Express app. Create an app.ts file, and copy the following into it:
import * as express from 'express'; // Creates and configures an ExpressJS web server. class App { // ref to Express instance public express: express.Application; //Run configuration methods on the Express instance. constructor() { this.express = express(); this.middleware(); this.routes(); } // Configure Express middleware. private middleware(): void { } // Configure API endpoints. private routes(): void { /* This is just to get up and running, and to make sure what we've got is * working so far. This function will change when we start to add more * API endpoints */ let router = express.Router(); // placeholder route handler router.get('/', (req, res, next) => { res.json({ message: 'Hello World!' }); }); this.express.use('/', router); } } export default new App().express;
This has imported everything from Express and created a TypeScript class called
App. A public property called express was also declared. Additionally, in the constructor, we had assigned an instance of the
Express library to a public property called
express which we defined earlier.
Finally, the middleware and routes function are being called. The middleware function defines middleware like CORS, templating engine, etc, while the routes function registers the routes, which have been set.
Configuring Express
Next, we will create an Express entry point, which we will write in TypeScript.
Create a new file called
main.ts and add the following to it:
import * as http from 'http'; import App from './App'; const port = 3000; App.set('port', port); //create a server and pass our Express app to it. const server = http.createServer(App); server.listen(port); server.on('listening', onListening); //function to note that Express is listening function onListening(): void { console.log(`Listening on port `+port); }
This has imported the HTTP module from node. The
App class which runs the application is being imported.
Get a port value from the environment or set the default port number to 3000. Create the HTTP server, and pass
App to it (Express app). Set up some basic error handling and a terminal log to show when the app is ready and listening.
If you run
node main.js in the terminal, you will get a message that says
listening on port 3000
Go to your browser, and you should see something like this:
Creating the
**npm test** command
Inside our
package.json file, let’s replace the part that defines the script section with the following:
"scripts": { "test": "gulp" }
Now, any time we run
npm test our
Gulp command will run.
Notice that when you run
npm test, it runs the Gulp watch script, and errors like “*****can not find some modules*****” might pop up. This is because TypeScript needs some definitions before it can read raw node libraries.
To install TypeScript definitions, use the
@types notation with the packages. That is, to install Express TypeScript definitions, you will run
npm install @types/express, but all the packages have already been installed, so we will install only the
node types definition.
Open up the terminal and do this:
npm install @types/node --save
If you now run the
npm test, error messages will have cleared up and scripts will re-compile on the go, however make sure you restart the server to check that the error messages do not show up anymore.
To allow the server to refresh the changes on the go, use
nodemon. Nodemon is an npm package that restarts the server any time application files change.
Install Nodemon:
npm install -g nodemon
Once Nodemon has finished installing, instead of
node main.js, run
nodemon main.js. Change what shows on the browser from the simple JSON message to a couple of HTML tags, and go to the
routes function in the
app``**.ts** file to move the part where
res.json appears on line 30, and change the whole block to this:
res.send( `<html> <head> <title>Tutorial: HelloWorld</title> </head> <body> <h1>HelloWorld Tutorial</h1> <p> The current data and time is: <strong>`+new Date()+`</strong> </p> </body> </html>` )
That is it! You have created your first Express application using TypeScript. Your page should look like this:
Conclusion
Following the tutorial, you should have created your first simple application using TypeScript and Node.js. In summary, you should have learned how to use Gulp to automate the building process, as well as using Nodemon to restart the server. Once the scripts have re-compiled, you can decide to add more things like using view engines, serving static files, performing file uploads and a lot more.
The code base to this tutorial is available in a Public Github Repository. You can clone the repository and play around with the code base. | https://blog.pusher.com/use-typescript-with-node/ | CC-MAIN-2017-26 | refinedweb | 1,734 | 65.83 |
[
]
Szilard Nemeth commented on YARN-8059:
--------------------------------------
Hi [~wilfreds]!
Thanks for your comments!
1. If you meant we need to zero the values for memory / cores within the same loop, I did
the fix for that in {{subtractFromNonNegative}}
2. Fixed the name of the variable, it was indeed confusing.
3. Fixed the indents and line breaks.
Tests:
1. You are right, I debugged a bit and also had the memory fully utilizied. It was caused
by the {{RM_SCHEDULER_INCREMENT_ALLOCATION_MB}} scheduler config, I decreased its value to
100 so I could request 400mb * <the no. of containers> so that it haven't reached the
cluster's capacity.
This way, the test still passes but only because the custom resource utilization is reached
the cluster capacity of that resource.
2. I don't really get this. Do you mean to use resource requests with smaller amounts of memory
/ vcores to have the custom resource as the dominant?
What is the problem with the allocation request build?
Uploaded a new patch that fixes most of your comments.
Thanks!
> Resource type is ignored when FS decide to preempt
> --------------------------------------------------
>
> Key: YARN-8059
> URL:
> Project: Hadoop YARN
> Issue Type: Bug
> Components: fairscheduler
> Affects Versions: 3.0.0
> Reporter: Yufei Gu
> Assignee: Szilard Nemeth
> Priority: Major
> Attachments: YARN-8059.001.patch, YARN-8059.002.patch, YARN-8059.003.patch, YARN-8059.004.patch
>
>
> Method Fairscheduler#shouldAttemptPreemption doesn't consider resources other than vcore
and memory. We may need to rethink it in the resource type scenario. cc [~miklos.szegedi@cloudera.com],
[~wilfreds] and [~snemeth].
> {code}
> if (context.isPreemptionEnabled()) {
> return (context.getPreemptionUtilizationThreshold() < Math.max(
> (float) rootMetrics.getAllocatedMB() /
> getClusterResource().getMemorySize(),
> (float) rootMetrics.getAllocatedVirtualCores() /
> getClusterResource().getVirtualCores()));
> }
> {code}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org | http://mail-archives.apache.org/mod_mbox/hadoop-yarn-issues/201810.mbox/%3CJIRA.13146957.1521663850000.154937.1539956460078@Atlassian.JIRA%3E | CC-MAIN-2020-50 | refinedweb | 307 | 52.46 |
DBI::DBD::SqlEngine::Developers - Developers documentation for DBI::DBD::SqlEngine
This document describes the interface of DBI::DBD::SqlEngine for DBD developers who write DBI::DBD::SqlEngine based DBI drivers. It supplements DBI::DBD and DBI::DBD::SqlEngine::HowTo.
Provides the tie-magic for
$dbh->{$drv_pfx . "_meta"}. Routes
STORE through
$drv->set_sql_engine_meta() and
FETCH through
$drv->get_sql_engine_meta().
DELETE is not supported, you have to execute a
DROP TABLE statement, where applicable.
Provides the tie-magic for tables in
$dbh->{$drv_pfx . "_meta"}. Routes
STORE through
$tblClass->set_table_meta_attr() and
FETCH through
$tblClass->get_table_meta_attr().
DELETE removes an attribute from the meta object retrieved by
$tblClass->get_table_meta().
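The tie pattern behind both meta hashes can be pictured with this minimal, self-contained sketch. It is illustrative only: the real classes route STORE/FETCH through the driver's meta accessor methods named above, while here simple hash access stands in for them.

```perl
use strict;
use warnings;

# Illustrative stand-in for the tie-magic described above: STORE/FETCH are
# routed through accessor hooks, DELETE is refused at the driver level.
package MetaTie;
sub TIEHASH { my ( $class, $store ) = @_; bless { store => $store }, $class }
sub STORE   { my ( $self, $k, $v ) = @_; $self->{store}{$k} = $v }  # would call set_sql_engine_meta
sub FETCH   { my ( $self, $k ) = @_; $self->{store}{$k} }           # would call get_sql_engine_meta
sub EXISTS  { my ( $self, $k ) = @_; exists $self->{store}{$k} }
sub DELETE  { die "DELETE is not supported - execute a DROP TABLE statement instead\n" }

package main;
tie my %meta, 'MetaTie', {};
$meta{f_dir} = "/data/csv";
print $meta{f_dir}, "\n";    # "/data/csv"
```

Attempting `delete $meta{f_dir}` raises the exception from DELETE, mirroring the behavior described above for the driver-level meta hash.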
Contains the methods to deal with prepared statement handles, e.g.:
$sth->execute () or die $sth->errstr;
Base class for 3rd party table sources:
$dbh->{sql_table_source} = "DBD::Foo::TableSource";
Base class for 3rd party data sources:
$dbh->{sql_data_source} = "DBD::Foo::DataSource";
Base class for derived drivers' statement engines. Implements
open_table.
Contains tailoring between SQL engine's requirements and
DBI::DBD::SqlEngine magic for finding the right tables and storage. Builds bridges between
sql_meta handling of
DBI::DBD::SqlEngine::db, table initialization for SQL engines and meta object's attribute management for derived drivers.
DBI::DBD::SqlEngine::dr:
connect supervises the driver bootstrap; after the initial setup it invokes the attribute initialization with the $phase argument again for phase 1:
$dbh->func( 1, "init_default_attributes" );
Returns a list of DSN's using the
data_sources method of the class specified in
$dbh->{sql_table_source} or via
\%attr:
@ary = DBI->data_sources($driver); @ary = DBI->data_sources($driver, \%attr);
DBI::DBD::SqlEngine doesn't have an overall driver cache, so nothing happens here at all.
This package defines the database methods, which are called via the DBI database handle
$dbh.
DBI::DBD::SqlEngine::db:
Simply returns the content of the
Active attribute. Override when your driver needs more complicated actions here.
Prepares a new SQL statement to execute. Returns a statement handle,
$sth - an instance of DBD::XXX::st. It is neither required nor recommended to override this method.
Called by
FETCH to allow inherited drivers to do their own attribute name validation. Calling convention is similar to
FETCH and the return value is the approved attribute name.
return $validated_attribute_name;
If validation fails (e.g. when accessing a private attribute or similar), validate_FETCH_attr is permitted to throw an exception.
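A sketch of how a derived driver might hook this validation. The parent class here is a stub standing in for DBI::DBD::SqlEngine::db, and the DBD::Foo name and foo_ attributes are invented for illustration:

```perl
use strict;
use warnings;

package My::BaseDb;    # stub standing in for DBI::DBD::SqlEngine::db
sub validate_FETCH_attr { my ( $dbh, $attrib ) = @_; return $attrib }

package DBD::Foo::db;  # hypothetical derived driver
our @ISA = ('My::BaseDb');

sub validate_FETCH_attr {
    my ( $dbh, $attrib ) = @_;
    # refuse a driver-private attribute, approve everything else via SUPER
    die "attribute 'foo_secret' is private\n" if $attrib eq 'foo_secret';
    return $dbh->SUPER::validate_FETCH_attr($attrib);
}

package main;
my $dbh = bless {}, 'DBD::Foo::db';
print $dbh->validate_FETCH_attr('foo_version'), "\n";   # approved: "foo_version"
```

Fetching `foo_secret` would throw, while any other name is passed through to the superior method unchanged.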
Called by
STORE to allow inherited drivers to do their own attribute name validation. Calling convention is similar to
STORE and the return value is the approved attribute name followed by the approved new value.
return ($validated_attribute_name, $validated_attribute_value);
If validation fails (e.g. when accessing a private attribute or similar), validate_STORE_attr is permitted to throw an exception (DBI::DBD::SqlEngine::db::validate_STORE_attr throws an exception when someone tries to assign a value other than SQL_IC_UPPER .. SQL_IC_MIXED to $dbh->{sql_identifier_case} or $dbh->{sql_quoted_identifier_case}).
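The range check mentioned above can be sketched as a stand-alone function. The numeric window assumes the usual ODBC constant values (SQL_IC_UPPER == 1 through SQL_IC_MIXED == 4), so treat the digits as illustrative rather than authoritative:

```perl
use strict;
use warnings;

# Illustrative range check: sql_identifier_case only accepts values in the
# SQL_IC_UPPER .. SQL_IC_MIXED window (assumed here to be 1 .. 4).
sub validate_store_identifier_case {
    my ( $attrib, $value ) = @_;
    unless ( defined $value and $value =~ m/^[1-4]$/ ) {
        die "The attribute '$attrib' is only valid with values 1 .. 4\n";
    }
    return ( $attrib, $value );
}

my ( $attr, $val ) = validate_store_identifier_case( 'sql_identifier_case', 2 );
print "$attr => $val\n";    # sql_identifier_case => 2
```

Assigning a value outside the window throws, matching the behavior described for DBI::DBD::SqlEngine::db::validate_STORE_attr.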
This method sets the attributes
f_version,
sql_nano_version,
sql_statement_version and (if not prohibited by a restrictive
${prefix}_valid_attrs)
${prefix}_version.
This method is called at the end of the
connect () phase.
When overriding this method, do not forget to invoke the superior one.
This method is called after the database handle is instantiated as the first attribute initialization.
DBI::DBD::SqlEngine::db::init_valid_attributes initializes the attributes
sql_valid_attrs and
sql_readonly_attrs.
When overriding this method, do not forget to invoke the superior one, preferably before doing anything else.
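The recommended override order can be sketched like this. The parent class is a stub standing in for DBI::DBD::SqlEngine::db, and the foo_ driver prefix is hypothetical:

```perl
use strict;
use warnings;

package My::BaseDb;    # stub standing in for DBI::DBD::SqlEngine::db
sub init_valid_attributes {
    my $dbh = shift;
    $dbh->{sql_valid_attrs}    ||= {};
    $dbh->{sql_readonly_attrs} ||= {};
    return $dbh;
}

package DBD::Foo::db;  # hypothetical derived driver
our @ISA = ('My::BaseDb');

sub init_valid_attributes {
    my $dbh = shift;
    $dbh->SUPER::init_valid_attributes();   # invoke the superior one first
    $dbh->{foo_valid_attrs}    = { foo_version => 1, foo_tables => 1 };
    $dbh->{foo_readonly_attrs} = { foo_version => 1 };
    return $dbh;
}

package main;
my $drvh = bless {}, 'DBD::Foo::db';
$drvh->init_valid_attributes();
print join( ",", sort keys %{ $drvh->{foo_valid_attrs} } ), "\n";   # foo_tables,foo_version
```

Calling SUPER first ensures the sql_ attribute tables exist before the driver adds its own prefixed entries.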
This method is called after the database handle is instantiated to initialize the default attributes. It expects one argument:
$phase. If
$phase is not given,
connect of
DBI::DBD::SqlEngine::dr expects this is an old-fashioned driver which isn't capable of multi-phased initialization.
DBI::DBD::SqlEngine::db::init_default_attributes initializes the attributes
sql_identifier_case,
sql_quoted_identifier_case,
sql_handler,
sql_init_order,
sql_meta,
sql_engine_version,
sql_nano_version and
sql_statement_version when SQL::Statement is available.
It sets sql_init_order to a default initialization order. The version attributes above and drv_version are added (when available) to the list of valid and immutable attributes (where drv_ is interpreted as the driver prefix).
get_versions takes the
$dbh as the first argument and optionally a second argument containing a table name. The second argument is not evaluated in
DBI::DBD::SqlEngine::db::get_versions itself.
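To make the multi-phased initialization described above concrete, here is a stripped-down sketch. The parent class is a stub, and the foo_ attributes and per-phase work are invented for illustration; a real driver does cheap defaults in phase 0 and user-attribute-dependent setup in phase 1:

```perl
use strict;
use warnings;

package My::BaseDb;    # stub standing in for DBI::DBD::SqlEngine::db
sub init_default_attributes { my ( $dbh, $phase ) = @_; return $dbh }

package DBD::Foo::db;  # hypothetical derived driver
our @ISA = ('My::BaseDb');

sub init_default_attributes {
    my ( $dbh, $phase ) = @_;
    $dbh->SUPER::init_default_attributes($phase);
    if ( !defined $phase or 0 == $phase ) {
        # phase 0: cheap defaults, before the user's connect attributes apply
        $dbh->{foo_version} = '0.01';
    }
    elsif ( 1 == $phase ) {
        # phase 1: settings that depend on user supplied attributes
        $dbh->{foo_ready} = 1;
    }
    return $dbh;
}

package main;
my $h = bless {}, 'DBD::Foo::db';
$h->init_default_attributes(0);
$h->init_default_attributes(1);
print "$h->{foo_version} $h->{foo_ready}\n";    # 0.01 1
```

Treating an undefined $phase like phase 0 mirrors the fallback for old-fashioned drivers that are not capable of multi-phased initialization.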
Returns a SQL::Parser instance, when
sql_handler is set to "SQL::Statement". The parser instance is stored in
sql_parser_object.
It is not recommended to override this method.
Disconnects from a database. All local table information is discarded and the
Active attribute is set to 0.
Returns information about all the types supported by DBI::DBD::SqlEngine.
Returns a statement handle which is prepared to deliver information about all known tables.
Returns a list of all known table names.
Quotes a string for use in SQL statements.
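Typical quoting behavior can be sketched with this simplified stand-alone function (the real method also consults the requested data type, which is omitted here):

```perl
use strict;
use warnings;

# Simplified sketch of SQL string quoting: undef maps to NULL, embedded
# single quotes are doubled, and the value is wrapped in single quotes.
sub sketch_quote {
    my ($str) = @_;
    return 'NULL' unless defined $str;
    $str =~ s/'/''/g;
    return "'$str'";
}

print sketch_quote("Don't panic"), "\n";   # 'Don''t panic'
print sketch_quote(undef), "\n";           # NULL
```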
Warns about a useless call (if warnings enabled) and returns. DBI::DBD::SqlEngine is typically a driver which commits every action instantly when executed.
Warns about a useless call (if warnings enabled) and returns. DBI::DBD::SqlEngine is typically a driver which commits every action instantly when executed.
DBI::DBD::SqlEngine::db:
This section describes attributes which are important to developers of DBI Database Drivers derived from
DBI::DBD::SqlEngine.
This attribute contains a hash with priorities as key and an array containing the
$dbh attributes to be initialized during before/after other attributes.
DBI::DBD::SqlEngine relies on this ordering so that the which_meta method of "DBI::DBD::SqlEngine::Table" gets
$dbh->{sql_meta}->{quick}->{f_dir} initialized properly.
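The shape of that hash can be sketched like this. The priorities and attribute names below are examples, not the actual defaults:

```perl
use strict;
use warnings;

# Example only: a priority => [ attribute names ] map; lower priorities
# are initialized first, so plain attributes can precede the meta setup.
my $drv_pfx = "foo_";    # hypothetical driver prefix
my %sql_init_order = (
    0  => [qw( Profile ReadOnly )],
    90 => [ $drv_pfx . "meta" ],
);

for my $prio ( sort { $a <=> $b } keys %sql_init_order ) {
    print "priority $prio: @{ $sql_init_order{$prio} }\n";
}
```

Initializing the meta-related entries at a late priority ensures that attributes they depend on are already in place.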
This attribute is only set during the initialization steps of the DBI Database Driver. It contains the value of the currently running initialization phase. Currently supported phases are phase 0 and phase 1. This attribute is set in init_default_attributes and removed in init_done.
Names a class which is responsible for delivering data sources and available tables (Database Driver related). "Data sources" here refers to "data_sources" in DBI, not sql_data_source. See "DBI::DBD::SqlEngine::TableSource" for details.
Names a class which is responsible for opening table resources and completing table names requested via SQL statements. See "DBI::DBD::SqlEngine::DataSource" for details.
Contains the methods to deal with prepared statement handles:
Common routine to bind placeholders to a statement for execution. It is dangerous to override this method without detailed knowledge about the DBI::DBD::SqlEngine internal storage structure.
Executes a previously prepared statement (with placeholders, if any).
Finishes a statement handle, discards all buffered results. The prepared statement is not discarded so the statement can be executed again.
Fetches the next row from the result-set. This method may be rewritten in a later version and if it's overridden in a derived class, the derived implementation should not rely on the storage details.
Alias for fetch.
Fetches statement handle attributes. Supported attributes (for a full overview see "Statement Handle Attributes" in DBI) are NAME, TYPE, PRECISION and NULLABLE. Each column is returned as NULLABLE.
Allows storing of statement private attributes. No special handling is currently implemented here.
Returns the number of rows affected by the last execute. This method might return undef.
Derives from DBI::SQL::Nano::Statement for unified naming when deriving new drivers. No additional feature is provided from here.
Initializes a table meta structure. Can be safely overridden in a derived class, as long as the SUPER method is called at the end of the overridden method. It copies the following attributes from the database into the table meta data: $dbh->{ReadOnly} into $meta->{readonly}, sql_identifier_case and sql_data_source (after bootstrapping a new table_meta).
This is silently forwarded to $meta->{sql_data_source}->open_data().
Installation
The easiest way to include webpack and its plugins is through NPM and save them to your devDependencies:
npm install -D webpack ts-loader html-webpack-plugin tslint-loader
Setup and Usage
The most common way to use webpack is through the CLI. By default, running the webpack command uses webpack.config.js as the configuration file for your webpack setup.
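With the packages from the installation step in devDependencies, the CLI is usually wired into npm scripts. A sketch (the script names and the -p production flag are our choice, not prescribed here):

```json
{
  "scripts": {
    "build": "webpack",
    "build:prod": "webpack -p"
  }
}
```

Running npm run build then invokes the locally installed webpack binary against webpack.config.js.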
Bundle
The core concept of webpack is the bundle. A bundle is simply a collection of modules, where we define the boundaries for how they are separated. In this project, we have two bundles:
- app: for our application-specific client-side logic
- vendor: for third party libraries
In webpack, bundles are configured through entry points. Webpack goes through each entry point one by one. It maps out a dependency graph by going through each module's references. All the dependencies that it encounters are then packaged into that bundle.
Packages installed through NPM are referenced using CommonJS module resolution. In a JavaScript file, this would look like:
const app = require('./src/index.ts');
or in a TypeScript/ES6 file:
import { Component } from '@angular/core';
We will use those string values as the module names we pass to webpack.
Let's look at the entry points we have defined in our sample app:
{
  ...
  entry: {
    app: './src/index.ts',
    vendor: [
      '@angular/core',
      '@angular/compiler',
      '@angular/common',
      '@angular/http',
      '@angular/platform-browser',
      '@angular/platform-browser-dynamic',
      '@angular/router',
      'es6-shim',
      'redux',
      'redux-thunk',
      'redux-logger',
      'reflect-metadata',
      'ng2-redux',
      'zone.js',
    ]
  }
  ...
}
The entry point for app, ./src/index.ts, is the base file of our Angular application. If we've defined the dependencies of each module correctly, those references should connect all the parts of our application from here. The entry point for vendor is a list of modules that we need for our application code to work correctly. Even if these files are referenced by some module in our app bundle, we want to separate these resources in a bundle just for third party code.
Output Configuration
In most cases we don't just want to configure how webpack generates bundles - we also want to configure how those bundles are output.
Often, we will want to re-route where files are saved, for example into a bin or dist folder. This is because we want to optimize our builds for production.
Webpack transforms the code when bundling our modules and outputting them. We want to have a way of connecting the code that's been generated by webpack and the code that we've written.
Server routes can be configured in many different ways. We probably want some way of configuring webpack to take our server routing setup into consideration.
All of these configuration options are handled by the config's output property. Let's look at how we've set up our config to address these issues:
{
  ...
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[hash].js',
    publicPath: "/",
    sourceMapFilename: '[name].[hash].js.map'
  }
  ...
}
Some options have words wrapped in square brackets. Webpack has the ability to parse parameters for these properties, with each property having a different set of parameters available for substitution. Here, we're using name (the name of the bundle) and hash (a hash value of the bundle's content).
To save bundled files in a different folder, we use the path property. Here, path tells webpack that all of the output files must be saved to path.resolve(__dirname, 'dist'). In our case, we save each bundle into a separate file. The name of this file is specified by the filename property.
Linking these bundled files and the files we've actually coded is done using what's known as source maps. There are different ways to configure source maps. What we want is to save these source maps in a separate file specified by the sourceMapFilename property. The way the server accesses the files might not directly follow the filesystem tree. For us, we want to use the files saved under dist as the root folder for our server. To let webpack know this, we've set the publicPath property to /.
How could I use rowspan in a QWeb report?
My company provides steel roofing. That type of product has not only a quantity but also a size (length and number of pieces). Normally, a sales order has many order lines for the same product. I would like to merge all lines with the same product_id into one row, as in the picture below. Could you please help me redesign the quotation sale order QWeb report and the invoice QWeb report? Thanks for your attention.
- You need to sort the sale order lines by product id, then count the lines for each product id (in Python code).
Google "python sort list of tuples":
>>> a = [(1, "ahbe"), (2, "sss"), (1, "ddjj")]
>>> a.sort(key=lambda tup: tup[0])
>>> a
[(1, 'ahbe'), (1, 'ddjj'), (2, 'sss')]
google "python count element in list of tuples"
from collections import Counter
Counter(tup[0] for tup in a)
>>> Counter({1: 2, 2: 1})
- Then edit/create the XML file: the first time each product id appears, add one cell with a rowspan.
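A runnable sketch of that idea (plain Python, outside Odoo; the tuples stand in for order lines as (product_id, description)):

```python
from collections import Counter

def rowspans(lines):
    """Sort (product_id, description) pairs by product id and return the
    sorted list plus, for each product id, how many rows it will span."""
    ordered = sorted(lines, key=lambda tup: tup[0])
    counts = Counter(tup[0] for tup in ordered)
    return ordered, counts

lines = [(1, "ahbe"), (2, "sss"), (1, "ddjj")]
ordered, counts = rowspans(lines)

# In the QWeb template, you would emit the rowspan cell only on the first
# line of each product, i.e. when the product id differs from the one on
# the previous line.
for i, (pid, desc) in enumerate(ordered):
    first = i == 0 or ordered[i - 1][0] != pid
    if first:
        print(pid, desc, "rowspan=%d" % counts[pid])
    else:
        print(pid, desc)
```

The same grouping logic can run in a report helper, with the template only checking the "first line of this product" flag.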
Thanks Giang, I get your idea, but because of my lack of IT knowledge, could you do me a favor and show me the specific steps? Thanks for your time.
Please skype me @fanha99