I have a web application that has become unresponsive to panning and zooming when using the touchscreen functionality on a device. This occurred sometime over the last month; everything was working normally before. I have tested this on 2 different machines from different manufacturers and it does not work on either one of them, so I believe it has to be something with the application itself. The buttons on the web app work fine; it's just when the user tries to pan or zoom with their fingers.
I should also note that one of the machines has a stylus and the pan/zoom functionality works with the stylus. I have looked at the touchscreen settings on each machine and they appear to be set up correctly. I've also looked at the settings within the web app itself, but there isn't much there to configure and there is nothing about any touchscreen functionality.
Has anybody else seen this issue? Any ideas on how to fix it?
Thanks,
Solved! Go to Solution.
This has been fixed in the JS API see this thread with the bug ID.
Note: We primarily use Google Chrome and this is the browser that it is not working in. I have tested with Firefox and Safari and it IS working in those browsers. So this leads me to believe that there is a problem with the application in Chrome. This problem was reported before the big upgrade to Chrome this past week, so I don't think that has anything to do with the problem.
Also, I have tested other web maps and they also aren't panning/zooming in Chrome, but they are responsive in the other browsers.
I have a similar issue in Chrome, especially the drag-map feature on a touch screen.
I have posted my issue in the Google Chrome forum and have had no reply yet.
I never see any good responses on the ESRI community either.
Please update me when you get a chance to resolve this; I will do the same.
It is not only our maps; all ESRI maps have this touch screen issue, whereas Google Maps doesn't have it.
It is an ESRI maps + Google Chrome issue.
sample esri web map -
Yes, I agree that this is a Google Chrome and ESRI Web issue. I will definitely respond if we find a solution. Thanks for posting onto the Google forums.
Our workaround right now is for users to use Firefox instead of Chrome. We would like to go back to Chrome for the other benefits it provides, but are stuck forcing our users to use Firefox for now.
This has been fixed in the JS API see this thread with the bug ID.
Thank you for supplying the link to this thread. It looks like the JS in our AGOL just upgraded to 3.26 and now the maps are working again.
Hello Russel, I have a similar problem, touch control is not working on Chrome mobile on android and Chrome desktop (using mobiles emulators). My web application is using arcgis javascript api 4.14 and angular 8 with types/arcgis-js-api@4.14.0 and esri-loader@2.13.0. Any idea if this was solved? Thanks for your help
These are the chrome version I test:
Version 81.0.4044.113 (Official Build) (64-bit) Linux Ubuntu 18.04
Version 81.0.4044.122 (Official Build) (64-bit) Windows 10
Hello, I found what the problem was: I had not imported the required CSS. In order to load it, just put setDefaultOptions({ css: true }) before loadModules, for example:

import { setDefaultOptions, loadModules } from 'esri-loader';

// before loading the modules for the first time,
// also lazy load the CSS for the version of
// the script that you're loading from the CDN
setDefaultOptions({ css: true });

loadModules(['esri/views/MapView', 'esri/WebMap'])
  .then(([MapView, WebMap]) => {
    // the styles, script, and modules have all been loaded (in that order)
  });

After that it worked!
Original question:
DRV2605: DRV2605 Problem
Part Number: DRV2605
Tool/software: Linux
I am trying to test the DRV2605 using an ERM with the Linux driver (github.com/.../drv260x.c) installed. On startup the motor works during calibration; however, I am unable to get it to work by writing to the device file as described in. Test code is below:
#include <stdio.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <linux/input.h>
#include <sys/ioctl.h>
#define BITS_TO_LONGS(x) \
(((x) + 8 * sizeof (unsigned long) - 1) / (8 * sizeof (unsigned long)))
unsigned long features[BITS_TO_LONGS(FF_CNT)];
int main(const int argc, const char **argv)
{
int fd;
if (argc != 2) {
printf("usage: %s <device-file>\n", argv[0]);
return 1;
}
fd = open(argv[1], O_RDWR);
if (fd < 0) {
printf("Error opening file '%s': %s\n", argv[1], strerror(errno));
return 1;
}
int num_effects;
if (ioctl(fd, EVIOCGEFFECTS, &num_effects) < 0) {
printf("Error getting number of effects playable at the same time: %s\n", strerror(errno));
return 1;
}
printf("%d effects playable at the same time\n", num_effects);
struct ff_effect effect = {
.type = FF_RUMBLE,
.id = -1,
.direction = 0,
.trigger = {0, 0},
.replay = {
.length = 1000,
.delay = 0
}
};
effect.u.rumble.strong_magnitude = 0x7F;
if (ioctl(fd, EVIOCSFF, &effect) < 0) {
printf("Error creating new effect: %s\n", strerror(errno));
return 1;
}
printf("New effect ID: %d\n", effect.id);
struct input_event play = {
.type = EV_FF,
.code = effect.id,
.value = 3
};
if (write(fd, (const void*) &play, sizeof(play)) < 0) {
printf("Error writing effect to file: %s\n", strerror(errno));
return 1;
}
printf("Wrote effect\n");
return 0;
}
The program runs fine with no errors but nothing happens to the ERM.
Hi,
Welcome to E2E and thank you for your interest in our products!
Have you verified if the device files are created according to section 5.2? It seems that the utility fftest is also required to test the driver:
Best regards,
Luis Fernando Rodríguez S.
Hi Luis, thanks for your reply. The input device is loaded - the relevant segment from /proc/bus/input/devices is pasted below
I: Bus=0000 Vendor=0000 Product=0000 Version=0000
N: Name="drv260x:haptics"
P: Phys=
S: Sysfs=/devices/soc0/soc/30800000.aips-bus/30a30000.i2c/i2c-1/1-005a/input/input1
U: Uniq=
H: Handlers=event1 evbug
B: PROP=0
B: EV=200001
B: FF=1 7030000 0 0
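Incidentally, that B: FF= bitmask already encodes which effect types the driver advertises: each hex word is one unsigned long of the capability bitmap, printed most-significant word first. A quick decoding sketch (my own, assuming 32-bit unsigned longs, as is typical on this 32-bit ARM platform):

```python
# Decode the "B: FF=1 7030000 0 0" capability bitmap from
# /proc/bus/input/devices. Words are printed most-significant first.
FF_NAMES = {0x50: "FF_RUMBLE", 0x51: "FF_PERIODIC", 0x58: "FF_SQUARE",
            0x59: "FF_TRIANGLE", 0x5a: "FF_SINE", 0x60: "FF_GAIN"}

def decode_bitmap(words_msw_first, bits_per_word=32):
    bits = set()
    for i, word in enumerate(reversed(words_msw_first)):
        for b in range(bits_per_word):
            if word & (1 << b):
                bits.add(i * bits_per_word + b)
    return bits

caps = decode_bitmap([0x1, 0x7030000, 0x0, 0x0])
print(sorted(FF_NAMES.get(b, hex(b)) for b in caps))
# ['FF_GAIN', 'FF_PERIODIC', 'FF_RUMBLE', 'FF_SINE', 'FF_SQUARE', 'FF_TRIANGLE']
```

The decoded set (Rumble, Periodic with Square/Triangle/Sine, and Gain) matches the effect types fftest reports further down the thread.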
Unfortunately I don't have fftest on my device although I will look into it
In reply to h:
I managed to get fftest on the device and ran it:
Force feedback test program.
HOLD FIRMLY YOUR WHEEL OR JOYSTICK TO PREVENT DAMAGES
Device /dev/input/event1 opened
Features:
* Absolute axes:
[00 00 00 00 00 00 00 00 ]
* Relative axes:
[00 00 ]
* Force feedback effects types: Periodic, Rumble, Gain,
Force feedback periodic effects: Square, Triangle, Sine,
[00 00 00 00 00 00 00 00 00 00 03 07 01 00 00 00 ]
* Number of simultaneous effects: 16
Setting master gain to 75% ... OK
Uploading effect #0 (Periodic sinusoidal) ... OK (id 0)
Uploading effect #1 (Constant) ... Error: Invalid argument
Uploading effect #2 (Spring) ... Error: Invalid argument
Uploading effect #3 (Damper) ... Error: Invalid argument
Uploading effect #4 (Strong rumble, with heavy motor) ... OK (id 1)
Uploading effect #5 (Weak rumble, with light motor) ... OK (id 2)
Enter effect number, -1 to exit
0
Now Playing: Sine vibration
Enter effect number, -1 to exit
4
Now Playing: Strong Rumble
Enter effect number, -1 to exit
5
Now Playing: Weak Rumble
Enter effect number, -1 to exit
-1
Stopping effects
I tried playing the effects that were uploaded OK and still no luck
In reply to Ivan Salazar:
I have managed to make some progress. On debugging the driver I found the work queue wasn't running the queued function that was supposed to play the effect. I have rewritten much of the driver to configure it as a simple character device instead of an input device so I can write waveform sequences to it. This was functionality missing from the supplied driver, and I couldn't find an obvious way to use the input subsystem to play any specific waveform number, so if you know any way to do this that would be preferred to my customised driver solution. Unfortunately I made quite a number of different changes to the driver so couldn't say exactly what caused it to work (I am relatively new to Linux kernel development).
Hi,
May I know how you found fftest?
If possible, can you please share the binary.
My platform is ARM-based Linux.
Regards,
Vijay
In reply to Vijay Kumar76561:
I got the source from here:. You only need fftest.c and bitmaskros.h to compile. I cannot share the binary, sorry, but it was easy enough to cross-compile (using a custom Yocto SDK) since you only need those two files.
Jupyter Notebook Tutorial: The Definitive Guide
As a web application in which you can create and share documents that contain live code, equations, visualizations as well as text, the Jupyter Notebook is one of the ideal tools to help you to gain the data science skills you need.
This tutorial will cover the following topics:
(To practice pandas dataframes in Python, try this course on Pandas foundations.)
What Is A Jupyter Notebook?
The Jupyter Notebook App produces these documents.
We'll talk about this in a bit.
What Is The Jupyter Notebook App?
As a server-client application, the Jupyter Notebook App allows you to edit and run your notebooks via a web browser. The application can be executed on a PC without Internet access, or it can be installed on a remote server, where you can access it through the Internet.
Its two main components are the kernels and a dashboard.
The History of IPython and Jupyter Notebooks
To fully understand what the Jupyter Notebook is and what functionality it has to offer you need to know how it originated.
Let's back up briefly to the late 1980s. Guido Van Rossum begins to work on Python at the National Research Institute for Mathematics and Computer Science in the Netherlands.
Wait, maybe that's too far.
Let's go to late 2001, twenty years later. Fernando Pérez starts developing IPython.
In 2005, both Robert Kern and Fernando Pérez attempted building a notebook system. Unfortunately, the prototype had never become fully usable.
Fast forward two years: the IPython team had kept on working, and in 2007, they formulated another attempt at implementing a notebook-type system. By October 2010, there was a prototype of a web notebook, and in the summer of 2011, this prototype was incorporated, and it was released with IPython 0.12 on December 21, 2011. IPython is now the name of the Python backend, which is also known as the kernel. Recently, the next generation of Jupyter Notebooks has been introduced to the community. It's called JupyterLab.
After all this, you might wonder where this idea of notebooks originated or how it came about to the creators.
A brief look into the history of these notebooks shows that Fernando Pérez and Robert Kern were working on a notebook at the same time as the Sage notebook was a work in progress. Since the layout of the Sage notebook was based on the layout of Google notebooks, you can also conclude that Google used to have a notebook feature around that time.
For what concerns the idea of the notebook, it seems that Fernando Pérez, as well as William Stein, one of the creators of the Sage notebook, have confirmed that they were avid users of the Mathematica notebooks and Maple worksheets. The Mathematica notebooks were created as a front end or GUI in 1988 by Theodore Gray.
The concept of a notebook, which contains ordinary text and calculation and/or graphics, was definitely not new.
Also, the developers had close contact with one another and this, together with other failed attempts at GUIs for IPython and the use of "AJAX" web applications (which didn't require users to refresh the whole page every time you do something), were two other motivations for the team of William Stein to start developing the Sage notebooks.
If you want to know more details, check out the personal accounts of Fernando Pérez and William Stein about the history of their notebooks. Alternatively, you can read more on the history and evolution from IPython to Jupyter notebooks here.
How To Install Jupyter Notebook
Running Jupyter Notebooks With The Anaconda Python Distribution
One of the requirements here is Anaconda. Follow the instructions for the installation of Anaconda here for Mac or Windows.
Is something not clear? You can always read up on the Jupyter installation instructions here.
Running Jupyter Notebook The Pythonic Way: Pip
If you don't want to install Anaconda, you just have to make sure that you have the latest version of pip. If you have installed Python, you will typically already have it.
What you do need to do is upgrading pip:
# On Windows
python -m pip install -U pip setuptools

# On OS X or Linux
pip install -U pip setuptools
Once you have pip, you can just run
# Python 2
pip install jupyter

# Python 3
pip3 install jupyter
If you need more information about installing packages in Python, you can go to this page.
Running Jupyter Notebooks in Docker Containers
Docker is an excellent platform to run software in containers. These containers are self-contained and isolated processes.
This sounds a bit like a virtual machine, right?
Not really. Go here to read an explanation on why they are different, complete with a fantastic house metaphor.
You can easily get started with Docker if you install the Docker Toolbox: it contains all the tools you need to get your containers up and running. Follow the installation instructions, select the "Docker QuickStart Terminal" and indicate to install the Kitematic Visual Management tool too if you don't have it or any other virtualization platform installed.
The installation through the Docker Quickstart Terminal can take some time, but then you're good to go. Use the command docker run to run Docker "images". You can consider these images as pre-packaged bundles of software that can be automatically downloaded from the Docker Hub when you run them.
Tip: browse the Docker Image Library for thousands of the most popular software tools. You will also find other notebooks that you can run in your Docker container, such as the Data Science Notebook, the R Notebook, and many more.
To run the official Jupyter Notebook image in your Docker container, give in the following command in your Docker Quickstart Terminal:
docker run --rm -it -p 8888:8888 -v "$(pwd):/notebooks" jupyter/notebook
Tip: if you want to download other images, such as the Data Science Notebook that has been mentioned above, you just have to replace the "jupyter/notebook" bit by the repository name you find in the Docker Image Library, such as "jupyter/datascience-notebook".
The newest Jupyter HTML Notebook image will be downloaded, and it will be started, or you can open the application. Continue to read to see how you can do that!
How To Use Jupyter Notebooks
Now that you know what you'll be working with and you have installed it, it's time to get started for real!
Getting Started With Jupyter Notebooks
Run the following command to open up the application:
jupyter notebook
Then you'll see the application opening in the web browser at the address printed in your terminal. When you create a new file from the dashboard you will see several options, among them a terminal. Lastly, you will also see the option to make a Python 3 notebook.
Note that this last option will depend on the version of Python that you have installed. Also, if the application shows python [conda root] and python [default] as kernel names instead of Python 3, you can try executing
conda remove _nb_ext_conf or read up on the following GitHub issue and make the necessary adjustments.
Thirdly, the terminal is there to support browser-based interactive terminal sessions. It primarily works just like your terminal or cmd application! Type python into the terminal, press ENTER, and you're good to go.
Tip: if you would ever need a pure IPython terminal, you can type 'ipython' in your Terminal or Cmd. This can come in handy when, for example, you want to get more clear error messages than the ones that appear in the terminal when you're running the notebook application.
If you want to start on your notebook, go back to the main menu and click the "Python 3" option in the "Notebook" category.
You will immediately see the notebook name, a menu bar, a toolbar and an empty code cell:
You can immediately start with importing the necessary libraries for your code. This is one of the best practices that we will discuss in more detail later on.
After, you can add, remove or edit the cells according to your needs. And don't forget to insert explanatory text or titles and subtitles to clarify your code! That's what makes a notebook a notebook in the end.
Tip: if you want to insert LaTex in your code cells, you just have to put your LaTeX math inside
$$, just like this:
$$c = \sqrt{a^2 + b^2}$$
You can also choose to display your LaTex output:
from IPython.display import display, Math, Latex

display(Math(r'\sqrt{a^2 + b^2}'))
Are you not sure what a whole notebook looks like? Hop over to the last section to discover the best ones out there!
Toggling Between Python 2 and 3 in Jupyter Notebooks
Up until now, working with notebooks has been quite straightforward.
But what if you don't just want to use Python 3 or 2? What if you want to change between the two?
Luckily, the kernels can solve this problem for you! You can easily create a new conda environment to use different notebook kernels:
# Python 2.7
conda create -n py27 python=2.7 ipykernel

# Python 3.5
conda create -n py35 python=3.5 ipykernel
Restart the application, and the two kernels should be available to you. Very important: don't forget to (de)activate the kernel you (don't) need with the following commands:
source activate py27
source deactivate
If you need more information, check out this page.
You can also manually register your kernels, for example:
conda create -n py27 python=2.7
source activate py27
conda install notebook ipykernel
ipython kernel install --user
To configure the Python 3.5 environment, you can use the same commands but replace
py27 by
py35 and the version number by
3.5.
Alternatively, if you're working with Python 3 and you want to set up a Python 2 kernel, you can also do this:
python2 -m pip install ipykernel
python2 -m ipykernel install --user
Running R in Your Jupyter Notebook
As the explanation of the kernels in the first section already suggested, you can also run other languages besides Python in your notebook!
If you want to use R with Jupyter Notebooks but without running it inside a Docker container, you can run the following command
Open up the notebook application to start working with R with the usual command.
If you now want to install additional R packages to elaborate your data science project, you can either build a Conda R package by running, for example:
conda skeleton cran ldavis
conda build r-ldavis/
Or you can install the package from inside of R via install.packages() or devtools::install_github() (from GitHub). You just have to make sure to add the new package to the correct R library used by Jupyter:
install.packages("ldavis", "/home/user/anaconda3/lib/R/library")
Note that you can also install the IRKernel, a kernel for R, to work with R in your notebook. You can follow the installation instructions here.
Note that you also have kernels to run languages such as Julia, SAS, ... in your notebook. Go here for a complete list of the kernels that are available. This list also contains links to the respective pages that have installation instructions to get you started.
Tip: if you're still unsure of how you would be working with these different kernels or if you want to experiment with different kernels yourself, go to this page, where you can try out kernels such as Apache Toree (Scala), Ruby, Julia, ...
Making Your Jupyter Notebook Magical
If you want to get the most out of your notebooks with the IPython kernel, you should consider learning about the so-called "magic commands". Adding even more interactivity to your notebook so that it becomes an interactive dashboard for others should also be one of your considerations!
The Notebook's Built-In Commands
There are some predefined ‘magic functions’ that will make your work a lot more interactive.
To see which magic commands you have available in your interpreter, you can simply run the following:
%lsmagic
Tip: the regular Python
help() function also still works and you can use the magic command
%quickref to show a quick reference sheet for IPython.
And you'll see a whole bunch of them appearing. You'll probably see some magic commands that you'll grasp, such as
%save,
%clear or
%debug, but others will be less straightforward.
If you're looking for more information on the magic commands or on functions, you can always use the
?, just like this:
# Retrieving documentation on the alias_magic command
?%alias_magic

# Retrieving information on the range() function
?range
Note that if you want a single-line expression to run with the magic command, you can do this by using %. For multi-line expressions, use %%. The following example illustrates the difference between the two:

%time x = range(100)

%%timeit x = range(100)
max(x)
Stated differently, the magic commands are either line-oriented or cell-oriented. In the first case, the commands are prefixed with the
% character and they work as follows: they get as an argument the rest of the line. When you want to pass not only the line but also the lines that follow, you need cell-oriented magic: then, the commands need to be prefixed with
%%.
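Outside the notebook, you can get a comparable one-off measurement with the standard library's timeit module, which is roughly what %timeit wraps (the loop count that %timeit picks automatically is chosen by hand here):

```python
import timeit

# Time the same expression as the %timeit example above:
#   max(x) for x = range(100)
elapsed = timeit.timeit("max(x)", setup="x = range(100)", number=10_000)
print(f"10,000 calls took {elapsed:.4f} s in total")
```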
Besides the
%time and
%timeit magics, there are some other magic commands that will surely come in handy:
Note that this is just a short list of the handy magic commands out there. There are many more that you can discover with
%lsmagic.
You can also use magics to mix languages in your notebook with the IPython kernel without setting up extra kernels: there is
rmagics to run R code, SQL for RDBMS or Relational Database Management System access and
cythonmagic for interactive work with
cython,... But there is so much more!
To make use of these magics, you first have to install the necessary packages:
pip install ipython-sql
pip install cython
pip install rpy2
Tip: if you want to install packages, you can also execute these commands as shell commands from inside your notebook by placing a
! in front of the commands, just like this:
# Check, manage and install packages
!pip list
!pip install ipython-sql

# Check the files in your working directory
!ls
Only then, after a successful install, can you load in the magics and start using them:
%load_ext sql
%load_ext cython
%load_ext rpy2.ipython
Let's demonstrate how the magics work.
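For instance, once rpy2's extension is loaded, a cell-oriented R magic could look like this (a sketch; it only runs inside a notebook with R and rpy2 installed):

```
%%R
# Everything in this cell is handed to R, not Python
x <- c(1, 2, 3)
mean(x)
```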
This is just a start, and not nearly everything you can do with R magics. You can also push variables from Python to R and pull them again to Python. Read up on the documentation (with easily accessible examples!) here.
Interactive Notebooks As Dashboards: Widgets
An example of this is shown in a wonderful tutorial on building interactive dashboards in Jupyter, which you can find on this page.
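As a small standalone sketch of the widget idea (my own example, assuming the third-party ipywidgets package; widgets only render inside a notebook):

```
from ipywidgets import interact

def times_two(x):
    return x * 2

# interact() builds a slider for the integer argument and re-runs
# times_two every time the slider moves, showing the result below the cell
interact(times_two, x=10)
```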
Share Your Jupyter Notebooks
In practice, you might want to share your notebooks with colleagues or friends to show them what you have been up to or as a data science portfolio for future employers. However, the notebook documents are JSON documents that contain text, source code, rich media output, and metadata. Each segment of the document is stored in a cell.
Ideally, you don't want to go around and share JSON files.
That's why you want to find and use other ways to share your notebook documents with others.
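To see why sharing the raw file is awkward, it helps to look at the format itself: a notebook is an ordinary JSON document you can build and inspect with nothing but the standard library. The cells below are a hand-written sketch of the nbformat-4 shape; a real Jupyter release may add extra metadata.

```python
import json

# A minimal notebook skeleton: one markdown cell and one code cell.
nb = {
    "nbformat": 4,
    "nbformat_minor": 2,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# My analysis\n"]},
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "outputs": [], "source": ["print('hello')\n"]},
    ],
}

# Round-trip through JSON, as the Notebook App itself effectively does
text = json.dumps(nb, indent=1)
cells = json.loads(text)["cells"]
print([c["cell_type"] for c in cells])  # ['markdown', 'code']
```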
When you create a notebook, you will see a button in the menu bar that says "File". When you click this, you see that Jupyter gives you the option to download your notebook as an HTML, PDF, Markdown or reStructuredText, or a Python script or a Notebook file.
You can use the
nbconvert command to convert your notebook document file to another static format, such as HTML, PDF, LaTeX, Markdown, reStructuredText, ... But don't forget to install
nbconvert first if you don't have it yet!
Then, you can give in something like the following command to convert your notebooks:
jupyter nbconvert --to html Untitled4.ipynb
With
nbconvert, you can make sure that you can calculate an entire notebook non-interactively, saving it in place or to a variety of other formats. The fact that you can do this makes notebooks a powerful tool for ETL and for reporting. For reporting, you just make sure to schedule a run of the notebook every so many days, weeks or months; For an ETL pipeline, you can make use of the magic commands in your notebook in combination with some type of scheduling.
Besides these options, you could also consider the following:
- You can create, list and load GitHub Gists from your notebook documents. You can find more information here. Gists are a way to share your work because you can share single files, parts of files, or full applications.
- With jupyterhub, you can spawn, manage, and proxy multiple instances of the single-user Jupyter notebook server. In other words, it's a platform for hosting notebooks on a server with multiple users. That makes it the ideal resource to provide notebooks to a class of students, a corporate data science group, or a scientific research group.
- Make use of binder and tmpnb to get temporary environments to reproduce your notebook execution.
- You can use nbviewer to render notebooks as static web pages.
- To turn your notebooks into slideshows, you can turn to nbpresent and RISE.
- jupyter_dashboards will come in handy if you want to display notebooks as interactive dashboards.
- Create a blog from your notebook with Pelican plugin.
Jupyter Notebooks in Practice
This all is very interesting when you're working alone on a data science project. But most times, you're not alone. You might have some friends look at your code, or you'll need your colleagues to contribute to your notebook.
How should you actually use these notebooks in practice when you're working in a team?
The following tips will help you to effectively and efficiently use notebooks on your data science project.
Tips To Effectively and Efficiently Use Your Jupyter Notebooks
Using these notebooks doesn't mean that you don't need to follow the coding practices that you would usually apply.
You probably already know the drill, but these principles include the following:
- Try to provide comments and documentation to your code. They might be a great help to others!
- Also consider a consistent naming scheme, code grouping, limit your line length, ...
- Don't be afraid to refactor when or if necessary.
In addition to these general best practices for programming, you could also consider the following tips to make your notebooks the best source for other users to learn:
- Don't forget to name your notebook documents!
- Try to keep the cells of your notebook simple: don't exceed the width of your cell and make sure that you don't put too many related functions in one cell.
- If possible, import your packages in the first code cell of your notebook, and
- Display the graphics inline. The magic command %matplotlib inline will definitely come in handy. Don't forget to add a semicolon on a final line to suppress the output and to just give back the plot itself.
- Sometimes, your notebook can become quite code-heavy, or maybe you just want to have a cleaner report. In those cases, you could consider hiding some of this code. You can already hide some of the code by using magic commands such as %run to execute a whole Python script as if it was in a notebook cell. However, this might not help you to the extent that you expect. In such cases, you can always check out this tutorial on optional code visibility or consider toggling your notebook's code cells.
Jupyter Notebooks for Data Science Teams: Best Practices
Jonathan Whitmore wrote in his article some practices for using notebooks for data science and specifically addresses the fact that working with the notebook on data science problems in a team can prove to be quite a challenge.
That is why Jonathan suggests some best practices:
- Use two types of notebooks for a data science project, namely, a lab notebook and a deliverable notebook. The difference between the two (besides the obvious that you can infer from the names that are given to the notebooks) is the fact that individuals control the lab notebook, while the deliverable notebook is controlled by the whole data science team,
- Use some type of versioning control (Git, Github, ...). Don't forget to commit also the HTML file if your version control system lacks rendering capabilities, and
- Use explicit rules on the naming of your documents.
Learn From The Best Notebooks
This section is meant to give you a short list with some of the best notebooks that are out there so that you can get started on learning from these examples.
- Notebooks are also used to complement books, such as the Python Data Science Handbook. You can find the notebooks here.
- A report on a Kaggle competition is written down in this blog, generated from a notebook.
- This matplotlib tutorial is an excellent example of how well a notebook can serve as a means of teaching other people topics such as scientific Python.
- Lastly, make sure to also check out The Importance of Preprocessing in Data Science and the Machine Learning Pipeline tutorial series that was generated from a notebook.
Note that this list is definitely not exhaustive. There are many more notebooks out there!
You will find that many people regularly compose and have composed lists with interesting notebooks. Don't miss this gallery of interesting IPython notebooks or this KDnuggets article. | https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook?utm_source=adwords_ppc&utm_campaignid=898687156&utm_adgroupid=48947256715&utm_device=c&utm_keyword=&utm_matchtype=b&utm_network=g&utm_adpostion=1t1&utm_creative=229765585183&utm_targetid=dsa-473406581035&utm_loc_interest_ms=&utm_loc_physical_ms=1005424&gclid=EAIaIQobChMI3Pf9p6y44QIV9xXTCh2bNgxyEAAYASAAEgKhDPD_BwE | CC-MAIN-2019-22 | refinedweb | 3,651 | 60.24 |
Details
- Type:
Bug
- Status: Closed
- Priority:
Minor
- Resolution: Cannot Reproduce
- Affects Version/s: 1.8.6, 2.0-beta-3
- Fix Version/s: None
-
- Labels:None
- Environment:OS:
Mac OSX 10.6.8
Groovy versions tested:
Groovy Version: 2.0.0-beta-3-SNAPSHOT JVM: 1.6.0_29
Groovy Version: 1.8.6 JVM: 1.6.0_29
Groovy Version: 1.7.10 JVM: 1.6.0_29
Description
It seems Groovy does not have a predictable behavior for reading static fields
on sub-interfaces which "hide" a field from their super-interface.
Example:
public interface IA {
    public static final String NAME = "IA";
}

public interface IB extends IA {
    public static final String NAME = "IB";
}
You would expect, based on Java's behavior, that IB.NAME should always equal "IB", but Groovy sometimes gives "IB" and sometimes "IA".
This seems to happen whether the interface is defined in Java or groovy, and whether or
not the field is static/final.
However, it does seem to only happen with interfaces, not with classes.
i.e. the following works as expected:
public class A {
    public static final String NAME = "A";
}

public class B extends A {
    public static final String NAME = "B";
}
B.NAME will always give "B" in that case.
See attached test case in bug.zip (unzip bug.zip, cd into 'bug' directory, run test_java.sh and test_groovy.sh)
test_java.sh shows the behavior you get from Java, which is the behavior I would expect, and test_groovy.sh shows the actual behavior (sometimes prints "A", sometimes prints "B"). I can't find any pattern to the wrong behavior. Seems like it's just luck of the draw as to which field gets read (sub-interface or super-interface).
Issue Links
- relates to
GROOVY-5272 Intermittant/random incorrect resolution of sub-interface constant values
- Closed
Activity
I cannot reproduce. Can you add:
println GroovySystem.version
at the beginning of your test script?
I can reproduce on Groovy 1.8.5 but can't reproduce on Windows for Groovy 1.8.6.
I am puzzled; this seems to be the very same issue as GROOVY-5272, but that is supposed to be fixed. I cannot reproduce on master here. Maybe it's related to the Mac OS X JVM?
Closing as cannot reproduce because of Cédric's comment.
Avalon: Convergence in the Simulacrum
Just as the event log gets a makeover, so does the registry. In this post, I will take a look at some of the work done around how to manage settings and configuration in the OS. The goal is to provide a platform for handling settings in such a way that supports roaming, migration of settings, transactions and rollbacks, etc. There is quite a bit of work underway to improve the way settings and state are managed by the system.
The new settings infrastructure is composed of three main areas. First is configuration schema, which allows settings to be schematized based on -- surprise, surprise -- XML. This allows for complex hierarchies of typed settings to be described. Second is a configuration engine that has a suite of APIs for reading and writing configuration settings. Last is a configuration store, which is an optimized store that can be synched with legacy stores such as the registry, INI files, etc.
As usual, the best way to grok this is to look at it in action. (Note that all of this is preliminary and will evolve as Longhorn moves toward beta.)
First, to browse the new store, open a command prompt and launch %windir%\system32\wcmedit.exe. You can see that already, some Windows components are using the store. In fact, the Event Log, discussed yesterday, already has a complex schema registered. Or, you can see that the Shell is also using the store for values such as which wallpaper is in use on the desktop. As you browse through the store, you can see the values are typed based on XSD types, which of course correlate to CLR types.
Notice that every application, represented by a namespace, has three buckets underneath it: metadata, settings and transactions. Perhaps most interesting is the transactions bucket, which allows you to see the activity that has occurred for that application.
Now, let's take a look at how to register a schema in the store. Here is the schema itself, embedded in the manifest. First, go to the Longhorn SDK Samples and open the WMI tree. Download the WMI.Configuration Sample 1 and unzip it. You will find a file called WcmSample1.man, which is the manifest and schema that the sample application will use. In order to register it, you will need to compile it with the system. In your command prompt, navigate to "%windir%\system32\WMI Config". Then, type WcmCompile.exe %path to sample%\WcmSample.man. Now, go back to your configuration store manager and hit refresh. You will see your application added to the list. This step will eventually be obsolete and schemas will be "compiled" when the application is installed, either via ClickOnce or some other installation technology, like MSI.
To see the API in action, compile the sample using MSBUILD. When you run the sample, the application will both read and write settings to the store. After running the sample, browse the transactions folder in the store to see what transpired. Every time you run the .exe, you should see three transactions logged.
To get a sense of how the API works, open up the WcmSample.cs file. In this case we are setting values such as the Window title and the Window size coordinates after getting the application namespace from the store. Here is the source code.
In my next post, I'll take a look at how to do some more interesting things, like synch with the registry or set restrictions on the valid values for a given setting. | http://blogs.msdn.com/karstenj/archive/2004/04/13/112493.aspx | crawl-002 | refinedweb | 594 | 73.98 |
Python Requests Integration
The ScrapeOps Python Requests SDK is an extension for your scrapers that gives you all the scraping monitoring, statistics, alerting, and data validation you will need straight out of the box.
To start using it, you just need to initialize the ScrapeOpsRequests logger in your scraper and use the ScrapeOps RequestsWrapper instead of the normal Python Requests library.
The ScrapeOps RequestsWrapper is just a wrapper around the standard Python Requests library, so all functionality (HTTP requests, Sessions, HTTPAdapter, etc.) will work as normal and return the standard requests response object.
Once integrated, the ScrapeOpsRequests logger will automatically monitor your scrapers and send your logs to your scraping dashboard.
🚀 Getting Setup
You can get the ScrapeOps monitoring suite up and running in 4 easy steps.
#1 - Install the ScrapeOps Python Requests SDK:
pip install scrapeops-python-requests
#2 - Import & Initialize the ScrapeOps logger:
Import then initialize the ScrapeOpsRequests logger at the top of your scraper and add your API key.
## myscraper.py
from scrapeops_python_requests.scrapeops_requests import ScrapeOpsRequests

scrapeops_logger = ScrapeOpsRequests(
    scrapeops_api_key='API_KEY_HERE',
    spider_name='SPIDER_NAME_HERE',
    job_name='JOB_NAME_HERE',
)
Here, you need to include your ScrapeOps API key, which you can get for free here.
You also have the option of giving your scraper a:
- Spider Name: This should be the name of your scraper, and can be reused by multiple jobs scraping different pages on a website. When not defined, it will default to the filename of your scraper.
- Job Name: This should be used if the same spider is being used for multiple different jobs so you can compare the stats of similar jobs historically. Example would be a spider scraping a eCommerce store, but have multiple jobs using the same scraper to scrape different products on the website (i.e. Books, Electronics, Fashion). When not defined, the job name will default to the spider name.
#3 - Initialize the ScrapeOps Python Requests Wrapper
The last step is to just override the standard python requests with the ScrapeOps RequestsWrapper.
Our wrapper uses the standard Python Request library but just provides a way for us to monitor the requests as they happen.
Please only initialize the requests wrapper once near the top of your code.
requests = scrapeops_logger.RequestsWrapper()
#4 - Log Scraped Items:
With the ScrapeOpsRequests logger you can also log the data you scrape as items using the item_scraped method.
## Log Scraped Item
scrapeops_logger.item_scraped(
    response=response,
    item={'demo': 'test'}
)
Using item_scraped, the logger will log that an item has been scraped and calculate the data coverage so you can see in your dashboard if your scraper is missing some fields.
Example Scraper:
Here is a simple example so you can see how you can add it to an existing project.
from scrapeops_python_requests.scrapeops_requests import ScrapeOpsRequests

## Initialize the ScrapeOps Logger
scrapeops_logger = ScrapeOpsRequests(
    scrapeops_api_key='API_KEY_HERE',
    spider_name='QuotesSpider',
    job_name='Job1',
)

## Initialize the ScrapeOps Python Requests Wrapper
requests = scrapeops_logger.RequestsWrapper()

urls = [
    '',
    '',
    '',
    '',
    '',
]

for url in urls:
    response = requests.get(url)

    item = {'test': 'hello'}

    ## Log Scraped Item
    scrapeops_logger.item_scraped(
        response=response,
        item=item
    )
Done!
That's all. From here, the ScrapeOps SDK will automatically monitor and collect statistics from your scraping jobs and display them in your ScrapeOps dashboard. | https://scrapeops.io/docs/monitoring/python-requests/sdk-integration/ | CC-MAIN-2022-40 | refinedweb | 523 | 52.7 |
141 terms
Syele
C++ How to program
Chapters 1-10, excluding chapter 5 - from self-review questions
Apple
The company that popularized personal computing was ________.
IBM
The computer that made personal computing legitimate in business and industry was the ________.
Programs
Computers process data under the control of sets of instructions called computer _________.
input unit - output unit - memory unit - arithmetic and logic unit - central processing unit - secondary storage unit
The six key logical units of the computer are the ________, ________, ________, _________, _________ and the ________.
machine languages - assembly languages - high-level languages
The three classes of languages discussed in the chapter are ________, ________, and ________.
compilers
The programs that translate high-level language programs into machine language are called ________.
UNIX
C is widely known as the development language of the ________ operating system.
Pascal
The ________ language was developed by Wirth for teaching structured programming.
Multitasking
The Department of Defense developed the Ada language with a capability called ________, which allows programmers to specify that many activities can proceed in parallel.
editor
C++ programs are normally typed into a computer using a(n) ________ program.
preprocessor
In a C++ system, a(n) ________ program executes before the compiler's translation phase begins.
linker
The ________ program combines the output of the compiler with various library functions to produce an executable image.
loader
The ________ program transfers the executable image of a C++ program from disk to memory.
information hiding
Objects have the property of ________although objects may know how to communicate with one another across well-defined interfaces, they normally are not allowed to know how other objects are implemented.
classes
C++ programmers concentrate on creating ________, which contain data members and the member functions that manipulate those data members and provide services to clients.
associations
Classes can have relationships with other classes. These relationships are called ________.
object-oriented analysis and design (OOAD)
The process of analyzing and designing a system from an object-oriented point of view is called ________.
inheritance
OOD also takes advantage of ________ relationships, where new classes of objects are derived by absorbing characteristics of existing classes, then adding unique characteristics of their own.
Unified Modeling Language(UML)
________ is a graphical language that allows people who design software systems to use an industry-standard notation to represent them.
attributes
The size, shape, color and weight of an object are considered
Why might you want to write a program in a machine-independent language instead of amachine-dependent language? Why might a machine-dependent language be more appropriate forwriting certain types of programs?
Machine independent languages are useful for writing programs to be executed on multiple computer platforms. Machine dependent languages are appropriate forwriting programs to be executed on a single platform.
input unit
Which logical unit of the computer receives information from outside the computer for use by the computer?
computer programming
The process of instructing the computer to solve specific problems is called
assembly language
What type of computer language uses English-like abbreviations for machine language instructions?
output unit
Which logical unit of the computer sends information that has already been processed by the computer to various devices so that the information may be used outside the computer?
memory unit and secondary storage unit
Which logical units of the computer retain information?
arithmetic and logical unit
Which logical unit of the computer performs calculations?
arithmetic and logical unit
Which logical unit of the computer makes logical decisions?
high-level language
The level of computer language most convenient to the programmer for writing programs quickly and easily is
machine language
The only language that a computer directly understands is called that computer's
central processing unit
Which logical unit of the computer coordinates the activities of all the other logical units?
Machine languages are generally
machine dependent
stdin
This refers to the standard input device. The standard input device is normally connected to the keyboard
stdout
This refers to the standard output device. The standard output device is normally connected to the computer screen.
stderr
This refers to the standard error device. Error messages are normally sent to this device which is typically connected to the computer screen.
Why is so much attention today focused on object-oriented programming?
Object-oriented programming enables the programmer to build reusable software components that model items in the real world. Building software quickly, correctly,and economically has been an elusive goal in the software industry. The modular, ob-ject-oriented design and implementation approach has been found to increase pro-ductivity while reducing development time, errors, and cost.
FORTRAN
Developed by IBM for scientific and engineering applications
COBOL
Developed specifically for business applications.
Pascal
Developed for teaching structured programming
Ada
Named after the world's first computer programmer
BASIC
Developed to familiarize novices with programming techniques
C#
Specifically developed to help programmers migrate to .NET.
C
Known as the development language of UNIX
C++
Formed primarily by adding object-oriented programming to C
Java
Succeeded initially because of its ability to create web pages with dynamic content
main
Every C++ program begins execution at the function _________.
{ }
The _________ begins the body of every function and the _________ ends the body of every function.
semicolon
Every C++ statement ends with a(n) _________.
new line
The escape sequence \n represents the _________ character, which causes the cursor to position to the beginning of the next line on the screen.
//
Comments do not cause the computer to print the text after the____ on the screen when the program is executed.
/n
The escape sequence ___, when output with cout and the stream insertion operator, causes the cursor to position to the beginning of the next line on the screen.
variables
All ______ must be declared before they are used.
case sensitive
All C++ Variables are_________.
integer operands
The modulus operator (%) can be used only with __________.
int c, thisIsAVariable, q76354, number;
Declare the variables c, thisIsAVariable, q76354 and number to be of type int.
std::cout << "Enter an integer: ";
Prompt the user to enter an integer. End your prompting message with a colon (:) followed by a space and leave the cursor positioned after the space.
std::cin >> age;
Read an integer from the user at the keyboard and store the value entered in integer variable age.
if ( number != 7 )
   std::cout << "The variable number is not equal to 7\n";
If the variable number is not equal to 7, print "The variable number is not equal to 7".
std::cout << "This is a C++ program\n";
Print the message "This is a C++ program" on one line.
std::cout << "This is a C++\nprogram\n";
Print the message "This is a C++ program" on two lines. End the first line with C++.
std::cout << "This\nis\na\nC++\nprogram\n";
Print the message "This is a C++ program" with each word on a separate line.
std::cout << "This\tis\ta\tC++\tprogram\n";
Print the message "This is a C++ program" with each word separated from the next by a tab.
// Calculate the product of three integers
Comment that a program calculates the product of three integers.
int x;
int y;
int z;
int result;
Declare the variables x, y, z and result to be of type int (in separate statements).
cout << "Enter three integers: ";
Prompt the user to enter three integers.
cin >> x >> y >> z;
Read three integers from the keyboard and store them in the variables x, y and z.
result = x * y * z;
Compute the product of the three integers contained in variables x, y and z, and assign the result to the variable result.
cout << "The product is " << result << endl;
Print "The product is " followed by the value of the variable result.
return 0;
Return a value from main indicating that the program terminated successfully.
Comments
_____ are used to document a program and improve its readability.
cout
The object used to print information on the screen is _____.
if
A C++ statement that makes a decision is ______.
assignment
Most calculations are normally performed by ______ statements.
valid variable names
_under_bar_, m928134, t5, j7, her_sales, his_account_total, a, b, c, z, z2.
/ %
What arithmetic operations are on the same level of precedence as multiplication? ______.
object
A house is to a blueprint as a(n) _________ is to a class.
class
Every class definition contains keyword _________ followed immediately by the class's name.
.h
A class definition is typically stored in a file with the _________ filename extension.
type, name
Each parameter in a function header should specify both a(n) _________ and a(n) _________.
data member
When each object of a class maintains its own copy of an attribute, the variable that represents the attribute is also known as a(n) _________.
access specifier
Keyword public is a(n) _________
void
Return type _________ indicates that a function will perform a task but will not return any information when it completes its task.
getline
Function _________ from the <string> library reads characters until a newline character is encountered, then copies those characters into the specified string.
binary scope resolution operator (::)
When a member function is defined outside the class definition, the function header must include the class name and the _________, followed by the function name to "tie" the member function to the class definition.
#include
The source-code file and any other files that use a class can include the class's header file via an _________ preprocessor directive.
Sequence, selection and repetition
All programs can be written in terms of three types of control structures:_______, ________and_________.
if...else
The_________selection statement is used to execute one action when a condition is TRue or a different action when that condition is false.
What is the difference between a local variable and a data member?
A local variable is declared in the body of a function and can be used only from the point at which it is declared to the immediately following closing brace. A data member is declared in a class definition, but not in the body of any of the class's member functions. Every object (instance) of a class has a separate copy of the class's data members. Also, data members are accessible to all member functions of the class.
Explain the purpose of a function parameter. What is the difference between a parameter and an argument?
A parameter represents additional information that a function requires to perform its task. Each parameter required by a function is specified in the function header. An argument is the value supplied in the function call. When the function is called, the argument value is passed into the function parameter so that the function can perform its task.
Counter-controlled or definite
Repeating a set of instructions a specific number of times is called_________repetition.
Sentinel, signal, flag or dummy
When it is not known in advance how many times a set of statements will be repeated, a(n)_________value can be used to terminate the repetition.
functions, classes
Program components in C++ are called ________ and ________.
function call
A function is invoked with a(n) ________.
local variable
A variable that is known only within the function in which it is defined is called a(n) ________.
return
The ________ statement in a called function passes the value of an expression back to the calling function.
void
The keyword ________ is used in a function header to indicate that a function does not return a value or to indicate that a function contains no parameters.
scope
The ________ of an identifier is the portion of the program in which the identifier can be used.
return;, return expression; or encounter the closing right brace of a function.
The three ways to return control from a called function to a caller are ________, ________ and ________.
function prototype
A(n)________ allows the compiler to check the number, types and order of the arguments passed to a function.
rand
Function ________ is used to produce random numbers.
srand
Function ________ is used to set the random number seed to randomize a program.
auto, register, extern, static
The storage-class specifiers are mutable, ________, ________, ________ and ________.
auto
Variables declared in a block or in the parameter list of a function are assumed to be of storage class ________ unless specified otherwise.
register
Storage-class specifier ________ is a recommendation to the compiler to store a variable in one of the computer's registers.
global
A variable declared outside any block or function is a(n) ________ variable.
static
For a local variable in a function to retain its value between calls to the function, it must be declared with the ________ storage-class specifier.
function scope, file scope, block scope, function-prototype scope, class scope, namespace scope
The six possible scopes of an identifier are ________, ________, ________, ________, ________ and ________.
recursive
A function that calls itself either directly or indirectly (i.e., through another function) is a(n) ________ function.
base
A recursive function typically has two components: One that provides a means for the recursion to terminate by testing for a(n) ________ case and one that expresses the problem as a recursive call for a slightly simpler problem than the original call.
overloading
In C++, it is possible to have various functions with the same name that operate on different types or numbers of arguments. This is called function ________.
unary scope resolution operator (::)
The ________ enables access to a global variable with the same name as a variable in the current scope.
const
The ________ qualifier is used to declare read-only variables.
template
A function ________ enables a single function to be defined to perform a task on many different data types.
This creates a reference parameter of type "reference to double" that enables the function to modify the original variable in the calling function.
Why would a function prototype contain a parameter type declaration such as double &?
Each time you run the program, it will generate the same pattern of numbers.
Why is it often necessary to scale or shift the values produced by rand?
arrays, vectors
Lists and tables of values can be stored in __________ or __________.
name, type
The elements of an array are related by the fact that they have the same ________ and ___________.
subscript (or index)
The number used to refer to a particular element of an array is called its ________.
constant variable
A(n) __________ should be used to declare the size of an array, because it makes the program more scalable.
sorting
The process of placing the elements of an array in order is called ________ the array.
searching
The process of determining if an array contains a particular key value is called _________ the array.
two-dimensional
An array that uses two subscripts is referred to as a(n) _________ array.
address
A pointer is a variable that contains as its value the____________ of another variable.
0, NULL, an address
The three values that can be used to initialize a pointer are_____________,__________ and___________.
0
The only integer that can be assigned directly to a pointer is_____________.
dot (.), arrow (->)
Class members are accessed via the ________ operator in conjunction with the name of an object (or reference to an object) of the class or via the ___________ operator in conjunction with a pointer to an object of the class.
private
Class members specified as _________ are accessible only to member functions of the class and friends of the class.
public
Class members specified as _________ are accessible anywhere an object of the class is in scope.
Default memberwise assignment (performed by the assignment operator).
__________ can be used to assign an object of a class to another object of the same class.
To qualify hidden names so that they can be used.
What is the purpose of the scope resolution operator?
member initializers
__________ must be used to initialize constant members of a class.
friend
A nonmember function must be declared as a(n) __________ of a class to have access to that class's private data members.
new, pointer
The __________ operator dynamically allocates memory for an object of a specified type and returns a __________ to that type.
initialized
A constant object must be __________; it cannot be modified after it is created.
static
A(n) __________ data member represents class-wide information.
this
An object's non-static member functions have access to a "self pointer" to the object called the __________ pointer.
const
The keyword __________ specifies that an object or variable is not modifiable after it is initialized.
default constructor
If a member initializer is not provided for a member object of a class, the object's __________ is called.
non-static
A member function should be declared static if it does not access __________ class members.
before
Member objects are constructed __________ their enclosing class object.
delete
The __________ operator reclaims memory previously allocated by new.
Advertisement
Upgrade to remove ads | https://quizlet.com/8683011/c-how-to-program-flash-cards/ | CC-MAIN-2017-43 | refinedweb | 2,858 | 53.41 |
Many developers are still not aware that Portable Executable (PE) files can be decompiled to readable source code. Before learning how to prevent or make it hard for the decompilers to reverse engineer the source code, we need to understand a few basics concepts.
When source code is compiled, it generates a Portable Executable (PE) file. A Portable Executable (PE) is either a DLL or an EXE. A PE file contains MSIL (Microsoft Intermediate Language) and Metadata. MSIL is ultimately converted by the CLR into the native code which a processor can understand. Metadata contains assembly information like Assembly Name, Version, Culture and Public Key.
Yes, we can get the source code from DLL or EXE. To demonstrate this, let's create a simple application first.
Open Visual Studio, create a new project and select console based application.
Add some sample code into the Program.cs:
using System;

namespace MyConsoleApp
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine(PublicMethod());
            Console.WriteLine(PrivateMethod());
        }

        public static string PublicMethod()
        {
            // Your source code here
            return "Public Method";
        }

        private static string PrivateMethod()
        {
            // Your source code here
            return "Private Method";
        }
    }
}
Now build the application, an EXE will be generated in the bin/debug folder:
Now let's try to get the source code from the EXE file. First, open a Visual Studio command prompt.
Type ildasm and hit enter. IL DASM is the MSIL Disassembler; it basically has the ability to read Intermediate Language.
ildasm
IL DASM will open, now open the EXE file we created.
As we can see, IL DASM disassembles the EXE and lots of useful information can be retrieved; though it does not provide the original source code completely, a lot can be interpreted. To easily reverse engineer and get the exact source code, there are free decompilers available such as Telerik JustDecompile and JetBrains dotPeek, through which we can convert the Intermediate Language into the original source code.
As we can see in the above screenshot, when we open the EXE with Telerik JustDecompile, we are able to see the original source code. This can lead to piracy, and ultimately you can lose your profits.
The process of protecting the EXE and DLL from getting decompiled into the original source code is called Obfuscation. There is a lot of paid and free software available to obfuscate .NET assemblies; Dotfuscator from PreEmptive Solutions is one of the popular ones, and their community edition is free and included with Visual Studio. If you are interested in buying another version, check out this comparison. The Dotfuscator community edition has limited features and the professional edition is very expensive, so instead of gaining profits by protecting our code from reverse engineering, we would end up spending a lot on Obfuscation.
One of the best alternative utilities for obfuscating is ConfuserEx - it is completely free and open source. You can download ConfuserEx from here.
After downloading, extract the zip into a folder and then run ConfuserEx.exe.
Drag and drop the EXE you want to protect onto ConfuserEx, or you can manually select the Base Directory and Output Directory and add the DLL or EXE.
Once you are done setting up the directories and adding the DLL or EXE, go to the Settings tab in ConfuserEx. You can either add rules to the Global settings or set them individually for each DLL or EXE.
Click on the "+" button and you will see "true" under Rules. Now click on the edit rule button (the button below "-").
On clicking edit rule, a new window will appear as shown below. Click on the "+" button.
You can select different ways of adding levels of protection. If you want to learn Obfuscation in depth, check out this article.
Select only "Anti IL Dasm" and "Anti Tamper"; that is enough to make it hard for decompilers to reverse engineer the code.
After you click on Done, go to Protect tab and click on Protect button.
You can find the protected DLL or EXE in the output directory selected.
Test the EXE or DLL generated by ConfuserEx and check that it works as usual. Now try to decompile it with a decompiler.
As we can see, the confused DLL or EXE generated by ConfuserEx cannot be decompiled anymore.
This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)
Last year I finished up a long series on SOLID principles.
So, what is SOLID? Well, it is five OOP principles, the first letter of each spelling out SOLID: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion.
These principles, when combined together, make it easy to develop maintainable and extensible software.
Here at DNC Magazine, we’ve had a couple of readers send questions about SOLID. This issue’s column is my attempt to answer the questions.
Before getting to the questions, I want to summarize SOLID principles.
S
Single Responsibility Principle (SRP) – A class should have only one reason to be changed.
O
Open-Closed Principle (OCP) – A class should be open to extension but closed to modification
L
Liskov Substitution Principle (LSP) – You should be able to replace a class with a subclass without the calling code knowing about the change
I
Interface Segregation Principle (ISP) – Many specific interfaces are better than a single, all-encompassing interface
D
Dependency Inversion Principle (DIP) – Code should depend upon abstractions, not concrete implementations
Now, on to the questions.
The first question is from Eric, who writes,
“I am a Computer Science Graduate who has learnt C#. Can you explain me SOLID (like for beginners)”?
Great question, Eric. Coming out of school with a shiny degree, one can quickly realize they were taught to write code, but not taught to write software. Schools typically teach how to do things in Java, JavaScript, C#, Python and other languages but neglect the important things such as DevOps, Agile, SOLID, and other practices.
SOLID is often difficult for new developers to understand, yet it is one of the most important concepts you can master. Where I work at Quicken Loans, we typically ask about SOLID when interviewing hiring candidates and some of them cannot give a satisfactory answer.
The primary thing to ask yourself when writing code is, “Does this solution solve the problem?” If you can say yes, the code is correct.
However, it may not answer other questions you need to keep in mind: Is the code easy to understand? Is it easy to change without breaking something else? Is it easy to test?
Many of these questions can be answered by applying SOLID.
Let’s review SOLID.
SOLID has been around for years. The concepts were first brought together by Robert “Uncle Bob” Martin. They are a set of concepts that help us create better code.
You shouldn’t think that code will comply with every SOLID principle the moment it leaves your fingers and enters the computer via the keyboard.
Some principles are more applicable to interfaces, some to classes.
And, once the code is initially written, you will typically refactor it before you are satisfied that it is as close to SOLID as you can get it.
When learning SOLID, learn one principle at a time, learn it well, then move on to the next one.
Single Responsibility Principle (SRP) answers many of the above questions.
First, it can identify if code does too much. The code should do one thing and one thing only and do it well. And if code does one thing, it’s easier to understand when you read it. It’s also less likely that you’ll have to modify the code and if you do, odds are that unexpected side effects won’t creep in.
But, it isn’t always clear what one thing code should be doing.
Here’s an example. For many years, developers have been creating data classes. These would be used to Create, Read, Update, and Delete (CRUD) data.
One day, someone proposed that Create, Update, and Delete are all similar in that they change data in the database.
Read is a different animal.
It doesn’t change data, but simply tells us what the data is, therefore, following SRP, it should be in a different class. After all, they said, it’s far more likely we’ll have to change code to read the data for different purposes than to modify the data.
We may need a single row, multiple rows, summarized data, data sorted in different ways, totals, subtotals, counts, groupings, etc. This led to a concept called Command Query Responsibility Segregation (CQRS).
Others countered this by saying that in the end, it’s all data access of some kind, so it all belongs in the same class.
Which one is correct?
The answer is, both are correct. CQRS may add unneeded complexity.
As a general rule, if you have to modify code in two or more places, either in the same class or multiple classes, to apply a single modification (fix a bug or add new functionality) that section of code MIGHT be a candidate for multiple classes, hence making it compliant with SRP.
What if it’s only a single line of code that you move to a new class? Is the additional complexity worth it? Only you and your team can decide this.
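To make the discussion concrete, here is a minimal sketch of such a split. The class and member names are illustrative only; they are not from a real code base:

```csharp
using System.Collections.Generic;

public class Customer { }

// Commands: the operations that change data live together...
public class CustomerWriter
{
    public void Create(Customer customer) { /* INSERT */ }
    public void Update(Customer customer) { /* UPDATE */ }
    public void Delete(int id) { /* DELETE */ }
}

// ...while queries, which only read data, get their own class.
public class CustomerReader
{
    public Customer GetById(int id) { /* SELECT a single row */ return null; }
    public IEnumerable<Customer> GetAll() { /* SELECT many rows */ return null; }
}
```

If the reading requirements change (new sorts, totals, groupings), only CustomerReader is touched, which is exactly the "one reason to change" that SRP asks for.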
Here's a detailed tutorial on Single Responsibility Principle (SRP).
The Open-Closed Principle is all about adding new functionality without modifying the existing code or even assembly. The reason behind this is that every time you modify code, you run the risk of adding bugs to the existing functionality.
Think about the String class in .NET.
There are many operations that you can perform on a string. You have string.Length, string.Split, string.Substring and many others. What if there is a need to reverse all the characters in a string, so that instead of “Dot Net Curry”, you wanted, “yrruC teN toD”?
If the string class is modified, there is a chance (albeit very small) that existing code can get accidentally changed. Instead, create an extension method that lives in a different assembly. BAM! Done.
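As a sketch of that idea, an extension method might look like the following. The method and class names are my own illustration; nothing here is part of the .NET framework itself:

```csharp
public static class StringExtensions
{
    // Adds a new operation to string without modifying System.String.
    public static string ReverseCharacters(this string value)
    {
        char[] chars = value.ToCharArray();
        System.Array.Reverse(chars);
        return new string(chars);
    }
}

// "Dot Net Curry".ReverseCharacters() returns "yrruC teN toD"
```

The string class is thus extended (open) while its source remains untouched (closed).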
Here's a detailed tutorial on Open-Closed Principle (OCP).
Now, let’s look at an oddly named principle, the Liskov Substitution Principle (LSP), named after Barbara Liskov, who first proposed it.
What it says is that we should be able to replace a class we’re using with a subclass, and the code keeps working.
A common example here is a logging class. Think of all the places you can write a log. It could be to a database, a file on disk, a web service, use the Windows Event Log, and others. The file on disk can take many forms. It could be plain text, XML, JSON, HTML, and many other formats.
But the code that calls the logging methods should not have to call different methods for different ways to log. It should always call a similarly named method for all of them and a specific instance of the logging class handles the details.
You’d have a different class for each way you support for writing a log.
Read more in a detailed tutorial on Liskov Substitution Principle (LSP).
Have you ever looked at the different interfaces in the List class in .NET?
I have.
List inherits from eight different interfaces. Here’s the definition from MSDN.
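From memory, the declaration looks something like this (check the current MSDN reference for the exact form):

```csharp
public class List<T> : IList<T>, ICollection<T>, IEnumerable<T>, IEnumerable,
                       IList, ICollection, IReadOnlyList<T>, IReadOnlyCollection<T>
```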
Why does it do this? The simple reason is to comply with the fourth SOLID principle, the Interface Segregation Principle (ISP).
Think about the functionality defined in each of those interfaces. Now imagine if all that functionality was contained in a single interface. How complex would that be? How many different things would this single, combined interface do? It would be crazy! It would be impossible to properly maintain.
And then imagine if you had a class that didn’t need to implement IReadOnlyList. How would you implement that? Basically, you’d have code that didn’t do anything. Why would you have code that does nothing?
The solution is to have many specialized interfaces so that consumers depend only on what they need.
To delve further into ISP, here's a detailed tutorial on Interface Segregation Principle (ISP).
Let’s return to the logging example above. How do you tell a specific class which type of logging it should use? That’s the job of the Dependency Inversion Principle or, as some call it, the Dependency Injection Principle.
When you instantiate the class, you pass in an instance of the class you want to use through Constructor Injection.
The following code shows how to do this.
public interface ILogger
{
    void WriteLog(string message);
}

public class DatabaseLogger : ILogger
{
    public void WriteLog(string message)
    {
        // Format message
        // Build parameter
        // Open connection
        // Send command
        // Close connection
    }
}

public class TextFileLogger : ILogger
{
    public void WriteLog(string message)
    {
        // Format message
        // Open text file
        // Write message
        // Close text file
    }
}

public class Main
{
    public Main()
    {
        var processor = new Processor(new TextFileLogger());
    }
}

public class Processor
{
    private readonly ILogger _logger;

    public Processor(ILogger logger)
    {
        _logger = logger;
    }

    public void RunProcessing()
    {
        try
        {
            // Do the processing
        }
        catch (Exception ex)
        {
            _logger.WriteLog(ex.Message);
        }
    }
}
At the top, the interface is defined, followed by two classes, DatabaseLogger and TextFileLogger, that implement the interface.
In the Main class, an instance of the TextFileLogger is created and passed to the constructor of the Processor class. This is Dependency Injection. The Processor class depends on an instance of ILogger and instead of creating it every time, the instance is injected into the constructor. This also loosely couples the specific logger instance from the Processor class.
The instance of the logger is then used inside the Try/Catch block. If you want to change the logger type, all you need to do is change the calling program so that you instantiate DatabaseLogger and send it instead.
Further Reading - Dependency Inversion Principle (DIP).
And there you have a simplified explanation of SOLID. I encourage you to look at my earlier columns on this topic. All are linked to above. You may benefit from finding a senior-level developer on your team and using them as a mentor. Mention that you’re interested in learning SOLID. That will help the learning curve and give you someone to rely on to help you in your early career.
Now, onto the second question.
With apologies to William Shakespeare, let’s look at the second question.
Adam asks, “When should I use SOLID and when should I not use SOLID?”
Well, Adam, the easy answer is to always use SOLID, but as I discussed above, it doesn’t always make sense. Sometimes it adds complexity.
The real answer is, well, more complex.
Overly complex applications are an ugly side of software development.
I’ve seen them over the years. I’ve tried to fix them.
Sometimes the complexity is not in the code, but in the architectural directions taken in the application. Bad architecture is often impossible to fix without a complete rewrite.
While SOLID is mostly a coding concept, there is an architectural aspect. Should you split something into multiple classes? How many interfaces should you use?
This list goes on.
Sometimes, the application of SOLID comes along as you design what the class is doing. See the above discussion on List. As the designers were defining the functionality of the class, the interfaces began to take shape and they learned which ones needed to be implemented.
In my work, once I have the code working correctly, I may do additional refactoring to follow a specific Design Pattern or to make the code more self-documenting. This is also a good time to look at applying some SOLID.
Questions like, “Does this class do too much, and if it does, will I add complexity with multiple classes?” need to be asked. You may need to discuss this with your team members as the answer may not be obvious.
Take the DatabaseLogger implementation above. There are five steps listed in the WriteLog method. Should each of those be a separate method? Should it be relegated to a different class? Or, should it all stay in that method?
The answer to each of these questions is, “Maybe.”
You may also find that the same SOLID questions come into play when modifying a class, either by adding new functionality (most likely SOLID comes into play) or fixing a bug (maybe SOLID should be used).
There’s another situation when you may not use SOLID and that’s when you’re time constrained. Alright, I accept that every project is time constrained, but there are times you just need to get the code out and other times when you can “do it right”. If you’re seriously time constrained, adding the technical debt of not using SOLID when you should may be acceptable.
But as with all technical debt, you should have a plan to pay it off.
One final place where SOLID won’t come into play is with prototypes and throw-away code. These are both places where you don’t plan to put the code into production and it will be removed soon. Don’t spend the time worrying about SOLID here.
Now, having said all this, as you gain experience, you start to see how to use SOLID earlier in the architectural and code writing process. Not all the time. You’ll still have programming puzzles that require you to refactor to SOLID, but because of accumulated experience, you should be able to complete the refactoring more quickly and with better results.
As with many software development concepts, SOLID has a place in your toolbox. It has a learning curve as steep as they come, but it’s one to master: not only learning the principles, but when to use them and when not to use them. And by understanding SOLID and using it properly, your software can be green and lush, which is what Software Gardening is all about.
Odoo 8: how to sort the Point of Sale "sale details report"?
I would like to sort the sale details report in ascending order; could someone help?
I have identified 3 files and a sorted function, but I am not sure what to do with them.
/addons/point_of_sale/views/report_detailsofsales.xml
/addons/point_of_sale/wizard/pos_details.py
/addons/point_of_sale/report/pos_details.py
records.sorted(key=lambda r: r.name)
You must inherit class pos_details() in your module and add order='date_start asc' clause, like this:
class pos_details(osv.osv_memory):
    _inherit = 'pos.details'

    def print_report(self, cr, uid, ids, context=None):
        if context is None:
            context = {}
        datas = {'ids': context.get('active_ids', [])}
        res = self.read(cr, uid, ids, ['date_start', 'date_end', 'user_ids'], order='date_start asc', context=context)
        # or
        # res = self.read(cr, uid, ids, ['date_start', 'date_end', 'user_ids'], order='date_end asc', context=context)
        res = res and res[0] or {}
        datas['form'] = res
        if res.get('id', False):
            datas['ids'] = [res['id']]
        return self.pool['report'].get_action(cr, uid, [], 'point_of_sale.report_detailsofsales', data=datas, context=context)
pos.details, as far as I know, only keeps the start date, end date, user IDs etc. for printing POS orders and details. I wonder how the above class can sort the orders based on order date?
zbik, currently I can make changes directly in addons/views/report_detailsofsales.xml and they take effect after I do an upgrade, but when I try to make changes in addons/report/pos_details.py and follow the same upgrade process, the changes are not reflected at all. Could you help?
Hi everyone!
I am a beginner in python programming. I am writing a python program to reverse a given input list. Following is the code for it:
L1=list(input("Enter the numbers of list to be reversed : "))
L2=[]
def rever(La,Lb):
    if len(Lb)==0:
        return La
    else:
        return rever(La.append(Lb.pop(0)),Lb)

print rever(L2,L1)
For example, if we input,
1,2,3
The output should be,
[3,2,1]
But this is not happening. Python is giving the following error:
Traceback (most recent call last):
File "Q3.py", line 10, in <module>
print rever(L2,L1)
File "Q3.py", line 8, in rever
return rever(La.append(Lb.pop(0)),Lb)
File "Q3.py", line 8, in rever
return rever(La.append(Lb.pop(0)),Lb)
AttributeError: 'NoneType' object has no attribute 'append'
I don't get it. Please help me out!!
Maybe you should look at this, instead of pop and append:
In [5]: L1=list(input("Enter the numbers of list to be reversed : "))
Enter the numbers of list to be reversed : 1,2,3,4,5

In [6]: L1
Out[6]: [1, 2, 3, 4, 5]

In [7]: L2 = L1[::-1]

In [8]: L2
Out[8]: [5, 4, 3, 2, 1]
There are a couple issues. First the working way:
def rever(La,Lb):
    if len(Lb)==0:
        return La
    else:
        La.append(Lb.pop())
        return rever(La,Lb)
list.append appends in place, meaning it returns None. Since you are passing the result of
La.append as an argument in the recursion, you get an error on the second iteration.
If you wanted to do it that way then you could do
return rever(La + [Lb.pop()], Lb)
The second issue is you are popping off the front and appending. So you will get the same order. Instead, pop off the end (no argument to pop) and append.
The
append method operates in-place, altering the original list and returning
None, giving you the error. Try this instead:
def rever(La, Lb):
    if len(Lb) == 0:
        return La
    else:
        La.append(Lb.pop())
        return rever(La, Lb)
You could also refactor your code to something like this:
def rever(La, Lb):
    if Lb:
        La.append(Lb.pop())
        return rever(La, Lb)
    return La
This uses the facts that an empty list is
False in a boolean context, and that a return statement immediately ends the function.
Storage
Sometimes you need to store useful information. Such information is stored as data: representation of information (in a digital form when stored on computers). If you store data on a computer it should persist, even if you switch the device off and on again.
Happily MicroPython on the micro:bit allows you to do this with a very simple file system. Because of memory constraints there is approximately 30k of storage available on the file system.
What is a file system?
It’s a means of storing and organising data in a persistent manner - any data stored in a file system should survive restarts of the device. As the name suggests, data stored on a file system is organised into files.
A computer file is a named digital resource that’s stored on a file system.
Such resources contain useful information as data. A file's name usually ends with a suffix (the file extension) that indicates the sort of data it contains. For example,
.txt indicates a text file,
.jpg a JPEG image and
.mp3 sound data encoded as MP3.
Some file systems (such as the one found on your laptop or PC) allow you to organise your files into directories: named containers that group related files and sub-directories together. However, the file system provided by MicroPython is a flat file system. A flat file system does not have directories - all your files are just stored in the same place.
The Python programming language contains easy to use and powerful ways in which to work with a computer's file system. MicroPython on the micro:bit implements a useful subset of these features to make it easy to read and write files on the device, while also providing consistency with other versions of Python.
Warning

Re-flashing your device will destroy your data, since the file system is recreated each time MicroPython is flashed onto the device.
Open Sesame
Reading and writing a file on the file system is achieved by the
open
function. Once a file is opened you can do stuff with it until you close it
(analogous with the way we use paper files). It is essential you close a file
so MicroPython knows you’ve finished with it.
The best way to make sure of this is to use the
with statement like this:
with open('story.txt') as my_file:
    content = my_file.read()
print(content)
The
with statement uses the
open function to open a file and assign it
to an object. In the example above, the
open function opens the file called
story.txt (obviously a text file containing a story of some sort).
The object that’s used to represent the file in the Python code is called
my_file. Subsequently, in the code block indented underneath the
with
statement, the
my_file object is used to
read() the content of the
file and assign it to the
content object.
Here's the important point: the code block associated with the
with statement contains only the single line that reads the file. Once that code block
is finished, Python (and MicroPython) will automatically
close the file for you. This is called context handling, and the
open
function creates objects that are context handlers for files.
Put simply, the scope of your interaction with a file is defined by the code
block associated with the
with statement that opens the file.
Confused?
Don’t be. I’m simply saying your code should look like this:
with open('some_file') as some_object:
    # Do stuff with some_object in this block of code
    # associated with the with statement.
    # When the block is finished then MicroPython
    # automatically closes the file for you.
Just like a paper file, a digital file is opened for two reasons: to read its
content (as demonstrated above) or to write something to the file. The default
mode is to read the file. If you want to write to a file you need to tell the
open function in the following way:
with open('hello.txt', 'w') as my_file:
    my_file.write("Hello, World!")
Notice the
'w' argument is used to set the
my_file object into write
mode. You could also pass an
'r' argument to set the file object to read
mode, but since this is the default, it’s often left off.
Writing data to the file is done with the (you guessed it)
write
method that takes the string you want to write to the file as an argument. In
the example above, I write the text “Hello, World!” to a file called
“hello.txt”.
Simple!
Note
When you open a file and write (perhaps several times while the file is in an open state) you will be writing OVER the content of the file if it already exists.
If you want to append data to a file you should first read it, store the content somewhere, close it, append your data to the content and then open it to write again with the revised content.
While this is the case in MicroPython, “normal” Python can open files to write in “append” mode. That we can’t do this on the micro:bit is a result of the simple implementation of the file system.
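A minimal sketch of that read-modify-write approach might look like this (the file name and helper name are illustrative):

```python
# "Append" on a flat file system that lacks append mode: read the
# existing content (if any), add the new data, then rewrite the file.
def append_line(filename, line):
    try:
        with open(filename) as f:
            content = f.read()
    except OSError:  # the file does not exist yet
        content = ''
    with open(filename, 'w') as f:
        f.write(content + line + '\n')

append_line('log.txt', 'first entry')
append_line('log.txt', 'second entry')
```

Catching OSError covers both MicroPython and regular Python, where FileNotFoundError is a subclass of OSError.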
OS SOS
As well as reading and writing files, Python can manipulate them. You certainly need to know what files are on the file system and sometimes you need to delete them too.
On a regular computer, it is the role of the operating system (like Windows,
OSX or Linux) to manage this on Python’s behalf. Such functionality is made
available in Python via a module called
os. Since MicroPython is the
operating system we’ve decided to keep the appropriate functions in the
os
module for consistency so you’ll know where to find them when you use “regular”
Python on a device like a laptop or Raspberry Pi.
Essentially, you can do three operations related to the file system: list the files, remove a file and ask for the size of a file.
To list the files on your file system use the
listdir function. It
returns a list of strings indicating the file names of the files on the file
system:
import os

my_files = os.listdir()
To delete a file use the
remove function. It takes a string representing
the file name of the file you want to delete as an argument, like this:
import os

os.remove('filename.txt')
Finally, sometimes it’s useful to know how big a file is before reading from
it. To achieve this use the
size function. Like the
remove function, it
takes a string representing the file name of the file whose size you want to
know. It returns an integer (whole number) telling you the number of bytes the
file takes up:
import os

file_size = os.size('a_big_file.txt')
It’s all very well having a file system, but what if we want to put or get files on or off the device?
Just use the
microfs utility!
File Transfer
If you have Python installed on the computer you use to program your BBC
micro:bit then you can use a special utility called
microfs (shortened to
ufs when using it in the command line). Full instructions for installing
and using all the features of microfs can be found
in its documentation.
Nevertheless it’s possible to do most of the things you need with just four simple commands:
$ ufs ls
story.txt
The
ls sub-command lists the files on the file system (it’s named after
the common Unix command,
ls, that serves the same function).
$ ufs get story.txt
The
get sub-command gets a file from the connected micro:bit and saves it
into your current location on your computer (it’s named after the
get
command that’s part of the common file transfer protocol [FTP] that serves the
same function).
$ ufs rm story.txt
The
rm sub-command removes the named file from the file system on the
connected micro:bit (it’s named after the common Unix command,
rm, that
serves the same function).
$ ufs put story2.txt
Finally, the
put sub-command puts a file from your computer onto the
connected device (it’s named after the
put command that’s part of FTP that
serves the same function).
Mainly main.py
The file system also has an interesting property: if you just flashed the
MicroPython runtime onto the device then when it starts it’s simply waiting
for something to do. However, if you copy a special file called
main.py
onto the file system, upon restarting the device, MicroPython will run the
contents of the
main.py file.
Furthermore, if you copy other Python files onto the file system then you can
import them as you would any other Python module. For example, if you had
a
hello.py file that contained the following simple code:
def say_hello(name="World"):
    return "Hello, {}!".format(name)
...you could import and use the
say_hello function like this:
from microbit import display
from hello import say_hello

display.scroll(say_hello())
Of course, it results in the text “Hello, World!” scrolling across the
display. The important point is that such an example is split between two
Python modules and the
import statement is used to share code.
Note
If you have flashed a script onto the device in addition to the MicroPython
runtime, then MicroPython will ignore
main.py and run your embedded
script instead.
To flash just the MicroPython runtime, simply make sure the script you
may have written in your editor has zero characters in it. Once flashed
you’ll be able to copy over a
main.py file. | https://microbit-micropython-hu.readthedocs.io/hu/latest/tutorials/storage.html | CC-MAIN-2019-30 | refinedweb | 1,581 | 70.73 |
The old IDE
React encourages you to lay out and compose your components as they appear on the page. Our workspace started out looking something like this:
<Workspace>
  <Split>
    <Editor />
    <Console />
  </Split>
</Workspace>
But this lacks configurability. Every language needs a slightly different configuration. Some have tabs, a console, a web viewer, or language-specific components like Python turtle. Additionally, every language has a different engine powering it, with its own interface and set of capabilities. All this configuration logic used to get crammed into the top-level component with ever-increasing branching logic, and runtime configuration had to be explicitly written for every language. Modifying the layout for one language led to more hardcoded logic and single-use components, which in turn made changing one component a game of updating and testing every possible parent.
Rewrite
Starting out we had a few goals, most importantly, it needs to load quickly even over a slow connection; server-side rendering is essential here. It also must be easily extensible, and be configurable enough to take the shape of any workspace environment we need in the future. We also wanted to avoid rewriting as much code as possible from our old environment. Looking around we didn't find any existing solution that quite fit the bill, most environments afforded us too little customization, were too hefty, and server-side rendering was never going to happen without major changes to the core.
We ended up settling on building a new lightweight core (around 3000 LOC) to achieve this. It primarily functions as a window manager and a middleman for events. All components are bundled up into a plugin which can expose a render target or internal state management. This was achieved using React and Redux (although the general design does not depend on them).
Plugins
Every workspace starts out empty (a valid state). We bootstrap the initial state by dispatching actions dictated by the configuration. A nice side effect of this is all configurations must be able to be reached at runtime. This is great for debugging: open Redux Devtools you can see the state evolving from the point of creation and can easily time-travel back and forward. Furthermore, this makes debugging production errors a lot easier -- Redux actions tell you the whole story!
All a plugin have to do to build up its state is expose a reducer. Here is what a simple plugin to display the running status might look like:
const Component = ({ running }) => <div>{running ? 'running' : 'stopped'}</div>;

function reducer(state = { running: false }, action) {
  switch (action.type) {
    case 'EVAL_STARTED':
      return { running: true };
    case 'EVAL_ENDED':
      return { running: false };
  }
  return state;
}

export { Component, reducer };
When the workspace loads an instance of this plugin it will mount the reducer within its own state.
Plugins can also register their own middleware, we call this the receiver. So
now that we have a plugin that shows the evaluation status, we need one to
actually do the evaluation. For that we simply expose a receiver and listen on
an
EVAL_CODE action (which might be dispatched by say a "run" button):
function receiver(dispatch, action) {
  switch (action.type) {
    case 'EVAL_CODE':
      dispatch(evalCode(action.code));
  }
}

function evalCode(code) {
  return (dispatch) => {
    dispatch({ type: 'EVAL_STARTED' });
    eval(code); // don't actually do this
    dispatch({ type: 'EVAL_ENDED' });
  };
}

export { receiver };
These plugins work together when loaded but have no direct dependency on each other. Our evaling plugin could easily be swapped out with something else, say something that executes the code on the server instead of the client.
Layout
To actually render React components we mount the window manager as the workspace's root and pass it a tree that looks something like this:
┌─────┐ │split│ └─────┘ / \ ┌────┐ ┌───────┐ │tabs│ │console│ └────┘ └───────┘ / | \ / | \ ┌───────┐┌───────┐┌───────┐ │editor1││editor2││editor3│ └───────┘└───────┘└───────┘
Each node is either a built-in window managing component (tabs and splits) or the instance id of a plugin. All relevant state for the layout nodes is contained within the layout (i.e. tabs have an active tab, splits have a position), and all changes to the layout can be dispatched via built-in actions. This makes it trivial for any plugin to make changes to the layout. With the layout outside of the plugin's control and all state handled within the plugin, it becomes very easy to drop any plugin anywhere.
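The post doesn't show the concrete shape of these nodes, but as a rough sketch, a serializable layout state mirroring the tree diagram above might look like this (the node shapes and field names are my own guesses, not Repl.it's actual schema):

```javascript
// Hypothetical serializable layout state mirroring the diagram above.
const layout = {
  type: 'split',
  position: 0.5,
  children: [
    {
      type: 'tabs',
      active: 0,
      children: [
        { type: 'plugin', id: 'editor1' },
        { type: 'plugin', id: 'editor2' },
        { type: 'plugin', id: 'editor3' },
      ],
    },
    { type: 'plugin', id: 'console' },
  ],
};

// Because the tree is plain data, checks like "is this plugin mounted?"
// are simple recursive walks over the node objects.
function has(node, pluginId) {
  if (node.type === 'plugin') return node.id === pluginId;
  return node.children.some((child) => has(child, pluginId));
}

console.log(has(layout, 'console')); // true
```

Keeping the tree as plain data is also what makes it trivial to store in Redux and replay with time-travel debugging.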
We can easily do something like (if we're being silly enough):
Because the layout is also a Redux state we can easily change it at runtime. Here, for example, is how the debugger plugin shows itself below the console:
function show({ wid, pluginId }) {
  return (dispatch, getState) => {
    const { layout, parts } = getState().workspace[wid];
    if (!Layout.has(layout, pluginId)) {
      const { path, to } = Layout.insert(
        layout,
        pluginId,
        'below',
        Layout.byName(layout, parts, 'console') || Layout.root(),
      );
      dispatch(
        updateLayout({
          wid,
          path,
          to,
        }),
      );
    }
  };
}

function receiver(wid, pluginId, action) {
  switch (action.type) {
    case 'DEBUG_STARTED':
      return show({ wid, pluginId });
    case 'DEBUG_ENDED':
      return hide({ wid, pluginId });
  }
  return null;
}

export { Debugger, reducer, receiver };
Server-side rendering
One of the worst things about the modern web is the spinner (or is it throbber?)
so for the rewrite we decided we'd never do that and try to render as much as
possible on the server and show something on the screen as early as possible. For that we used Next.js, which makes
server-side rendering a lot less painful; bootstrapping the initial state from
the server is especially nice. For the most part we try to have parity between
server and client but some components are so DOM-specific that it's almost
impossible to render on the server without including something like JSDom. For
this we have a property, static, that the window manager sends to all plugins
to inform them that, if they need to, they can render a
static version of themselves (for now it's only the editor that requires this).
Conclusion
Going forward we're focusing on making the core framework as simple and as correct as possible. Flowtype made Redux a whole lot easier to reason about because every action has a clear definition, but we think it can be better and are exploring rewriting the core framework in ReasonML. We're hoping to open-source this in the future and open it up for anyone to write plugins for.
This rewrite already unlocked for us a lot of features that can now be easily implemented. Look out for a filetree component and a unit test runner coming to an online REPL near you. | https://repl.it/site/blog/ide | CC-MAIN-2019-09 | refinedweb | 1,075 | 52.19 |
Offline Syncing in Ionic 2 with PouchDB & CouchDB
By Josh Morony
Since I started travelling around Australia I’ve tried to keep things pretty minimal. I got rid of all the clothes I didn’t need, shoes, kitchen ware, furniture, cleaning products and so on. Basically, I tried to get rid of everything I didn’t need… yet I still have 2 iPhones, an iPad, 3 Android phones, a Mac and a PC.
To be fair, I’m a mobile developer, so although I probably don’t need these they certainly make my life easier. Other people might not have that many devices, but I think it would be pretty rare to find someone who just has a single “smart” device. So if you’re building an app that only stores data locally on one device, you might be causing issues for a user who would like to use your app on their iPhone and their iPad.
The solution is to store the data remotely on a server somewhere, which will allow the user to access the data from any device. However, this introduces a new problem: not everybody is connected to the Internet all the time (I should know, I’m writing this from the middle of the Australian outback). So we’re going to tackle two issues:
- How to store data remotely, and;
- How to provide offline functionality with online syncing
In this tutorial, we will be creating a todo list application called ‘ClouDO’. Unlike a previous tutorial which only stored the todo data locally, this todo application will store the data in a remote database and locally. The local data will be synced to the remote database when an Internet connection is available, and any new data from the remote database will also be synced to the local database if there is new data available.
The result will be a todo application where the user can access their todos from any device, and they will even be able to view and edit the todos on their device even when no Internet connection is available. In the end, it’s going to look like this:
Syncing offline and online data sounds like quite the task (and it is), but two bits of technology are going to make this process pretty straightforward for us: CouchDB and PouchDB.
CouchDB is a document style NoSQL database that is built for the web. It is very similar to MongoDB which we used in Building a Review App with Ionic 2, MongoDB & Node. Perhaps the biggest advantage CouchDB has over MongoDB is its ability to easily replicate databases across devices (CouchDB can run just about anywhere), which is great for facilitating offline functionality with online sync. If this is not a requirement for your application, then MongoDB may be the better choice (which has better support for ad hoc queries).
PouchDB was inspired by CouchDB (hence the name), but it is designed for storing local data and then syncing to a CouchDB database when a connection is available. In this case we will be using CouchDB as the remote database, but you don't have to use CouchDB specifically; you can use any database that supports the CouchDB protocol (of which there are many).
Introduction to CouchDB
We’re not going to go into too much detail here because we will cover most of what we need to know while building out the app. I do want to cover some key concepts, though.
CouchDB uses a REST API for interfacing with the database, which means we use HTTP methods like PUT, POST and DELETE to interact with the database.
All documents stored in a CouchDB database must have an
_id field, which you can specify manually or CouchDB can generate it automatically for you. After creating a document, CouchDB will also assign it a
_rev (revision) field, which changes each time the document is changed. In order to update a document you must supply both the
_id and the
_rev, if either of these are incorrect then the update will fail. This is the key concept to ensure data is not corrupted in a CouchDB database. If you were to supply the
_rev you retrieved, but the document had been updated by someone else after you retrieved that
_rev, the update would fail since there has already been an update that we didn’t know about.
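The effect of that revision check is easy to see with a toy in-memory model (a purely illustrative Python sketch; this is not CouchDB's API, which works over HTTP):

```python
# Toy model of CouchDB-style optimistic concurrency (illustrative only).
class ToyDB:
    def __init__(self):
        self.docs = {}  # _id -> current document, each carrying a _rev counter

    def put(self, doc):
        existing = self.docs.get(doc["_id"])
        if existing and existing["_rev"] != doc.get("_rev"):
            # The document changed after we fetched it, so the write is rejected.
            raise Exception("conflict")
        doc = dict(doc, _rev=existing["_rev"] + 1 if existing else 1)
        self.docs[doc["_id"]] = doc
        return doc

db = ToyDB()
v1 = db.put({"_id": "a", "title": "hello"})           # stored as _rev 1
v2 = db.put({"_id": "a", "_rev": 1, "title": "hi"})   # ok: matching _rev -> _rev 2
try:
    db.put({"_id": "a", "_rev": 1, "title": "stale"})  # stale _rev: rejected
except Exception as e:
    print(e)  # conflict
```

Real CouchDB reports the same failure as an HTTP 409 Conflict response.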
Another key concept is the replication. A CouchDB database can easily be replicated to another database (which we will be making use of in this tutorial), and this replication can be:
- One way (PouchDB database is replicated to the CouchDB database)
- Bi-directional (PouchDB database is replicate to the CouchDB database, and vice versa)
- Ad hoc (replication is triggered manually)
- Continuous (database is continually replicated as necessary, changes are replicated instantly)
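As a rough mental model of those modes (my own illustrative Python sketch, not PouchDB's implementation; the names below are made up), replication just applies each side's newer documents to the other side, and a continuous sync simply repeats this on every change:

```python
# Toy model of bi-directional replication between two document stores.
def replicate(source, target):
    # Copy any document the target is missing or holds at an older revision.
    for _id, doc in source.items():
        if _id not in target or target[_id]["_rev"] < doc["_rev"]:
            target[_id] = dict(doc)

local = {"t1": {"_rev": 2, "title": "buy milk"}}
remote = {"t1": {"_rev": 1, "title": "milk"}, "t2": {"_rev": 1, "title": "call mum"}}

# "sync" = replicate in both directions; "continuous" would rerun this
# whenever either side changes.
replicate(local, remote)
replicate(remote, local)

print(local["t2"]["title"], remote["t1"]["title"])  # call mum buy milk
```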
In this tutorial, we will be using PouchDB to interface with CouchDB, not CouchDB itself, but the concepts are the same. If you wanted to, you could also just interact with the CouchDB database directly using the REST API rather than using PouchDB (but then you would lose the awesome offline sync functionality).
We will be setting up a bi-directional and continuous replication, so as soon as we make a change to our local data it will be reflected in the remote database, and as soon as we make a change to the remote data it will be replicated in the local data (it’s quite cool to play around with).
Setting up CouchDB
The first thing we need to do is get our CouchDB database set up. We will be using a locally installed version of CouchDB for ease of development, so you won’t be able to access the data outside of the machine you are developing on. However, if you are following this tutorial for a real application you are building, you will just need to set up CouchDB on your server, rather than on your local machine.
To set up CouchDB, all you need to do is head to the CouchDB website, download it, and then it should be as simple as extracting the files and opening the CouchDB application.
Once you have it installed you should be able to navigate to http://localhost:5984/_utils
or http://127.0.0.1:5984/_utils
to open up Futon, which is CouchDB's built in administration interface. Which will look like this:
NOTE: If you are running this on a server and not a local development machine, make sure to fix the “Welcome to the admin party!” message.
What we need to do now is create a new database for our application. Click the
Create Database option in the top left to create a new database called
cloudo. Once you create it, you will automatically be taken inside of the database where you can create a new document. If you click ‘Add Document’ you will see something like this:
As I mentioned earlier, you can either manually create your own
_id field or accept the default from CouchDB. We will just use the default, so click the tick icon to the right to accept the
_id.
Once you have done that you can click the
Add Field button to add some fields to this document. Create a new field called
title and give it a value of
hello. Once you are done click
Save Document. Now you will see something like this:
Notice that the
_rev field has been automatically added now, and it is prefixed with
1- to indicate that this is the first revision of the document. If you now change the
hello value to
hello world and then click
Save Document again, that
_rev field will change to
2-xxxx.
If you would like, you can create some more documents in this database but that’s all we need to do for our application. We will be creating the ability to add documents through the application so there’s no need to manually modify anything in here.
Generating a new Ionic 2 Application
Now that we have our backend ready to use, let’s start building the front end. We will start by generating a new Ionic 2 application.
Generate a new Ionic 2 application with the following command:
ionic start cloudo blank --v2
Once it has finished generating, we will need to switch into it.
Run the following command to make the new project your working directory:
cd cloudo
We are going to be creating a provider to handle interfacing with the database, so let’s create that now.
Create a Todos provider with the following command:
ionic g provider Todos
and we are also going to need to install PouchDB.
Install PouchDB with the following command:
npm install pouchdb --save
The TypeScript compiler doesn’t know what PouchDB is, so it’s going to throw some errors at us if we try to use it. To get around this we need to install the types for PouchDB.
Run the following command to install the types for PouchDB:
npm install @types/pouchdb --save --save-exact
In order to be able to make use of the provider we created, we will need to add it to our app.module.ts. Modify src/app/app.module.ts to reflect the following:

import { NgModule, ErrorHandler } from '@angular/core';
import { IonicApp, IonicModule, IonicErrorHandler } from 'ionic-angular';
import { MyApp } from './app.component';
import { HomePage } from '../pages/home/home';
import { Todos } from '../providers/todos';

@NgModule({
  declarations: [
    MyApp,
    HomePage
  ],
  imports: [
    IonicModule.forRoot(MyApp)
  ],
  bootstrap: [IonicApp],
  entryComponents: [
    MyApp,
    HomePage
  ],
  providers: [{provide: ErrorHandler, useClass: IonicErrorHandler}, Todos]
})
export class AppModule {}
There’s one more thing we need to do to set up our application. By default, we are going to run into CORS (Cross Origin Resource Sharing) issues when trying to interact with CouchDB. You may get an error like this:
XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access.
To fix this, you can simply install the
add-cors-to-couchdb package.
Run the following command:
npm install -g add-cors-to-couchdb
Then run the following command:
add-cors-to-couchdb
If it has worked, you should get a message saying “Success”. This will configure CouchDB correctly, and you only ever need to run this once (if you create another project using CouchDB you won’t need to do this again).
Creating the Front End
Now we’ve got all the theory and configuration out of the way, it’s time to jump into the fun stuff. We’ll start off with the most interesting thing which is the Todos provider.
Modify todos.ts to reflect the following:
import { Injectable } from '@angular/core';
import PouchDB from 'pouchdb';

@Injectable()
export class Todos {

  data: any;
  db: any;
  remote: any;

  constructor() {

    this.db = new PouchDB('cloudo');

    this.remote = 'http://localhost:5984/cloudo';

    let options = {
      live: true,
      retry: true,
      continuous: true
    };

    this.db.sync(this.remote, options);

  }

  getTodos() {

  }

  createTodo(todo){

  }

  updateTodo(todo){

  }

  deleteTodo(todo){

  }

  handleChange(change){

  }

}
We’ve set up the basic structure for our provider above. The first thing to note is that we have imported PouchDB, which lets us create a new PouchDB database in the constructor, which we set up as
db. We will be able to use that database reference throughout this provider to interact with its various methods.
In the constructor we also define our remote database, which is just the address of our CouchDB installation followed by the name of the database, which in this case is cloudo. Then we call PouchDB's sync method, which will set up two way replication between our local PouchDB database and the remote CouchDB database. If you only wanted one way replication you could instead use this.db.replicate.to(this.remote).
Then we have defined a bunch of functions which will perform various tasks for our application, we are going to go through those one by one now.
Modify getTodos() to reflect the following:
getTodos() {

  if (this.data) {
    return Promise.resolve(this.data);
  }

  return new Promise(resolve => {

    this.db.allDocs({
      include_docs: true
    }).then((result) => {

      this.data = [];
      result.rows.map((row) => {
        this.data.push(row.doc);
      });

      resolve(this.data);

      this.db.changes({live: true, since: 'now', include_docs: true}).on('change', (change) => {
        this.handleChange(change);
      });

    }).catch((error) => {
      console.log(error);
    });

  });

}
This function will return a promise containing the data from our database. If the data has already been fetched then it just returns it right away, otherwise it fetches the data from our database. We use the
this.db.allDocs method to return all of the documents in our database, and then we process the result by pushing all of the data into our
this.data array.
We also set up a
db.changes listener here, which will trigger every time there is a change to the data (i.e. if we manually edited the data in Futon). It will send the change through to the
handleChange function, which we will define shortly.
Modify the handleChange function to reflect the following:
handleChange(change){

  let changedDoc = null;
  let changedIndex = null;

  this.data.forEach((doc, index) => {
    if(doc._id === change.id){
      changedDoc = doc;
      changedIndex = index;
    }
  });

  //A document was deleted
  if(change.deleted){
    this.data.splice(changedIndex, 1);
  }
  else {

    //A document was updated
    if(changedDoc){
      this.data[changedIndex] = change.doc;
    }

    //A document was added
    else {
      this.data.push(change.doc);
    }

  }

}
This function is provided information about the change that occurred. It first looks through our local data for a document with the same _id as the changed document, and then either removes it from this.data (if the change was a deletion), replaces it (if it was an update), or pushes the new document into this.data (if it was an addition).
Now if there is a change in the remote data, we are going to see it reflected immediately in our local
this.data array. Let’s finish off the rest of the functions now.
Modify the createTodo, updateTodo, and deleteTodo functions to reflect the following:
createTodo(todo){
  this.db.post(todo);
}

updateTodo(todo){
  this.db.put(todo).catch((err) => {
    console.log(err);
  });
}

deleteTodo(todo){
  this.db.remove(todo).catch((err) => {
    console.log(err);
  });
}
These functions are quite straightforward: we simply call PouchDB's methods to create, delete, or update a document. Remember how I said that you need to provide both the
_id and
_rev when updating a document? You can just supply these manually, or you can do what we have done here and just provide the whole document (which will contain both the
_id and the
_rev).
PouchDB does all the heavy lifting behind the scenes for us, but you could also interact with CouchDB directly by using the Http service and the POST, PUT, and DELETE methods.
Now all we need to do is create our interface, which is going to be a simple one page list. We will handle adding new todos and updating todos with Alerts.
Modify home.ts to reflect the following:
import { Component } from "@angular/core";
import { NavController, AlertController } from 'ionic-angular';
import { Todos } from '../../providers/todos';

@Component({
  selector: 'page-home',
  templateUrl: 'home.html'
})
export class HomePage {

  todos: any;

  constructor(public navCtrl: NavController, public todoService: Todos, public alertCtrl: AlertController) {

  }

  ionViewDidLoad(){
    this.todoService.getTodos().then((data) => {
      this.todos = data;
    });
  }

  createTodo(){

    let prompt = this.alertCtrl.create({
      title: 'Add',
      message: 'What do you need to do?',
      inputs: [
        {
          name: 'title'
        }
      ],
      buttons: [
        {
          text: 'Cancel'
        },
        {
          text: 'Save',
          handler: data => {
            this.todoService.createTodo({title: data.title});
          }
        }
      ]
    });

    prompt.present();

  }

  updateTodo(todo){

    let prompt = this.alertCtrl.create({
      title: 'Edit',
      message: 'Change your mind?',
      inputs: [
        {
          name: 'title'
        }
      ],
      buttons: [
        {
          text: 'Cancel'
        },
        {
          text: 'Save',
          handler: data => {
            this.todoService.updateTodo({
              _id: todo._id,
              _rev: todo._rev,
              title: data.title
            });
          }
        }
      ]
    });

    prompt.present();

  }

  deleteTodo(todo){
    this.todoService.deleteTodo(todo);
  }

}
In this class we are importing our Todos service and loading in the data from it. We create two methods for creating and updating todos through an Alert, which both call our Todos service, as well as a method for deleting todos. Notice that when we create the todo we create a JSON object containing only the title, but when we update it we supply the
_id,
_rev, and
title. When deleting we just pass through the entire document from our template.
Now let’s get the template sorted
Modify home.html to reflect the following:
<ion-header no-border>
  <ion-navbar>
    <ion-title>
      ClouDO
    </ion-title>
    <ion-buttons end>
      <button ion-button icon-only (click)="createTodo()"><ion-icon name="add"></ion-icon></button>
    </ion-buttons>
  </ion-navbar>
</ion-header>

<ion-content>
  <ion-list no-lines>
    <ion-item-sliding *ngFor="let todo of todos">
      <ion-item>
        {{todo.title}}
      </ion-item>
      <ion-item-options>
        <button ion-button icon-only (click)="updateTodo(todo)">
          <ion-icon name="create"></ion-icon>
        </button>
        <button ion-button icon-only (click)="deleteTodo(todo)">
          <ion-icon name="trash"></ion-icon>
        </button>
      </ion-item-options>
    </ion-item-sliding>
  </ion-list>
</ion-content>
Pretty straightforward here, we just create a simple list to display the data, with sliding items to reveal both the
Edit and
Delete functions.
We’re pretty much done now but let’s add a bit of styling to pretty things up a bit.
Modify home.scss to reflect the following:
.ios, .md {

  page-home {

    .scroll-content {
      background-color: #ecf0f1;
      display: flex !important;
      justify-content: center;
    }

    ion-list {
      width: 90%;
    }

    ion-item-sliding {
      margin-top: 20px;
      border-radius: 20px;
    }

    ion-item {
      border: none !important;
      font-weight: bold !important;
    }

  }

}
Modify the $colors map in src/theme/variables.scss to reflect the following:
$colors: (
  primary:    #95a5a6,
  secondary:  #3498db,
  danger:     #f53d3d,
  light:      #f4f4f4,
  dark:       #222,
  favorite:   #69BB7B
);
You should now have something that looks like this:
Go ahead and add some items to your todo list, and what’s even cooler is that you can open up Futon again, change some data in the database, and watch the data update live in your app!
Summary
I think it’s pretty clear to see the benefit that replicating databases and offline syncing provides, and the PouchDB + CouchDB combo makes this really easy to pull off. In a later tutorial we will cover how to do some more advanced things with CouchDB (like using MapReduce). | https://www.joshmorony.com/offline-syncing-in-ionic-2-with-pouchdb-couchdb/ | CC-MAIN-2020-10 | refinedweb | 2,790 | 59.53 |
Rob Spoor wrote:The only way I can think of is to create your own custom cursor, using Toolkit.createCustomCursor. The Image required for that can contain text if you want to.
Rob Spoor wrote:I've already shown you the method you need. Now all you need to do is create an Image. You can use BufferedImage for that, or one of many, many external image manipulation programs.
Rob Spoor wrote:An image can contain text as well. And please UseCodeTags from now on.
Punit Jain wrote:
Rob Spoor wrote:An image can contain text as well. And please UseCodeTags from now on.
thank you, i will use code tags from now on.
Let me explain what I need to do exactly. I am trying to create a desktop application for Java test takers.
Here I require the user to drag his/her answer from the choices and drop it in the proper place. He/she does not need to select anything, only drag and drop. This is about my desktop application, just like the simulators provided by Whizlabs.
What extra/update I want in this is:
when the user drags the component/their answer, the text which is the answer should also come along as the tooltip/cursor while the user drags, and remain until the user drops.
What I am thinking to achieve this with is:
I wrote some code for drag and drop, just as an example, not the actual code:
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class scjp extends TransferHandler
{
    JTextField txtField;
    JRadioButton lbl = new JRadioButton("Hello");

    public static void main(String[] args)
    {
        scjp sdd = new scjp();
        transfer th = new transfer();
    }

    public scjp()
    {
        MouseListener ml = new MouseAdapter()
        {
            public void mousePressed(MouseEvent e)
            {
                JComponent jc = (JComponent) e.getSource();
                TransferHandler th = jc.getTransferHandler();
                th.exportAsDrag(jc, e, TransferHandler.COPY);
            }
        };

        MouseMotionListener m2 = new MouseAdapter()
        {
            public void mouseDragged(MouseEvent e)
            {
            }
        };

        JFrame frame = new JFrame("SCJP");
        txtField = new JTextField(20);
        lbl.setTransferHandler(new TransferHandler("text"));
        lbl.addMouseListener(ml);
        lbl.addMouseMotionListener(m2);

        JPanel panel = new JPanel();
        panel.add(txtField);
        frame.add(lbl, BorderLayout.CENTER);
        frame.add(panel, BorderLayout.NORTH);
        frame.setSize(400, 400);
        frame.setVisible(true);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setResizable(false);
    }
}
The code is working properly, but when I drag the radio button, it shows a rectangle and a plus sign below the cursor, and what I want there is the string/text which is the radio button text.
For this I am thinking to do it using the cursor, but I am still not achieving it. Can you please help me with this? This is my actual problem.
Thank You.
Metric charts are a special type of Insights visualization that display metric timeslice data reported to New Relic by your agent(s). Since metric charts use a different data type, and are not based on NRQL, they have different display and customization options than event-based NRQL charts.
Add metric charts to dashboard
Add metric charts to Insights dashboards to:
- Chart, organize, and monitor metric data in a centralized place
- Track metrics that the New Relic UI does not chart by default
Add metric charts to an Insights dashboard via the Metric Explorer in New Relic Insights or via the standard Add to dashboard option on any supported New Relic UI metric chart.
You can only chart one metric namespace per chart.
Customize metric charts
To customize a metric chart from an Insights dashboard:
- From the dashboard where the chart is, select the chart's menu (ellipsis) icon.
- Customize the chart options.
- Select Save changes.
Use any of the available options to customize metric charts.
Filter on agent metric charts
If your dashboard has filtering enabled, you can still filter on NRQL charts. However, filtering does not apply to any agent metric charts on the dashboard. | https://docs.newrelic.com/docs/insights/use-insights-ui/manage-dashboards/add-customize-metric-charts | CC-MAIN-2019-39 | refinedweb | 198 | 58.21 |
Re: "Gzip for Molecular Similarity"
In catching up in the world of chemical informatics blogs, I see Noel commented on my post last month about SMILES parsing and compression. He also links to commentary by Rajarshi on that paper.
I started to add a comment to that system but it grew, and the markup language support is limited on Blogger/Blogspot, so I moved it here. Imagine the rest of this essay in the context of "adding a comment to Rajarshi's post."
My response to Rajarshi's commentary on Melville et al.
I also tried out the Melville et al. algorithm, though I didn't compare the results to an existing similarity algorithm or result. I was trying to get an idea of when it would fail, and what the implementation would be like.
The paper says
It is easy to see from eq 2 that the form of the NCD makes it impossible for negative distances to occur, as file sizes are always positive and compressing two files together will never make the output smaller than the compressed size of one file.
This isn't correct. For example,
% echo -n "CCO.CCO" | gzip -f | wc -c
27
% echo -n "CCO.CCO.CCO" | gzip -f | wc -c
26
- or without a "." separator -
% echo -n "CCOCCO" | gzip -f | wc -c
26
% echo -n "CCOCCOCCO" | gzip -f | wc -c
25

However, failures like that will be rare in real compounds.
I took the first 9999 compounds in the NCI SMILES strings distributed with OpenBabel and tested all of the triangle inequalities. After computing the 4,999,500 similarities and 999,700,029,999 tests I found 29 violations. In other words, the zlib/gzip approach rarely violates the triangle inequality ... and computers are amazingly fast these days.
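A minimal version of that brute-force check (my own sketch, not the original script; it recomputes compressions naively) looks like this:

```python
import itertools
import zlib

def ncd(a, b):
    # Normalized compression distance based on zlib compressed sizes.
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = min(len(zlib.compress(a + b)), len(zlib.compress(b + a)))
    return (cab - min(ca, cb)) / max(ca, cb)

def triangle_violations(smiles):
    # Count orientations of d(a,c) <= d(a,b) + d(b,c) that fail.
    violations = 0
    for x, y, z in itertools.combinations(smiles, 3):
        dxy, dyz, dxz = ncd(x, y), ncd(y, z), ncd(x, z)
        for ab, bc, ac in [(dxy, dyz, dxz), (dxy, dxz, dyz), (dxz, dyz, dxy)]:
            if ac > ab + bc + 1e-12:  # small tolerance for float noise
                violations += 1
    return violations

smiles = [b"CCO", b"c1ccccc1O", b"CC(=O)Nc1ccc(O)cc1", b"CCN(CC)CC"]
print(triangle_violations(smiles))
```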
Using zlib instead of gzip
I did not do those tests with gzip on the command-line. The overhead of starting gzip would have been too much, and inelegant. You (Rajarshi) instead used Python's gzip module, which would save a lot of the time overhead. My choice was to skip gzip completely and go directly to zlib, which gzip uses to do the compression. The gzip output format always includes header and trailer fields (see the gzip spec) so the zlib output should be consistently "better."
Here's Python code. Note that the second "getsize" replaces the first, which I keep for posterity's sake.
import subprocess
import zlib

# calling gzip on the command-line (slow)
def getsize(s):
    x = subprocess.Popen(["gzip", "-f"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    x.stdin.write(s)
    x.stdin.close()
    return len(x.stdout.read())

# using zlib directly
def getsize(s):
    return len(zlib.compress(s))

def dist(a, b):
    n1 = getsize(a); n2 = getsize(b)
    return (min(getsize(a+b), getsize(b+a)) - min(n1, n2)) / float(max(n1, n2))

and for fun, Ruby code that starts to do the same thing:
require 'zlib'
z = Zlib::Deflate.new()
s = z.deflate("c1ccccc1O", Zlib::FINISH)
s.size

I included this Ruby code because the authors of the paper used Ruby for their testing. The zlib module is trivially available to Ruby, Python and Java and likely Perl as well, as part of the normal install. I don't think there's big need to go through the command-line version to do the calculations.
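The fixed header/trailer overhead described above is easy to confirm with Python's standard modules (my own snippet, not from the post):

```python
import gzip
import zlib

s = b"c1ccccc1O"
raw = len(zlib.compress(s, 9))      # zlib container: 2-byte header + 4-byte Adler-32
wrapped = len(gzip.compress(s, 9))  # gzip container: 10-byte header + 8-byte trailer
print(raw, wrapped)  # same deflate payload, but the gzip wrapper is larger
```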
Failures
As I said, some compounds fail. Here are a few:
x = S(Sc1c([N+](=O)[O-])cc([N+](=O)[O-])cc1)c1c([N+](=O)[O-])cc([N+](=O)[O-])cc1
y = N(C(C)=O)c1ccc(C(CCl)=O)cc1
z = C(C(C)=O)c1ccc([N+](=O)[O-])cc1
dist(x,y), dist(x,z), dist(z,y)
  using zlib: 0.714285714286 0.27027027027 0.432432432432
  using gzip: 0.531914893617 0.204081632653 0.326530612245

x = S(Cc1ccccc1)(=O)(=O)c1ccc(C)cc1
y = O(CCO)c1c(Cl)cc(Cl)cc1
z = C(CC)(=O)c1ccc(C)cc1
dist(x,y), dist(x,z), dist(z,y)
  zlib: 0.642857142857 0.214285714286 0.428571428571
  gzip: 0.45 0.15 0.3

x = C([C@@H](C(OCC)=O)C(C)=O)C(OCC)=O
y = C([C@@](C(C)(C)C)(C(=O)O)C)C(C)(C)C
z = C([C@@H](C(=O)O)C)C(C)(C)C
dist(x,y), dist(x,z), dist(z,y)
  zlib: 0.58064516129 0.34375 0.21875
  gzip: 0.418604651163 0.25 0.159090909091
Why use compression as a similarity score?
In your blog you (Rajarshi) said:
it's not clear as to why this would be useful as it (intuitively) seems that without a strict metric function, the resultant clustering would be unstable. This might be something interesting to look at.
The paper points out that you don't need any special chemical informatics tools as gzip is included with many machines (well, except perhaps MS Windows). You can do a perhaps somewhat naive but reasonable analysis using tools included these days in most languages. Though since OpenBabel's fp implementations are easily available, this isn't a strong reason.
As for the applicability in clustering, what I bear in mind is that converting a molecule into a bitstring is only an approximation. Generating a perfect clustering (not that one exists) from the fp doesn't mean that the chemistry clusters correctly. If the compressibility value were a better estimate of chemical similarity than a binary fingerprint then a fault-tolerant clustering algorithm might still give good results.
This paper and LINGOS suggest that syntax alone (at the SMILES level) gives some chemical understanding, without needing software with a better understanding of chemistry. It's putting some doubt into my mind that bitstring fingerprints do a good job of estimating chemical similarity. Why have all the processing overhead for just a few percent better results?
One thing I've been thinking of is that the gzip/zlib algorithm works by identifying repeated string patterns. SMILES linearizes the structure so in a rough sense zlib is generating many of the subpaths of length roughly 1-5 and comparing the distribution spectrums between the two compounds. That explains[*] why bzip2 (using the block transform) doesn't work as well - it loses some of that neighbor information. Plus, bzip2 works best on a large block. SMILES strings are likely too small. The paper does talk about "padding the string to twice its length" by repeating the basic structure. This would help minimize some of those length problems.
[*] Explanations can be coincidence. Correlation does not imply causation. Past performance is not indicative of future results. But I think it's a reasonable interpretation.
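That interpretation can be poked at directly: a few lines of Python (my own illustration, not from the paper) enumerate the length-1 to length-5 substrings of two SMILES strings and compare the multiset overlap:

```python
from collections import Counter

def ngrams(smiles, max_n=5):
    # Multiset of all substrings of length 1..max_n, roughly the
    # repeated patterns a dictionary coder like zlib can exploit.
    c = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(smiles) - n + 1):
            c[smiles[i:i + n]] += 1
    return c

def overlap(a, b):
    # Tanimoto-like overlap of the two substring multisets.
    ca, cb = ngrams(a), ngrams(b)
    return sum((ca & cb).values()) / float(sum((ca | cb).values()))

print(overlap("CCOCC", "CCOCN"))     # similar strings: high overlap
print(overlap("CCOCC", "c1ccccc1"))  # dissimilar strings: lower overlap
```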
Continuing the thread from my earlier update on the “Fubu Reboot.” In an MVC web application (I think this really could apply to WebForms as well, but not to the same extent) you frequently need to resolve the Url that points to a specific subject. In our application at Dovetail, we have the route pattern: “sites/edit/{Id}” for the page that edits a “Site” object. When we place links in the views for a given “Site” object, we need to replace “{Id}” in the route with the value of the Site.Id property. In another circumstance, we have the routing pattern “query/for/{QueryName}/{QueryParam1}” for a controller action that takes in this object as its single argument:
// The [RouteInput] attributes are *a* way to direct Fubu to
// make these properties be automatically scanned as part of the
// route pattern.
// This should only be necessary in exceptional cases.
// My hope is that conventions take you 90% of the way home
public class QueryForRequest
{
    [RouteInput]
    public string QueryName { get; set; }

    [RouteInput("")]
    public string QueryParam1 { get; set; }
}
At many, many times in our application we need to determine the Url string that points to a particular subject or occasionally to a controller action. At the same time, it would be very, very nice to keep the individual controllers and views ignorant of exactly what those Url patterns happen to be in order to make them easier to change. In FubuMVC, that’s all done with the IUrlRegistry interface that is automatically placed into your IoC container:
// This service is injected into your IoC tool of choice as a singleton
// to give you access to url’s in a type safe way
// Please note that this implementation in no way, shape, or form
// locks you into a rigid url structure
public interface IUrlRegistry
{
    string UrlFor(object model);
    string UrlFor(object model, string category);
    string UrlFor<TController>(Expression<Action<TController>> expression);

    string UrlForNew<T>();
    string UrlForNew(Type entityType);

    // Not sure these two methods won't get axed. They could just be extension methods in Dovetail code
    string UrlForPropertyUpdate(object model);
    string UrlForPropertyUpdate(Type type);

    string UrlFor(Type handlerType, MethodInfo method);
}
In the FubuMVC model, we’re basically assuming that controller actions (Fubu actions don’t have to be on special Controller classes, btw) take in 0 or 1 objects as their single input. Taking another step, if you make the input model types unique per controller action, FubuMVC can actually use that type to “know” what controller action receives that type. Therefore, when I need the Url string that points to a particular Site object, I just pass in that Site object to the UrlRegistry.For(object) method. In the more complex case of the QueryForRequest object above, I do the exact same thing – even though QueryForRequest clearly points to a different Route. For controller actions that don’t take in any input arguments (think HomeController.Index()), you can still use UrlRegistry.UrlFor<HomeController>(x => x.Index()).
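The mechanism is easy to sketch outside of .NET. Here is a toy version (Python for brevity; the names are mine, not FubuMVC's API): register each input-model type against its exact route pattern, then fill the route tokens from the instance's properties at lookup time.

```python
# Toy type-keyed url registry (illustrative, not FubuMVC's actual API).
import re

class UrlRegistry:
    def __init__(self):
        self._routes = {}  # input model type -> exact route pattern

    def register(self, model_type, pattern):
        self._routes[model_type] = pattern

    def url_for(self, model):
        # Look up the exact pattern by the model's type, then substitute
        # each {Token} with the matching attribute value.
        pattern = self._routes[type(model)]
        return re.sub(r"\{(\w+)\}", lambda m: str(getattr(model, m.group(1))), pattern)

class QueryForRequest:
    def __init__(self, QueryName, QueryParam1):
        self.QueryName = QueryName
        self.QueryParam1 = QueryParam1

registry = UrlRegistry()
registry.register(QueryForRequest, "query/for/{QueryName}/{QueryParam1}")

url = registry.url_for(QueryForRequest("overdue", "30"))
print(url)  # query/for/overdue/30
```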
For those of you familiar with ASP.Net MVC’s model, here’s some other facts:
- The lookup of a Url for a Controller Type / Method combination makes no, let me repeat that, no assumptions about the Url pattern. SomethingController.Method1() does not imply that the Url is “something/method1.” FubuMVC is literally hashing the exact Route pattern for each Controller action and looks up the exact Url at runtime.
- The call to UrlFor() is completely independent of whether or not the Route in question was registered as part of the main application or as part of an Area/Slice. Unlike MVC2, when you're determining the Url to a certain controller action or input object, you do not have to worry about where that action lives. I think the MVC team thoroughly screwed up their Area support and I'd surely hope they scrap it for something better in MVC3. If you're using the MVC framework today, I'd strongly recommend you use the bits in MvcContrib instead of MVC2 for areas.
- No magic strings of any kind. Anywhere.
Lastly,
The Url resolution is statically typed. That's valuable to help prevent coding mistakes and Intellisense is also nice. Honestly, my favorite part is how much more traceable it makes the code rather than relying on strings. One quick CTRL-B shortcut takes you to the controller action behind the Url. In the case of finding the Url for an object, it's one more bounce with CTRL-ALT-F7 (one of my favorite R# shortcuts). In real usage, we have convenience methods on our view types to get at action urls, as well as consuming the IUrlRegistry in our FormFor() and ActionUrlFor() type HtmlHelpers.
Now that we mostly rely on IUrlRegistry.For(object), IUrlRegistry is relatively easy to mock in most tests. If your tests have to rely on an Expression in IUrlRegistry.UrlFor&lt;T&gt;(x => x.Method()), I’d go for some sort of hand-rolled stub.
Ok, this may be vague, so please ask questions. Also, this stuff isn’t locked down, so we can actually change it to suit. And I won’t even get all paternalistic on you, telling you that “UrlFor() doesn’t really mean UrlFor()” if you don’t like the API.
Question:
srand(time(NULL)); printf("%d", rand());
Gives a high-range random number (0-32000ish), but I only need about 0-63 or 0-127, though I'm not sure how to go about it. Any help?
Solution:1
rand() % (max_number + 1 - minimum_number) + minimum_number
So, for 0-65:
rand() % (65 + 1 - 0) + 0
(obviously you can leave the 0 off, but it's there for completeness).
Note that this will bias the randomness slightly, but probably not anything to be concerned about if you're not doing something particularly sensitive.
Solution:2
check here
For any of these techniques, it's straightforward to shift the range, if necessary; numbers in the range [M, N] could be generated with something like
M + rand() / (RAND_MAX / (N - M + 1) + 1)
Solution:3
Taking the modulo of the result, as the other posters have asserted, will give you something that's nearly random, but not perfectly so.
Consider this extreme example, suppose you wanted to simulate a coin toss, returning either 0 or 1. You might do this:
isHeads = ( rand() % 2 ) == 1;
Looks harmless enough, right? Suppose that RAND_MAX is only 2. It's much higher of course, but the point here is that there's a bias when you use a modulus that doesn't evenly divide RAND_MAX + 1. If you want high quality random numbers, you're going to have a problem.
Consider my example. The possible outcomes are:
rand()    freq.    rand() % 2
  0       1/3          0
  1       1/3          1
  2       1/3          0
Hence, "tails" will happen twice as often as "heads"!
Mr. Atwood discusses this matter in this Coding Horror Article
Solution:4
As others have noted, simply using a modulus will skew the probabilities for individual numbers so that smaller numbers are preferred.
A very ingenious and good solution to that problem is used in Java's
java.util.Random class:

public int nextInt(int n) {
    if (n <= 0)
        throw new IllegalArgumentException("n must be positive");

    if ((n & -n) == n)  // i.e., n is a power of 2
        return (int)((n * (long)next(31)) >> 31);

    int bits, val;
    do {
        bits = next(31);
        val = bits % n;
    } while (bits - val + (n-1) < 0);
    return val;
}
It took me a while to understand why it works and I leave that as an exercise for the reader but it's a pretty concise solution which will ensure that numbers have equal probabilities.
The important part in that piece of code is the condition for the
while loop, which rejects numbers that fall in the range of numbers which otherwise would result in an uneven distribution.
Solution:5
You can use this:
int random(int min, int max)
{
    return min + rand() / (RAND_MAX / (max - min + 1) + 1);
}
From the comp.lang.c FAQ list, Question 13.16.

References: K&R2 Sec. 7.8.7 p. 168; PCS Sec. 11 p. 172
Solution:6
double scale = 1.0 / ((double) RAND_MAX + 1.0);
int min, max;
...
rval = (int)(rand() * scale * (max - min + 1) + min);
Solution:7
If you don't overly care about the 'randomness' of the low-order bits, just rand() % HI_VAL.
Also:
(double)rand() / (double)RAND_MAX; // lazy way to get [0.0, 1.0]
Solution:8
The naive way to do it is:
int myRand = rand() % 66; // for 0-65
This will likely be a very slightly non-uniform distribution (depending on your maximum value), but it's pretty close.
To explain why it's not quite uniform, consider this very simplified example:
Suppose RAND_MAX is 3 and you want a number from 0-2. The possible values you can get are shown in this table:
rand()  |  rand() % 3
--------+------------
   0    |      0
   1    |      1
   2    |      2
   3    |      0
See the problem? If your maximum value is not an even divisor of RAND_MAX, you'll be more likely to choose small values. However, since RAND_MAX is generally 32767, the bias is likely to be small enough to get away with for most purposes.
There are various ways to get around this problem; see here for an explanation of how Java's
Random handles it.
Solution:9
Updated to not use a #define:

double RAND(double min, double max)
{
    return (double)rand() / (double)RAND_MAX * (max - min) + min;
}
Solution:10
rand() will return numbers between 0 and RAND_MAX, which is at least 32767.
If you want to get a number within a range, you can just use modulo.
int value = rand() % 66; // 0-65
For more accuracy, check out this article. It discusses why modulo is not necessarily good (bad distributions, particularly on the high end), and provides various options.
Solution:11
I think the following does it semi-right. It's been awhile since I've touched C. The idea is to use division, since modulus doesn't always give random results. I added 1 to RAND_MAX since there are that many possible values coming from rand(), including 0. And since the range is also 0-inclusive, I added 1 there too. The parentheses matter so the division really is (RAND_MAX + 1) / (max + 1) and avoids integer math problems.

#define MK_DIVISOR(max) ((int)(((unsigned int)RAND_MAX + 1) / ((max) + 1)))

num = rand() / MK_DIVISOR(65);
Solution:12
If you care about the quality of your random numbers, don't use rand();
use some other, higher-quality PRNG instead,
and then just go with the modulus.
Solution:13
Just to add some extra detail to the existing answers.
The mod (%) operation will always perform a complete division and therefore yield a remainder less than the divisor.
x % y = x - (y * floor((x/y)))
An example of a random range finding function with comments:
uint32_t rand_range(uint32_t n, uint32_t m)
{
    // size of range, inclusive
    const uint32_t length_of_range = m - n + 1;

    // add n so that we don't return a number below our range
    return (uint32_t)(rand() % length_of_range + n);
}
Another interesting property as per the above:
x % y = x, if x < y
const uint32_t value = rand_range(1, RAND_MAX);
// results in rand() % RAND_MAX + 1
// value == x + 1 for all x < RAND_MAX, where x is the result of rand()
// value == 1 when x == RAND_MAX, since RAND_MAX % RAND_MAX == 0
Solution:14
2 cents (ok 4 cents):

n = rand()
x = result
l = limit

n/RAND_MAX = x/l

Refactor: (l/1)*(n/RAND_MAX) = (x/l)*(l/1)
Gives: x = l*n/RAND_MAX

int randn(int limit)
{
    return limit*rand()/RAND_MAX;
}

int i;
for (i = 0; i < 100; i++) {
    printf("%d ", randn(10));
    if (!(i % 16)) printf("\n");
}

> test
0
5 1 8 5 4 3 8 8 7 1 8 7 5 3 0 0
3 1 1 9 4 1 0 0 3 5 5 6 6 1 6 4
3 0 6 7 8 5 3 8 7 9 9 5 1 4 2 8
2 7 8 9 9 6 3 2 2 8 0 3 0 6 0 0
9 2 2 5 6 8 7 4 2 7 4 4 9 7 1 5
3 7 6 5 3 1 2 4 8 5 9 7 3 1 6 4
0 6 5
Solution:15
Just using rand() will give you the same random numbers every time you run the program; i.e., the first run produces random numbers x, y and z, and running the program again produces the same x, y and z, as observed by me.
The solution I found to keep it unique every time is using srand()
Here is the additional code,
#include <stdlib.h>
#include <time.h>

time_t t;
srand((unsigned) time(&t));
int rand_number = rand() % (65 + 1 - 0) + 0;  // i.e. random numbers in range 0-65
To set range you can use formula : rand() % (max_number + 1 - minimum_number) + minimum_number
Hope it helps!
Solution:16
Or you can use this:
(int)((double)rand() / RAND_MAX * 65)  /* the cast matters: with integer division, rand() / RAND_MAX is almost always 0 */
But I'm not sure if it's the most random or fastest of all the answers here.
I'm just starting out and working on some very easy programs. I was doing a few little programs that use the scanf() function, such as taking the input of 4 numbers and printing out the average, or a program that subtracts 2 numbers from one another, and couldn't get anything to work. I'm using MS Visual C++ 6.0 on Win98.
So then I tried this program:
#include <stdio.h>
main()
{
int a;
scanf("enter a value %d ", &a);
printf("a contains %d ", a);
return 0;
}
The value of a (and all values I try scanf'ing in) ends up being something ridiculous like -823460123. Any ideas what is up with this?
Problem With scanf() This thread says scanf has problems...is this my case?
I've tried the suggestions in these threads:
scanf problems
and another 1 I cant find the link to now. Any help appreciated. | http://cboard.cprogramming.com/c-programming/24163-mvcplusplus-6-0-scanf-probs.html | CC-MAIN-2014-42 | refinedweb | 150 | 83.86 |
INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY
A COMPREHENSIVE VIEW OF HADOOP

ER. AMRINDER KAUR, Assistant Professor, Department of Computer Science & Engineering, University Institute of Engineering & Technology, Kurukshetra University, Haryana.

Accepted Date: 24/09/2015; Published Date: 01/10/2015

Abstract: Data is growing day by day in tremendous amounts, into the exabytes (10^18 bytes), at high speed and in many varieties: financial data, weather forecasting, social media, and the list goes on. This vast amount of data is not warehoused at one site, and therefore a framework is required that distributes the data across multiple clusters and provides distributed computing to answer queries. Hadoop is the solution to the points discussed above: it is open source and fault tolerant, and it provides highly available services to a cluster of computers.

Keywords: Hadoop, MapReduce, client-server, task manager, job tracker
1. INTRODUCTION

Hadoop is free software whose programming model is based on a Java framework. It supports a distributed computing environment in which large data sets are processed. Hadoop is batch oriented: jobs are queued and then executed, and processing a job may take minutes or hours. The basic storage mechanism in Hadoop is the Hadoop Distributed File System (HDFS) [1]. The MapReduce framework was proposed by Google; the framework is responsible for everything else, such as parallelization and fail-over. The MapReduce framework reads and writes its data through Hadoop's distributed file system. Usually, Hadoop MapReduce uses HDFS, the open-source counterpart of the Google File System (GFS). Therefore, a Hadoop MapReduce job's input and output performance strongly depends on HDFS.

2. ARCHITECTURE

Hadoop is an open-source framework composed of the Hadoop Distributed File System and the MapReduce engine. It is a scalable, fault-tolerant distributed system for data processing and storage, and it provides a framework for the analysis and transformation of extremely large data sets using the MapReduce paradigm. [3][5]

2.1. Hadoop Distributed File System

HDFS is responsible for managing the data, or files, present on the different clusters. The metadata of a file and the application data needed during a job are stored separately: metadata is stored on the name node, a dedicated server, while application data is stored on the data nodes, the other servers. These servers are connected to each other and communicate using TCP-based protocols. HDFS uses a RAID-like structure to provide data durability; for reliability, file content is replicated on multiple data nodes [2][3][6]. The HDFS structure is shown in the figure below.
Figure: HDFS Architecture

Name Node

The name node is responsible for maintaining the directory structure of HDFS, also known as the namespace. On the name node, inodes are used to represent directories and files, and they also record attributes such as permissions, access times, and modification times. The name node maintains only the mapping between the blocks of a file and the data nodes on which those blocks are stored; each block of a file is replicated on multiple data nodes. To perform any operation on a file (such as open, close, delete, or rename) a client first contacts the name node. For example, to perform a read operation on a file, a client first asks the name node for the locations of the data nodes and then reads the data from the data node closest to the client. When a client wants to write data to a file, it asks the name node to nominate three data nodes to hold the block replicas, and writing is performed in a pipeline fashion. Currently a single name node serves each cluster, but there can be hundreds or thousands of data nodes, each executing multiple tasks concurrently. [2][3][6]

Data Node

A data node holds block replicas, and each block replica is represented by two files in the local host's native file system: the first file contains the data itself and the second contains the block's metadata. Except for the name node, every node in the cluster acts as a data node. Each node holds file blocks on behalf of local or remote hosts; blocks are created or destroyed on data nodes at the request of the name node. The name node is responsible for validating and processing requests from clients, but clients communicate directly with data nodes to read or write data at
the HDFS block level. In the startup phase a data node connects to the name node and performs a handshake, which verifies the namespace ID and the software version of the data node. If the ID does not match the name node's, the data node automatically shuts down. Each data node sends a heartbeat signal every few seconds; if it fails to send these signals it is considered out of service, and the name node finds other data nodes for the block replicas. [2][3][6]

2.2. MapReduce

The Hadoop MapReduce framework is a popular implementation of the MapReduce framework proposed by Google. It has become popular because it is easy to use, scalable, and fault tolerant, and it is used for processing big data in both industry and academia. It consists of two user-defined functions, map and reduce. The map function takes input in the form of (k, v) pairs, where k is a key and v is a value, and is applied to each pair, generating intermediate key-value pairs (k', v'). The reduce function is then called iteratively on the intermediate key-value pairs and merges all the intermediate values associated with a single key. [4][5] The MapReduce architecture is shown in the figure below.

Figure: MapReduce Architecture
Job Tracker

The job tracker is responsible for maintaining the list of processing resources available in the cluster. It runs on the master node and distributes the different MapReduce jobs across the cluster. When a job is requested, the job tracker schedules it and assigns it to a task tracker running on the data nodes. Initially, the client node submits its job to the job tracker, which is in charge of determining the location of the data on the data nodes. After locating the data node, the corresponding task tracker node is chosen: the one nearest to the data, or one that has available slots. The job is then assigned to that task tracker node. Task tracker nodes are monitored continuously; if they do not respond with heartbeat signals, they are considered failed and the job is scheduled on some other task tracker. If a job fails for some reason, the task tracker notifies the job tracker, which then decides whether to submit the job somewhere else or to restart it on the same task tracker node. On completion of a job, the job tracker updates the job's status, and the client node can then ask the job tracker for this information. [7] The diagram below shows the cluster setup in the network.

Figure: Cluster setup in the network
Task Tracker

The duty of the task tracker is to execute the jobs assigned by the job tracker node and to report their status back to the job tracker. A task tracker daemon runs on every slave node in the cluster, so the processing and storage of data are also done by task trackers. The map, reduce, and shuffle operations are performed by the task tracker as assigned by the job tracker. Each task tracker maintains a set of slots: some slots are allotted for map tasks and some for reduce tasks. When the job tracker wants to schedule a task, it first looks for an available empty slot on the server whose data node contains the needed data; if no empty slot is available there, it looks for another empty slot in the same rack. While processing, the task tracker generates a heartbeat signal every few minutes to assure the job tracker that it is alive and performing its job; this heartbeat is also useful for determining the number of available slots. After a job completes, the task tracker reports back to the job tracker with the job's status. [7]

3. APPLICATIONS OF HADOOP

- Hadoop is used to analyze risks that are life-threatening to mankind.
- It is used to identify security breaches by analyzing warning signs.
- Hadoop is used to understand people's perception of a company or organization by analyzing their social media conversations.
- Analyzing sales data against various factors such as weather, days of the week, and weekends helps in understanding when to sell which products.
- Log files generated by software contain very useful data; by analyzing these log files one can find security breaches and usage statistics.
- Hadoop is used in various fields such as politics, data storage, financial services, health care, telecoms, human science, and travel. [8][9]
4. CONCLUSION

Big data is data accumulating from different sources and in different varieties, such as social media and sensor data, at tremendous speed. Today's data volumes range in the petabytes, but in the future they will range from a few exabytes to thousands of exabytes. To handle such a volume of data, an efficient tool is needed that can analyze and mine useful knowledge from it. Hadoop is the answer to these needs raised by big data. Hadoop is applicable in all fields of life, such as health, science, telecoms, and data storage, and can therefore answer the different questions raised in those fields.

5. REFERENCES

1. Revolution Analytics White Paper, Advanced Big Data Analytics with R and Hadoop.
2. Konstantin Shvachko et al., The Hadoop Distributed File System, IEEE.
3. Hadoop, September.
4. Jens Dittrich et al., Efficient Big Data Processing in Hadoop MapReduce, The 38th International Conference on Very Large Data Bases, Istanbul, Turkey; Proceedings of the VLDB Endowment, Vol. 5, No. 12, August 27th-31st.
5. Harshawardhan S. Bhosale et al., A Review Paper on Big Data and Hadoop, International Journal of Scientific and Research Publications, Volume 4, Issue 10, October.
6. HDFS (Hadoop Distributed File System) Architecture, September 2015, rg/common/docs/current/hdfs design.html.
7. Hadoop MapReduce framework, September 2015, mapred_tutorial.html.
8. web_link1:
9. web_link2:
MouseArea: Pressed and hover at the same time?
Hi,
There is a little thing that I absolutely need for the application I am developing: I have to be able to drag an object onto another one, and at least one of them should notice that they are intersecting.
So, the point is that one of the items must receive the onEntered signal even though the mouse was pressed outside of it.
For example:
@
import QtQuick 1.0
Rectangle{
id: base
width: 500
height: 500
MouseArea{ //Even without this mousearea I don't get what i want.
anchors.fill: parent
//onPressed:{console.log("big")}
}
    Rectangle{
        id: t
        width: 100
        height: 100
        color: "red"

        MouseArea{
            anchors.fill: parent
            hoverEnabled: true
            onPressed:{console.log("little-press")}
            onEntered:{console.log("little-enter")}
            drag.target: t
        }
    }
}
@
What I want is to press the mouse button outside the red square, and move it without releasing the button. When the mouse passes over the red rectangle, I want the onEntered signal to be emitted. I don't understand why it is not emitted, because onEntered should only care about the mouse being inside the MouseArea, not about the buttons.
Any idea of how to do it? (It is quite important for what I'm developing...)
Thank you very much.
And there is no solution? Did I understand the bug report correctly?...
Well, I'll try something different then...
It mentions QT-1099 in some internal Nokia system in 2010. Well, that does not sound very positive. I personally could use this in one of my projects.
Well, finally I did it differently and it works.
I post here the "algorithmic" solution in my case (it may help someone).
I wanted to detect when I was intersecting a line while I was dragging a rectangle, so what I did was:
@
Rectangle{
id: dragged
MouseArea{
drag.target: dragged
onPositionChanged:{
for each line I may intersect{
d = distance(line, mouseX,mouseY) //This gives me the distance from (mouseX,mouseY) to the line
if(d < line.width){
//it means I'm on the line while dragging the rectangle.
}
}
}
}
}
@
Well, thank you again for the quick answer ;)
Events
Introduction
Any class of your program, including the main class (the class
that contains the Main() method), can declare an event. To declare an event, use one of the following formulas:
[attributes] [modifiers] event type declarator;
[attributes] [modifiers] event type member-name {accessor-declarations};
The attributes factor can be a normal C#
attribute.
The modifier can be one or a combination of the
following keywords: public, private, protected,
internal, abstract, new, override, static,
virtual, or extern.
The event keyword is required. It is followed by
the name of the delegate that specifies its behavior. If the event is
declared in the main class, it should be made static. Like everything in a
program, an event must have a name. This would allow the clients to know
what (particular) event occurred. Here is an example:
using System;
delegate void dlgSimple();
class Exercise
{
public static event dlgSimple Simply;
public static void Welcome()
{
}
}
When the event occurs, its delegate would be invoked.
This specification is also referred to as hooking up an event. As the event
occurs (or fires), the method that implements the delegate runs. This
provides complete functionality for the event and makes the event ready to
be used. Before using an event, you must connect it to the method that
implements it. This can be done by passing the name of the method to the
appropriate delegate, as we learned when studying delegates. You can then
add that delegate to the event using the += operator. Once this is
done, you can call the event. Here is an example:
using System;
delegate void dlgSimple();
class Exercise
{
public static event dlgSimple Simply;
public static void Welcome()
{
Console.WriteLine("Welcome!");
}
public static void SayHello()
{
Simply();
}
static int Main()
{
Simply += new dlgSimple(Welcome);
SayHello();
return 0;
}
}
Instead of the += operator used when initializing the
event, you can implement the add and remove accessors of the
event, which store the handlers in a private delegate field. Here is an example:

using System;

delegate void dlgSimple();

class Exercise
{
    private dlgSimple simply;

    public event dlgSimple Simply
    {
        add
        {
            // 'value' is the handler being attached
            simply += value;
        }
        remove
        {
            simply -= value;
        }
    }

    public void Welcome()
    {
    }
}
Events and Windows Controls
An application is made of various objects or controls.
During the lifetime of an application, its controls regularly send messages
to the operating system to do something on their behalf. above. The most common events
have already been created for the objects of the .NET Framework controls so
much that you will hardly need to define new events, at least not in the
beginning of your GUI programming adventure. Most of what you will do
consists of implementing the desired behavior when a particular event fires.
To start, you should know what events are available and when they fire. A control raising an event is like the brain telling the arm, "Raise your hand". In this case, suppose everything is alright: the arm does not ask, "how do I raise my hand?". In the same way, a control already knows how to fire its events; your job is only to describe what should happen, using the information passed as the second parameter of the event handler.

To generate an event handler, you can double-click a control on the form; Visual Studio would initiate the default event and open the Code Editor. The cursor would be positioned in the body of the event, ready to receive your instructions. Another technique you can use consists of displaying the form first and clicking either the form or the control that will fire the event. Then, in the Properties window, click the Events button and double-click the name of the event you want to use. For example, a form's Paint event is declared as:
public event PaintEventHandler Paint;
This event is carried by a PaintEventHandler
delegate declared as follows:
public delegate void PaintEventHandler(object sender, PaintEventArgs e);
- no lonely braces on a single line, except:
- when a namespace, or a proc larger than 24 lines (buh) ends, or
- for delimiting data
- braces in if {expr} only when it is an expression
- braces around arguments only if there are more than one
- rather return, continue, break than else
- don't use while, foreach, for if you can avoid it; recurse instead
Illustrating with an example:
proc ccfs param {
   if !$param return
   puts "we are in"
   if [llength $param] {puts "and we've got a list"}
   switch -- [lindex $param] {
      stop {return}
      continue {
         # test the second parameter
         set test [lindex $param 1]}
      default {puts "don't know what to do with: $param"}}
   if {$test eq "stop"} {
      puts "we stopped on the second param"}}

Observations:
- If a line is empty, it clearly denotes that something new is going to happen.
- Of course I use a syntax-aware editor to get the indenting and the paren matching right.
- Conditionals are either flags - $var, results - script, or expressions - {expr}, which is visually conveyed by the CCFS.
- Some say (A question of style) that run time is slightly longer; I don't care! (If you need it fast, postprocess the code to insert the braces).
Mhm.. if/then/else is bad; however, sometimes we need it. If an if/else branch is too long to fit on the same line, you have to decide upon formatting. What about?
Traditional:

 if {cond} {
    ..then..
 } else {
    ..else..
 }
 continuation..

CCFS 1: tame (you might spare one more line):

 if {cond} {
    ..then..
 } else {
    ..else..}
 continuation..

CCFS 2: else = } { (i like this one if '..else..' is short):

 if {cond} {
    ..then..
 } {
    ..else..}
 continuation..

CCFS 3: wild (it gets the } out of sight):

 if {cond} {
    ..then..} \
 else {
    ..else..}
 continuation..
Here is some real code, cut&paste from ttp.tcl from TTP. Please don't try to understand the code, just skim over the text to see the syntactic patterns:
CCFS extreme (..inside a namespace...):

 proc tcl {args} {
    variable state
    if [llength $args] {
       switch $state {
          parse {set state tclline}}
       catch {eval $args} result
       return $result
    } else {
       switch $state {
          parse {set state tclstart}
          tcl {set state tclend}}
       return}}

 # cmd: preprocess lines instead of subst
 proc cmd {args} {
    variable state
    variable cmdLine
    switch $state {
       parse {
          set cmdLine $args
          set state cmdstart}}}

 namespace export out parse skipline -- literal tcl cmd
 }

 namespace import ::ttp::*

 proc stamp {} {
    set host ""
    if [info exists ::env(HOST)] {set host $::env(HOST)}
    if [info exists ::env(HOSTNAME)] {set host $::env(HOSTNAME)}
    if {$host eq ""} {catch {exec hostname} host}
    ...

Example of standard syntax:

 proc tcl {args} {
     variable state
     if {[llength $args]} {
         switch $state {
             parse {
                 set state tclline
             }
         }
         catch {eval $args} result
         return $result
     } else {
         switch $state {
             parse {
                 set state tclstart
             }
             tcl {
                 set state tclend
             }
         }
         return
     }
 }

 # cmd: preprocess lines instead of subst
 proc cmd {args} {
     variable state
     variable cmdLine
     switch $state {
         parse {
             set cmdLine $args
             ...............
         }
         ...the procedure continues here...
     }
 }
Loops 'for' and 'while' use expressions for iteration; however, for reasons explained elsewhere you must enclose the expression within braces, which is ugly. Use the intrinsic list processing of the Tcl 'proc' instead. The command line parser of TTP is an example of this. Iff:
- the iteration is not very deep
- does not get called all the time
    proc printlist args {
        while {[llength $args]} {
            puts [lindex $args 0]
            set args [lreplace $args 0 0]}}

Good looking:
    proc printlist {item args} {
        puts $item
        if [llength $args] {eval printlist $args}}

Of course this is a very constructed example since the following is the way to do it:
    proc printlist args {foreach item $args {puts $item}}

However I hope to illustrate the point of using 'proc' and 'args' for list iteration. For counting stuff consider:
    proc forloop i {
        if $i {puts $i; forloop [incr i -1]}}

Oh.. this counts down and stops with '1'! Ahem, does that really matter? Yes! Then use:
    proc forloop {i n} {
        if $n {puts $i; forloop [incr i 1] [incr n -1]}}

See Tail call optimization for more on recursion and: programming language specialists please jump in. LEG
RLH: That code is hard to read. Much more so than regular Tcl syntax style.

jcw: Check out an indentation syntax for Tcl ...

LEG: in fact I read that before making up this page. An indentation syntax for Tcl however changes the syntactic rules; CCFS does not. I was inspired rather by Lisp than by Python. However I would like to see a back-and-forth code reformatter between CCFS and standard Tcl syntax: would your program be the right tool to take as a start?
How to Implement DELETE Method in Web API Application
In this article, I am going to discuss how to implement the DELETE Method in a Web API application with an example. Please read our previous article, where we discussed how to implement the PUT Method in Web API, before proceeding to this article, as we are going to work with the same example. As part of this article, we are going to discuss the following pointers.
- How to Implement the Delete Method in Web API Application?
- Testing Delete Method in Web API.
How to Implement the DELETE Method in ASP.NET Web API?
The Delete Method in Web API allows us to delete an item. We want to delete a specified employee from the Employees database table. To achieve this, include the following Delete method in EmployeesController.
public class EmployeesController : ApiController
{
    public void Delete(int id)
    {
        using (EmployeeDBContext dbContext = new EmployeeDBContext())
        {
            dbContext.Employees.Remove(dbContext.Employees.FirstOrDefault(e => e.ID == id));
            dbContext.SaveChanges();
        }
    }
}
At this point, build the solution, run the application, fire up Fiddler, and issue a DELETE request.
- Set the HTTP verb to DELETE
- Content-Type: application/json. This tells that we are sending JSON formatted data to the server
- Finally, click on the execute button as shown below
When we click on the Execute button, it will give us the below response
This works fine and deletes the employee record from the database as expected. The problem here is that, since the return type of the Delete method is void, we get status code 204 No Content. When the deletion is successful, we want to return status code 200 OK, indicating that the deletion succeeded.
Also when we try to delete an employee whose Id does not exist we get back HTTP status code 500 Internal Server Error. We get status code 500, because of a NULL reference exception. If an item is not found, then we need to return status code 404 Not Found.
How to Fix the above issues?
To fix both of these issues modify the code in the Delete method as shown below.
public class EmployeesController : ApiController
{
    public HttpResponseMessage Delete(int id)
    {
        try
        {
            using (EmployeeDBContext dbContext = new EmployeeDBContext())
            {
                var entity = dbContext.Employees.FirstOrDefault(e => e.ID == id);
                if (entity == null)
                {
                    return Request.CreateErrorResponse(HttpStatusCode.NotFound,
                        "Employee with Id = " + id.ToString() + " not found to delete");
                }
                else
                {
                    dbContext.Employees.Remove(entity);
                    dbContext.SaveChanges();
                    return Request.CreateResponse(HttpStatusCode.OK);
                }
            }
        }
        catch (Exception ex)
        {
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ex);
        }
    }
}
At this point, issue another DELETE request from Fiddler. Notice in the response header we have status code 200 OK. Also, when we try to delete an employee whose id does not exist, we get status code 404 Not Found instead of 500 Internal Server Error.
In the next article, I am going to discuss how to create custom method names in an ASP.NET Web API application. Here, in this article, I tried to explain implementing the DELETE Method in Web API step by step with a simple example.
I am new at C - I've been studying it for 2 weeks, and I'm really confused. I thought if I joined a forum, you guys who are more experienced could explain some of the concepts better than a teacher. My problem is writing a C program that displays some taxes. 3 different towns have different sales tax but the same purchase amount.
Berlin=7.25%
Marlo=7.5%
Teymon=7.75%
The purchase amount=125
I started this way:
#include <stdio.h>
main()
{
char 1 = Berlin
char 2 = Marlo
char 3 = Teymon
a = .0725
b = .075
c = .0775
pur_amt = 125
I know it's simple, but to me it's like a foreign language. I guess I sort of get lost from there. Please offer any suggestions or let me know if I'm on the wrong/right track. Please explain, don't just give me the answer, because I want to learn from the example you give me. I am using Miracle C - is that ok?
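For what it's worth, here is one minimal way the computation described in the post could be structured (a sketch only, with my own function and variable names, not necessarily what the assignment expects):

```c
#include <stdio.h>

/* Rates from the post: Berlin 7.25%, Marlo 7.5%, Teymon 7.75% */
double tax_for(double purchase, double rate)
{
    /* Sales tax is just the purchase amount times the rate. */
    return purchase * rate;
}

void print_taxes(void)
{
    const double purchase = 125.0;

    printf("Berlin: %.2f\n", tax_for(purchase, 0.0725));
    printf("Marlo:  %.2f\n", tax_for(purchase, 0.0750));
    printf("Teymon: %.2f\n", tax_for(purchase, 0.0775));
}
```

Note the town names don't need to live in char variables at all; they can simply be string literals in the printf calls.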
Qt Quick 3D - Simple Example
Demonstrates how to render a simple scene in Qt Quick 3D.
Simple demonstrates how to render a scene in Qt Quick 3D.
Setting Up the Scene
We set up the entire scene in the main.qml file.
To be able to use the types in the QtQuick3D module, we must import it:
import QtQuick3D 1.14
First of all, we define the environment of our simple scene. We just clear the background color with 'skyblue' in this example.
environment: SceneEnvironment { clearColor: "skyblue" backgroundMode: SceneEnvironment.Color }
And then, we define a camera which represents the viewport of the rendered scene. In this example, we use PerspectiveCamera which shows perspective viewport in a general 3D scene. Because we want to define some objects around origin, we move this camera to the rear position and rotate slightly.
PerspectiveCamera { position: Qt.vector3d(0, 200, -300) rotation: Qt.vector3d(30, 0, 0) }
For the objects in the scene to be rendered correctly we need to add a light source, in this example we'll be using a DirectionalLight
DirectionalLight { rotation: Qt.vector3d(30, 70, 0) }
Draw Simple Objects
Now, we draw some built-in objects. In this example, we draw a red cylinder and a blue sphere using Model. However, just drawing objects is too simple, so we make a round plate with the red cylinder and add a bouncing animation for the sphere. [The accompanying Model code listing was lost in extraction.]
Metaprogramming tools for transforming functor types.
Sometimes it is necessary to build and remould a function signature, e.g. for creating a functor or a closure based on an existing function or function pointer. This is a core task of functional programming, but sadly C++ in its current shape is still lacking in this area. (C++11 significantly improved this situation.) As a pragmatic fix, we define here a collection of templates, specialising them in a very repetitive way for up to 9 function arguments. Doing so enables us to capture a function, access the return type and argument types as a typelist, eventually to manipulate them and re-build a different signature, or to create specifically tailored bindings.
If the following code makes you feel like vomiting, please look away, and rest assured: you aren't alone.
Definition in file function.hpp.
#include "lib/meta/typelist.hpp"
#include "lib/meta/util.hpp"
#include <functional>
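To give a flavour of what such signature decomposition looks like, here is a minimal post-C++11 sketch (illustrative only; it is not the implementation in this file, which predates variadic templates and is written out per arity instead):

```cpp
#include <cstddef>
#include <tuple>
#include <type_traits>

// Primary template, specialised below for plain function types.
template<typename Sig>
struct FunctionTraits;

template<typename Ret, typename... Args>
struct FunctionTraits<Ret(Args...)>
{
    using Result   = Ret;
    using ArgTuple = std::tuple<Args...>;   // stands in for a typelist
    static constexpr std::size_t arity = sizeof...(Args);
};

// An arbitrary function to decompose:
int example(double d, char c) { return static_cast<int>(d) + c; }

using Traits = FunctionTraits<decltype(example)>;
static_assert(std::is_same<Traits::Result, int>::value, "return type extracted");
static_assert(Traits::arity == 2, "argument count extracted");
```

With the argument list captured as a type, it can be manipulated and reassembled into a different signature, which is exactly the kind of remoulding described above.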
I am curious to know how I can make my own library and use it for my C/C++ programs? I mean I want to make a library say "maths_example" which will have 2-3 files having some basic functions of maths. Then, I want to use that library for my programs which I will write after writing
#include "addition.h" #include "substraction.h"
So that now, I can use my own functions written in my own library. I just want to learn how we make a library, how we compile all the files in a library together, and how we use our own functions in our code the way we do for "printf()", "sqrt()", etc. Can anyone help me get started with this? Can you give me links where I can read about this and start learning?
If you can give me some example, then it would be beneficial. Any help would be appreciated.
It's easy to use the VEML7700 sensor with CircuitPython and the Adafruit CircuitPython VEML7700 module. This module allows you to easily write Python code that reads the ambient light levels, including Lux, from the sensor.
You can use this sensor with any CircuitPython microcontroller board or with a computer that has GPIO and Python thanks to Adafruit_Blinka, our CircuitPython-for-Python compatibility library.
First wire up a VEML7700 to your board exactly as follows. Here is an example of the VEML7700 wired to a Feather using I2C (the wiring diagram did not survive extraction). The CircuitPython starter guide has a great page on how to install the library bundle.
For non-express boards like the Trinket M0 or Gemma M0, you'll need to manually install the necessary libraries from the bundle:
- adafruit_veml7700.mpy
- adafruit_bus_device
- adafruit_register
Before continuing, make sure your board's lib folder or root filesystem has the adafruit_veml7700.mpy, adafruit_bus_device, and adafruit_register files and folders copied over. If you are instead using Python on a computer with GPIO, install the library from PyPI: sudo pip3 install adafruit-circuitpython-veml7700
If your default Python is version 3 you may need to run 'pip' instead. Just make sure you aren't trying to use CircuitPython on Python 2.x, it isn't supported!
To demonstrate the usage of the sensor we'll initialize it and read the ambient light levels from the board's Python REPL.
Run the following code to import the necessary modules and initialize the I2C connection with the sensor:
import time
import board
import busio
import adafruit_veml7700

i2c = busio.I2C(board.SCL, board.SDA)
veml7700 = adafruit_veml7700.VEML7700(i2c)
Now you're ready to read values from the sensor using these properties:
- light - The ambient light data.
- lux - The light levels in Lux.
For example to print ambient light levels and lux values:
print("Ambient light:", veml7700.light) print("Lux:", veml7700.lux)
print("Ambient light:", veml7700.light) print("Lux:", veml7700.lux)
For more details, check out the library documentation.
That's all there is to using the VEML7700 sensor with CircuitPython!
Learn how to Write to a File in Java without overwriting in your Java program
This tutorial teaches you how to write to a file in Java without overwriting the existing content. This type of application is useful for writing application log data into a file. In this example we have a "log.txt" file which already contains data, and our program will write to this file without overwriting the content of the "log.txt" file. The next time you write something to this file, the content will be appended to it.
In this program we are using the FileWriter class of the java.io package.
Our example code appends a line to the log.txt file when executed. Following is the code which appends the line to the log.txt file:
out.write("Line Added on: " + new java.util.Date()+"\n");
Our program uses the Java API for this purpose. The object of FileWriter is created using the following code:
FileWriter fstream = new FileWriter("log.txt",true);
Here is the syntax of the FileWriter class:
FileWriter(File file, boolean append)
The FileWriter class takes two parameters:
1. File file: the name of the file to be opened.
2. boolean append: If this parameter is true then the data is written to the end of the file. In other words it appends the data at the end of file.
So, if in your program you have a requirement to append data to a file, then you should pass true as the parameter.
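For comparison, here is a sketch of the same append behaviour written with try-with-resources, which closes the writer automatically (the class, method, and file names below are my own, not from the tutorial):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;

class AppendDemo {

    // Appends one line to the file, creating it if necessary.
    // The 'true' passed to FileWriter selects append mode.
    static void appendLine(String path, String line) {
        try (BufferedWriter out = new BufferedWriter(new FileWriter(path, true))) {
            out.write(line);
            out.newLine();
        } catch (IOException ioe) {
            System.err.println("Error while writing to file: " + ioe.getMessage());
        }
    }

    // Small helper so the appended content is easy to inspect.
    static List<String> readAll(String path) {
        try {
            return Files.readAllLines(Paths.get(path));
        } catch (IOException ioe) {
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) {
        appendLine("demo-log.txt", "Line added on: " + new java.util.Date());
    }
}
```

Calling appendLine repeatedly keeps adding lines to the end of the file rather than replacing its contents.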
Here is the complete example code of the program:
import java.io.*;

class WriteToFileWithoutOverwriting
{
    public static void main(String args[])
    {
        try {
            FileWriter fstream = new FileWriter("log.txt", true);
            BufferedWriter out = new BufferedWriter(fstream);
            out.write("Line Added on: " + new java.util.Date() + "\n");
            out.close();
        } catch (Exception e) {
            System.err.println("Error while writing to file: " + e.getMessage());
        }
    }
}
To compile the program, type the following line at the command prompt:
javac WriteToFileWithoutOverwriting.java
To execute the program, type the following at the command prompt:
java WriteToFileWithoutOverwriting
In the above program you have learned how to write code to append content to a text file from your Java program. The program explained here writes content to a text file without overwriting the existing content of the file. This type of program is very useful for writing log information to a file.
Posted on: April
See also. I tried the new `-X importtime` option on `import requests`. Full output is here:

Currently, it took about 110ms. And major parts are from the Python stdlib. The following are the roots of slow stdlib subtrees.

import time: self [us] | cumulative | imported package
import time:      1374 |      14038 | logging
import time:      2636 |       4255 | socket
import time:      2902 |      11004 | ssl
import time:      1162 |      16694 | http.client
import time:       656 |       5331 | cgi
import time:      7338 |       7867 | http.cookiejar
import time:      2930 |       2930 | http.cookies

*1. logging*

logging is slow because it is imported at an early stage. It imports many common, relatively slow packages (collections, functools, enum, re). Especially, the traceback module is slow because of linecache.

import time:      1419 |       5016 | tokenize
import time:       200 |       5910 | linecache
import time:       347 |       8869 | traceback

I think it's worth it to import linecache lazily.

*2. socket*

import time:       807 |       1221 | selectors
import time:      2636 |       4255 | socket

socket imports selectors for socket.send_file(). And the selectors module uses ABCs; that's why selectors is a bit slow. And the socket module creates four enums; that's why importing socket took more than 2.5ms excluding subimports.

*3. ssl*

import time:      2007 |       2007 | ipaddress
import time:      2386 |       2386 | textwrap
import time:      2723 |       2723 | _ssl
...
import time:       306 |        988 | base64
import time:      2902 |      11004 | ssl

I already created a pull request about removing the textwrap dependency from ssl. The ipaddress and _ssl modules are a bit slow too, but I don't know whether we can improve them or not. ssl itself took 2.9 ms; that's because ssl has six enums.

*4. http.client*

import time:      1376 |       2448 | email.header
...
import time:      1469 |       7791 | email.utils
import time:       408 |      10646 | email._policybase
import time:       939 |      12210 | email.feedparser
import time:       322 |      12720 | email.parser
...
import time:       599 |       1361 | email.message
import time:      1162 |      16694 | http.client

email.parser has a very large import tree. But I don't know how to break the tree.

*5. cgi*

import time:      1083 |       1083 | html.entities
import time:       560 |       1643 | html
...
import time:       656 |       2609 | shutil
import time:       424 |       3033 | tempfile
import time:       656 |       5331 | cgi

The cgi module uses tempfile to save uploaded files. But requests imports cgi just for `cgi.parse_header()`; tempfile is not used. Maybe it's worth importing it lazily.

FYI, cgi depends on the very slow email.parser too. But this tree doesn't contain it because http.client is imported before cgi. Even though it's not a problem for requests, it may affect real CGI applications. Of course, startup time is very important for CGI applications too.

*6. http.cookiejar and http.cookies*

They are slow because they contain many `re.compile()` calls.

*Ideas*

There are some places to break large import trees by the "import in function" hack.

ABC is slow, and it's used widely with almost no real need. (Who needs selectors to be an ABC?) We can't remove the ABC dependency because of backward compatibility. But I hope ABC is implemented in C by Python 3.7.

Enum is slow, maybe slower than most people think. I don't know why exactly, but I suspect that it's because the namespace dict is implemented in Python. Anyway, I think we can have a C implementation of IntEnum and IntFlag, like namedtuple vs PyStructSequence. It doesn't need to be 100% compatible with the current enum. Especially, no need for using a metaclass.

Another major slowness comes from compiling regular expressions. I think we can increase the cache size of `re.compile` and use on-demand cached compiling (e.g. `re.match()`), instead of "compile at import time" in many modules.

PEP 562 -- Module __getattr__ helps a lot too. It makes it possible to split the collections module and the strings module. (The strings module is used often for constants like strings.ascii_letters, but strings.Template causes an import-time re.compile().)

Regards,

--
Inada Naoki <songofacandy at gmail.com>
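The "import in function" hack mentioned above looks like this in its simplest form (a generic sketch, not taken from any particular stdlib module):

```python
import sys

def hexlify_lazy(data):
    # Deferred import: binascii is loaded on the first call,
    # not when the enclosing module is imported, so programs
    # that never call this function never pay its import cost.
    import binascii
    return binascii.hexlify(data).decode()

print(hexlify_lazy(b"\x01\x02"))  # -> 0102
```

The trade-off is a tiny per-call cost for the sys.modules lookup, and imports that are less visible at the top of the file.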
I just wanted to chime in and say we are having the same issue. In an e-mail (included below) to colleagues, I was under the impression that setting the environment variable 'FC' would do the trick. It wasn't until after looking through the installed CMake modules that I realized 'FC' is completely ignored on Windows when using a Visual Studio generator.
First E-mail to colleagues:

I did a little research this morning during the planning, and it appears you only need to set the environment variable 'FC' to point to ifort.exe. I assume your systems already have this variable based on what is installed by Intel, so all that really needs to be done is set the path internally. Here is an example that you should be able to plug into your CMakeLists.txt for BigTac for testing. Just update the paths to the required ifort.exe for the appropriate generator.

    cmake_minimum_required(VERSION 3.5.2 FATAL_ERROR)

    # this must be performed before calling project
    # to set the fortran compiler... for intel, FC
    # is used to identify the fortran compiler that
    # cmake will pick up on and use to determine abi
    if (CMAKE_GENERATOR MATCHES "^Visual Studio")
        if (CMAKE_GENERATOR MATCHES "^Visual Studio 10")
            # set the path to the fortran compiler for vs2010
            set(ENV{FC} "C:/Path/To/VS2010/ifort.exe")
        elseif (CMAKE_GENERATOR MATCHES "^Visual Studio 14")
            # set the path to the fortran compiler for vs2015
            set(ENV{FC} "C:/Path/To/VS2015/ifort.exe")
        else ( )
            # remove from the environment just in case one is set
            unset(ENV{FC})
        endif ( )

        if (NOT EXISTS "$ENV{FC}")
            # indicate to the user that the fortran library could not be found
            # but allow the process to continue, as it should still generate c / c++ / c#
            message(AUTHOR_WARNING "Unable to set the Intel Fortran compiler for "
                    "the specified MSVC generator '${CMAKE_GENERATOR}'!!!")
        else ( )
            # message the path to the compiler to use
            message(STATUS "Fortran compiler: '$ENV{FC}'")
        endif ( )
    endif ( )

    project(cmake-fortran-test)

--------------------- First E-mail to colleagues End ---------------------------------

Second E-mail to colleagues:

So, I did a bit more looking into this and this appears to be an issue with how Intel integrates with Visual Studio. I would say if you create a simple Fortran project that the wrong version of the Intel compiler will be used to compile the project.
Please do try the statements below to see if this corrects the issue; though, I am not optimistic it will based on my research. I overlooked the internal files related to the FC environment variable, as it is used if Visual Studio is not used. I'll pose the question to CMake, as I am not finding good information on this.

From what I see, the try-compile of a temp project sets a post-build command that sets the compiler path:

    for %%i in (ifort.exe) do @echo CMAKE_Fortran_COMPILER=%%~$PATH:i

Modifying the PATH environment variable will not work because Visual Studio appends certain paths to the front of the PATH, one of them being the location of the Fortran compiler. This path appears to be associated with the registry variable HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Intel\Compilers\Fortran\121.258\IA32\VSNet2010\BinDir.

To be able to help here more, I would need to get a machine with both compilers (VS and Intel) installed on a system to test some ideas out. It may be as easy as making sure that when you install the Intel compilers to your system, you install the older Intel compiler first to make the association to VS2010, and then install the newer Intel compiler making only the association to VS2015. It sounds like there should be the capability to indicate which version to integrate with based on the following article:

It may be possible to just reset the registry variable for VS2010. I am assuming that it is getting messed up, but I cannot tell without having both on a system to play around with. I'll post the question to the CMake forum tomorrow to see if anyone can help with this situation. This cannot be unique to us. I'll keep you apprised, but if you have a machine I may be able to test with, please let me know. Thanks and good day guys.
--------------------- Second E-mail to colleagues End ---------------------------------

I have not sat down to try and figure this out any further, but I would be very interested in knowing what the best solution is. For our case, there is a project that builds under VS2010 and Intel's VS2010-compatible Fortran compiler. Installing VS2015 on the same machine with Intel's VS2015-compatible Fortran compiler causes issues when trying to generate a VS2010 version of the solution, as Intel's VS2015-compatible Fortran compiler is always selected. In the article posted, there is a way to preserve older compiler versions within older Visual Studio versions, but I have not looked into that as an option yet. I hope to get there one day. It would be nice if this was a bit more configurable from within CMake so that such operations are not required to be performed.

Ryan H. Kawicki

From: CMake [mailto:cmake-boun...@cmake.org] On Behalf Of Arjen Markus
Sent: Friday, April 13, 2018 2:03 AM
To: William Clodius <w.clod...@icloud.com>; cmake@cmake.org
Subject: Re: [CMake] Testing with multiple Fortran compilers

Hi William,

The compiler is not determined via this line - that merely retrieves the name component from the full path to the actual compiler executable. CMake uses a number of methods, if I may express it that way, to determine which compiler (Fortran or otherwise) to use. One of them is the FC environment variable. If that is not set, it searches through a list of possible compiler commands.

As for CMakeCache.txt and the like: it is best to start a build in a clean directory. Left-overs from a previous build may confuse the configuration and actual building of the program or programs.

(As for get_filename_component: in the PLplot project the PATH part of the full path to the compiler is used to fine-tune the directories where the compiler libraries are to be found.)

Regards,

Arjen

> -----Original Message-----
> From: CMake [mailto:cmake-boun...@cmake.org] On Behalf Of William Clodius
> Sent: Friday, April 13, 2018 4:34 AM
> To: cmake@cmake.org
> Subject: [CMake] Testing with multiple Fortran compilers
>
> I have been using CMake with gfortran for a number of years, and now want to test
> my code with ifort. I want to be able to easily switch between compilers. My
> CMakeLists.txt file is based on the fortran example from cmake.org and appears to
> have most of the infrastructure needed, but I don't understand how the line
>
> get_filename_component (Fortran_COMPILER_NAME ${CMAKE_Fortran_COMPILER} NAME)
>
> determines the Fortran compiler to be used. Does it examine the FC system
> variable? Does it require the full pathname to the compiler executable? Do I
> have to delete the CMakeCache.txt, Makefile, and cmake_install.cmake each time
> I change compilers?
The aim of this text is to show how a simple procedure which changes the case of letters can be rewritten into a SWAR version, gaining a significant boost. In the article the "to lower case" method is explored; however, the opposite conversion is very easy to derive.
To be honest I have no idea if changing letter case is a crucial task in any problem. My knowledge and experience suggest that the answer is "no", but who knows.
The basic version of the procedure reads one character (a byte) and then classifies it. If the character is an upper-case ASCII letter then a fast conversion is used; if the character is from extended ASCII then the system procedure tolower is called.
Note: lower and uppercase letters in ASCII encoding differs only in single, 5th bit.
#include <cctype>   // tolower
#include <cstddef>  // size_t

void to_lower_inplace(char* s, size_t n) {

    for (size_t j=0; j < n; j++) {
        if (s[j] >= 'A' && s[j] <= 'Z') {
            s[j] ^= (1 << 5);
        } else if (static_cast<unsigned char>(s[j]) >= '\x7f') {
            s[j] = tolower(s[j]);
        }
    }
}
In SWAR approach a fast-path could be used if all character within a chunk are not from extended ASCII set. Then all character are classified if are uppercase or not, resulting in a mask having set certain bits. The last step is to xor the mask with the input chunk to selective flip the bit in upper-case letters.
#include <cstdint>

bool is_ascii(const uint64_t chars);
uint64_t to_lower_ascii_mask(const uint64_t chars);
uint64_t to_lower_ascii(uint64_t chars);

void to_lower_inplace_swar(char* input, size_t n) {

    char* s = input;

    {
        const size_t k = n / 8;
        for (size_t i=0; i < k; i++, s+=8) {
            uint64_t* chunk = reinterpret_cast<uint64_t*>(s);
            if (is_ascii(*chunk)) {
                *chunk = to_lower_ascii(*chunk); // fast path
            } else {
                to_lower_inplace(s, 8); // fall back to the scalar version
            }
        }
    }
    {
        const size_t k = n % 8;
        if (k) {
            to_lower_inplace(s, k);
        }
    }
}
The first step (is_ascii) and the last step (to_lower_ascii) of the algorithm are simple. Is ASCII tests only if highest bits and to lower swaps 5th bit:
#define packed_byte(b) ((uint64_t(b) & 0xff) * 0x0101010101010101u)

bool is_ascii(uint64_t chars) {

    return (chars & packed_byte(0x80)) == 0;
}

uint64_t to_lower_ascii(uint64_t chars) {

    // MSB in mask could be set
    const uint64_t mask = to_lower_ascii_mask(chars) >> 2;

    // change case (toggle 5th bit)
    const uint64_t result = chars ^ mask;

    return result;
}
Vectorization of comparison s[j] >= 'A' && s[j] <= 'Z' is the most interesting part. The key is a vector compare for relation "greater than", i.e. x > contant; x = 0..127. The expression have to be rewritten:
s[j] >= 'A' && !(s[j] >= 'Z' - 1)
When we add a byte of value 128 - constant, then for x greater than constant result would be 128 + something; otherwise the result would be less than 128. In other words, the result of comparison is saved in the highest bit of a byte; lower bits contain garbage that have to be cleared later.
A comparison is expressed by single addition. Thus the full, rewritten expression requires: 2 additions, 1 negation, and 2 bit-ands:
A = s[j..j+8] + packed_byte(128 - 'A'); Z = s[j..j+8] + packed_byte(128 - 'Z' - 1); result = (A & ~Z) & packed_byte(0x80);
An observation: it's not possible that x is less than 'A' (A is false) and at the same time greater than 'Z' (Z is true). Thanks to that the last expression could be simplified to:
result = (A ^ Z) & packed_byte(0x80);
Final version requires:
The sample program is available at github.
The test program loads given file to a 100 MiB memory region and then run the selected procedure: a scalar one, or improved. Two files were examined: | http://0x80.pl/notesen/2016-01-06-swar-swap-case.html | CC-MAIN-2020-16 | refinedweb | 581 | 65.86 |
In a previous post I introduced the Pillow imaging library and demonstrated some of its core functionality. In this post I'll show a few more features including the seemingly Dark Art of getting decent black and white images from a colour photo.
In this post I will cover these topics:
- Converting an image to black and white the wrong way!
- Converting an image to black and white the right way
- Improving a flat and dull B&W image by increasing its contrast
- Splitting an image into its RGB channels, editing them, and then gluing them back together
- Showing how to save images at various qualities
Starting to Code
Create a new folder and within it create an empty file called morepillow.py. (You can download the source code or clone/download from Github if you prefer.)
Source Code Links
In this file we will write a number of functions to demonstrate various manipulations on this JPEG image of Google's London office at Central St. Giles. This image is included in the download zip and the Github repository but you might like to substitute your own.
Imports, main and image information
This is the first part of morepillow.py. This part is basically a recap of the first part of my previous post but I have included the show_image_info function again as it will come in useful in a moment. Most of the function calls are commented out so we can uncomment and run them one at a time.
morepillow.py part 1
import PIL
from PIL import Image
from PIL import ImageEnhance


def main():

    print("-----------------")
    print("| codedrome.com |")
    print("| More Pillow |")
    print("-----------------\n")

    openfilepath = "central_st_giles.jpg"

    show_image_info(openfilepath)

    # The desaturate function is the wrong way to change an image
    # to black and white and is included here with show_image_info
    # to demonstrate that it leaves the image with a mode of RGB
    #desaturate(openfilepath, "central_st_giles_desaturated.jpg")
    #show_image_info("central_st_giles_desaturated.jpg")

    # This is the correct way to convert an image to B&W.
    # Calling show_image_info will show a mode of L
    #mode_L(openfilepath, "central_st_giles_mode_L.jpg")
    #show_image_info("central_st_giles_mode_L.jpg")

    #contrast("central_st_giles_mode_L.jpg", "central_st_giles_mode_L_increased_contrast.jpg", 2.0)

    #bands_brightness(openfilepath, "central_st_giles_bands_brightness.jpg", 1.0, 1.0, 2.0)

    #quality_demo("central_st_giles.jpg")


def show_image_info(openfilepath):

    """
    Open an image and show a few attributes
    """

    try:

        image = Image.open(openfilepath)

        print("filename: {}".format(image.filename))
        print("size: {}".format(image.size))
        print("width: {}".format(image.width))
        print("height: {}".format(image.height))
        print("format: {}".format(image.format))
        print("format description: {}".format(image.format_description))
        print("mode: {}\n".format(image.mode))

    except IOError as ioe:

        print(ioe)
Run the program as it is with the following command:
Running the Program
python3.7 morepillow.py
This will give us the following output. Note in particular that the mode is RGB.
Program Output
-----------------
| codedrome.com |
| More Pillow |
-----------------

filename: central_st_giles.jpg
size: (600, 450)
width: 600
height: 450
format: JPEG
format description: JPEG (ISO 10918)
mode: RGB
Converting an Image to Black and White
I mentioned in the previous post that reducing the saturation to 0 has the effect of converting the image to black and white. The first of the following functions does that, and the second does the same job but by using the convert method with an argument of "L". (Despite extensive Googling I have been unable to find out what "L" stands for.)
morepillow.py part 2
def desaturate(openfilepath, savefilepath):

    """
    Convert an image to black and white the wrong way.
    This method still leaves the image with a colour depth of 24 bit RGB.
    The correct method is to use convert("L")
    """

    try:
        image = Image.open(openfilepath)

        enhancer = ImageEnhance.Color(image)
        image = enhancer.enhance(0.0)

        image.save(savefilepath)

        print("Image desaturated")

    except IOError as ioe:
        print(ioe)
    except ValueError as ve:
        print(ve)


def mode_L(openfilepath, savefilepath):

    """
    The correct way to convert an image to black and white.
    Do not use ImageEnhance.Color to reduce saturation to 0
    as that leaves the colour depth at 24 bit.
    """

    try:
        image = Image.open(openfilepath)

        image = image.convert("L")

        image.save(savefilepath)

        print("Mode changed to L")

    except IOError as ioe:
        print(ioe)
    except ValueError as ve:
        print(ve)
Uncomment the calls to desaturate and mode_L in main, along with their accompanying show_image_info calls, and run the program again. This will give us the following output.
Program Output
-----------------
| codedrome.com |
| More Pillow |
-----------------

Image desaturated

filename: central_st_giles_desaturated.jpg
size: (600, 450)
width: 600
height: 450
format: JPEG
format description: JPEG (ISO 10918)
mode: RGB

Mode changed to L

filename: central_st_giles_mode_L.jpg
size: (600, 450)
width: 600
height: 450
format: JPEG
format description: JPEG (ISO 10918)
mode: L
If you go to the folder where you have your source code and images you'll find a couple more images have been created. The first, central_st_giles_desaturated.jpg, was created by the desaturate function and appears to be black and white but as you can see from the above output it is technically an RGB image which happens to contain only shades of grey.
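The distinction is easy to check in the raw pixel data: the desaturated file still stores three values per pixel, they just happen to be equal. Here is a quick sketch in plain Python (the function name is mine; it assumes pixels as (R, G, B) tuples such as those returned by Image.getdata() on an RGB image):

```python
def is_grey_rgb(pixels):
    """Return True if every (R, G, B) pixel has equal channels -
    a 24-bit image that merely looks black and white."""
    return all(r == g == b for (r, g, b) in pixels)

# A "desaturated" RGB image: still three values per pixel, all equal
print(is_grey_rgb([(10, 10, 10), (200, 200, 200)]))   # True
print(is_grey_rgb([(10, 20, 30)]))                    # False
```

Running it on the pixel data of central_st_giles_desaturated.jpg would return True even though the mode is still RGB.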
The second function, mode_L, does the conversion properly and creates central_st_giles_mode_L.jpg which does actually have an 8-bit colour depth or a mode of L - this is that image.
Improving Contrast
The above image doesn't look too bad but a very common problem is that images converted from colour to black and white look rather flat and boring. To solve this we need to increase the contrast, often by quite a lot.
The following function does this, and is a more general-purpose version of the contrast function in the previous post. Instead of having the contrast amount hard-coded for demo purposes it takes a value as an argument.
morepillow.py part 3
def contrast(openfilepath, savefilepath, amount):

    """
    A general-purpose function to change the contrast
    by the specified amount and save the image.
    """

    try:
        image = Image.open(openfilepath)

        enhancer = ImageEnhance.Contrast(image)
        image = enhancer.enhance(amount)

        image.save(savefilepath)

        print("Contrast changed")

    except IOError as ioe:
        print(ioe)
    except ValueError as ve:
        print(ve)
Uncomment the call to contrast in main and run the program. It will create this image which is a lot punchier, and the clouds and sky in particular look much better.
Splitting and Editing Colour Bands
The three colour channels (or bands to use Pillow's terminology) of an RGB image can be separated, edited individually, and then put back together. Most of the time you'll want to edit the whole image but editing individual channels allows you to alter the colour balance, and the following function does that by altering the brightnesses of the red, green and blue channels by the specified amounts.
After opening the image it calls the split() method. This returns a tuple of three images but as we need to overwrite them the tuple is converted to a list.
It then uses ImageEnhance.Brightness which I introduced in the earlier post, but on each of the three colour channels. Finally we stick the channels back together into a single image using merge() and then save that image.
morepillow.py part 4
def bands_brightness(openfilepath, savefilepath, r, g, b):

    """
    Split the image into colour channels (bands),
    change the brightness of each by the specified amount,
    merge the channels and save the image.
    """

    try:
        image = Image.open(openfilepath)

        # image.split() returns a tuple so we need to convert
        # it to a list so we can overwrite the bands.
        bands = list(image.split())

        enhancer = ImageEnhance.Brightness(bands[0])
        bands[0] = enhancer.enhance(r)

        enhancer = ImageEnhance.Brightness(bands[1])
        bands[1] = enhancer.enhance(g)

        enhancer = ImageEnhance.Brightness(bands[2])
        bands[2] = enhancer.enhance(b)

        image = PIL.Image.merge("RGB", bands)

        image.save(savefilepath)

        print("Band brightnesses changed")

    except IOError as ioe:
        print(ioe)
    except ValueError as ve:
        print(ve)
In main I have called bands_brightness with values of 1.0 for red and green, but 2.0 for blue. This is an attempt to replicate those wonderful old Kodachrome images from the 1950s when skies and seas were impossibly bright and vivid. Kodak revised Kodachrome in 1962 to give more natural looking colours and the world became a more miserable place.
Uncomment the function call in main and run the program. This is the result.
Saving Images at Various Qualities
Finally, let's look at changing the quality of files while saving them. The save method has an optional argument called quality, which can be any value between 1 and 100; the higher the number, the better the quality.
The following function demonstrates this by saving the supplied image at 25, 50, 75 and 100 using a loop, and with the qualities as file names.
morepillow.py part 5
def quality_demo(openfilepath):

    """
    Save the specified image at several different
    quality levels for demonstration purposes.
    Quality can be any value from 1 (awful) to 100 (best).
    Anything < 50 is unlikely to be acceptable.
    """

    try:
        image = Image.open(openfilepath)

        for q in range(25, 101, 25):
            filename = str(q) + ".jpg"
            image.save(filename, quality=q)

        print("Image saved at various qualities")

    except IOError as ioe:
        print(ioe)
    except ValueError as ve:
        print(ve)
If you run it from main you'll get four new files of various sizes, the smallest and worst being this one, 25.jpg.
I tried saving the image with a quality of 1 but the result was too dreadful to inflict on anyone, and even 25 is pretty poor as you can see.
The image saved at 75 was very good and acceptable for use on a web site but in an age of huge and cheap storage and fast internet connections I don't think there is really any need to trade quality for file size. If you really need to cut down on file sizes I think most people would rather see physically smaller images in good quality than larger images of poor quality. | https://www.codedrome.com/more-image-manipulations-in-python-with-pillow/ | CC-MAIN-2021-31 | refinedweb | 1,625 | 55.74 |
Float objects represent real numbers using the native architecture's double-precision floating point representation.
Float mixes in the Comparable module.

Instance methods:

ceil
flt.ceil → anInteger
Returns the smallest Integer greater than or equal to flt.

1.2.ceil    → 2
2.0.ceil    → 2
(-1.2).ceil → -1
(-2.0).ceil → -2

finite?
flt.finite? → true or false
Returns true if flt is a valid IEEE floating point number (it is not infinite, and nan? is false).

floor
flt.floor → anInteger
Returns the largest Integer less than or equal to flt.

1.2.floor    → 1
2.0.floor    → 2
(-1.2).floor → -2
(-2.0).floor → -2

infinite?
flt.infinite? → nil, -1, +1
Returns nil, -1, or +1 depending on whether flt is finite, -infinity, or +infinity.

(0.0).infinite?      → nil
(-1.0/0.0).infinite? → -1
(+1.0/0.0).infinite? → 1

nan?
flt.nan? → true or false
Returns true if flt is an invalid IEEE floating point number.

a = -1.0        → -1.0
a.nan?          → false
a = Math.log(a) → NaN
a.nan?          → true

round
flt.round → anInteger
Rounds flt to the nearest integer. Equivalent to:

def round
  return floor(self+0.5) if self > 0.0
  return ceil(self-0.5)  if self < 0.0
  return 0.0
end

1.5.round    → 2
(-1.5).round → -2

to_i
flt.to_i → anInteger
Returns flt truncated to an Integer.

to_s
flt.to_s → aString
Returns a string containing a representation of self, including the special values "NaN", "Infinity", and "-Infinity".
With the new SDK released I noticed the Metadata entities; I'm not sure if these existed beforehand. Anyway:
Metadata.Relationship
Metadata.RelationshipMetadata
Metadata.Entity
Am I correct in assuming these can be used to get all relationships based on an entity ID? If so, are there any examples someone can provide of how to properly query this? If not, can more context be provided on what the metadata entities offer? I was hoping to be able to get all relationships by passing in the nodeID or entityID, similar to how the PerfStack API operates. I could then populate some nice group views.
The entities in the Metadata namespace have always been there. Their purpose is to describe the schema. They describe types, not instances. PerfStack uses various metadata queries to figure out what types are available and how they are related as well as what metrics exist on each.
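For example, schema-level SWQL queries along these lines list the available types and how they relate (illustrative only; the property names here are quoted from memory, so verify them against the Metadata tree in SWQL Studio):

```sql
-- List the entity types the Information Service knows about
SELECT FullName, BaseType FROM Metadata.Entity

-- List how those types are related to each other
SELECT SourceType, TargetType FROM Metadata.Relationship
```

Note these return schema information (types), not the relationships of a particular node instance.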
In Loklak Search the post items contain links, which are either internal or external. These links include hashtags, mentions, and URLs. From the backend server we receive the message only in plain text format, and thus there is a need to parse the plain text and render it as clickable links. These clickable links can be either internal or external. Thus we need an auto-linker component which takes the text and renders it as links.
The API of the Component
The component takes as the input the plain text, then four arrays of strings. Each containing the text to be linked. These are hashtags, mentions, links and the unshorten attribute which is used to unshorten the shortened URLs in the post. These attributes are used by the component to render the text in the appropriate format.
export class FeedLinkerComponent implements OnInit {
  @Input() text: string;
  @Input() hashtags: string[] = new Array<string>();
  @Input() mentions: string[] = new Array<string>();
  @Input() links: string[] = new Array<string>();
  @Input() unshorten: Object = {};
}
The Logic of the Component
The basic logic of the component works as follows: we divide the text into chunks known as shards. We have three basic data structures for the component to work with:
- The ShardType, which is the type of the chunk; it specifies whether the chunk is plain text, a hashtag, a mention, or a link.
- The Shard, which is a simple object containing the text to show, its type, and the link it refers to.
- The StringIndexedChunks, which are used to index the chunks in the order in which they appear in the text.
const enum ShardType {
  plain,    // 0
  link,     // 1
  hashtag,  // 2
  mention   // 3
}

class Shard {
  constructor (
    public type: ShardType = ShardType.plain,
    public text: String = '',
    public linkTo: any = null,
    public queryParams: any = null
  ) { }
}

interface StringIndexedChunks {
  index: number;
  str: string;
  type: ShardType;
}
First we have a private method of the component which searches for all the elements (strings) in the text. Here we have an array which maintains the index of those chunks in the text.
private generateShards() {
  const indexedChunks: StringIndexedChunks[] = [];

  this.hashtags.forEach(hashtag => {
    const indices = getIndicesOf(this.text, `#${hashtag}`, false);
    indices.forEach(idx => {
      indexedChunks.push({index: idx, str: `#${hashtag}`, type: ShardType.hashtag});
    });
  });

  this.mentions.forEach(mention => {
    const indices = getIndicesOf(this.text, `@${mention}`, false);
    indices.forEach(idx => {
      indexedChunks.push({index: idx, str: `@${mention}`, type: ShardType.mention});
    });
  });
}
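The method above relies on a getIndicesOf helper which the post does not show; a minimal implementation consistent with the call sites (my own sketch, not the project's actual code) could be:

```typescript
// Return the start indices of every occurrence of searchStr in str.
// When caseSensitive is false the comparison ignores case.
function getIndicesOf(str: string, searchStr: string, caseSensitive: boolean): number[] {
  const indices: number[] = [];
  if (searchStr.length === 0) {
    return indices;
  }
  const haystack = caseSensitive ? str : str.toLowerCase();
  const needle = caseSensitive ? searchStr : searchStr.toLowerCase();
  let index = haystack.indexOf(needle, 0);
  while (index > -1) {
    indices.push(index);
    index = haystack.indexOf(needle, index + 1);
  }
  return indices;
}
```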
Then we sort the chunks according to their indexes in the text. This gives us a sorted array containing all the chunks, ordered by the index at which each appears in the text.
indexedChunks.sort((a, b) => {
  return (a.index > b.index) ? 1 : (a.index < b.index) ? -1 : 0;
});
The next part of the logic is to generate the shard array, an array which contains each chunk once. To do this we iterate over the sorted indexed array created in the previous step and use it to split the text into chunks: we walk through the text and take substrings using the indexes of each element.
let startIndex = 0;
const endIndex = this.text.length;

indexedChunks.forEach(element => {
  if (startIndex !== element.index) {
    const shard = new Shard(ShardType.plain, this.text.substring(startIndex, element.index));
    this.shardArray.push(shard);
    startIndex = element.index;
  }

  if (startIndex === element.index) {
    const str = this.text.substring(startIndex, element.index + element.str.length);
    const shard = new Shard(element.type, str);

    switch (element.type) {
      case ShardType.link: {
        if (this.unshorten[element.str]) {
          shard.linkTo = str;
          shard.text = this.unshorten[element.str];
        } else {
          shard.linkTo = str;
        }
        break;
      }
      case ShardType.hashtag: {
        shard.linkTo = ['/search'];
        shard.queryParams = { query: str };
        break;
      }
      case ShardType.mention: {
        shard.linkTo = ['/search'];
        shard.queryParams = { query: `from:${str.substring(1)}` };
        break;
      }
    }

    this.shardArray.push(shard);
    startIndex += element.str.length;
  }
});

if (startIndex !== endIndex) {
  const shard = new Shard(ShardType.plain, this.text.substring(startIndex));
  this.shardArray.push(shard);
}
After this we have generated the chunks of the text; now the only task is to write the view of the component, which uses this Shard Array to render the linked elements.
<div class="textWrapper">
  <span *ngFor="let shard of shardArray">
    <span *ngIf="shard.type === 0">
      <!-- Plain -->
      {{shard.text}}
    </span>
    <span *ngIf="shard.type === 1">
      <!-- URL Links -->
      <a>{{shard.text}}</a>
    </span>
    <span *ngIf="shard.type === 2">
      <!-- Hashtag -->
      <a [routerLink]="shard.linkTo" [queryParams]="shard.queryParams">{{shard.text}}</a>
    </span>
    <span *ngIf="shard.type === 3">
      <!-- Mention -->
      <a [routerLink]="shard.linkTo" [queryParams]="shard.queryParams">{{shard.text}}</a>
    </span>
  </span>
</div>
- This renders the chunks and handles the links of both internal and external type.
- It also makes sure that the links get unshortened properly using the unshorten API property.
- Uses routerLink, the Angular property for in-application URLs, enabling navigation without a full page reload when clicking links.
Resources and Links
This component is inspired by two main open source libraries.
- Autolinker.js by @gregjacobs
- Angular Linky package
Earlier these libraries were used in the project, but as the need for unshortening and asynchronous linking appeared in the application, a custom implementation had to be written.
Ok, I've got 2 questions off the bat, and I would also like my code to be looked at to see if you can spot any major problems offhand. I mean style problems, but first, questions hehe.
1.) 'Data Storage'
I am making a little text-based game just to get the syntax of basic functions into my head. It's all going fine, but I ran into a little problem. In this game you roll six dice; to keep track of each die I used an array like "array[7]" and used elements 1-6 to track the dice. Element 0 is an empty value just because this seemed easier lol. Well, now I need to check the data to see if any of the randomly generated dice values exists 3 times. Here is an example.
I roll: 3, 1, 2, 3, 5, 3.
I need to check how many times each number exists (numbers 1-6, on dice of course), but I can't quite think of a good way to do it in C++. I'm not too sure the array storage I'm already using is the best, either. So how would I do that? I think it would work to run a simple loop through each die (number[1], dice[1], etc.) storing a count for its number, then after the loop check the counts by looping through the count array and seeing if any element's value is 3 or more (still with me?). But is all this the best way?
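The counting approach described in the question can be written out like this (an illustration added for reference; the names are not from the original post):

```cpp
// Count how many times each face (1-6) occurs in a roll of numDice dice.
// counts must have room for indices 0..6; index 0 is left unused so that
// counts[face] reads naturally.
void countFaces(const int dice[], int numDice, int counts[])
{
    for(int i = 0; i <= 6; i++)
        counts[i] = 0;
    for(int i = 0; i < numDice; i++)
        counts[dice[i]]++;
}
```

With the example roll 3, 1, 2, 3, 5, 3, counts[3] ends up as 3, which is the "three of a kind" the game has to detect; one more loop over counts[1]..counts[6] finds any face with a count of 3 or more.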
2.) 'Dynamic Variables'
Is there an operator that allows, umm, joining? Like "myStringVar = word1 $+ word2" giving "myStringVar = word1word2"? Or, in such a case, dynamic variable naming, like "myVarOne $+ x = SomeValue", where x would be, for example, a number running in a loop. So if the loop ran 6 times you would have 6 variables: "myVarOne1, myVarOne2, myVarOne3", etc.
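For reference on question 2: std::string overloads + for joining, and C++ has no dynamic variable names; the usual substitute is an array indexed by x, or a map keyed by a generated name. An illustrative sketch (the function names are mine):

```cpp
#include <map>
#include <sstream>
#include <string>

// "myStringVar = word1 $+ word2" in C++ is just operator+ on std::string.
std::string join(const std::string& word1, const std::string& word2)
{
    return word1 + word2;   // "word1word2"
}

// "myVarOne1 .. myVarOneN" becomes entries in a map keyed by the generated name.
std::map<std::string, int> makeNumberedVars(const std::string& prefix, int count)
{
    std::map<std::string, int> vars;
    for(int x = 1; x <= count; x++)
    {
        std::ostringstream name;
        name << prefix << x;          // myVarOne1, myVarOne2, ...
        vars[name.str()] = x;
    }
    return vars;
}
```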
3.) 'My Horrible Code'
If you've got a sec, view my horrible code lol. Keep in mind I am a TOTAL C++ NUB lol, and that this code IS NOT DONE! The bottom is left undone; I realize there are loops that still need breaks in them. But nonetheless, point out little things or big things. If you're confused by anything, like what the point of the game is, or what for example "x" is representing, don't worry about it. I'm not in need of examination yet, but since I'm posting I figured I'd post and see what happens. Thanks!
*Warning, n00b code below. Shield your Eyes*
Code:
#include <iostream>
#include <iomanip>
#include <ctime>
#include <algorithm>
#include <cstdlib>
#include <cstring>

using namespace std;

int Dice_Ret(int numOfDie);
int Random_Range(int lowest_number, int highest_number);
int Rand_Test();
int menu();
void Game_Stage1();

int main()
{
    cout<<"Welcome to Zonc Alpha\nDisplaying Main Menu..\n";
    int exit = 0;
    while(1){
        switch(menu()){
            case 1:
                Rand_Test();
                break;
            case 2:
                Game_Stage1();
                break;
            case 4:
                cout<<"Exiting...\n";
                exit = 1;
                break;
            case 5:
                cout<<"Random Dice Generating: "<<Random_Range(1,6)<<"\n\n";
                break;
            default:
                cout<<"Invalid Response, Please choose again.\n";
                break;
        }
        if(exit == 0)
            continue;
        else
            break;
    }
    cout<<"Exited.\n";
}

int Rand_Test()
{
    int arreh[7];
    arreh[1] = 0, arreh[2] = 0, arreh[3] = 0, arreh[4] = 0, arreh[5] = 0, arreh[6] = 0;
    int c, r = 60000;
    cout<<"Out of "<<r<<" rolls of one die, these are the numbers generated.\n";
    for(int a = 0; a <= r; a++){
        c = Random_Range(1,6);
        arreh[c] = arreh[c]++;
    }
    for(int a = 1; a <= 6; a++){
        cout<<a<<" Was Generated \""<<arreh[a]<<"\" Times.\n";
    }
}

int Random_Range(int x, int y)
{
    if(x > y){
        swap(x,y);
    }
    int range = y - x + 1;
    return x + int(range * rand()/(RAND_MAX + 1.0));
}

int Dice_Ret(int numofDie)
{
}

int menu()
{
    cout<<"\n\n";
    cout<<"]----------------------------[\n";
    cout<<" 1: Displays the dice randomization test\n";
    cout<<" 2: Enter Beta Zonc 1\n";
    cout<<" 3: Display Menu Again\n";
    cout<<" 4: Exit\n";
    cout<<" 5: Roll Die\n";
    cout<<"]----------------------------[\n";
    int a;
    cout<<"What do you choose: ";
    cin>>a;
    return a;
}

void Game_Stage1()
{
    int dice[1] = 0, dice[2] = 0, dice[3] = 0, dice[4] = 0, dice[5] = 0, dice[6] = 0;
    unsigned int totalscore = 0, side = 0, totalcurrent = 0, input, zonc = 0;
    cout<<"\n\nWelcome to Zonc Beta 1, This is a single player game.\n"
          "The object of this is to test the coding of the game.\nIf you dont know how to play,\ninstructions should be in the main menu.\n"
          "if not, well your out of luck then because Zeus hasent coded them yet.\n\n";
    cout<<"Please choose a Gameplay Goal (the number you win if you reach, must be higher than 1k): ";
    unsigned int goal;
    cin>>goal;
    while((goal <= 1000) || (goal > 50000)){
        cout<<"\n\nError, Your Goal is too small, Please input a new Goal: ";
        cin>>goal;
    }
    cout<<"\n\nCommands;\nQ - Type at anytime to exit.\nRoll - Rolls the dice, assuming you can by the rules.\n"
          "Keep - Keeps a single die, use multiple times for multiple die.\nScore - Displays your total score (including your current roll and on the side).\n";
    int continue = 1, roll = 1; // Roll checks if the user has put away atleast one die, allowing them to roll again. On be default.
    while(continue){
        //abandoned this type of tracking multiples of the same dice number.
        int zc-one = 0, zc-two = 0, zc-three = 0, zc-four = 0, zc-five = 0, zc-six = 0;
        //Zonc Check Variables, each one stands for a dice number, a loop checks for 3 of one number.
        for(int a = 1; a <= 6; a++)
            dice[a] = Random_Range(1,6);
        cout<<"Rolling..\nDisplaying Roll Results..\n"
              "Dice1: "<<dice[1]<<", Dice2: "<<dice[2]<<", Dice3: "<<dice[3]<<", Dice4: "<<dice[4]<<", Dice5: "<<dice[5]<<", Dice6: "<<dice[6]<<"\n";
        for(int a = 1; a <= 6; a++){ //Check if the player Zonced
            zonc = 1;
            if((dice[a] == 1) || (dice[a] == 5))
                zonc = 0;
        }
    }
}
There is a bug in mangle_name.c which leads to mangling any name containing two underscores followed by a capital "U", even if those three characters are not consecutive. For example, the following program will not link:
[begin Test.java]
public class Test {
public static final native void x_y_NewUser();
public static final native void xy__User();
public static void main(String[] args) {
x_y_NewUser();
xy__User();
}
}
[end Test.java]
[begin natTest.cpp]
#include "Test.h"
void Test::x_y_NewUser() {
return;
}
void Test::xy__User() {
return;
}
[end natTest.cpp]
I will attach a patch which fixes the first case but not the second (since I'm not sure how the second case was intended to be handled).
Created attachment 11931 [details]
patch to reset uuU variable when a non-underscore is encountered
Over the past month I have been trying to make a largish Java project accessible from Perl using SWIG and GCJ. I have been very pleased with the way GCJ allowed me to accomplish this. Unfortunately, late in the project I was hit by this bug; several of the enumerations used in the Java code contained uppercase U's which were converted to _U (underscore followed by uppercase U). This eventually resulted in linking errors.
The patch attached to this bug solved the issue on GCC 4.3.1. I would therefore like to add my vote for this patch to be submitted to what is going to be GCC 4.4.
Subject: Bug 28474
Author: aph
Date: Tue Oct 20 16:01:21 2009
New Revision: 153021
URL:
Log:
2009-10-20 Joel Dice <dicej@mailsnare.net>
PR java/28474
* mangle_name.c (append_unicode_mangled_name): Fix mangling
of names with multiple underscores and "U".
(unicode_mangling_length): Likewise.
Modified:
trunk/gcc/java/ChangeLog
trunk/gcc/java/mangle_name.c
Closing as won't fix as the Java front-end has been removed from the trunk. | https://gcc.gnu.org/bugzilla/show_bug.cgi?id=28474 | CC-MAIN-2017-43 | refinedweb | 307 | 65.22 |
How To Add Days to a Date in Java
In this tutorial, we will solve the problem of how to add days to a date in Java.
We will look at which packages to import and how the solution is implemented.
How to Add Days to Date in Java
To solve this problem, we first import the java.util.Calendar package. The Calendar class provides methods for converting between a specific instant in time and a set of calendar fields.
Some of the methods of this class include getWeekYear(), getTimeZone(), etc.
We also use java.text.SimpleDateFormat, which provides methods to format and parse dates and times in Java.
Inside the main method, we create an instance of SimpleDateFormat for formatting the date in a particular desired pattern.
We also create a Calendar object with the help of the Calendar.getInstance() method, which obtains the current date from the system.
This method returns a Calendar object.
Now, we displayed the current date in a formatted manner.
Our objective here is to add days to the date. For this, Calendar provides an add() method, which adds the given amount to a specified calendar field.
We pass in the Calendar.DAY_OF_MONTH field along with the number of days we want to add to it.
Now, by simply formatting the updated calendar time, we can display the date with the days added.
Below is the program in Java for adding days to a date:
import java.text.SimpleDateFormat;
import java.util.Calendar;

class Caladd {
    public static void main(String args[]) {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
        Calendar cal = Calendar.getInstance();
        System.out.println("Current Date: " + sdf.format(cal.getTime()));
        cal.add(Calendar.DAY_OF_MONTH, 30);
        String newDate = sdf.format(cal.getTime());
        System.out.println("Date after Addition: " + newDate);
    }
}
After the above program compiles and runs successfully, we get the expected result: the current date with 30 days added.
The output shows the current date followed by the date after the addition; the exact values depend on the day you run the program.
| https://www.codespeedy.com/how-to-add-days-to-date-in-java/ | CC-MAIN-2020-40 | refinedweb | 365 | 57.27 |
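For reference, on Java 8 and newer the same task is usually done with the java.time API rather than Calendar; LocalDate is immutable and plusDays() returns a new date. A sketch (the class and method names below are illustrative, not from the tutorial):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

class DateAdd {
    // Parse an ISO date such as "2020-01-01", add the days,
    // and format the result in the tutorial's yyyy/MM/dd pattern.
    static String addDays(String isoDate, long days) {
        LocalDate date = LocalDate.parse(isoDate);
        LocalDate later = date.plusDays(days);
        return later.format(DateTimeFormatter.ofPattern("yyyy/MM/dd"));
    }

    public static void main(String[] args) {
        System.out.println("Date after Addition: " + addDays("2020-01-01", 30));
    }
}
```

For example, addDays("2020-01-01", 30) returns "2020/01/31".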
On Thu, May 08 2003, Linus Torvalds wrote:
> On Thu, 8 May 2003, Jens Axboe wrote:
> >
> > Maybe a define or two would help here. When you see drive->addressing
> > and hwif->addressing, you assume that they are used identically. That
> > !hwif->addressing means 48-bit is ok, while !drive->addressing means
> > it's not does not help at all.
>
> Why not just change the names? The current setup clearly is confusing, and
> adding defines doesn't much help. Rename the structure member so that the
> name says what it is, aka "address_mode", and when renaming it you'd go
> through the source anyway and change "!addressing" to something more
> readable like "address_mode == IDE_LBA48" or whatever.

Might not be a bad idea, drive->address_mode is a heck of a lot more to
the point. I'll do a swipe of this tomorrow, if no one beats me to it.

> (Anyway, I'll just drop all the 48-bit patches for now, since you've
> totally confused me about which ones are right and what the bugs are ;)

I think we can all agree on the last one (attached again, it's short) is
ok. The 'only use 48-bit when needed' can wait until Bart gets the
taskfile infrastructure in place, until then I'll just have to eat the
overhead :)

diff -Nru a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
--- a/drivers/ide/ide-disk.c	Thu May  8 14:32:59 2003
+++ b/drivers/ide/ide-disk.c	Thu May  8 14:32:59 2003
@@ -1479,7 +1483,7 @@
 static int set_lba_addressing (ide_drive_t *drive, int arg)
 {
-	return (probe_lba_addressing(drive, arg));
+	return probe_lba_addressing(drive, arg);
 }
 
 static void idedisk_add_settings(ide_drive_t *drive)
@@ -1565,6 +1569,18 @@
 	}
 
 	(void) probe_lba_addressing(drive, 1);
+
+	if (drive->addressing == 1) {
+		ide_hwif_t *hwif = HWIF(drive);
+		int max_s = 2048;
+
+		if (max_s > hwif->rqsize)
+			max_s = hwif->rqsize;
+
+		blk_queue_max_sectors(&drive->queue, max_s);
+	}
+
+	printk("%s: max request size: %dKiB\n", drive->name, drive->queue.max_sectors / 2);
 
 	/* Extract geometry if we did not already have one for the drive */
 	if (!drive->cyl || !drive->head || !drive->sect) {
diff -Nru a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
--- a/drivers/ide/ide-probe.c	Thu May  8 14:32:59 2003
+++ b/drivers/ide/ide-probe.c	Thu May  8 14:32:59 2003
@@ -998,6 +998,7 @@
 static void ide_init_queue(ide_drive_t *drive)
 {
 	request_queue_t *q = &drive->queue;
+	ide_hwif_t *hwif = HWIF(drive);
 	int max_sectors = 256;
 
 	/*
@@ -1013,8 +1014,10 @@
 	drive->queue_setup = 1;
 	blk_queue_segment_boundary(q, 0xffff);
 
-	if (HWIF(drive)->rqsize)
-		max_sectors = HWIF(drive)->rqsize;
+	if (!hwif->rqsize)
+		hwif->rqsize = hwif->addressing ? 256 : 65536;
+	if (hwif->rqsize < max_sectors)
+		max_sectors = hwif->rqsize;
 	blk_queue_max_sectors(q, max_sectors);
 
 	/* IDE DMA can do PRD_ENTRIES number of segments. */

-- 
Jens Axboe
A functional and reactive JavaScript framework for predictable code. cycle.js.org
xxx$.log('xxx').map(...
import {HTTPContext} from '../../http-context-driver'

function FrontPage ({HTTP}) {
  return {
    HTTP: HTTP.map((ctx: HTTPContext) => {
      const outgoing: HTTPContext = {
        request: ctx.request,
        response: ctx.response,
        body: 'FrontPage'
      }
      return outgoing
    })
  }
}

export default FrontPage
@mariuslundgard awesome! You got a github star :-)
I hope I can start wrapping as a cycle.js driver in the future. And I would love to have the req/res-part available as well.
I just don’t have the time to figure out if I need cycle.js or not in my projects. But I do use most of the thoughts I’ve heard about MVI, but as model, view, interactions, and state. | https://gitter.im/cyclejs/cyclejs?at=571b5d7c47b4c6480ff9940d | CC-MAIN-2020-24 | refinedweb | 121 | 71.1 |
Forgot a ";":

    recursion yes;
    allow-recursion { 10.5.4.0/24; 10.4.3.0/24; };
--
jbeasley@sdf.lonestar.org
SDF Public Access UNIX System -
This is a discussion on "Re: Turned recursion off and now lookups not working" in the DNS forum.
Steve Ingraham wrote:
> Kevin Darcey wrote:
>
>> It's only the *external* clients you don't want to recurse for. You
>> still may need to recurse for your *internal* clients, unless they
>> don't require resolvability of Internet names (e.g. if everything is
>> behind application-level proxies), or, alternatively, you intend to
>> host the whole Internet DNS namespace on your computer (biiiiiig box).
>>
>> Options: run separate boxes for hosting versus recursion, separate BIND
>> instances on the same box, separate "view"s within the same instance,
>> or control queries and/or recursion via allow-query and/or
>> allow-recursion.
>>
>> Note that BIND 9.4.0 just came out with an "allow-query-cache" option,
>> which makes allow-recursion a little more palatable -- previously,
>> since answers from the cache do not require recursion, this data was
>> available to external clients regardless of the allow-recursion
>> settings, which was arguably "information leakage" that might not make
>> one's security administrators/auditors very happy.
>>
>> There was recently a thread here on a very similar topic. See the posts
>> with the subject line "recursion question" at
>> &q=b
>
> I am the person who originated that original question you are referring
> to. I am still somewhat fuzzy on the recursion thing. I have set up
> the named.conf file with the option line also:
>
> {
> recursion no;
> };
>
> I have not seen any problems with user access to the internet. I do
> have an internal DNS server inside the firewall running Windows 2000 as
> an internal DNS server. In my ignorance of much of the issues
> associated with DNS I have concluded that this internal DNS is allowing
> our client machines to resolve names. Is this a correct assumption on
> my part?
Think of "recursion no" as an evil shrink ray that turns your mighty
superhero resolver into a meek little non-recursive nameserver,
basically little more than a specialized database server. Once
diminished like that, it can *only* answer from its own authoritative
data (i.e. data in zones that are defined as type master or type slave),
and won't lift a finger to query other nameservers on a client's behalf.
But, at least with its recursive capabilities wing-clipped, its
query-answering powers can only be used for good :-)
If a nameserver has "recursion no", therefore, I think it reasonable to
conclude that the internal stub resolvers (e.g. end-user clients)
pointed to that nameserver, if any, don't actually need to resolve
Internet names. Presumably this is because all of their interaction with
the Internet is done through application-level proxies (e.g. web
proxies, mail gateways, etc.), and it's the *proxies*, not the end-user
clients, that are doing the Internet name resolution, using their own
resources.
As for resolving internal names, "recursion no" imposes the burdensome
requirement that every internal zone needed by a given community of stub
resolvers be defined as master or slave on the nameserver (or view)
which serves those stub resolvers. This doesn't scale very well,
especially if you have diverse business units which need to co-ordinate
the setup and ongoing maintenance of multiple master/slave relationships
between each other's servers. It can also be viewed as overkill to slave
a zone for which queries are infrequent (how _much_ overkill depends on
a variety of factors, e.g. REFRESH setting relative to the TTLs of the
more-popular RRsets, frequency of changes to the zone, size of the zone,
whether the master and slave both support IXFR, etc.). Regardless of
those considerations, sometimes it's necessary to slave a zone, just to
provide maximum redundancy/availability.
Just because you slave a zone, of course, doesn't mean you attract query
traffic for that zone from foreign resolvers. You can be a "stealth
slave", which doesn't appear in the NS records of the zone.
For the foregoing reasons, I only define "recursion no" on our primary
master server for the internal DNS (which is only supposed to
communicate to other DNS programs via non-recursive transactions,
including zone transfers), and in one of the views of our
Internet-facing boxes. Everything else has recursion enabled.
- Kevin
Forgot a ";" |
|
recursion yes; |
allow-recursion { 10.5.4.0/24 ; 10.4.3.0/24; };
--
jbeasley@sdf.lonestar.org
SDF Public Access UNIX System - | http://fixunix.com/dns/52290-re-turned-recursion-off-now-lookups-not-working.html | CC-MAIN-2016-18 | refinedweb | 795 | 51.07 |
I've always enjoyed doing user interface work. The ability to put my work right on the screen in full view is what makes UI work different from back-end business logic. It feels more satisfying somehow to write code that does something on the screen, rather than just adjust an account balance somewhere in some database.
However, in my time working on UI, I've found that most often the interface I'm working on is something like this:
We all know how to handle this simple sort of application. We start by coding up a data class to represent the underlying information:
public class TodoItem { private TodoPriority priority; private DateTime dueDate; private string description; public TodoPriority Priority { get { return priority; } set { priority = value; } } public DateTime DueDate { get { return dueDate; } set { dueDate = value; } } public string Description { get { return description; } set { description = value; } } }
Then we start writing the code to read and write from the controls to the data structure. For the form in the above screenshot, it's something along these lines:
private void highPriButton_Click(object sender, System.EventArgs e) { todoItem.Priority = TodoPriority.High; } private void mediumPriButton_Click(object sender, System.EventArgs e) { todoItem.Priority = TodoPriority.Medium; } private void lowPriButton_CheckedChanged(object sender, System.EventArgs e) { todoItem.Priority = TodoPriority.Low; }
This is repeated ad nauseum for each control on the form. Basically, we write event handlers so that each change in the UI makes a corresponding change in the underlying data model.
Going the other direction becomes a bit of a challenge, though. How do we get changes in the data model reflected in the user interface? Data can change for any number of reasons. File|Open will load a whole new set of data -- the entire UI will need to be updated. Or suppose your fearless leader comes back and says "Oh, by the way, add a list box to that form so that it shows the due dates for all of your current Todo items." Now you've got two windows that depend on the same data model. Changes in one have to be reflected in the other.
The "easy" way is to simple add code to the Todo edit window to update the Todo list window. But that way lies madness -- the madness of highly-coupled classes. The only reason the Todo editor needs to know anything about the Todo list is to update the Todo list when the data changes. But that has nothing to do with the job of the Todo editor. Good OO design principles say that a class should do only one thing; we're asking the Todo editor to do two: update the data and update its sibling windows. And then what happens when your manager comes in and asks for a third window? What about reusing the Todo editor window in another app?
This is a classic problem in UI code, and there's a classic solution: the Model/View/Controller (MVC) architecture. (Those of us who've done coding with the Microsoft Foundation Classes (MFC) are familiar with the Doc/View architecture, a variation of MVC that merges the controller and view.) The fundamental idea is to split your app into separate Model (or Document) classes that store data and do processing, and View classes that are the actual user interface. As the user does things to the View (mouse clicks, typing, etc) the View class calls methods on the Model class to reflect the changes. The Model class then fires a callback to let its Views know when something has changed. Those of you who are familiar with design patterns will recognize this as the Observer pattern. A UML sequence diagram for our todo app demonstrating this interaction is shown below:
Notice on the diagram that the
TodoItem (our Model) is updating two different
views. The nice thing about this architecture is that multiple views can share
the same model. This way, when the Todo editor changes the date on its Todo
item, it calls the Model's
DueDate setter method, which then triggers the
Model's
Update callback, thus causing the Todo list window to update itself.
Todo editor and Todo list need know nothing about each other, and each can be
reused separately.
Thanks to C#'s event syntax, implementing this in a .NET Windows Forms applicaiton is very simple. All we need to do is redo our model code slightly:
public class TodoItem { public delegate void TodoItemUpdate( TodoItem source, int whatChanged ); public event TodoItemUpdate Update; private TodoPriority priority; private DateTime dueDate; private string description; public TodoPriority Priority { get { return priority; } set { if( value != priority ) { priority = value; if( Update != null ) { Update( this, 0 ); } } } } public DateTime DueDate { get { return dueDate; } set { if( value != dueDate ) { dueDate = value; if( Update != null ) { Update( this, 1 ); } } } } public string Description { get { return description; } set { if( value != description ) { description = value; if( Update != null ) { Update( this, 2 ); } } } } }
I've added the
Update event, and in each set method I've made sure to fire the
event if the property actually changes. I've also added parameters to the
event; in this case, the first parameter is the model that's changed, and the
second is an integer that tells the client which specific field of the model
has changed. This is pretty much what MFC's Doc/View architecture does. It's up
to the View class to know what the hint means.
This does work, but it becomes tedious. When writing a new model class, you've got to add setter methods for every properly, and remember to call the update functions. When writing views, you're pretty much forced to take the entire model; even if all you care about is the priority, you have to make sure you properly decode the hints to filter out updates that you don't care about. And it gets really boring, when writing view classes, to write the same code to pull out a string property from the model and stuff it into an edit control for the thirteenth time on the same dialog.
There are several approaches to take to get around this problem. One that I've had some success with uses composite models. The idea is simply this: the "view" class doesn't actually have to be a user interface; it just needs to be an object that registers for updates. You can build up larger model classes from smaller ones. For example, here's a model that handles a single string:
public class StringModel { public delegate void StringChanged( StringModel source ); public event StringChanged Update; private string s; public StringModel() { } public StringModel( String s ) { this.s = s; } public void Set( string s ) { if( s != this.s ) { this.s = s; if( Update != null ) { Update( this ); } } } public string Get() { return s; } }
There's nothing magic here; it's just like the model classes we've written
before, except that there's only the single data item. Supposing that we also
have a
DateTimeModel and a
TodoPriorityModel, our
TodoItem class now becomes
this:
public class TodoItem2 { public delegate void TodoItem2Changed( TodoItem2 source ); public event TodoItemChanged Update; public TodoPriorityModel Priority = new TodoPriorityModel(); public DateTimeModel DueDate = new DateTimeModel(); public StringModel Description = new StringModel(); public TodoItem2( ) { Priority.Update += new TodoPriorityModel.TodoPriorityChanged( this.OnPriorityChanged ); DueDate.Update += new DateTimeModel.DateTimeChanged( this.OnDueDateChanged ); Description.Update += new StringModel.StringChanged( this.OnDescriptionChanged ); } private void FireUpdate() { if( Update != null ) { Update( this ); } } private void OnPriorityChanged( TodoPriorityModel source ) { FireUpdate(); } private void OnDueDateChanged( DateTimeModel source ) { FireUpdate(); } private void OnDescriptionChanged( StringModel source ) { FireUpdate(); } }
Instead of manually writing lots of property set and get functions, we simply
use member variables of the appropriate model types. Then the composite model
registers for updates on changes to its member models. This way, when a view
calls
model.Description.Set( "A new description" ) the
TodoItem
object itself will get an
Update call. This way the model can react to changes
in its contained data. In this case, all
TodoItem does is fire its own
Update
callback.
This gives views a great deal of flexibility. A view can either register with
the model as a whole, or with individual sub-items in the model, depending on
what it's interested in. This also opens the doors to composite views as well.
You could write, for example, a
TextBoxView that subclasses
System.Windows.Forms.TextBox, stores a reference to an underlying
StringModel
object, registers for updates on that model, and updates the model
appropriately as the contents of the text box change. Then, to hook up to a
string in a model, all you'd need to do is drag the
TextBoxView onto your form
and set its model property.
This worked out pretty well when I did it in C++. However, in the .NET world, this suffers from one big problem: lots of boilerplate code to write. The get and set methods in every atomic model are identical; all that's different are the types. In C++ you can use templates to make writing the models easier, but in .NET, there's no way around just grinding out the code (yet).
You would think that Microsoft would have addressed this problem in the WinForms framework. And they have, but their solution isn't advertised as a general UI update framework. It's Windows Forms Data Binding, which I'll talk about in Part 2 of this series.
Chris Tavares is a development lead on the patterns & practices team, producing written and code-based guidance for .NET developers.
Return to ONDotnet.com | http://archive.oreilly.com/lpt/a/2778 | CC-MAIN-2014-49 | refinedweb | 1,576 | 54.32 |
Hi everybody. I can't understand why the compiler give me this warning:
warning: #warning "F_CPU not defined for
In mylcd.c I try with:
#ifndef F_CPU #define F_CPU 8000000UL #endif #include "mylcd.h" #include
#include #include
Then in main.c I use
#include
#include "mylcd.h" #include #include #include
Can anybody explain the correct use of
#define F_CPU
Thanks.
Think about it.
"mylcd.c" uses the _delay_xx() macros. Fine because it knows F_CPU.
"main.c" uses the delay_xx() macros. But it does NOT know F_CPU
Either put F_CPU in the Studio Configuration (or makefile)
Or place in ONE header file that is common to every file in your project that uses _delay_xx() ...
Normally you have a "project.h" but here you seem to have a common "mylcd.h"
David.
Top
- Log in or register to post comments
Ok. I try to place in "mylcd.h"
.
Then I use the directive
in both "mylcd.c" and "main.c", but i have the same problem.
Top
- Log in or register to post comments
Try to think about the order in which the C pre processor operates. It's sequential. So if you include "mylcd.h" (in which F_CPU is defined) AFTER the use of
then at that point F_CPU will not be defined.
To be honest you are better passing F_CPU as a -D from the Makefile anyway. Do you use a Makefile or do you use AVR Studio's project mechanism?
Cliff
Top
- Log in or register to post comments
Are you being deliberately obtuse?
Personally I would always put the F_CPU in the makefile (or Studio config)
Using '#ifndef F_CPU' is a kludge in my opinion. You risk different values in different modules. I prefer to get the 'undefined' Warning. Then I correct my Studio configuration.
David.
Top
- Log in or register to post comments
One of the strong arguments for using -D instead and having a common value visible to all compilation units and defined BEFORE any #include.
Top
- Log in or register to post comments | https://www.avrfreaks.net/forum/fcpu-not-defined?name=PNphpBB2&file=viewtopic&t=88000 | CC-MAIN-2019-47 | refinedweb | 338 | 77.13 |
Hello fellow SpiceHeads,
I currently have a Windows Server 2008 Standard SP2 32-bit domain controller that's had a couple of unexpected crashes in recent weeks. My boss and I have discussed this and given the server's age (6 years-out of warranty 3 years), I've been tasked with exploring replacement options and procedures. Here are the physical specs:
- Dell PowerEdge T410
- 4GB RAM
- 3TB total disk space (no RAID)
The following roles are housed on this server:
- Active Directory Domain Services (2003 Forest Functional Level)
- Application Server
- DHCP
- DNS (Primary Server)
- File Services (1.9TB of the 3TB total used for this, 400GB remaining, split among 9 different share directories)
- Print Services (13 networked printers/copiers throughout our facility)
- IIS
- Windows Server Update Services (3.0 SP2)
All of this is run straight on the physical server (no virtualization). In addition, we have a PowerEdge T310 as our backup DC/secondary DNS (also Server 2008), plus 3 other 2008 file servers (all 32-bit), a two Server 2008 R2 64-bit units (ERP server and Exchange 2010 SP3).
As I indicated in the title, I'm willing to go to Server 2012 R2 or Server 2016. I favor the latter if possible since server budgets are scarce around here, and this server will have to last at least 5 years, possibly more, and I'm imagining the other domain controller will have to eventually be upgraded as a result. I'm not a total stranger to server migrations (I was involved in the Exchange 2010 migration from a Server 2003/Exchange 2003 setup). This will probably be all on a single physical server (although I intend to push for putting the DC and file server roles on separate VMs as a best practice). However, if multiple servers are recommended, I can work with that as well. I will be working with Google of course, and Microsoft Virtual Academy (I went through their Server 2012 crash course). I just wonder if anyone ever attempted such a far-reaching migration, and if so, what advice would you all recommend?
15 Replies
Not sure what is 'far-reaching' about this, it seems pretty straight forward.
AD and Exchange should be no issues, are you planning on staying on Exchange 2010? Just double check it is supported on 2016
Then for the ERP software, is that supported on 2016? Are you comfortable with it, or would you need vendor support to move it? Also, does it support virtualization?
You'd want to virtualize all of that.
Do you have any specific questions?
Sorry, I guess I'm just a little intimidated by scope of the move (since we'll be jumping three server versions), although I figure if I do my homework, it should be straightforward. My initial research on Microsoft indicates no major problems (and there seem to be some good tools built into the OS to make this relatively painless:-) ).
The plan is the stay with Exchange 2010, at least for now. Since formal budgets are virtually nonexistent here, I don't know when the next Exchange upgrade will be (although the current setup went live 3 years ago). If Microsoft says no-go between Server 2016 and Exchange 2010, then I'll go to Server 2012 R2.
I talked with our ERP vendor (Exact Software), and they said that a Server 2016 domain controller shouldn't pose a problem, since that will remain on it's current physical (2008 R2) server (no virtualization at this company right now).
I don't have any specific questions at this point, although as the actual move takes place I imagine questions will come up. I'm more trying to get a feel of other people's experiences with Server 2016, and any potential "gotchas".
You will need new Windows CALs if you deploy a server OS higher than what your CALs are for currently. I am guessing that your CALs are for 2008 R2, as that is the latest server version you have deployed. If you deploy Windows Server 2012 R2 or 2016, you need to buy new Windows Server 2016 CALs.
I don't see any technical issues with making the jump.
As a good practice, I like to give my new DC the same IP address of the old DC at cutover, so I don't have to reprogram and statically assigned devices, or adjust any DHCP scopes. :-)
It's possible to alias print servers, so the old server name can still work.
If you are not using a DFS namespace already, now if a great time to put one in. Keeps all UNC shortcuts and drive mappings intact, even as the names of the underlying file servers change.
This requires one server, two at the most depending on storage requirements and availability. I currently count seven physical servers in your setup which is unnecessary. Purchase you a quality server with plenty of CPU (Server 2016 licenses you for 16 cores minimum, so might as well get them!), plenty of RAM based on current plus future needs, and storage that is large enough and has the IO requirements you need. Virtualize that bad boy using your hypervisor of choice, purchase a few Server 2012 or 2016 licenses to cover the virtual licenses you need and the CALs to back it up and boom, done. Get rid of all that wasted physical space, power consumption, etc.
I would encourage you to minimize applications per VM where you can and it makes sense. For instance, leave the domain controllers to themselves with only AD, DHCP, and DNS. Possibly leave the file server to itself. Definitely want to leave Exchange and ERP to themselves. You can combine some of the other services such as WSUS, Print Server, IIS, etc. Save money on physical hardware and use that money towards Server licensing.
Thanks again to everyone for their input.
kevinmhsieh, I definitely agree with the IP address cutover. That will definitely save some headaches. I don't have a DFS namespace, but perhaps it's time to implement one.
Zachary715, I agree that only one physical device should be necessary if I can get enough storage for it (that's an issue with the current server). Your ideas on what roles to put on which VMs mesh with my early thoughts. Hopefully I can use this (and the fact that Server 2016 allows so many cores in one license) to convince the people holding the purse strings to give us the funds for a sufficiently powerful server.
I'm definitely going to be looking all this over during the course of the weekend and the coming week to flesh out what kind of physical server(s) will fit the bill.
Anything above 16 physical cores or 2 VMs will cost you additional in licensing.
You have to license all physical cores in the host, with a 16 core minimum, even if you have fewer cores.
With Windows Standard, each "license" (really core packs equal to your number of cores/16 core minimum), you are entitled to run 2 Windows Server VMs. To run additional Windows Server VMs, will require more licensing at 2 VMs per for each Windows Server "license".
Windows Server datacenter allows unlimited copies of Windows Server, but it only pencils out at around 9 VMs or so. I don't remember the exact number.
If you are going to install the DC on SErver 2016 you will need to perform some check o the existing environment first:
- The Domain Functional Level must be at Server 2008
- SYSVOL must be shared with DFSR and not FRS.
These are lickely to be issues if you upgraded from 2003 DCs in the past.
Exchange 201 should be fine but you probably want to be on a recent Cumulative Update. But check that assumption anyway.
Verify the current AD health by running these commands on both DCs and checking for errors:
dcdiag /c /v repadmin /replsum
Since you use the term migration, I supposed you will be getting new servers and installing fresh OS on it and move your stuff from the old server?
I believe that is the recommended way as compared to upgrading your existing OS to new OS on the same box.
Recently I did a migration from 2008R2 to 2012R2 as well. In fact, I am sort of still in the midst of finalizing the migration. So, I guess I will share my pain and hiccups with you so that you can avoid it.
1. follow the advice from other spiceheads of using the same IP for the new server. This will reduce some headaches. I was bitten by it as I used a new IP and the helper IP in router was not updated. As a result, DHCP did not work on different subnets.
2. personally, I would be more conservative and stay on 2012r2 instead of going to 2016 directly. As 2016 is relatively new, it is probably a bit more risky to use for production as you might have issues finding sufficient resources during troubleshooting if anything goes wrong.
3. virtualize if you can convince your management.
4. always check your domain health with dcdiag, check your replication with repadmin, check if you are using DFSR by using dfsrmig - "eliminated" would indicate that you are using DFSR for SYSVOL replication. It looks to me your domain was from Win2003 like mine. So chances are you will have to do a SYSVOL migration first.
5. always test the migration in a lab environment prior of doing the real thing. Try to make the lab environment as close as the real deal. If you can't, at least try out the basic migration for 2008R2 to the OS of choice. Try out the various migration protocols out there to find the most suitable ones for your situation
6. schedule the migration timing properly depending on your business hour. Avoid doing it during peak activity. Doing it after office hour is the safest as it gives you time to recover any mistakes or unexpected events
7. always backup your servers prior migration if you can.
8. check compatibility of your application with the new OS as suggested by others.
Good luck! And have faith in your ability! | https://community.spiceworks.com/topic/1954283-migrate-windows-server-2008-dc-to-windows-server-2012-r2-2016-new-server | CC-MAIN-2021-49 | refinedweb | 1,718 | 69.62 |
51679/how-to-input-optional-arguments-in-python-command-line
Hi All,
I would like to know how do I make an option file as an argument on command prompt in python.
At present I using :
if len(sys.argv) == 3:
first_log = sys.argv[1]
second_log = sys.argv[2]
else:
print "enter the second argument"
It works well for the following command :
python test.py file1 file2
However I have another case where only file1 may be present so file1 is mandatory for this script to run however file2 is optionnal :
if len(sys.argv) == 2:
first_log = sys.argv[1]
second_log = sys.argv[2]
pthon test.py file1
It gives the error :
second_log = sys.argv[2]
IndexError: list index out of range
How do I achieve this because if python test.py file1 file2 then I would process both files?
Please use this code.
if len(sys.argv) == 2:
first_log = sys.argv[1]
second_log = sys.argv[2]
import sys
print(sys.argv)
More specifically, if you run python example.py ...READ MORE
To read user input you can try the cmd module for ...READ MORE
You can use '\n' for a next ...READ MORE
The canonical solution in the standard library ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
The correct, fully Pythonic way to read ...READ MORE
Memory management in python involves a private heap ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/51679/how-to-input-optional-arguments-in-python-command-line | CC-MAIN-2021-43 | refinedweb | 285 | 79.46 |
Given a string ‘s’, which of the following expressions is faster?
1. String.IsNullOrEmpty( s )
2. s == null || s.Length == 0
If you guessed option #2, you are correct. As you might expect, it takes about 15% more time to call the IsNullOrEmpty method, but this represents only about one second per hundred million executions.
Here is a simple C# console program that compares the two options:
using System; namespace StringNullEmpty { class Program { static void Main( string[] args ) { long loop = 100000000; string s = null; long option = 0; long empties1 = 0; long empties2 = 0; DateTime time1 = DateTime.Now; for (long i = 0; i < loop; i++) { option = i % 4; switch (option) { case 0: s = null; break; case 1: s = String.Empty; break; case 2: s = "H"; break; case 3: s = "HI"; break; } if (String.IsNullOrEmpty( s )) empties1++; } DateTime time2 = DateTime.Now; for (long i = 0; i < loop; i++) { option = i % 4; switch (option) { case 0: s = null; break; case 1: s = String.Empty; break; case 2: s = "H"; break; case 3: s = "HI"; break; } if (s == null || s.Length == 0) empties2++; } DateTime time3 = DateTime.Now; TimeSpan span1 = time2.Subtract( time1 ); TimeSpan span2 = time3.Subtract( time2 ); Console.WriteLine( "(String.IsNullOrEmpty( s )): Time={0} Empties={1}", span1, empties1 ); Console.WriteLine( "(s == null || s.Length == 0): Time={0} Empties={1}", span2, empties2 ); Console.ReadLine(); } } }
The program output was:
(String.IsNullOrEmpty( s )): Time=00:00:06.8437500 Empties=50000000
(s == null || s.Length == 0): Time=00:00:05.9218750 Empties=50000000
Note this test is unscientific, and times may vary slightly with each run and of course from PC to PC.
The time difference is minimal enough that you can safely choose either option. You may actually prefer IsNullOrEmpty because it’s more intuitive. And the rumor about IsNullOrEmpty crashing is much ado about nothing.
Errr…if you use the shiny new .NET symbols and debug into the framework from VS.NET 2008 you will see that in String.cs IsNullOrEmpty is really just:
public static bool IsNullOrEmpty(string value)
{
return (value == null || value.Length == 0);
}
So the small additional cost is really pushing and poping the stack etc.
Suggestion…edo your test using the Stopwatch Class instead of datetime.
[…] string = "" or = string.Empty? A short note on efficiency can be found here. "Sami" wrote: > string = "" or string = string.Empty? > should is the […]
When compiling in release mode, the IsNullOrEmpty method is expanded inline, so the function call disappears and the gain no longer exists.
I just testing the same code but using
long loop = 5000000000;
and the following is the results | https://www.csharp411.com/stringisnullorempty-shootout/ | CC-MAIN-2021-43 | refinedweb | 426 | 77.84 |
Your Account
by Gregory Brown
class Fixnum
def +(other)
self - ( -1 * other ) - 2
end
end
class Fixnum
def squared
self ** 2
end
end
class Fixnum
attr_accessor :letter
end
('a'..'z').each_with_index { |letter, index| index.letter = letter }
0.letter == 'a'
1.letter == 'b'
ITEM_STATUS_NEW = 0
ITEM_STATUS_NEW.text = "New Item"
I thought I was pretty clever, until I realized that I couldn't reuse 0 in any other status categories (for instance I had CUSTOMER_STATUS_CLOSED = 0 elsewhere in my code) as all references to 0 were to the same Fixnum object (and you can't clone a Fixnum of course).
Probably get the same functionality of Constants of OpenStructs
ITEM_STATUS_NEW = OpenStruct.new(:text => "New Item")
ITEM_STATUS_NEW = OpenStruct.new(:text => "New Item")
This would let you add more data down the line as well
Maybe a Status singleton would be even better though?
© 2014, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://archive.oreilly.com/pub/post/playing_with_numbers.html | CC-MAIN-2014-52 | refinedweb | 167 | 67.15 |
Source
JythonBook / chapter6.rst
Chapter 6: Exception Handling and Debugging
===========================================
Any good program makes use of a language's exception handling mechanisms. There is no better way to frustrate an end-user than by having them run into an issue with your software and be greeted by a big ugly error message on the screen, followed by a program crash. Exception handling is all about ensuring that when your program encounters an issue, it will continue to run and provide informative feedback to the end-user or program administrator. Any Java programmer becomes familiar with exception handling on day one, as some Java code won't even compile unless there is some form of exception handling put into place via the try-catch-finally syntax. Python has similar constructs to those of Java, and we'll discuss them in this chapter.
After you have found an exception, or preferably before your software is distributed, you should go through the code and debug it in order to find and repair the erroneous code. There are many different ways to debug and repair code; we will go through some debugging methodologies in this chapter. In Python as well as Java, the assert keyword can help out tremendously in this area. We’ll cover assert in depth here and learn the different ways that it can be used to help you out and save time debugging those hard-to-find errors.
Exception Handling Syntax and Differences with Java
---------------------------------------------------
Java developers are very familiar with the try-catch-finally block, as this is the main mechanism used to perform exception handling. Python exception handling differs a bit from Java's, but the syntax is fairly similar. Where the two really part ways is in how an exception is thrown in code. Now, realize that I just used the term throw; this is Java terminology. Python does not throw exceptions, but instead it raises them: two different terms that mean basically the same thing. In this section, we'll step through the process of handling and raising exceptions in Python code, and show you how it differs from that in Java.
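To make the terminology concrete, here is a minimal sketch of raising and then catching an exception in Python. The function name and message are invented for illustration, and the code is kept portable across Python versions:

```python
def check_age(age):
    # In Python we "raise" an exception rather than "throw" one
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

try:
    check_age(-5)
except ValueError:
    print("caught a raised ValueError")
```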
For those who are unfamiliar, I will show you how to perform some exception handling in the Java language. This will give you an opportunity to compare the two syntaxes and appreciate the flexibility that Python offers::

    try {
        // perform some tasks that may throw an exception
    } catch (ExceptionType messageVariable) {
        // perform some exception handling
    } finally {
        // execute code that must always be invoked
    }
Now let’s go on to learn how to make this work in Python. Not only will we see how to handle and raise exceptions, but you’ll also learn some other great techniques later in the chapter.
Catching Exceptions
-------------------
How often have you been working in a program and performed some action that caused the program to abort and display a nasty error message? It happens more often than it should because most exceptions can be caught and handled nicely. By nicely, I mean that the program will not abort and the end user will receive a descriptive error message stating what the problem is, and in some cases how it can be resolved. The exception handling mechanisms within programming languages were developed for this purpose.
Below is a table of all exceptions that are built into the Python language along with a description of each. You can write any of these into an except clause and try to handle them. Later in this chapter I will show you how you can raise them if you'd like. Lastly, if there is a specific type of exception that you'd like to raise that does not fit any of these, then you can write your own exception type object.
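As a brief sketch of that last point, a user-defined exception is simply a class that extends Exception. The class name and message below are made up for illustration:

```python
class InventoryError(Exception):
    """A hypothetical application-specific exception type."""
    pass

try:
    raise InventoryError("item is out of stock")
except InventoryError:
    print("handled our custom exception")
```

Because InventoryError subclasses Exception, it can be caught either by name or by any generic handler that catches Exception.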
The try-except-finally block is used in Python programs to perform the exception-handling task. Much like that of Java, code that may or may not raise an exception should be placed in the try block. Differently though, exceptions that may be caught go into an except block much like the Java catch equivalent. Any tasks that must be performed no matter if an exception is thrown or not should go into the finally block.
try-except-finally Logic
try: # perform some task that may raise an exception except Exception, value: # perform some exception handling finally: # perform tasks that must always be completed
Python also offers an optional else clause to create the try-except-else logic. This optional code placed inside the else block is run if there are no exceptions found in the block.
try-finally logic:
try: # perform some tasks that may raise an exception finally: # perform tasks that must always be completed
try-except-else logic:
try: # perform some tasks that may raise an exception except: # perform some exception handling else: # perform some tasks that should only be performed if no exceptions are thrown
You can name the specific type of exception to catch within the except block , or you can generically define an exception handling block by not naming any exception at all. Best practice of course states that you should always try to name the exception and then provide the best possible handling solution for the case. After all, if the program is simply going to spit out a nasty error then the exception handling block does not help resolve the issue at all. However, there are some rare cases where it would be advantageous to not explicitly refer to an exception type when we simply wish to ignore errors and move on. The except block also allows us to define a variable to which the exception message will be assigned. This allows us the ability to store that message and display it somewhere within our exception handling code block. If you are calling a piece of Java code from within Jython and the Java code throws an exception, it can be handled within Jython in the same manner as Jython exceptions.
Example 5-1: Exception Handling in Python
# Code without an exception handler >>> x = 10 >>> z = x / y Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'y' is not defined # The same code with an exception handling block >>> x = 10 >>> try: ... z = x / y ... except NameError, err: ... print "One of the variables was undefined: ", err ... One of the variables was undefined: name 'y' is not defined
Take note of the syntax that is being used for defining the variable that holds the error message. Namely, the except ExceptionType, value statement syntax in Python and Jython 2.5 differs from that beyond 2.5. In Python 2.6, the syntax changes a bit in order to ready developers for Python 3, which exclusively uses the new syntax. Without going off topic too much, I think it is important to take note that this syntax will be changing in future releases of Jython.
Jython and Python 2.5 and Prior
try: // code except ExceptionType, messageVar: // code
Jython 2.6 (Not Yet Implemented) and Python 2.6 and Beyond
try: // code except ExceptionType as messageVar: // code
We had previously mentioned that it was simply bad programming practice to not explicitly name an exception type when writing exception handling code. This is true, however Python provides us with another means to obtain the type of exception that was thrown. There is a function provided in the sys package known as sys.exc_info() that will provide us with both the exception type and the exception message. This can be quite useful if we are wrapping some code in a try-except block but we really aren’t sure what type of exception may be thrown. Below is an example of using this technique.
Example 5-2: Using sys.exc_info()
# Perform exception handling without explicitly naming the exception type >>> x = 10 >>> try: ... z = x / y ... except: ... print "Unexpected error: ", sys.exc_info()[0], sys.exc_info()[1] ... Unexpected error: <type 'exceptions.NameError'> name 'y' is not defined
Sometimes you may run into a situation where it is applicable to catch more than one exception. Python offers a couple of different options if you need to do such exception handling. You can either use multiple except clauses, which does the trick and works well, but may become too wordy. The other option that you have is to enclose your exception types within parentheses and separated by commas on your except statement. Take a look at the following example that portrays the latter approach using the same example from Example 5-1.
Example 5-3: Handling Multiple Exceptions
# Catch NameError, but also a ZeroDivisionError in case a zero is used in the equation >>> x = 10 >>> try: ... z = x / y ... except (NameError,ZeroDivisionError), err: ... print "One of the variables was undefined: ", err ... One of the variables was undefined: name 'y' is not defined # Using mulitple except clauses >>> x = 10 >>> y = 0 >>> try: ... z = x / y ... except NameError, err1: ... print err1 ... except ZeroDivisionError, err2: ... print 'You cannot divide a number by zero!' ... You cannot divide a number by zero!
The try-except block can be nested as deep as you’d like. In the case of nested exception handling blocks, if an exception is thrown then the program control will jump out of the inner most block that received the error, and up to the block just above it. This is very much the same type of action that is taken when you are working in a nested loop and then run into a break statement, your code will stop executing and jump back up to the outer loop. The following example shows an example for such logic.
Example 5-4: Nested Exception Handling Blocks
# Perform some division on numbers entered by keyboard try: # do some work try: x = raw_input ('Enter a number for the dividend: ') y = raw_input('Enter a number to divisor: ') x = int(x) y = int(y) except ValueError, err2: # handle exception and move to outer try-except print 'You must enter a numeric value!' z = x / y except ZeroDivisionError, err1: # handle exception print 'You cannot divide by zero!' except TypeError, err3: print 'Retry and only use numeric values this time!' else: print 'Your quotient is: %d' % (z)
Raising Exceptions
Often times you will find reason to raise your own exceptions. Maybe you are expecting a certain type of keyboard entry, and a user enters something incorrectly that your program does not like. This would be a case when you’d like to raise your own exception. The raise statement can be used to allow you to raise an exception where you deem appropriate. Using the raise statement, you can cause any of the Python exception types to be raised, you could raise your own exception that you define (discussed in the next section), or you could raise a string exception. The raise statement is analogous to the throw statement in the Java language. In Java we may opt to throw an exception if necessary. However, Java also allows you to apply a throws clause to a particular method if an exception may possibly be thrown within instead of using try-catch handler in the method. Python does not allow you do perform such techniques using the raise statement.
raise Statement Syntax
raise ExceptionType or String[, message[, traceback]]
As you can see from the syntax, using raise allows you to become creative in that you could use your own string when raising an error. However, this is not really looked upon as a best practice as you should try to raise a defined exception type if at all possible. You can also provide a short message explaining the error. This message can be any string. Lastly, you can provide a traceback via use of sys.exc_info(). Now you’ve surely seen some exceptions raised in the Python interpreter by now. Each time an exception is raised, a message appears that was created by the interpreter to give you feedback about the exception and where the offending line of code may be. There is always a traceback section when any exception is raised. This really gives you more information on where the exception was raised.
Example 5-5: Using the raise Statement
>>> raise TypeError,"This is a special message" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: This is a special message
Defining Your Own Exceptions
You can define your own exceptions in Python by creating an exception class. Now classes are a topic that we have not yet covered, so this section gets a little ahead, but it is fairly straightforward. You simply define a class using the class keyword and then give it a name. An exception class should inherit from the base exception class, Exception. The easiest defined exception can simply use a pass statement inside the class. More involved exception classes can accept parameters and define an initializer. It is also a good practice to name your exception giving it a suffix of Error.
Example 5-6: Defining an Exception Class
class MyNewError(Exception): pass
The example above is the simplest type of exception you can create. This exception that was created above can be raised just like any other exception now.
raise MyNewError, “Something happened in my program”
A more involved exception class may be written as follows.
Example 5-7: Exception Class Using Initializer
class MegaError(Exception): “”” This is raised when there is a huge problem with my program””” def __init__(self, val): self.val = val def __str__(self): return repr(self.val)
Issuing Warnings
Warnings can be raised at any time in your program and can be used to display some type of warning message, but they do not necessarily cause execution to abort. A good example is when you wish to deprecate a method or implementation but still make it usable for compatibility. You could create a warning to alert the user and let them know that such methods are deprecated and point them to the new definition, but the program would not abort. Warnings are easy to define, but they can be complex if you wish to define rules on them using filters. Much like exceptions, there are a number of defined warnings that can be used for categorizing. In order to allow these warnings to be easily converted into exceptions, they are all instances of the Exception type.
Table 5-2. Python Warning Categories
Table 5-1: Exceptions
To issue a warning, you must first import the warnings module into your program. Once this has been done then it is as simple as making a call to the warnings.warn() function and passing it a string with the warning message. However, if you’d like to control the type of warning that is issued, you can also pass the warning category.
import warnings … warnings.warn(“this feature will be deprecated”) warnings.warn(“this is a more involved warning”, RuntimeWarning)
Importing the warnings module into your code gives you access to a number of built-in warning functions that can be used. If you’d like to filter a warning and change its behavior then you can do so by creating a filter. The following is a list of functions that come with the warnings module.
This adds an entry into a warning filter list. Warning filters allow you to modify the behavior of a warning. The action in the warning filter can be one from the following table of actions, message is a regular expression, category is the type of a warning to be issued, module can be a regular expression, lineno is a line number to match against all lines, append specifies whether the filter should be appended to the list of all filters.
Table 5-3. Warning Functions
Warning filters are used to modify the behavior of a particular warning. There can be many different warning filters in use, and each call to the filterwarnings() function will append another warning to the list of filters if so desired. In order to see which filters are currently in use, issue the command print warnings.filters. One can also specify a warning filter from the command line by use of the –W option. Lastly, all warnings can be reset to defaults by using the resetwarnings() function.:
-Waction:message:category:module:lineno
Assertions and Debugging
Debugging can be an easy task in Python via use of the assert statement and the __debug__ variable. Assertions are statements that can print to indicate that a particular piece of code is not behaving as expected. The assertion checks an expression for a True or False value, and if False then it issues an AssertionError along with an optional message. If the expression evaluates to True then the assertion is ignored completely.
assert expression [, message]
By effectively using the assert statement throughout your program, you can easily catch any errors that may occur and make debugging life much easier. The following example will show you the use of the assert statement.:
# The following example shows how assertions are evaluated >>> x = 5 >>> y = 10 >>> assert x < y, "The assertion is ignored" >>> assert x > y, "The assertion works" Traceback (most recent call last): File "<stdin>", line 1, in <module> AssertionError: The assertion works
You can make use of the internal *__debug__* variable by placing entire blocks of code that should be run for debugging purposes only inside a conditional based upon value of the variable.
Example 5-10: Making Use of __debug__
if __debug__: # perform some debugging tasks
Context Managers
Ensuring that code is written properly in order to manage resources such as files or database connections is an important topic. If files or database connections are opened and never closed then our program could incur issues. Often times, developers elect to make use of the issues. Often times, developers elect to make use of the try-finally blocks to ensure that such resources are handled properly. While this is an acceptable method for resource management, it can sometimes be misused and lead to problems when exceptions are raised in programs. For instance, if we are working with a database connection and an exception occurs after we’ve opened the connection, the program control may break out of the current block and skip all further processing. The connection may never be closed in such a case. That is where the concept of context management becomes an important new feature in Jython. Context management via the use of the with statement is new to Jython 2.5, and it is a very nice way to ensure that resources are managed as expected.
In order to use the with statement, you must import from __future__. The with statement basically allows you to take an object and use it without worrying about resource management. For instance, let’s say that we’d like to open a file on the system and read some lines from it. To perform a file operation you first need to open the file, perform any processing or reading of file content, and then close the file to free the resource. Context management using the with statement allows you to simply open the file and work with it in a concise syntax.
Example 5-11: Python with Statement Example
# Read from a text file named players.txt >>> from __future__ import with_statement >>> with open('players.txt','r') as file: ... x = file.read() ... >>> print x This is read from the file
In the example above, we did not worry about closing the file because the context took care of that for us. This works with object that extends the context management protocol. In other words, any object that implements two methods named __enter__() and __exit__() adhere to the context management protocol. When the with *statement begins, the *__enter__() method is executed. Likewise, as the last action performed when the with statement is ending, the __exit__() method is executed. The __enter__() method takes no arguments, whereas the __exit__() method takes three optional arguments type, value, *and traceback. The *__exit__() method returns a True or False value to indicate whether an exception was thrown. The as variable clause on the with statement is optional as it will allow you to make use of the object from within the code block. If you are working with resources such as a lock then you may not the optional clause.
If you follow the context management protocol, it is possible to create your own objects that can be used with this technique. The __enter__() method should create whatever object you are trying to work if needed. If you are working with an immutable object then you’ll need to create a copy of that object to work with in the __enter__() method. The __exit__() method on the other hand can simply return False unless there is some other type of cleanup processing that needs to take place.
Summary
In this chapter, we discussed many different topics regarding exceptions and exception handling within a Python application. First, you learned the exception handling syntax of the try-except-finally code block and how it is used. We then discussed why it may be important to raise your own exceptions at times and how to do so. That topic led to the discussion of how to define an exception and we learned that in order to do so we must define a class that extends the Exception type object.
After learning about exceptions, we went into the warnings framework and discussed how to use it. It may be important to use warnings in such cases where code may be deprecated and you want to warn users, but you do not wish to raise any exceptions. That topic was followed by assertions and how assertion statement can be used to help us debug our programs. Lastly, we touched upon the topic of context managers and using the with statement that is new in Jython 2.5.
In the next chapter you will delve into creating classes and learning about object-oriented programming in Python. Hopefully if there were topics discussed in this chapter or previously in the book that may have been unclear due to unfamiliarity with object orientation, they will be clarified in Chapter 6. | https://bitbucket.org/idalton/jythonbook/src/c559df498a7e/chapter6.rst?at=default | CC-MAIN-2015-27 | refinedweb | 3,699 | 60.65 |
return to main index
This tutorial demonstrates how sub-classes of the Node and QuadTree classes introduced
in QuadTree: Python can be used to explore some
aspects of blobby geometry. An example, of the interaction of positive and negative
"blobby circles" is shown in figure 1. The quadtree (2D) technique shown in this
tutorial is intended to be an introduction to 3D methods that use octrees.
Figure 1
Similar, but non-interacting, circles produced by
cquadtree.py and visualized using
RenderMan are shown in figure 2. For another tutorial that deals with blobby shapes refer to
RSL: Blobby Effects
Figure 2
Unlike figure 2 where the small green squares trace the circumferences of two circles,
those in figure 1 mark the boundary of two "clouds" of (scalar) values such that
their combined values are 0.5 or greater. The two dark red dots in figure 1 mark the
centers of two radial fields of values that have a maximum value of 1.0 (positive field)
and -1.0 (negative field) at the red dots and at a fixed radial distance have a value
0.0.
Figure 3 shows the positive and negative fields visualized as a grayscale. The green
line marks locations where the grayscale value is 0.5.
Figure 3
Another way of visualizing a scalar field is to represent its values as heights - figure
4.
Figure 4
The code presented later in this tutorial calculates the (scalar) field value at a point
in space according to its distance from one or more blobby geometries such as circles
and lines. The formula for converting a distance to a field value in the range 0 to 1
is taken from the research of Geoff Wyvill and Craig McPhetters "Data Structure for
Soft Objects", The Visual Computer, Vol 2 1986. Wyvill calls his geometries
Soft Objects.
The terminology used in this tutorial follows Pixar's RenderMan and refers to Wyvill's
soft objects as Blobby Geometries.
Others, such as Nishimura and Blinn name similar objects as Metaballs. For
example, Nishimura et al "Object Modelling by Distribution Function and a Method of
Image Generation", The Transactions of the Institute of Electronics and Communication
Engineers of Japan, 1985, Vol. J68-D, Part 4, pp. 718-725, in Japanese, translated
into English by Takao Fujuwara.
Soft Objects
Blobby Geometries
Metaballs
This tutorial, in an attempt to be as pictorial as possible, will not delve into the
mathematics of their "field function". Different researchers have different ways of using
distances to calculate a scalar field. Their technique is used here because it does not
rely on ray tracing. Their technique is relatively fast and generates smooth iso-contours
in 2D and smooth iso-surfaces in 3D. The prefix "iso" simply means "equal", as in a
contour or a surface sharing the same field value. The code that performs the calculation is
shown next.
Listing 1 (see blobby_quadtree.py)
# Given the distance (squared) to the center of a circle or the
# shortest distance to a line and the radius of influence (squared)
# the proc returns a)
A quadtree subdivides a rectangle only if it detects that any of its four (children)
sub-rectangles spans an iso-contour. Field values at each vertex might be calculated and
if there is a mixture of values greater and smaller than 0.5 then the rectangle
must be spanning an iso-contour (figure 5).
Figure 5
Relying solely on vertex sampling will also fail if the iso-contour is completely contained
within the rectangle (figure 6) or if the iso-contour only passes through an edge (figure 7).
Figure 6
Figure 7
Figure 8 visualizes a quadtree that has relied solely on vertex sampling. Figure 9
demonstrates the use of a preliminary test that determines if the center of any of the
blobby circles are with a sub-rectangle. Figure 10 shows the effect of applying a third
test to determines if any of the edges of a sub-rectangle are within a certain distance
of the center of any blobby circle. The tests are applied in the following order, from
computationally cheapest to most expensive.
A rectangle is forced to subdivide if,
it contains the center of a blobby circle,
the distance for any edge to a blobby circle is less than the radius/2,
its vertices have a mixture of field values < 0.5 and >= 0.5
Figure 8
Figure 9
Figure 10
Because adjacent sub-rectangles share vertices their field values are stored in a
dictionary that acts as a lookup table - see BlobbyNode.vertLUT. Previously calculated
field values can then be reused by rectangles that share a vertex. The impact on memory
useage is well worth the extra efficiency. For example, the quadtree for figure 11 was calculated
in 400 milliseconds (Mac 2.66 GHz laptop). It reused 19,731 of the total 26,612
field value calculations. The same quadtree took 975 milliseconds without the use of the
lookup table.
BlobbyNode.vertLUT
Figure 11
The code in listing 2 should be saved in the same directory as
quadtree.py,
vectors.py and
distances.py.
Open blobby_quadtree.py with Cutter and look for the following comment,
### EDIT PATH ###
Edit the path to the location where the archive rib will be saved. Use the
keyboard shortcut control+e or alt+e to execute the script.
Listing 2 (blobby_quadtree.py)
# blobby_quadtree.py
# A quadtree that finds an implicit field that outlines a
# soft (blobby) circles. The field values are calculated using
# Geoff Wyvill and Craig McPhetters method. See,
# "Data Structure for Soft Objects", The Visual Computer,
# Vol 2 1986, (page 228)
# Malcolm Kesson Dec 19 2012
from quadtree import Node, QuadTree
import random, time
from distances import pnt2line
#____UTILITY PROCS_______________________________________
# Returns the length of a vector "connecting" p0 to p1.
# To avoid using the sqrt() function the return value is
# the length squared.
def dist_sqrd(p0, p1):
x,y,z = p0
X,Y,Z = p1
i,j,k = (X - x, Y - y, Z - z)
return i * i + j * j + k * k
#_______________________________________________________
def getedges(rect):
x0,z0,x1,z1 = rect
edges = ( ((x0,0,z0),(x1,0,z0)), # top
((x1,0,z0),(x1,0,z1)), # right
((x1,0,z1),(x0,0,z1)), # bottom
((x0,0,z1),(x0,0,z0))) # left
return edges
#_______________________________________________________
# Given the distance (squared) to the center of a circle
# and its radius of influence (squared) the proc returns a
# (Wyvill) implicit)
#_______________________________________________________
# Returns a string containing the rib statement for a
# four sided polygon positioned at height "y".
def RiPolygon(rect, y):
x0,z0,x1,z1 = rect
verts = []
verts.append(' %1.3f %1.3f %1.3f' % (x0,y,z0))
verts.append(' %1.3f %1.3f %1.3f' % (x0,y,z1))
verts.append(' %1.3f %1.3f %1.3f' % (x1,y,z1))
verts.append(' %1.3f %1.3f %1.3f' % (x1,y,z0))
rib = '\tPolygon "P" ['
rib += ''.join(verts)
rib += ']\n'
return rib
#_______________________________________________________
class BlobbyNode(Node):
verthits = 0
nonverthits = 0
vertLUT = {}
#_______________________________________________________
# Overrides the base class method.
# Ensures Node.subdivide() uses instances of our custom
# class rather than instances of the base Node class.
def getinstance(self,rect):
return BlobbyNode(self,rect)
#_______________________________________________________
# Overrides the base class method.
# Tests:
# 1 if the 'rect' contains the center of any blobby circle,
# 2 if any edges are within half the radius of any circles,
# 3 if any vertices span the (blobby) iso-surface.
# To avoid repeated vertex calculations field values are cached
# in a lookup table - BlobbyNode.vertLUT
def spans_feature(self, rect):
x0,z0,x1,z1 = rect
size = x1 - x0
if size > Node.minsize:
# Cheap test
for circle in BlobbyQuadTree.circles:
pol,rad,x,y,z = circle
if self.contains(x,z):
return True
# Not so cheap
for circle in BlobbyQuadTree.circles:
pol,rad,x,y,z = circle
if self.depth < 4:# the parent node
edges = getedges(rect)
for edge in edges:
dist,loc = pnt2line( (x,0,z), edge[0], edge[1] )
if dist <= rad/2:
return True
verts = [(x0,0,z0),(x0,0,z1),(x1,0,z1),(x1,0,z0)]
span = 0
# Expensive test, hence the use of a cache
for vert in verts:
if self.vertLUT.has_key(vert):
fv = BlobbyNode.vertLUT[vert]
BlobbyNode.verthits += 1
else:
BlobbyNode.nonverthits += 1
fv = 0
for circle in BlobbyQuadTree.circles:
pol,rad,x,y,z = circle
rad_influ_sqrd = rad * rad
center = (x,y,z)
dist = dist_sqrd(vert, center)
field = fieldvalue(dist, rad_influ_sqrd)
if pol == BlobbyQuadTree.POSITIVE:
fv += field
else:
fv -= field
BlobbyNode.vertLUT[vert] = fv
if fv >= BlobbyQuadTree.blobby_level:
span += 1
if span > 0 and span < 4:
return True
return False
class BlobbyQuadTree(QuadTree):
POSITIVE = 1
NEGATIVE = -1
circles = [] # list of tuples (polarity, radius of influence,x,y,z)
blobby_level = 0.5
#_______________________________________________________
def __init__(self, rootnode, minrect, circles):
BlobbyQuadTree.circles = circles
QuadTree.__init__(self, rootnode, minrect)
if __name__=="__main__":
rootrect = [-2.0, -2.0, 2.0, 2.0]
resolution = 0.02
circles = []
random.seed(1)
for n in range(20):
p = BlobbyQuadTree.POSITIVE
if p < random.random():
p = BlobbyQuadTree.NEGATIVE
r = random.uniform(0.2, 0.8)
x = random.uniform(-1.5, 1.5)
z = random.uniform(-1.5, 1.5)
circles.append( (p,r,x,0,z) )
#circles = [(1,3.8,0,0,0),(-1, 1.3,0.75,0,0)]
begintime = time.time()
rootnode = BlobbyNode(None, rootrect)
tree = BlobbyQuadTree(rootnode, resolution, circles)
endtime = time.time()
print (endtime - begintime) * 1000
# Output RenderMan polygons for each node
ribpath = 'FULL_PATH_TO_ARCHIVE/nodes.rib' ### EDIT PATH ###
f = open(ribpath,'w')
f.write('AttributeBegin\n')
for node in BlobbyQuadTree.allnodes:
height = node.depth * 0.1
if node.depth == BlobbyQuadTree.maxdepth:
f.write('\tColor 0 .5 0\n')
else:
f.write('\tColor 1 1 1\n')
f.write(RiPolygon(node.rect, height))
f.write('AttributeEnd\n')
f.write('Color 1 0 0\n')
for c in circles:
f.write('Points "P" [%1.3f 1 %1.3f] "constantwidth" [0.15]\n' % (c[2],c[4] ) )
f.close()
print('Wrote %d polygons' % len(BlobbyQuadTree.allnodes))
print('vert hits %d misses %d' % (BlobbyNode.verthits,BlobbyNode.nonverthits)) | http://www.fundza.com/algorithmic/quadtree_blobby/index.html | CC-MAIN-2016-26 | refinedweb | 1,659 | 58.28 |
Intro: PART 1 - Send Arduino Data to the Web ( PHP/ MySQL/ D3.js )
The objective of this project was to use an Arduino to read a sensor and send the values to the internet, to be stored on a web server and displayed.
It consists of an Arduino Uno with an Ethernet Shield and a DHT11 temperature/humidity sensor, acting as a web client. It sends POST requests with the readings to a web server running a custom database and PHP application.
The PHP app stores the values when new POST requests are received and also serves the pages that display the information. In Part 2, I will explain the use of D3.js to dynamically show the data stored in the database.
The Arduino is configured to use a dynamic IP address (DHCP), in order to avoid any conflicting IP issues and to work easily with most home networks/routers.
This project is divided in 2 main parts:

PART 1
- Arduino Web Client Application: reads the sensor values and sends them to the web server.
- PHP/MySQL Application: handles the POST requests that are sent to the server and serves the pages to clients who connect.

PART 2
- Data Visualization: The PHP application will use the JavaScript framework D3.js to display the values stored in the DB as graphics. It will allow navigating to past days to observe the readings.

REQUIREMENTS
HARDWARE
- Arduino Uno
- Ethernet Shield (eBay clone)
- DHT 11 sensor
- breadboard
- 10k Ohm resistor
- USB cable
- Ethernet cable
- wires
- piece of acrylic
- PCB spacers
SOFTWARE
- You need access to a web server (it can be from a free hosting company) with the capability to run PHP applications and also to create databases (typically via cPanel with phpMyAdmin).
RESOURCES
Request Maker: This online tool is very useful to test the PHP application. You can simulate the POST requests that will be made by the Arduino and check if everything is working well.
DHT11 sensor library from Adafruit
Step 1: Arduino Web Client + DHT11 Sensor
The code is very simple; all the important sections are commented. If you have any doubts, feel free to ask.
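The attached sketch itself is not reproduced in this extract, but the raw HTTP request a web client like this typically sends looks roughly as follows. The host, the /add.php path, and the field names are assumptions that match the PHP side described in Step 2:

```http
POST /add.php HTTP/1.1
Host: www.example.com
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 26

temperature=24&humidity=60
```

The body uses standard form encoding, so on the server side PHP exposes the two values as $_POST['temperature'] and $_POST['humidity'].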
Step 2: PHP / MySQL Application
In this second part I will explain briefly the PHP application and the database. The database is obviously used to store the sensor readings, so that they can be accessed later. It's a very simple DB, with just one table with 3 columns. It stores the timestamp and the corresponding temperature and humidity values.
CREATE TABLE tempLog (
    timeStamp TIMESTAMP NOT NULL PRIMARY KEY,
    temperature int(11) NOT NULL,
    humidity int(11) NOT NULL
);
The PHP application consists of 3 files:
- connect.php: this file is loaded every time we need to access the database. It is loaded at the beginning of almost every file. It contains a function that returns a new connection, to be used by the PHP code to execute queries against the DB. You need to store the DB configs (hostname, database, user, password) in this file.
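The file contents are not shown in this extract, so here is a minimal sketch of what connect.php can look like. The hostname, database name, and credentials are placeholder assumptions you must replace with your own, and mysqli is used here as one possible MySQL API:

```php
<?php
// connect.php: returns a new connection to the MySQL database.
// NOTE: the hostname, database name, and credentials below are placeholder
// assumptions; replace them with the values from your hosting account.
function connectDB() {
    $hostname = "localhost";   // DB host given by your hosting company
    $database = "sensorlog";   // name of the database you created
    $username = "dbuser";
    $password = "dbpassword";

    $conn = new mysqli($hostname, $username, $password, $database);
    if ($conn->connect_error) {
        die("Connection failed: " . $conn->connect_error);
    }
    return $conn;
}
```

Every other page can then call connectDB() to get a fresh connection instead of repeating the credentials.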
- add.php: when the Arduino sends POST requests to the server, it is this page that receives them. The PHP code receives the values sent in the request and executes an insertion query with those values.
Sometimes you need to change the permissions of this file (they should be 644), because it might be protected to allow execution only from localhost.
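A hedged sketch of what add.php can do: the field names temperature and humidity and the tempLog table match what is described above, while connectDB() and the validation helper is_valid_reading() are assumptions, not the article's exact code:

```php
<?php
// add.php: target of the POST requests sent by the Arduino.

// Sanity check: a reading must be a plain (possibly negative) integer string.
function is_valid_reading($value) {
    return is_string($value) && preg_match('/^-?\d+$/', $value) === 1;
}

if (isset($_POST['temperature'], $_POST['humidity'])) {
    if (is_valid_reading($_POST['temperature']) && is_valid_reading($_POST['humidity'])) {
        require_once(__DIR__ . "/connect.php");  // assumed to provide connectDB()
        $conn = connectDB();
        // A prepared statement keeps the request values out of the SQL string.
        $stmt = $conn->prepare(
            "INSERT INTO tempLog (timeStamp, temperature, humidity) VALUES (NOW(), ?, ?)");
        $stmt->bind_param("ii", $_POST['temperature'], $_POST['humidity']);
        $stmt->execute();
        $stmt->close();
        $conn->close();
        echo "OK";
    } else {
        echo "Invalid values";
    }
}
```

Validating the POSTed values before building the query protects the database from malformed requests that did not come from the Arduino.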
- index.php: this is the website landing page. It displays the values that are stored in the database. Right now, it displays all the values in a single HTML table, just to show that it works.
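A rough sketch of the display logic in index.php, with the HTML generation split into a small helper so it is easy to follow. The table and column names match the schema above; connectDB() is again an assumption carried over from the connect.php sketch:

```php
<?php
// index.php: landing page that lists every stored reading in one HTML table.

// Builds the HTML table from an array of rows; each row is an associative
// array with 'timeStamp', 'temperature' and 'humidity' keys.
function render_table($rows) {
    $html = "<table border='1'>"
          . "<tr><th>Time</th><th>Temperature</th><th>Humidity</th></tr>";
    foreach ($rows as $row) {
        $html .= "<tr>"
               . "<td>" . htmlspecialchars($row['timeStamp']) . "</td>"
               . "<td>" . htmlspecialchars($row['temperature']) . "</td>"
               . "<td>" . htmlspecialchars($row['humidity']) . "</td>"
               . "</tr>";
    }
    return $html . "</table>";
}

// Only query the database when serving a real web request.
if (php_sapi_name() !== 'cli') {
    require_once(__DIR__ . "/connect.php");  // assumed to provide connectDB()
    $conn = connectDB();
    $result = $conn->query(
        "SELECT timeStamp, temperature, humidity FROM tempLog ORDER BY timeStamp");
    echo render_table($result->fetch_all(MYSQLI_ASSOC));
    $conn->close();
}
```

In Part 2 this plain table is replaced by D3.js graphics, but keeping a simple HTML view like this is a handy way to confirm that rows are actually arriving in the database.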
So, this concludes the first part of this Instructable. Feel free to ask questions about anything related; I'm glad to help.
163 Discussions
1 year ago
Hi, nice work. On the local network it works fine, but when I try to connect with a domain, it doesn't work. Can someone help me?
Reply 11 days ago
Sir please teach me how did you do it in local netqork thankyou. Your reply is a releif
Question 6 months ago on Step 1
Respected sir, Good morning sir, i am madhu from Madanapalle. Sir, I want know about how we will connect IoT things to website. can you please expalin it sir.
Question 8 months ago
Hi, tried this tutorial of yours. But its not working, data wont feed into my SQL database. I used a server, saved my php script to that server to fetch the data from arduino and send to database. But seems to be not working :( Please help me. Thanks
Question 8 months ago on Step 1
Sir , i want to connect my gsm module to apache server! Is it possible ?
Question 9 months ago
I have to control an Home automation system through Web so would php be enough for backend and how would i connect the server through IP address if i am able to send ardino data over wifi module
Question 9 months ago
How to put the ph files in my subdomain? I tried but showing server not found
Question 9 months ago
In the add.php section,you said that sometimes we have to change the permission of this file. What do you mean by that?
10 months ago
from where will the ethernet shield get internet?
11 months ago
Hello
First thanks for this project, it is exactly what I wanted to start Arduino-WEB experiences.
Everthing is fine for measuring on arduino, connecting with ethernet shield, and also connection to my WEBside returns = 1 so it connects correctly. Database is also created.
But The add.PHP dosent work ( no data apears in the table of database).
Who can I debug for finding solution ? Who can I know if it is a problem to not access to PHP file / not connecting to database / SQL execution ?
Your help would greatly be apreceated !!
I copy only the code of the part that doesnt work :
(I modified the sql to write fix values to test, but still does not work)
Arduino
Conect
= client.connect("", 80);
if (Conect) {
client.println("POST /add.php HTTP/1.1");
client.println("Host:"); // SERVER ADDRESS HERE TOO
client.println("Content-Type:
application/x-www-form-urlencoded");
client.print("Content-Length: ");
client.println(data.length());
client.println();
client.print(data);
}
if (client.connected()) {
client.stop(); //
DISCONNECT FROM THE SERVER
}
PHP
connect
<?php
function
Connection(){
$server="steinhiltearmin.mysql.db";
$user="steinhiltearmin";
$pass="*******";
$db="steinhiltearmin";
$connection = mysql_connect($server, $user,
$pass);
if (!$connection) {
die('MySQL ERROR: '
. mysql_error());
}
mysql_select_db($db)
or die( 'MySQL ERROR: '. mysql_error() );
return $connection;
}
?>
PHP add
<?php
include("connect.php");$link=Connection();
$temp1=$_POST["temp1"];
$hum1=$_POST["hum1"];
$query =
"INSERT INTO `tempLog` (`temperature`, `humidity`) VALUES
(25,35)";
mysql_query($query,$link);
mysql_close($link);
header("Location:
index.php");
?>
1 year ago
what version of xampp need to use ?
1 year ago
enthernet shield is unable to send data on WAN i mean hosting companies are unisg ssl and ethernet shield unable to handle it please help me i want to send data on my website on WAN ....
1 year ago
I'm having trouble compiling for the wemos d1. The error I''m getting has to do with the concatination in the data = line. Can you not concat in the Arduino IDE?
1 year ago
Thank you for the detailed instructions.
Just curious if there is a quick way to have event based data logging and still have time stamp in database.
1 year ago
In file included from C:\Users\YPK\Documents\Arduino\libraries\DHT-sensor-library-master\DHT_U.cpp:22:0:
C:\Users\YPK\Documents\Arduino\libraries\DHT-sensor-library-master\DHT_U.h:25:29: fatal error: Adafruit_Sensor.h: No such file or directory
#include <Adafruit_Sensor.h>
^
compilation terminated.
exit status 1
Ошибка компиляции для платы Arduino/Genuino Uno.
Reply 1 year ago
try to use older version of DHT library, it works for me
1 year ago
how to connect Arduino + mysql + android, means i need to access that mysql data from android it self
1 year ago
Thank you for the instructable..can we add a relay to the arduino and control it by server...
1 year ago
How about instead of monitoring temp we monitor voltage? Say, from a solar panel battery bank!
1 year ago
Where is Part-II | https://www.instructables.com/id/PART-1-Send-Arduino-data-to-the-Web-PHP-MySQL-D3js/ | CC-MAIN-2018-47 | refinedweb | 1,393 | 65.52 |
[
]
ASF GitHub Bot commented on FLINK-1820:
---------------------------------------
Github user fhueske commented on a diff in the pull request:
--- Diff: flink-core/src/main/java/org/apache/flink/types/parser/ByteParser.java ---
@@ -21,22 +21,23 @@
public class ByteParser extends FieldParser<Byte> {
-
--- End diff --
Please avoid such reformatting changes in the future.
In this file less than 10 changed lines make up the actual change but almost 80 lines
are touched. This costs quite a bit of time to review.
> Bug in DoubleParser and FloatParser - empty String is not casted to 0
> ---------------------------------------------------------------------
>
> Key: FLINK-1820
> URL:
> Project: Flink
> Issue Type: Bug
> Components: Core
> Affects Versions: 0.8.0, 0.9, 0.8.1
> Reporter: Felix Neutatz
> Assignee: Felix Neutatz
> Priority: Critical
> Fix For: 0.9
>
>
> Hi,
> I found the bug, when I wanted to read a csv file, which had a line like:
> "||\n"
> If I treat it as a Tuple2<Long,Long>, I get as expected a tuple (0L,0L).
> But if I want to read it into a Double-Tuple or a Float-Tuple, I get the following error:
> java.lang.AssertionError: Test failed due to a org.apache.flink.api.common.io.ParseException:
Line could not be parsed: '||'
> ParserError NUMERIC_VALUE_FORMAT_ERROR
> This error can be solved by adding an additional condition for empty strings in the FloatParser
/ DoubleParser.
> We definitely need the CSVReader to be able to read "empty values".
> I can fix it like described if there are no better ideas :)
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/flink-issues/201505.mbox/%3CJIRA.12787831.1428012192000.147317.1431978002490@Atlassian.JIRA%3E | CC-MAIN-2018-05 | refinedweb | 254 | 61.67 |
It is time to write code. In this chapter, we will write the typical entry example for any language: the famous Hello World!. In order to do this, we will need to set up the initial environment required to develop software with Kotlin. We will provide a few examples using the compiler from the command line, and then we will look at the typical way of programming using the integrated development environments (IDEs) and build tools available.
Kotlin is a Java virtual machine (JVM) language, and so the compiler will emit Java bytecode. Because of this, naturally, Kotlin code can call Java code, and vice versa! Therefore, you need to have the Java Development Kit (JDK) installed on your machine. To be able to write code for Android, where the most recent supported Java version is 6, the compiler needs to translate your code to bytecode that is at least compatible with Java 6.
In this chapter, you will learn how to do the following:
- Use the command line to compile and execute code written in Kotlin
- Use the REPL and write Kotlin scripts
- Create a Gradle project with Kotlin enabled
- Create a Maven project with Kotlin enabled
- Use IntelliJ to create a Kotlin project
- Use Eclipse IDE to create a Kotlin project
- Mix Kotlin and Java code in the same project
Throughout this book, all the code examples will run with JDK 8. If you are new to the JVM world, you can get the latest version from the official JDK download page.
In Chapter 7, Null Safety, Reflection, and Annotations, the examples will draw heavily on classes provided by the reflection API. This API is available through kotlin-reflect.jar, located on Maven Central.
Additionally, the code snippets used in this book can be found in the book's GitHub repository.
To write and execute code written in Kotlin, you will need its runtime and the compiler. At the time of writing, the stable release of Kotlin is 1.3.31. Every runtime release comes with its own compiler version. To get your hands on it, navigate to the Kotlin releases page, scroll to the bottom of the page, and download and unpack the ZIP archive, kotlin-compiler-1.3.31.zip, to a known location on your machine. The output folder will contain a directory called bin with all the scripts required to compile and run Kotlin on Windows, Linux, or macOS. You need to make sure the bin folder location is part of your system path in order to call kotlinc without having to specify the full path.
If your machine runs Linux or macOS, there is an even easier way to install the compiler by using sdkman. All you need to do is execute the following commands in a Terminal:
$ curl -s "https://get.sdkman.io" | bash
$ bash
$ sdk install kotlin 1.3.31
Alternatively, if you are using macOS and you have homebrew installed, you could run the following commands to achieve the same thing:
$ brew update
$ brew install kotlin
Now that all of this is done, we can finally write our first Kotlin code. The application we will be writing does nothing other than display the text Hello World! on the console. Start by creating a new file named HelloWorld.kt and type the following:
fun main(args: Array<String>) {
    println("Hello, World!")
}
From the command line, invoke the compiler to produce the JAR assembly, as follows (include-runtime is a flag for the compiler to produce a self-contained and runnable JAR by including the Kotlin runtime in the resulting assembly):
kotlinc HelloWorld.kt -include-runtime -d HelloWorld.jar
Now you are ready to run your program by typing the following on your command line. Make sure that your JAVA_HOME variable is set and added to the system path:
$ java -jar HelloWorld.jar
The code is pretty straightforward. It defines the entry point function for your program, and, in the first and only line of code, it prints the text to the console.
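Since we are working with Kotlin 1.3, it is worth noting that from that release onward the entry point may drop the parameter altogether when the command-line arguments are not needed. A minimal variant would be:

```kotlin
// Since Kotlin 1.3, a parameterless main is also a valid program entry point
fun main() {
    println("Hello, World!")
}
```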
If you have been working with the Java or Scala languages, you might raise an eyebrow, because you noticed the lack of the typical class that would normally define the standard static main program entry point. How does it work, then? Let's have a look at what actually happens. First, let's just compile the preceding code by running the following command. This will create a HelloWorldKt.class in the same folder:
$ kotlinc HelloWorld.kt
Now that we have the bytecode generated, let's look at it by using the javap tool available with the JDK, as follows (please note that the file name contains a Kt suffix):
$ javap -c HelloWorldKt.class
Once the execution completes, you should see the following printed on your Terminal:
Compiled from "HelloWorld.kt"
public final class HelloWorldKt {
  public static final void main(java.lang.String[]);
    Code:
       0: aload_0
       1: ldc           #9   // String args
       3: invokestatic  #15  // Method kotlin/jvm/internal/Intrinsics.checkParameterIsNotNull:(Ljava/lang/Object;Ljava/lang/String;)V
       6: ldc           #17  // String Hello, World!
       8: astore_1
       9: nop
      10: getstatic     #23  // Field java/lang/System.out:Ljava/io/PrintStream;
      13: aload_1
      14: invokevirtual #29  // Method java/io/PrintStream.println:(Ljava/lang/Object;)V
      17: return
}
You don't have to be an expert in bytecode to understand what the compiler has actually done for us. As you can see in the snippet, a class has been generated for us, and it contains the program entry point with the instructions to print
Hello World! to the console.
I would not expect you to work with the command-line compiler on a daily basis; rather, you should use the tools at hand to delegate this, as we will see shortly.
When we compiled Hello World! and produced the JAR, we instructed the compiler to bundle in the Kotlin runtime. Why is the runtime needed? Take a closer look at the preceding bytecode, if you haven't already done so. To be more specific, look at line 3. It invokes a method to validate the fact that the args variable is not null; therefore, if you compile the code without asking for the runtime to be bundled in, and then try to run it, you will get an exception:
$ kotlinc HelloWorld.kt -d HelloWorld.jar
$ java -jar HelloWorld.jar
Exception in thread "main" java.lang.NoClassDefFoundError: kotlin/jvm/internal/Intrinsics
        at HelloWorldKt.main(HelloWorld.kt)
Caused by: java.lang.ClassNotFoundException: kotlin.jvm.internal.Intrinsics
The runtime footprint is very small; at approximately 800 K, you can't argue otherwise. Kotlin comes with its own standard class library (the Kotlin runtime), which is different from the Java library. As a result, you need to merge it into the resulting JAR, or provide it in the classpath, as follows:
$ java -cp $KOTLIN_HOME/lib/kotlin-runtime.jar:HelloWorld.jar HelloWorldKt
If you develop a library for the exclusive use of other Kotlin libraries or applications, then you don't have to include the runtime. Alternatively, there is a shorter path that involves passing a flag to the Kotlin compiler, as follows:
$ kotlinc -include-runtime HelloWorld.kt -d HelloWorld.jar
The preceding code will include the runtime when assembling the final JAR file.
These days, most languages provide an interactive shell, and Kotlin is no exception. If you want to quickly write some code that you won't use again, then the REPL is a good tool to have. Some people prefer to test their methods quickly, but you should always write unit tests rather than using the REPL to validate that the output is correct.
Note: REPL is the common name when referring to an interactive shell, and is an abbreviation for read, evaluate, print, loop.
You can start the REPL by adding dependencies to the classpath in order to make them available within the instance. To look at an example, we will use the Joda library to deal with the date and time. First, we need to download the JAR. In a Terminal window, use the following commands:
$ wget https://github.com/JodaOrg/joda-time/releases/download/v2.9.4/joda-time-2.9.4-dist.tar.gz
$ tar xvf joda-time-2.9.4-dist.tar.gz
Now, you are ready to start the REPL. Attach the Joda library to its running instance, and import and use the classes it provides, as follows:
$ kotlinc-jvm -cp joda-time-2.9.4/joda-time-2.9.4.jar
Welcome to Kotlin version 1.1-M04 (JRE 1.8.0_66-internal-b17)
Type :help for help, :quit for quit
>>> import org.joda.time.DateTime
>>> DateTime.now()
2016-08-25T22:53:41.017+01:00
Running the preceding code will execute the now function of the DateTime class provided by the Joda library. The output is simply the current date and time.
Kotlin can also be run as a script. If bash or Perl is not for you, now you have an alternative.
Say you want to delete all files that are older than N given days. The following code example does just that:
import java.io.File

val purgeTime = System.currentTimeMillis() - args[1].toLong() * 24 * 60 * 60 * 1000
val files = File(args[0]).listFiles { file -> file.isFile }
files
    ?.filter { file -> file.lastModified() < purgeTime }
    ?.forEach { file ->
        println("Deleting ${file.absolutePath}")
        file.delete()
    }
Create a file named delete.kts with the preceding content. Note the predefined variable args, which contains all the incoming parameters passed when the script is invoked. You might wonder what the ? character is doing there. If you are familiar with the C# language and you know about nullable types, you already know the answer. Even if you have not come across it before, I am sure you have a good idea of what it does. The character is called the safe call operator, and, as you will find out later in the book when the subject is discussed at greater length, it avoids the dreadful NullPointerException error.
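As a quick taste of what the operator does (the describe function here is our own illustration, not part of the script):

```kotlin
// The safe call operator returns null instead of throwing when the receiver is null
fun describe(name: String?): Int? = name?.length

fun main() {
    println(describe("Kotlin")) // prints 6
    println(describe(null))     // prints null; no NullPointerException is thrown
}
```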
The script takes two arguments: the target folder, and the number-of-days threshold. For each file it finds in the target, it will check the last time the file was modified; if that is earlier than the computed purge time, it will delete it. The preceding script leaves out error handling; we leave this to the reader as an exercise.
Now that the script is available, it can be invoked by running the following command:
$ kotlinc -script delete.kts . 5
If you copy/create files in the current folder with a last-modified timestamp older than five days, it will remove them.
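For readers who want a starting point for the error-handling exercise, one possible hardening of the script is sketched below; the purge function name, the argument checks, and the messages are all our own choices, not part of the original example:

```kotlin
import java.io.File

// A sketch of the same purge logic with basic argument validation added
fun purge(args: Array<String>) {
    if (args.size < 2) {
        println("Usage: kotlinc -script delete.kts <folder> <days>")
        return
    }
    val days = args[1].toLongOrNull()
    val target = File(args[0])
    when {
        days == null || days < 0 -> println("'${args[1]}' is not a valid number of days")
        !target.isDirectory -> println("'${args[0]}' is not a directory")
        else -> {
            val purgeTime = System.currentTimeMillis() - days * 24 * 60 * 60 * 1000
            target.listFiles { file -> file.isFile }
                ?.filter { it.lastModified() < purgeTime }
                ?.forEach { file ->
                    println("Deleting ${file.absolutePath}")
                    if (!file.delete()) println("Failed to delete ${file.absolutePath}")
                }
        }
    }
}

fun main(args: Array<String>) = purge(args)
```

Running it with no arguments, a non-numeric day count, or a non-existent folder now produces a message instead of an exception.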
If you are familiar with the build tool landscape, you might be in one of three camps: Maven, Gradle, or SBT (more likely if you are a Scala developer). I am not going to go into the details, but we will present the basics of Gradle, the modern open source polyglot build automation system, and leave it to the curious reader to find out more from the Gradle website. Before we continue, please make sure you have it installed and available on your system path in order for it to be accessible from the Terminal. If you have SDKMAN, you can install it using this command:
$ sdk install gradle 3.0
The build system comes with some baked-in templates, albeit limited ones, and, in its latest 3.0 version, Kotlin is not yet included. Hopefully, this shortfall will be dealt with sooner rather than later; however, it takes very little effort to configure support for it. First, let's see how you can list the available templates by executing the following command:
$ gradle help --task :init
You should see the following being printed out on the Terminal:
Options
--type     Set type of build to create.
           Available values are:
                basic
                groovy-library
                java-library
                pom
                scala-library
Let's go and use the Java template and create our project structure by executing this bash command:
$ gradle init --type java-library
This template will generate a bunch of files and folders, as shown in the following screenshot. If you have been using Maven, you will see that this structure is similar:
Project Folders layout
As it stands, the Gradle project is not ready for Kotlin. First, go ahead and delete Library.java and LibraryTest.java, and create a new folder named kotlin, a sibling of the java one. Then, using a text editor, open the build.gradle file. We need to add the plugin that enables the Gradle system to compile Kotlin code for us. To do this, add the following snippet to the top of your file:
buildscript {
    ext.kotlin_version = '1.3.31'

    repositories {
        mavenCentral()
    }

    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}
The preceding instructions tell Gradle to use the Kotlin plugin and set Maven Central as the repository from which dependencies are resolved. We are not done yet; we still need to enable the plugin. The generated template will already have one plugin applied: java. Replace it with the following:
apply plugin: 'kotlin'
apply plugin: 'application'

mainClassName = 'com.programming.kotlin.chapter01.ProgramKt'
Now, Kotlin plugin support is enabled. You may have noticed that we have also added the application plugin and set the class containing the program entry point. The reason we have done this is to allow the program to run directly, as we will see shortly.
We are not quite done. We still need to link to the Kotlin standard library. Replace the repositories and dependencies sections with the following:
repositories {
    mavenCentral()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    testCompile 'io.kotlintest:kotlintest-runner-junit5:3.3.2'
}
Now, let's create a file named Program.kt. This time, we will set a namespace and avoid having our code as part of the default one. If you are not yet familiar with the term, don't worry; it will be covered in Chapter 12, Microservices with Kotlin.
From the Terminal, run the following command:
$ mkdir -p src/main/kotlin/com/programming/kotlin/chapter01
$ echo "" >> src/main/kotlin/com/programming/kotlin/chapter01/Program.kt
$ cat <<EOF >> src/main/kotlin/com/programming/kotlin/chapter01/Program.kt
package com.programming.kotlin.chapter01

fun main(args: Array<String>) {
    println("Hello World!")
}
EOF
We are now in a position to build and run the application, as follows:
$ gradle build
$ gradle run
Now, we want to be able to run our program using java -jar [artefact]. Before we can do that, we need to adapt the build.gradle file. First, we need to create a manifest and set the main class, as follows; the JVM will look for the main function to start executing it:
jar {
    manifest {
        attributes('Main-Class': 'com.programming.kotlin.chapter01.ProgramKt')
    }
    from {
        configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }
    }
}
Furthermore, we also embed the dependencies for kotlin-stdlib and kotlin-runtime into the JAR. If we leave out these dependencies, we will need to add them to the classpath when we run the application. Now, you are ready to build and run the code.
If you still prefer to stick with good old Maven, there is no problem. There is a plugin for it to support Kotlin as well. If you don't have Maven on your machine, you can follow the instructions on the Apache Maven website to get it installed on your local machine.
Just as we did with Gradle, let's use the built-in templates to generate the project folder and file structure. From the Terminal, run the following command in an empty directory:
$ mvn archetype:generate -DgroupId=com.programming.kotlin -DartifactId=chapter01 -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
This will generate the pom.xml file and the src folder for Maven. But before we add the file containing the Kotlin code, we need to enable the plugin. Just as before, start by deleting App.java and AppTest.java from src/main/java/com/programming/kotlin and src/test/java/com/programming/kotlin, and create the src/main/kotlin directory (the subdirectory structure matches the namespace name), as shown in the following code:
$ mkdir -p src/main/kotlin/com/programming/kotlin/chapter01
$ mkdir -p src/test/kotlin/com/programming/kotlin/chapter01
In an editor of your choice, open up the generated pom.xml file and add the following:
<properties>
    <kotlin.version>1.3.31</kotlin.version>
    <kotlin.test.version>3.3.2</kotlin.test.version>
</properties>
<build>
    <sourceDirectory>src/main/kotlin</sourceDirectory>
    <testSourceDirectory>src/test/kotlin</testSourceDirectory>
    <plugins>
        <plugin>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-maven-plugin</artifactId>
            <version>${kotlin.version}</version>
            <executions>
                <execution>
                    <id>compile</id>
                    <phase>process-sources</phase>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
                <execution>
                    <id>test-compile</id>
                    <phase>process-test-sources</phase>
                    <goals>
                        <goal>test-compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
All we have done so far is enable the Kotlin plugin and make it run in the process-sources phase to allow the mixing of Java code as well. There are cases where you might have part of the source code written in good old Java. I am sure you also noticed the addition of the source directory tags, allowing for the Kotlin files to be included in the build.
The only thing left to do now is to add the library dependencies for the Kotlin runtime, as well as the unit tests. We are not going to touch upon the testing framework until later in the book. Replace the entire dependencies section with the following:
<dependencies>
    <dependency>
        <groupId>org.jetbrains.kotlin</groupId>
        <artifactId>kotlin-stdlib</artifactId>
        <version>${kotlin.version}</version>
    </dependency>
    <dependency>
        <groupId>io.kotlintest</groupId>
        <artifactId>kotlintest-runner-junit5</artifactId>
        <version>${kotlin.test.version}</version>
        <scope>test</scope>
    </dependency>
</dependencies>
It is now time to add the Hello World! code; this step is similar to the one we took earlier when we discussed Gradle, as you can see from the following code:
$ echo "" >> src/main/kotlin/com/programming/kotlin/chapter01/Program.kt
$ cat <<EOF >> src/main/kotlin/com/programming/kotlin/chapter01/Program.kt
package com.programming.kotlin.chapter01

fun main(args: Array<String>) {
    println("Hello World!")
}
EOF
We are now in a position to compile and build the JAR file for the sample program using the following code:
$ mvn package
$ mvn exec:java -Dexec.mainClass="com.programming.kotlin.chapter01.ProgramKt"
The last instruction should end up printing the Hello World! text to the console. Of course, we can run the program outside Maven by going back to executing Java, but we need to add the Kotlin runtime to the classpath, as follows:
$ java -cp $KOTLIN_HOME/lib/kotlin-runtime.jar:target/chapter01-1.0-SNAPSHOT.jar "com.programming.kotlin.chapter01.ProgramKt"
If you want to avoid the classpath dependency setup when you run the application, there is an option to bundle all the dependencies in the resulting JAR and produce what is called a fat JAR. For that, however, another plugin needs to be added, as shown in the following code:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.programming.kotlin.chapter01.ProgramKt</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
We can execute the command to run our JAR without having to worry about setting the classpath since this has been taken care of by the plugin, as follows:
$ java -jar target/chapter01-1.0-SNAPSHOT.jar
The result of executing this command is to launch the JAR and execute the program.
Coding using Vim/nano is not everyone's first choice. Working without the help of an IDE with its code completion, IntelliSense, shortcuts for adding files, or refactoring code can prove challenging depending on how complex the project is.
For a while now, in the JVM world, people's first choice when it comes to their integrated development environment has been IntelliJ. The tool is made by the same company that created Kotlin: JetBrains. Given the integration between the two of them, it would be my first choice of IDE to use, but, as we will see in the next section, it is not the only option.
IntelliJ comes in two versions: Ultimate and Community (free). For the code we will be using over the course of this book, the free version is sufficient. If you don't already have it installed, you can download it from the JetBrains website.
From version 15.0 onward, IntelliJ comes bundled with Kotlin, but if you have an older version, you can still get support for the language by installing the plugin. Just go to Settings | Plugins | Install IntelliJ plugins, and type Kotlin in the search box.
We are going to use the IDE to create a Gradle project with Kotlin enabled, just as we did in the previous section. Once you have started IntelliJ, click Create new project. You will then see a dialog window from which you should select Gradle from the left-hand side section. Then, check the Kotlin (Java) option on the right-hand side, as shown in the following screenshot:
Selecting a project type
You should already have the system variable JAVA_HOME set up for the tool to pick it up automatically (see the Project SDK at the top of the screenshot). If this isn't the case, click the New button and navigate to where your JDK is. Once you have selected it, you are ready to go to the next step by clicking on the Next button available on the bottom right-hand side of the screen.
The next window presented to you asks you to provide the Group Id and Artifact Id. Let's go with com.programming.kotlin and chapter01, respectively. Once you have entered these fields, you can move to the next step of the process, where you tick the Use auto-import flag and Create directories for empty directory roots automatically options. Carry on to the next step, where you will be asked where you wish to store the project on your machine. Set the project location, expand More Settings, type chapter01 for the Module name, and hit the Finish button.
IntelliJ will go on and create the project, and you should see the outcome shown in the following screenshot:
Hello World! basic project
With the kotlin folder selected, right-click, select the New | Package option, and type com.programming.kotlin.chapter01, as shown in the following screenshot:
You should see a new folder appear below the kotlin folder, matching what was typed earlier. Right-click on that, choose New | Kotlin File/Class, and type Program.kt, as shown in the following screenshot:
Creating the Program.kt file
We are now ready to start typing our Hello World! program. Use the same code we created earlier in the chapter. You should notice the Kotlin brand icon on the left-hand side of the file editor. If you click on it, you will get the option to run the code, and if you look at the bottom of your IntelliJ window, you should see the text Hello World! printed out, as shown in the following screenshot:
Hello World! program
Well done! You have written your first Kotlin program. It was easy and quick to set up the project and code, and then to run the program. If you prefer, you can have a Maven rather than a Gradle project. When you choose New | Project, you have to select Maven from the left-hand side and check Create from archetype, selecting org.jetbrains.kotlin:kotlin-archetype-jvm from the list presented, as shown in the following screenshot:
Maven project
As the screenshot shows, the Maven option should be selected from the various archetypes.
There might be some of you who still prefer Eclipse IDE to IntelliJ; don't worry, you can still develop Kotlin code without having to move away from it. At this point, I assume you already have the tool installed. From the menu, navigate to Eclipse Marketplace, look for the Kotlin plugin, and install it (I am working with the latest distribution: Eclipse Neon).
Once you have installed the plugin and restarted the IDE, you are ready to create your first Kotlin project. From the menu, select File | New | Project, and you should see the following dialog:
New Kotlin project
Click the Next button to move to the next step, and once you have chosen the source code location, click the Finish button. This is not a Gradle or Maven project! You can choose one of the two, but then you will have to manually modify the build.gradle or pom.xml file, as we did in the Kotlin with Gradle and Kotlin with Maven sections of this chapter. As you did with the IntelliJ project, click on the src folder, select New package, and name it com.programming.kotlin.chapter01. To add our Program.kt file, you will need to right-click on the newly created package, select New | Other, and then select Kotlin | Kotlin File from the list. Once the file has been created, type the simple lines of code to print out the text to the console. You should have the following result in your Eclipse IDE:
Hello World! with Eclipse
Now, you are ready to run the code. From the menu, select Run | Run. You should be able to trigger the execution. If it is successful, you should see the Hello World! text printed out in the Console tab at the bottom of your IDE.
Using different languages within the same project is quite common; I have encountered projects where a mix of Java and Scala files formed the code base. Could we do the same with Kotlin? Absolutely. Let's work on the project we created earlier in the Kotlin with Gradle section. You should see the following directory structure in your IntelliJ (the standard template for a Java/Kotlin project):
Project layout
You can place the Java code within the java folder. Add a new package to the java folder with the same name as the one present in the kotlin folder: com.programming.kotlin.chapter01. Navigate to New | Java Class, name it CarManufacturer.java, and use the following code for the purpose of the exercise:
public class CarManufacturer {
    private final String name;

    public CarManufacturer(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
What if you want to add a Java class under the kotlin subfolder? Let's create a Student class similar to the previous one and provide a name field for simplicity, as shown in the following code:
public class Student {
    private final String name;

    public Student(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
In the
main function, let's instantiate our classes using the following code:
fun main(args: Array<String>) {
    println("Hello World!")
    val student = Student("Alexandra Miller")
    println("Student name:${student.name}")
    val carManufacturer = CarManufacturer("Mercedes")
    println("Car manufacturer:${carManufacturer.name}")
}
While the code compiles just fine, trying to run it will throw a runtime exception, saying that it can't find the
Student class. We need to let the Java compiler look for code under the
src/main/kotlin folder. In your
build.gradle, add the following instruction:
sourceSets {
    main.java.srcDirs += 'src/main/kotlin'
}
Now, we can compile and run the program, as follows:
$ gradle jar
$ java -jar build/libs/chapter01-1.0-SNAPSHOT.jar
As your Kotlin code gets bigger, compilation will slow down since it will have to go and recompile each file. However, we can speed it up by only compiling the files that were changed between builds. The easiest way to enable this is to create a file called
gradle.properties alongside
build.gradle and add
kotlin.incremental=true to it. While the first build will not be incremental, the following ones will be, and you should see your compilation time cut down quite a bit.
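For reference, the resulting gradle.properties is a one-line file:

```properties
# Enables incremental Kotlin compilation between builds
kotlin.incremental=true
```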
Maven is still, probably, the most widely used build system on the JVM. So, let's see how we can achieve our goal of mixing Kotlin and Java code in Maven. Starting with IntelliJ, choose
New |
Project, pick
Maven as the option, and look for
kotlin-archetype-jvm from the list of archetypes. We already covered this, so it should be a lot easier the second time around. We now have a project.
From the project tree, you will notice that there is no
java folder source code created. Go ahead and create
src/main/java, followed by the namespace folder
com.programming.kotlin (this will be a subfolder of the
java folder). You will notice that right-clicking on the
java folder won't give you the option to create a package. The project is not yet configured to include Java code. But first, what makes Maven handle Kotlin code? If you open the
pom.xml file and go to the plugins section, you will notice the
kotlin plugin, as shown in the following code:
<plugin>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-maven-plugin</artifactId>
    <version>${kotlin.version}</version>
    ...
</plugin>
To add Java code to the mix, we need to set a new plugin that will be able to compile good old Java, as follows:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.5.1</version>
</plugin>
The Kotlin compiler has to run before the Java compiler to get it all working, so we will need to amend the Kotlin plugin to do just that, as follows:
<plugin>
    <artifactId>kotlin-maven-plugin</artifactId>
    <groupId>org.jetbrains.kotlin</groupId>
    <version>${kotlin.version}</version>
    <executions>
        <execution>
            <id>compile</id>
            <goals>
                <goal>compile</goal>
            </goals>
            <configuration>
                <sourceDirs>
                    <sourceDir>${project.basedir}/src/main/kotlin</sourceDir>
                    <sourceDir>${project.basedir}/src/main/java</sourceDir>
                </sourceDirs>
            </configuration>
        </execution>
        <execution>
            <id>test-compile</id>
            <goals>
                <goal>test-compile</goal>
            </goals>
            <configuration>
                <sourceDirs>
                    <sourceDir>${project.basedir}/src/main/kotlin</sourceDir>
                    <sourceDir>${project.basedir}/src/main/java</sourceDir>
                </sourceDirs>
            </configuration>
        </execution>
    </executions>
</plugin>
To be able to produce the executable JAR for the code we are about to write, we need yet another Maven plugin, as shown in the following code:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.0.2</version>
    <configuration>
        <archive>
            <manifest>
                <addClasspath>true</addClasspath>
                <mainClass>com.programming.kotlin.HelloKt</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>
The preceding code will give you a JAR containing just your code; if you want to run it, then you need the extra dependencies in relation to the classpath, as follows:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>2.6</version>
    <executions>
        <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <archive>
                    <manifest>
                        <mainClass>com.programming.kotlin.HelloKt</mainClass>
                    </manifest>
                </archive>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
        </execution>
    </executions>
</plugin>
Now, we are in a position to add the classes from the previous example (the
CarManufacturer and
Student classes) and change the
main class to contain the following:
val student = Student("Jenny Wood")
println("Student:${student.name}")
val manufacturer = CarManufacturer("Honda")
println("Car manufacturer:${manufacturer.name}")
This is not ready yet. While compiling will go well, trying to execute the JAR will yield an error at runtime due to the
Student class not being found. The Java compiler needs to know about the Java code sitting under the
kotlin folder. For that, we bring in another plugin, as shown in the following code:
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>add-source</goal>
            </goals>
            <configuration>
                <sources>
                    <source>${project.basedir}/src/main/kotlin</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>
Finally, we are in a position to compile and run the code. First, we package up the JAR using the Maven package command. Then, we execute this assembled JAR, as follows:
$ mvn package
$ java -jar target/chapter01-maven-mix-1.0-SNAPSHOT-jar-with-dependencies.jar
Running these commands will result in the print statements being executed and outputted to the console.
This chapter described how you can set up your development environment with Gradle, Maven, IntelliJ, or Eclipse. Now, you are able to run and execute the examples given in the rest of the book, as well as experiment with your own Kotlin code.
In Chapter 2, Kotlin Basics, we will delve into the basic constructs you will use on a daily basis when coding in Kotlin. | https://www.packtpub.com/product/learn-kotlin-programming-second-edition/9781789802351 | CC-MAIN-2020-40 | refinedweb | 5,361 | 55.84 |
30 May 2011 11:37 [Source: ICIS news]
SINGAPORE (ICIS)--Monoethylene glycol (MEG) prices in Asia surged $15-30/tonne on Monday on supply concerns upon news that Taiwan’s Nan Ya Plastics was ordered to shut its plants in Mailiao, market sources said on Monday.
Prices were at $1,140-1,160 (€798-812/tonne) CFR (cost and freight) China Main Port (CMP) at the close of business on Monday, according to ICIS.
Nan Ya is a unit of a Taiwanese petrochemical major.
The plants are currently running normally, said a company source.
Nan Ya Plastics has four MEG units at the site in northern
The company had earlier informed the market that it would keep its No 1 and No 2 MEG plants with a combined capacity of 720,000 tonnes/year shut until the end of July.
If the four lines come off line, the total affected MEG capacity would reach 1.9m tonnes/year, equivalent to 10% of
“We’re negotiating with the government for the final decision on the shutdown of the other two MEG plants,” a company source said.
Speculative traders were taking the opportunity to bid up MEG prices, market sources said.
A trader was heard to have bought a June shipment at $1,150/tonne CFR CMP at lunch time, while two bonded warehouse cargoes were heard changing hands at $1,160-1,165/tonne CFR CMP in the afternoon.
If correctly installed, Python on the OS X has no trouble finding the ØMQ Python bindings. However, the Eclipse PyDev by default will not locate the zmq library within the Eclipse IDE. Therefore, if you include the following in your program,
import zmq
the program will run and debug fine, but Eclipse will underline zmq with the message “Unresolved import”.
This is easily remedied in the Eclipse preferences. Open up PyDev ▶ Interpreter – Python, locate the pyzmq egg directory, and add it to the interpreter's libraries. On my machine that is under
/Library/Python/2.6/site-packages/pyzmq-2.1.7-py2.6-macosx-10.6-universal.egg
Eclipse Preference Settings for ØMQ Python Bindings | http://jamesreubenknowles.com/get-pydev-eclipse-to-find-zmq-1532 | CC-MAIN-2017-13 | refinedweb | 110 | 57.87 |
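If you are unsure where the egg lives on your machine, you can ask the interpreter itself. This is an illustrative snippet for a modern Python 3 (the post predates importlib; on Python 2.6 you would inspect sys.path instead):

```python
import importlib.util

# Ask the interpreter where it resolves the zmq package from;
# PyDev's interpreter configuration should include that directory.
spec = importlib.util.find_spec("zmq")
print(spec.origin if spec else "zmq not found on sys.path")
```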
Apache Crunch: A Java Library for Easier MapReduce Programming
Apache Crunch (incubating) is a Java library for creating MapReduce pipelines that is based on Google's FlumeJava library. Like other high-level tools for creating MapReduce jobs, such as Apache Hive, Apache Pig, and Cascading, Crunch provides a library of patterns to implement common tasks like joining data, performing aggregations, and sorting records. Unlike those other tools, Crunch does not impose a single data type that all of its inputs must conform to. Instead, Crunch uses a customizable type system that is flexible enough to work directly with complex data such as time series, HDF5 files, Apache HBase tables, and serialized objects like protocol buffers or Avro records.
Crunch does not try to discourage developers from thinking in MapReduce, but it does try to make thinking in MapReduce easier to do. MapReduce, for all of its virtues, is the wrong level of abstraction for many problems: most interesting computations are made up of multiple MapReduce jobs, and it is often the case that we need to compose logically independent operations (e.g., data filtering, data projection, data transformation) into a single physical MapReduce job for performance reasons.
Essentially, Crunch is designed to be a thin veneer on top of MapReduce -- with the intention being not to diminish MapReduce's power (or the developer's access to the MapReduce APIs) but rather to make it easy to work at the right level of abstraction for the problem at hand.
Although Crunch is reminiscent of the venerable Cascading API, their respective data models are very different: one simple common-sense summary would be that folks who think about problems as data flows prefer Crunch and Pig, and people who think in terms of SQL-style joins prefer Cascading and Hive.
Crunch Concepts
Crunch's core abstractions are a PCollection<T>, which represents a distributed, immutable collection of objects, and a PTable<K, V>, which is a sub-interface of PCollection that contains additional methods for working with key-value pairs. These two core classes support four primitive operations:
- parallelDo: Apply a user-defined function to a given PCollection and return a new PCollection as a result.
- groupByKey: Sort and group the elements of a PTable by their keys (equivalent to the shuffle phase of a MapReduce job).
- combineValues: Perform an associative operation to aggregate the values from a groupByKey operation.
- union: Treat two or more PCollections as a single, virtual PCollection.
All of Crunch's higher-order operations (joins, cogroups, set operations, etc.) are implemented in terms of these primitives. The Crunch job planner takes in the graph of operations defined by the pipeline developer, breaks the operations up into a series of dependent MapReduce jobs, and then executes them on a Hadoop cluster. Crunch also supports an in-memory execution engine that can be used to test and debug pipelines on local data.
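As a rough mental model of the two shuffle-related primitives — plain in-memory Java with no Hadoop or Crunch dependency, purely illustrative:

```java
import java.util.*;

public class Main {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("a", "b", "a", "c", "a", "b");

        // groupByKey: gather all values under their key (here, a 1 per occurrence)
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String w : words) {
            grouped.computeIfAbsent(w, k -> new ArrayList<>()).add(1);
        }

        // combineValues: fold each group with an associative operation (a sum)
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int total = 0;
            for (int v : e.getValue()) total += v;
            counts.put(e.getKey(), total);
        }

        System.out.println(counts); // {a=3, b=2, c=1}
    }
}
```

In Crunch itself, the equivalent fold runs as the reduce phase of a generated MapReduce job rather than in memory.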
Crunch was designed for problems that benefit from lots of user-defined functions operating on custom data types. User-defined functions in Crunch are designed to be lightweight while still providing complete access to the underlying MapReduce APIs for applications that require it. Crunch developers can also use the Crunch primitives to define APIs that provide clients with advanced ETL, machine learning, and scientific computing functionality that involves a series of complex MapReduce jobs.
Getting Started with Crunch
You can download the source or the binaries of latest version of Crunch from the website, or you can use the dependencies that are published at Maven Central.
The source code ships with a number of example applications. Here is the source code for the WordCount application in Crunch:
import org.apache.crunch.DoFn;
import org.apache.crunch.Emitter;
import org.apache.crunch.PCollection;
import org.apache.crunch.PTable;
import org.apache.crunch.Pair;
import org.apache.crunch.Pipeline;
import org.apache.crunch.impl.mr.MRPipeline;
import org.apache...

// ...
// The count method applies a series of Crunch primitives and returns
// a map of the top 20 unique words in the input PCollection to their counts.
// We then read the results of the MapReduce jobs that performed the
// computations into the client and write them to stdout.
for (Pair<String, Long> wordCount : words.count().top(20).materialize()) {
    System.out.println(wordCount);
}
The last code block in this example shows the power of Crunch’s literate API: in a single line of Java code, we configured and executed two dependent MapReduce jobs (one to count the elements of a PCollection, and a second to find the top twenty elements by that count) and read the output of the second MapReduce job into the client via Crunch’s ability to materialize PCollections as Java Iterables.
Crunch Optimization Plans
The objective of Crunch's optimizer is to run as few MapReduce jobs as possible. Most MapReduce jobs are IO-bound, so the fewer times we have to go over the data, the better. To be fair, every optimizer (Hive, Pig, Cascading, Crunch) works essentially the same way. But unlike the other frameworks, Crunch exposes its optimizer primitives to client developers, making it much easier to construct reusable, higher-level operations for tasks like constructing an ETL pipeline or building and evaluating an ensemble of random forest models.
Conclusion
Crunch is currently in incubation status with Apache, and we gladly welcome contributions from the community (see project page) to make the library even better. In particular we are seeking ideas for more efficient MapReduce compilation (including cost-based optimizations), new MapReduce design patterns, and support for more data sources and targets like HCatalog, Solr, and ElasticSearch. There are also a number of projects that bring Crunch to other JVM languages like Scala and Clojure, as well as tools that use Crunch to create MapReduce pipelines in R.
About the Author
Josh Wills is Cloudera's Director of Data Science, working with customers and engineers to develop Hadoop-based solutions across a wide-range of industries. He earned his Bachelor's degree in Mathematics from Duke University and his Master's in Operations Research from The University of Texas - Austin.
I have a project to do on D in college.
Can some of ye just reply a couple of flash things about D, things like why
it would be used instead of existing languages, and why it was made?
I can make it available when done, if anyone is interested(deadline is this
friday).
Re: quick
was meant to change that subject.
i only program in C in college, with limited c++ exposure, and i cant find the difference between C++ and D :(
"Brian Folan" <99541157@itb.ie> wrote in message news:a62kuq$2tgd$1@digitaldaemon.com...
> i only program in C in college, with limited c++ exposure, and i cant find the difference between C++ and D :(
I recommend you to read the "Overview" and "Converting C++ to D" sections of the D manual for detailed explanations, but here are the most important (IMO) features you should be aware of:
- Objects are never instantiated on stack. In C++, this is a common
practice. In D, you always use operator new to create an object.
Variables of type "object" are actually references to objects, and
not objects themselves:
/* C++ */
class Foo { ... }
Foo bar; // bar is an instance of Foo
Foo* baz; // baz is a pointer to instance of Foo
/* D */
class Foo { ... }
Foo bar; // bar is a reference (strict pointer) to instance of Foo
Foo* baz; // baz is a pointer to reference to instance of Foo
- In C++, classes and structs are pretty much the same. In D, the "class"
keyword declares what is called class in C++, and D "struct" has the
abilities of C (not C++) structure.
- C++ program can be divided into parts by writing several .cpp files, and
providing an interface header .h file for each; this mechanism relies
on preprocessor to #include interface files. Namespaces are separately
provided by the namespace statement. D divides programs into modules,
each .d file is a separate module with its own namespace; you import
modules with the import directive, and there is no need for separate
interface files.
- C++ requires global types, constants, variables, functions to be declared
before they are used, and introduces a special syntax to provide forward
references. In D you can use function declared in your module from any
point of that module:
/* C++ */
void bar(); // forward declaration needed
void foo() { bar(); }
void bar() { foo(); }
/* D */
void foo() { bar(); } // bar() is already visible!
void bar() { foo(); }
- In C++, class members are private by default. In D, they are public
by default. Also, you cannot use the public/private/protected specifier
when inheriting from base class - it's always public:
/* C++ */
class Foo: public Bar { ... }
/* D */
class Foo: Bar { ... }
- In C++, bodies of member functions can be defined outside class
definition,
and it is the prefferd way. This is not possible (nor it is needed) in D:
/* C++ */
class Foo
{
void bar();
}
void Foo::bar() { ... }
/* D */
class Foo
{
void bar() { ... }
}
- C++ has three distinct resolution operators: "." (direct member access), "->" (indirect member access), and "::" (static member access). C++ also uses "::" to access member of the base class, or namespace. These all are replaced by a single "." in D, and compiler determines the exact meaning depending on the context:
/* C++ */
struct Foo { int n; }
Foo bar;
Foo* baz;
bar.n = 1; baz->n = 1;
/* D */
struct Foo { int n; }
Foo bar;
Foo* baz;
bar.n = 1; baz.n = 1;
- Constructor/destructor semantics are different:
/* C++ */
class Foo
{
Foo() { ... } // constructor
~Foo() { ... } // destructor
}
/* D */
class Foo
{
this() { ... } // constructor
~this() { ... } // destructor
}
- To call methods of base class, you use the pseudo-variable "super":
/* C++ */
class Foo
{
public: void baz() { ... }
}
class Bar: public Foo
{
public: void baz() { Foo::baz(); /* call version of base class */ }
}
/* D */
class Foo
{
void baz() { ... }
}
class Bar: Foo
{
void baz() { super.baz(); /* call version of base class */ }
}
- In C++, you have to call constructor of the base class (or it is
done for you implicitly) at the beginning of your constructor using
a weird syntax. Also, you cannot call one constructor of your class
from another in that class. In D, constructors are just functions,
and are called as such:
/* C++ */
class Foo
{
public:
Foo() { default_ctor(); }
Foo(int n) { default_ctor(); ... }
private:
void default_ctor() { /* does what needs to be done in any case */ }
}
class Bar: public Foo
{
public: Bar(int n): Foo(n) { ... }
}
/* D */
class Foo
{
this() { /* does what needs to be done in any case */ }
this(int n) { this(); ... }
}
class Bar: Foo
{
this(int n) { super(n); ... }
}
- D syntax for array and pointer declarations is a bit different from C++ one:
/* C++ */
int foo, bar[5]; // foo is int, bar is array of ints
int* foo, bar; // foo is pointer to int, bar is int
/* D */
int[5] foo, bar; // both foo and bar are arrays of ints
int* foo, bar; // both foo and bar are pointers to int
- D types are a bit different from those of C++. Here's the equivalence
table
for a typical Win32 C++ compiler:
D -> 32-bit C++
char -> char
byte -> signed char
ubyte -> unsigned char
short -> signed short
ushort -> unsigned short
int -> signed int, signed long
uint -> unsigned int, unsigned long
long -> N/A (64-bit signed int, signed long long in GCC)
ulong -> N/A (64-bit unsigned int, unsigned long long in GCC)
float -> float
double -> double
extended -> long double
complex -> std::complex
imaginary -> N/A
- D provides built-in dynamic arrays, with functionality similar
to the one provided by std::vector class from C++ STL; the syntax
is much simpler, however:
/* C++ */
vector<int> foo; // dynamic arrays of ints
foo.push_back(1); // append 1 to the end of the array
foo[0] = 2; // element access
// iteration
for (int i = 0; i < foo.size(); i++)
foo[i] = 666;
/* D */
int[] foo; // dynamic array of ints
foo ~= 1; // append 1 to the end of the array
foo[0] = 2; // element access
// iteration
for (int i = 0; i < foo.length; i++)
foo[i] = 666;
- Strings are represented by dynamic arrays of chars, rather than
by pointers to null-terminated char sequences or std::string
C++ STL class:
/* C++ */
string foo, bar, baz;
foo = "Hello, ";
bar = "world!";
baz = foo + bar; // concatenate with +
baz += "\n"; // append with +=
/* D */
char[] foo, bar, baz;
foo = "Hello, ";
bar = "world!";
baz = foo ~ bar; // concatenate with ~
baz ~= "\n"; // append with ~=
- Everything is garbage-collected. That is, everything allocated
by operator new, be it an object or a dynamic array, gets freed
automatically, you don't have to use operator delete. This allows
you to write code that is unsafe (and thus considered "bad") in
C++, but perfectly legal in D:
/* bad C++, but legal D */
int* foo(int n)
{
int* result;
result = new int[n];
return result;
}
void main()
{
int* array = foo(5);
...
return 0; // forgot to delete array! OK in D, bad thing in C++
}
Oh, damn, I've tired of typing. =) There are SO many differences...
I haven't mentioned array slicing, design by contract (the "Contracts"
section in the D reference is a MUST READ!), interfaces, type complex,
standard library, and many other things. Once again, the best idea
is to read the entire D reference document, everything's there...
Woah, it was a large one. I should probably have copyrighted it =) But now it's too late... so enjoy it for free =) | http://forum.dlang.org/thread/a62kcq$2tcg$2@digitaldaemon.com | CC-MAIN-2015-48 | refinedweb | 1,207 | 70.13 |
Hangman is a very interesting game. It must have been played by almost everyone at least once in life. The back-benchers and movie lovers will agree with me. It is a guessing game where the players select the domain of words to be guessed. This post is about creating Hangman Python code.
How is it played?
The first player challenges the second player to guess the name or word he has chosen. The game begins by presenting the number of letters in the word to be guessed: the first player writes blank spaces equal to the count of characters in the word.
The second player, who has to guess, says a letter from a-z. If this character is in the word, the first player fills it in the correct places in the set of blanks he has written.
If the spoken letter is not in the word to be guessed, one letter from "HANGMAN" is struck off and the chances are reduced by one. This means the first wrongly guessed letter reduces the chances from 7 (the number of characters in the word HANGMAN) to 6.
This process of guessing one letter, filling or striking off one character from HANGMAN is continued.
It stops either when the word is guessed or when 7 chances of guessing the letters to complete the word is over.
Creating Hangman Game in Python
To create this program, the problem is divided into three parts.
- Finding the locations of the guessed letter in the word to be guessed- The function findloc(alpha, word) finds the locations of parameter alpha in parameter word and stores in the global array loc.
- Replacing the letter in these locations- The function display(alpha) accepts the alphabet entered by the player and displays the partially completed string of guessed letters and blank space
- Decrementing the count of chances remaining-If the letter is not part of word, the chances are decremented by one and one starting letter from HANGMAN is removed. This is done in the main program within while loop.
The code is executed only once. If you want to play the game again, you have to re-execute the program. Modify the code with a Python loop to run multiple times, asking the user to continue or exit.
Code
import random
import sys

def findloc(alph, word):
    i = 0
    k = 0
    while (i < len(word)):
        if (alph == word[i]):
            loc.append(i)
            k += 1
        i += 1

def display(alph):
    i = 0
    temp = []
    if (loc != None):
        while (i < len(loc)):
            check[loc[i]] = alph
            i += 1
    i = 0
    while (i < len(word)):
        if check[i] != None:
            sys.stdout.write(check[i])
            temp.append(check[i])
        else:
            sys.stdout.write(" _ ")
            temp.append("_")
        i += 1
    return "".join(temp)

# an array of movie names from where one movie name is picked randomly
lib = ["casablanca", "day and knight", "jupiter", "interstellar",
       "gravity", "moonlight", "inception", "waterhorse",
       "devil wears prada", "bedazzled", "enchanted", "cinerella"]
loc = []
hm = "hangman"
fin = ""
# picking a random movie name
word = lib[random.randrange(len(lib))]
# getting its word count
check = [None] * len(word)
# initializing chances
chances = 0
print("Guess a hollywood movie with ", len(word), " letters!!!")
print("")
print("let's begin")
while (chances < 7):
    print("")
    # printing remaining chances by removing letters from the word hangman
    # when an entered character is not in the movie name to be guessed
    print(hm[chances:])
    lett = input("enter character--->")
    # finding locations of the entered letter
    findloc(lett, word)
    # displaying the partially completed movie name after filling the entered letter
    fin = display(lett)
    # incrementing chances availed when an entered letter is not part of the movie name
    if len(loc) == 0:
        chances += 1
    loc = []
    # comparing partial and final word
    if (word == fin or chances == 7):
        break
print("")
print("")
if (word == fin):
    print("Congrats!!You are a movie buff:)")
else:
    print("HANGMAN :(")
    print("The movie was ", word)
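For the replay modification suggested earlier, one minimal approach is a wrapper loop. This is a sketch only — it assumes the game logic above has been moved into a play_once() function, which the original code does not define:

```python
def play_once():
    # placeholder: imagine the whole game above wrapped in this function
    print("...one round of hangman...")

def game_loop():
    # keep playing until the user answers anything other than "y"
    while True:
        play_once()
        again = input("Play again? (y/n)--->")
        if again.strip().lower() != "y":
            break
```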
Successful Guess of the word
Failure to guess the word
Technical Articles
JMS special characters Issue
Recently I worked on a JMS (MQ) to Proxy scenario in which I was facing a special-characters issue. Basically, umlaut chars such as ä, ö, ü were not displayed correctly. I was not using any mapping, as this is a pass-through scenario.
When I download the file to my local machine and open it with a browser, the same issue exists.
I tried opening it with Notepad++ and changed the encoding to ANSI; now the chars are displayed correctly. I saved the file and opened it again with the browser, and the chars display correctly.
Then I checked the hex code of ü in Notepad++, with UTF-8 encoding:
Changing the encoding to ANSI:
Changing the file encoding to ANSI, creates correct HEX code.
c3 bc is the correct hex code for char ü .
I have tried all adapter module beans, as mentioned in below blogs.
I have tried TextCodepageConversionBean also, which did not work. It could be because this bean is meant for text files, not XML files.
Then I have seen this code in stack overflow, This is exactly what is happening in my case.
String encodedWithISO88591 = "üzüm baÄları";
String decodedToUTF8 = new String(encodedWithISO88591.getBytes("ISO-8859-1"), "UTF-8");
// Result, decodedToUTF8 --> "üzüm bağları"
I have tried to convert my downloaded xml using java code and it worked fine. I did not convert the encoding, just download the file from monitor and used it in the java code.
package com.java.special.chars;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Readfile {

    public static void main(String[] args) throws Exception {
        InputStream is = new FileInputStream("data/esm.xml");
        BufferedReader reader = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8));
        StringBuilder out = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            byte[] b = line.getBytes(StandardCharsets.ISO_8859_1);
            System.out.println(new String(b, StandardCharsets.UTF_8));
        }
    }
}
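The re-decoding trick can be demonstrated in isolation (a standalone illustration, not the actual adapter code):

```java
import java.nio.charset.StandardCharsets;

public class Main {
    public static void main(String[] args) {
        // "ü" is 0xC3 0xBC in UTF-8; decoding those bytes as ISO-8859-1
        // produces the mojibake "ü" seen in the payload
        String garbled = new String("ü".getBytes(StandardCharsets.UTF_8),
                                    StandardCharsets.ISO_8859_1);
        // reversing the mistake recovers the original character
        String fixed = new String(garbled.getBytes(StandardCharsets.ISO_8859_1),
                                  StandardCharsets.UTF_8);
        System.out.println(garbled + " -> " + fixed); // ü -> ü
    }
}
```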
Now the question is, how to achieve the same using module configuration instead of writing a Java mapping.
Since my payload is XML, I had to use the Anonymizer bean with the Message Transform bean. This solved the issue; the chars are displayed correctly now.
Xml payload
By far, one of the challenges that you often face when working with integration is that people use different character sets. Good standard solution you ended up with.
Addition of two numbers in base 14 in C++
In this tutorial, we will look at how to add two numbers represented in base 14 in C++. The number system that we see the most in our daily lives is the base 10 or ‘decimal’ system. Binary (base 2), Octal (base 8), and Hexadecimal (base 16) number systems are often used in computers for the representation and manipulation of numbers.
In this post, we will first learn about the representation of numbers in a base 14 system.
Next, we will look at the addition of numbers in base 14.
Finally, we will look at the C++ code to implement this.
Understanding numbers in base 14
Numbers in the decimal number system are represented using the digits:
0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
This means that every whole number can be represented as a pattern of these 10 digits. For example, the number ‘five thousand, one hundred and eighty-seven’ is represented as ‘5187’.
A base 14 number system, (also known as a tetradecimal system), uses 14 digits to represent whole numbers. Technically speaking, we do not use 14 ‘digits’ but rather 14 ‘symbols’:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, and D.
Note that ‘A’ is equivalent to the decimal number ’10’, ‘B’ is ’11’, and so on. Given below are a few numbers in decimal and their equivalent base 14 representation.
Decimal Representation Base 14 Representation
9 9
10 A
13 D
14 10
222 11C
5187 1C67
Addition of numbers in base 14
The addition of two numbers in base 14 is very similar to the addition of two decimal numbers. We add from right to left while considering carry-overs. In decimal addition, we carry over to the next place if corresponding digits (and a carry if present) add to a value greater than 9. Here, we carry over to the next place if corresponding symbols (and a carry if present) add to a value greater than 13. Shown below is the addition of a few pairs of base 14 numbers
12 + 14 = 26
8D3 + 91A = 140D
233 + BB = 310
C++ implementation and explanation: Add two base 14 numbers
We take in the numbers to be added as strings (say, ‘n1’ and ‘n2’). For simplicity, we assume that the strings entered represent valid base 14 numbers. We use strings rather than character arrays because of the various useful functions that operate on string objects. These functions help us to easily resize, append and reverse strings.
- The first step is to ensure that both inputs are of equal length. We do this by appending ‘0’s to the front of the shorter string.
- We then declare an empty string to store our final sum (say, ‘s’).
- Starting from the rightmost (i.e. unit’s place) of ‘n1’ and ‘n2’, we add symbols while at the same time taking into consideration carry.
- The sum of corresponding symbols is stored in ‘s’ and new carry is calculated. In this way ‘s’ will contain the required sum but in reverse.
- Finally, we reverse ‘s’ and display.
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
using namespace std;

// Function to append '0's to the front of the shorter string
void change_lengths(string &n1, string &n2)
{
    int k = n2.size() - n1.size();
    string n3(k, '0');
    int i;
    for (i = 0; i < n1.size(); i++)
        n3.push_back(n1[i]);
    n1 = n3;
}

void sum(string n1, string n2)
{
    if (n2.size() > n1.size())
        change_lengths(n1, n2);
    else if (n1.size() > n2.size())
        change_lengths(n2, n1);

    string s;
    int i, val1, val2, val3, c = 0;
    // val1, val2 and val3 hold the numeric values of the symbols of
    // n1, n2 and s respectively. c is the carry
    for (i = n1.size() - 1; i >= 0; i--)
    {
        if (n1[i] >= '0' && n1[i] <= '9')
            val1 = n1[i] - '0';
        else if (n1[i] >= 'A' && n1[i] <= 'D')
            val1 = n1[i] - 'A' + 10;
        if (n2[i] >= '0' && n2[i] <= '9')
            val2 = n2[i] - '0';
        else if (n2[i] >= 'A' && n2[i] <= 'D')
            val2 = n2[i] - 'A' + 10;
        val3 = (val1 + val2 + c) % 14;
        c = (val1 + val2 + c) / 14;
        if (val3 <= 9)
            s.push_back(val3 + '0');
        else
            s.push_back(val3 + 'A' - 10);
    }
    if (c == 1) // In case there is a final carry
        s.push_back('1');
    reverse(s.begin(), s.end()); // Since sum was initialised in reverse.
    //cout << "Sum = ";
    cout << s;
}

int main()
{
    string n1, n2;
    //cout << "Enter two numbers in base 14\n";
    cin >> n1 >> n2;
    sum(n1, n2);
}
Input
8D3 91A
Output
140D
Conclusion
In this tutorial, we saw how to add two numbers in base 14 using C++.
Note: We assumed that the inputs provided are valid base 14 numbers.
Read this if you want to know how to add two numbers without using Arithmetic Operators in C++. | https://www.codespeedy.com/addition-of-two-numbers-in-base-14-in-cpp/ | CC-MAIN-2020-50 | refinedweb | 810 | 70.33 |
table of contents
other sections
NAME¶
ioctl—
control device
LIBRARY¶Standard C Library (libc, -lc)
SYNOPSIS¶
#include <sys/ioctl.h>int
ioctl(int fd, unsigned long request, ...);
DESCRIPTION¶The
ioctl() system call manipulates the underlying device parameters of special files. In particular, many operating characteristics of character special files (e.g. terminals) may be controlled with
ioctl() requests. The argument fd>.
GENERIC IOCTLS¶Some generic ioctls are not implemented for all types of file descriptors. These include:
FIONREAD int
- Get the number of bytes that are immediately available for reading.¶If an error has occurred, a value of -1 is returned and errno is set to indicate the error.
ERRORS¶The
ioctl() system call will fail if:
- [
EBADF]
- The fd argument is not a valid descriptor.
- [
ENOTTY]
- The fd argument is not associated with a character special device.
- [
ENOTTY]
- The specified request does not apply to the kind of object that the descriptor fd references.
- [
EINVAL]
- The request or argp argument is not valid.
- [
EFAULT]
- The argp argument points outside the process's allocated address space.
SEE ALSO¶execve(2), fcntl(2), intro(4), tty(4)
HISTORY¶The
ioctl() function appeared in Version 7 AT&T UNIX. | https://manpages.debian.org/stretch/freebsd-manpages/ioctl.2freebsd.en.html | CC-MAIN-2019-22 | refinedweb | 197 | 50.94 |
This article will show you how to develop Windbg extensions in Visual Studio, and to leverage the .NET platform through the C++/CLI technology.
Windbg
I was once assigned the task to evaluate the applicability of code obfuscation of .NET assemblies with respect to protection of intellectual property, but also its consequences for debugging and support.
Obfuscation is not without drawbacks. It makes debugging almost impossible, unless it is possible to reverse the obfuscation.
The Obfuscator I evaluated provided an external tool for deobfuscation with the use of map files. You had to manually copy and paste the text into this tool.
Although deobfuscation was possible, it was still very tedious to do so.
In order to make full obfuscation a viable approach, we needed it to be integrated in the build environment by using plugins for both Visual Studio and Windbg. Luckily, the Obfuscator company, provided a public .NET Assembly, which they encouraged you to use.
Windbg
My first approach was to build a normal Windbg Extension with the Windows Driver Kit (WDK), and make it use a C++/CLI DLL built in Visual Studio. This C++/CLI DLL would in turn, use the .NET Assembly. The solution proved to be much simpler.
Enjoy the reading!
Read the manual that comes with Windbg.
It clearly states that you should use the Windows Driver Kit.
Below, you see a screenshot of the build environment.
It is a command line based build environment.
After having been spoiled with Visual Studio, it is not really fun to go back to command line based environments. It only reminds me of the spartan development tools they have in UN*X.
There is a salvation for us Visual Studio fans. I came across this article Developing WinDbg Extension DLLs. It is a step by step guide on how to develop Windbg Extensions with Visual Studio. Praise the Lord!!
My contribution is showing you how to integrate .NET assemblies in your Windbg Extensions. The road to success was trial and error development, recompilation, and lots of crashes.
The earlier mentioned article showed only how to build Windbg Extensions in Visual Studio in native C++ using the C API.
I got that working. Then I changed the project into a managed C++/CLI project. The extension still worked. That implied that it would be possible to use .NET assemblies.
Then, I changed to the C++ API of windbg. That did not work. It crashed on loading the DLL. Since my concern was the Deobfuscator not troubleshooting, I reverted back to the C API.
windbg
To add a reference to .NET library, you have two options:
Right clicking on the project, and selecting "references" to bring up this window.
Another approach is just to add the following line in your C++ file.
#using "StackTraceDeobfuscator.dll"
Now it is possible to call the .NET Assembly from C++/CLI.
StackTraceDeobfuscator^ decoder = gcnew StackTraceDeobfuscator(filename);
String^ s = decoder->DeobfuscateText(obfuscatedText);
Managed objects, such as strings, cannot be passed around freely. Managed strings are references. The garbage collector performs memory compactation once in a while, meaning that the underlying memory can move around. Another restriction is that unmanaged code cannot use managed types. Fortunately, there is a special data type, called msclr::auto_gcroot that addresses this problem. For further information, please read Mixing Native and Managed Types in C++.
string
msclr::auto_gcroot
#include <span class="code-keyword"><msclr\auto_gcroot.h>
</span>
struct NativeContainer
{
msclr::auto_gcroot<String^> obj;
};
NativeContainer container;
void foo(const char* text)
{
container.obj = gcnew String(text);
}
I used this construction in order to save the filename to the mapfile between calls.
In order to avoid allocation when you use char* strings, try to use the std::string. That type will automatically handle deallocation and reallocation for you.
char* string
std::string
std::string str = "abc";
str += "def";
char* backAgain = str->c_str();
You might also need to convert a managed string to a native string and pass it to a native C/C++ function.
string
void OutCliString(String^ clistr)
{
IntPtr p = Marshal::StringToHGlobalAnsi(clistr);
const char* linkStr = static_cast<char* />(p.ToPointer());
dprintf(linkStr);
Marshal::FreeHGlobal(p);
}
I had to use it in order to call the C API function dprintf for outputting text to Windbg.
dprintf
Then I tested my extension:
0:000> .loadby sos mscorwks
0:000> .load DeobfuscateExt.dll
0:000> !mapfile C:\SecretApp_1.0.0.0.nrmap
0:000> !dclrstack
To my surprise, it crashed when it tried to access the .NET assembly. So I inserted some exception handling, in order to get the error message. This is what I saw:
0:000> !dclrstack
Exception
Could not load file or assembly
'StackTraceDeobfuscator, Version=..., Culture=neutral, PublicKeyToken=...'
or one of its dependencies. The system cannot find the file specified.
How was that possible? It existed in the same directory as the other DLL.
After many failed attempts to get it working, I almost included the assembly as an embedded resource, to be loaded manually through the LoadAssembly function.
LoadAssembly
To my rescue, was the Process Monitor program from sysinternals. It logs registry accesses, loading of libraries, assembly bindings, etc. In the log, I could see that Windbg looked in the GAC, then it looked in the installation folder, but never in the winext extension folder.
When I copied the .NET library to the installation folder, it worked. DLLs are supposed to be copied to the winext folder. Since I don't like ugly work-arounds, I continued to investigate. In Process Monitor, I saw that the assembly loader, also looked for a Windbg.config file, but none was found. Then it hit me. The .NET assembly loader, sees Windbg as a .NET application now.
After adding a windbg.config file in the installation folder, telling windbg to look in the winext folder for .NET extensions. It finally worked.
windbg
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<runtime>
<assemblyBinding
xmlns="urn:schemas-microsoft-com:asm.v1">
<probing privatePath="winext" />
</assemblyBinding>
</runtime>
</configuration>
This is how the final result looked in Windbg:
0:000> g
// Load in the SOS .Net extension
0:000> .loadby sos mscorwks
// Display the .Net callstack
0:000> !ClrStack
OS Thread Id: 0x748 (0)
ESP EIP
003bed14 770b22a1 [NDirectMethodFrameStandalone:
003bed14] EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.VeInoLCS1()
003bed24 0029390b EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.glsAhkOLS(System.String)
003bed30 002938e9 EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.RqruoQ7Wy(System.String)
003bed38 002938c9 EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.y870B2Qwn(System.String)
003bed40 002938a9 EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.MshWmjm77(System.String)
003bed48 00293889 EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.EQqEmimFN(System.String)
003bed50 00293869 EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.lTK9lM3pf(System.String)
003bed58 00293849 EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.CRBliDoRv(System.String)
003bed60 00293829 EP8d6KKZNOc1stcvCF.rW7Nf5orPqjZvnA6qV.rEXh0q6dg(System.String)
003bed68 0029009d yIRVo677UilAvTI0XH.LoYYaNO29PLFbE32gm.ayXxZy7mO(System.String[])
003bef94 6f0a1b6c [GCFrame: 003bef94]
// Load my deobfuscator extension
0:000> .load DeobfuscateExt.dll
0:000> !mapfile C:\SecretApp_1.0.0.0.nrmap
0:000> !dclrstack
OS Thread Id: 0x748 (0)
ESP EIP
003bed14 770b22a1 [NDirectMethodFrameStandalone: 003bed14]
SecretApp.NestingCalls.DebugBreak()
003bed24 0029390b SecretApp.NestingCalls.Ti(System.String)
003bed30 002938e9 SecretApp.NestingCalls.La(System.String)
003bed38 002938c9 SecretApp.NestingCalls.So(System.String)
003bed40 002938a9 SecretApp.NestingCalls.Fa(System.String)
003bed48 00293889 SecretApp.NestingCalls.Mi(System.String)
003bed50 00293869 SecretApp.NestingCalls.Re(System.String)
003bed58 00293849 SecretApp.NestingCalls.Do(System.String)
003bed60 00293829 SecretApp.NestingCalls.Execute(System.String)
003bed68 0029009d SecretApp.Program.Main(System.String[])
003bef94 6f0a1b6c [GCFrame: 003bef94]
My extension works as a filter on its input. It was written to deobfuscate the output from the ClrStack command in the Sos extension.
ClrStack
// Other possibilities
0:000> !dclrstack -a // Same as !ClrStack -a
0:000> !deobfuscate !ClrStack -a // The general deobfuscate function
// executes "!ClrStack -a"
0:000> !deobfuscate <cmd> <cmd> <cmd>
The implementation was originally done for a specific Obfuscator. Since I was unsure of the appropriateness to include their DLL in this project, I removed it. The only thing changed from the original implementation is the reference to StackTraceDeobfuscator.dll, which has been replaced by a simple dummy .NET DLL which just converts its input to uppercase. The windbg command mapfile accepts any existing file. The command dclrstack and deobfuscate just converts its input to uppercase.
mapfile
dclrstack
deobfuscate
I didn't get the C++ API of windbg to work. The C++ API, is a set of macros, which is expanded to C functions, which calls into the C++ methods. The C function gets called correctly, and the transition to the C++ method too. But the call to a base class constructor makes it crash. I analyzed the crashes a bit. To me, it seems to be some conflict in the calling convention between __cdecl and __stdcall. I still haven't figured out a way to make it work. Any contribution in that area is highly appreciated.
__cdecl
__stdcall. | http://www.codeproject.com/script/Articles/View.aspx?aid=187726 | CC-MAIN-2014-35 | refinedweb | 1,439 | 53.07 |
Interfacing with a Wiimote
Difficulty: intermediate
This tutorial will show you how to connect a Wiimote to the Pi over Bluetooth. You will then be able to read input from it, including the state of the buttons and accelerometer and send it output, e.g. changing the LED state and playing with rumble.
REQUIREMENTS:
Raspberry Pi
Bluetooth dongle
Wiimote
INSTRUCTIONS:
It is recommended to use one of our SD cards or images, if you are not then you will need: python-cwiid and to set your Bluetooth in discoverable mode with
sudo hciconfig hci0 piscan.
Log into your Pi and start a Python console (or ipython if you want tab completion and other extra features).
python
To be able to use the Wiimote we have to import the necessary library so:
import cwiid
Connecting a Wiimote and saving it as
wm to use later is now as simple as simultaneously pressing 1 + 2 on your Wiimote to make it discoverable then running:
wm = cwiid.Wiimote()
This is however liable to fail a few times and not estabalish a connection but raise a RuntimeError, we will handle this when writing a fuller script.
Now that we have a Wiimote connected let’s try and do something with it. Let’s start by having it count in binary on the LEDs.
import time for i in range(16): wm.led = i time.sleep(0.5)
Now have it rumble for every multiple of 3:
for i in range(16): wm.led = i if i%3: wm.rumble= False else: wm.rumble = True time.sleep(0.5) wm.rumble = False
Now if we want to read values from the Wiimote we must turn on the reporting mode. First let’s have it just report button presses.
wm.rpt_mode = cwiid.RPT_BTN
To then get all the information the Wiimote is reporting type:
wm.state
Try holding down a few buttons and running the program again to see how it changes. If you’re interested only in the button presses try instead:
wm.state['buttons']
To make it more useful we can check for specific buttons being pressed. For instance if you want to see if the button ‘1’ is being pressed:
if (wm.state['buttons'] & cwiid.BTN_1): print ("button '1' pressed")
If you want to see what other buttons there are to read, try:
dir(cwiid)
Or if you’re using ipython hit tab after typing
cwiid..
Now that we understand the basics of how to use the Wiimote we’ll have a look at its key feature, the accelerometer. This is also very easy to access, first we can make the Wiimote report both button presses and accelerometer state with:
wm.rpt_mode = cwiid.RPT_BTN | cwiid.RPT_ACC
Let’s just have a look at the data we get from it:
wm.state
now shows us we have an extra field called
acc which is a 3-tuple. Let’s have it regularly print the state so we can see how it changes as the Wiimote is moved.
while True: print(wm.state['acc']) time.sleep(0.3)
It appears that during normal movement the value centres at about 125 with 25 either way (going much higher if you flick it sharply or provide another strong acceleration rather than just gravity).
So to make your script a bit more robust, here’s a better way to connect to the Wiimote: it will try a few times, tell you how to connect and quit if a connection isn’t made.
import cwiid import time import i2c #connecting to the Wiimote. This allows several attempts # as first few often fail. print 'Press 1+2 on your Wiimote now...' wm = None i=2 while not wm: try: wm=cwiid.Wiimote() except RuntimeError: if (i>10): quit() break print "Error opening wiimote connection" print "attempt " + str(i) i +=1 #set Wiimote to report button presses and accelerometer state wm.rpt_mode = cwiid.RPT_BTN | cwiid.RPT_ACC #turn on led to show connected wm.led = 1
What you now do is up to you! For information on projects other people have done look at WiiBrew.
Here’s an example script of how to use the Wiimote to drive our robot: wiimote.py | https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/robot/wiimote/ | CC-MAIN-2018-34 | refinedweb | 700 | 72.16 |
Hi.
I recently read that using the name of an array without brackets was one method for accessing the <address> of the array's first element. E.g:
#include <stdio.h> int arr[5] = { 2, 4, 6, 8, 10 }; int main( void ) { printf( "The address of the '2' is %d\n", arr ); return 0; } char* message1 = "C"; char* message2 = "is the"; char* message3 = "best"; char* message4 = "programming"; char* message5 = "language!"; puts( message1 ); puts( message2 ); puts( message3 ); puts( message4 ); puts( message5 ); /* message1 points to the 'C' and using puts (message1) will */ /* display the letter as opposed to the address. Why is this? */ /* I know the puts() function receives a char pointer as an */ /* argument. Is this why one need not dereference it by */ /* using *message1 unless one is using */ /* printf( "%c\n", *message1 ); */ /* Essentially, I'm slightly confused by when one pointer is */ /* pointing to the address of the first element in an array and when */ /* it is pointing to the first value */ return 0;
Thanks for the help,
java_girl | https://www.daniweb.com/programming/software-development/threads/160193/arrays-and-pointers | CC-MAIN-2017-34 | refinedweb | 169 | 64.95 |
.
SimpleClass': System.InvalidOperationException:
Usually, talking about the differences between running code under the CLR vs. running under SQL CLR focuses on functionality that either doesn’t work or is difficult to use in a safe and reliable manner. However, one feature that SQL Server actually adds to the CLR environment is deadlock detection.
Joe Duffy mentions this in his article, No More Hangs, about advanced CLR techniques to detect and resolve deadlocks. One of his methods is to use the CLR Hosting interfaces to write a custom CLR Host to handle all the locking primitives so he can analyze them to check if deadlock has occurred. This is essentially the same method that SQL Server uses to detect deadlocks, except rather than using a separate deadlock detection algorithm for lock requests coming from the CLR, we translate them to the standard SQL locks provided by SOS.
If you compile the following program as an executable and run it, not a whole lot happens. The program deadlocks as expected and leaves you staring at the blinking cursor, wondering what to do. However, if you create Method1 and Method2 as SQL Stored Procedures and run them at the same time from separate connections, you’ll see that SQL Server automatically detects the deadlock and kills one of them for you.
public class DeadlockSample
{
public static readonly object a = new object();
public static readonly object b = new object();
[SqlProcedure]
public static void Method1()
{
lock(a)
{
Thread.Sleep(2000);
lock (b) { SqlContext.Pipe.Send("This means Method2 was killed!"); }
}
}
public static void Method2()
lock(b)
lock (a) { SqlContext.Pipe.Send("This means Method1 was killed!"); }
public static void Main()
Thread thread1 = new Thread(new ThreadStart(Method1));
Thread thread2 = new Thread(new ThreadStart(Method2));
thread1.Start();
thread2.Start();
thread1.Join();
thread2.Join();
}
Having SOS handle all locking is especially useful as it allows for deadlock detection to work even for the case of inproc data access where a CLR lock might be deadlocked against a SQL lock. In the following example, 2 methods both want to take a CLR lock and update a column in a SQL table, but requesting them in a different order leads to deadlock.
create table table1(c int)
insert into table1 values(1)
public static void LockAndUpdate()
using (SqlConnection conn = new SqlConnection("Context Connection=true"))
conn.Open();
SqlCommand cmd = new SqlCommand("update table1 set c = 2", conn);
cmd.Transaction = conn.BeginTransaction();
lock (a)
{
Thread.Sleep(2000);
cmd.ExecuteNonQuery();
SqlContext.Pipe.Send("This means UpdateAndLock was killed!");
}
cmd.Transaction.Commit();
public static void UpdateAndLock()
cmd.ExecuteNonQuery();
{ SqlContext.Pipe.Send("This means LockAndUpdate was killed!"); }
One important aspect to keep in mind when dealing with deadlock detection in SQL CLR is that SQL does not explicitly kill your thread with a ThreadAbortException but merely throws a regular exception so that you can catch it and deal with the problem if you are prepared to handle it. This also means, however, that poor programming practices, such as catching all exceptions, might cause you to dismiss the exception without handling it properly. If you catch the exception and retry without releasing the deadlocked resources then it is likely that you'll only deadlock again.
Here is the section from BOL, Detecting and Ending Deadlocks:
."
There.
What if the INSERT statement failed due to a duplicated key violation? SQL server will translate such normal TSQL exceptions into a CLR SqlException object. When this happens, the TSQL exception is considered as been handled. The system no longer has any pending TSQL exceptions at all, instead a managed SqlException will be thrown. Your code will see a SqlException. You can catch it through your CLR exception handler. This mechanism allows to catch TSQL exceptions in your CLR function/procedure.
SQL]
In]
SQL Server 2005 allows creating of User Defined Aggregate in any of the .NET languages such as C# or VB. For simple cases like SUM or MAX you probably want to use built-in aggregates, however there are cases where build-ins are insufficient. In such cases people used to put the business logic on a client on a middle tier. With the new version of SQL Server you can have this logic on a server.
Let’s say company XYZ wants to come up with a way of calculating a bonus for their employees. XYZ uses NWIND database (NWIND database can be downloaded from). XYZ wants to have a business rule such that the bonus is never greater than 200% of the salary and each regular sale adds 1% to the bonus and each sale to Germany adds 3% to the bonus.
With the new Sql Server 2005 you can write your own aggregates in C# (or any .NET compatible language). Here is the aggregate.
public struct Bonus
public void Accumulate(SqlString Country)
if (Country == "Germany")
public void Merge(Bonus Group)
And here is a T-SQL query that uses this aggregate to calculate bonus for each employee.
Employees.FirstName, Employees.LastName, dbo.Bonus(Orders.ShipCountry)
[Posted by NikitaS]
When.
This is a sample on how to register satellite assemblies in SQL Server 2005.
Based on the CultureInfo on the executing thread, CLR will try to load the respective resource assembly. It should not be a difference between satellite assemblies inside and outside of the SQL CLR other than location: if for a normal application the satellite assemblies need to be located in a special name subdirectory or in GAC for SQL CLR, the satellite assemblies should be registered inside of the database using Create Assembly command. Remember that the SQL Server will only load the assemblies registered in the database.
The naming convention is mandatory: for assembly A, resources assembly file should be named as A.resource.dll. The sql name given at the registration time is not important.
Also I observed from my tests that it is mandatory to have a match between the versions of the root assembly and resources assembly (but this is not specific to SQL CLR).
In my bellow test I am changing the CultureInfo of the current thread in order to check that the right resource assembly is loaded.
I have already created 2 resource files Test.resources and Test.fr.resource that contain an entry 'test', with values ‘default’ and 'fr' with the following code.
IResourceWriter rw = new ResourceWriter(strFileName);
rw.AddResource(strResName, strResValue);
rw.Close();
This is the assembly code that tries to consume the resource and change the CultureInfo. My assembly is strong named using TestKeyPair.key so you will need to use your own key pair there.
using System.Reflection;
using System.Threading;
using System.Globalization;
using System.Resources;
[assembly:AssemblyVersion("1.3.0.0")]
public class cTest
public static string MainMethod(string strCul)
{
if (strCul!="" && strCul!="def")
{
Thread.CurrentThread.CurrentUICulture = new CultureInfo (strCul);
}
ResourceManager rm = new ResourceManager ("Test", Assembly.GetExecutingAssembly());
return rm.GetString("test");
This is the command to compile assembly:
csc.exe /target:library /out:Test.dll Test.cs /r:system.dll /res:Test.resources /keyfile:TestKeyPair.key
This is the command that I used to create the resource assembly (note the version, culture and key used for signing):
al.exe /out:Test.resources.dll /v:1.3.0.0 /c:fr /embed:Test.fr.resources /t:lib /keyf:TestKeyPair.key
This is the TSQL code used to register assemblies and clr user defined function in the database:
CREATE ASSEMBLY Test FROM 'C:\temp\meta\Test.dll' WITH PERMISSION_SET=UNSAFE
go
CREATE FUNCTION dbo.f_SatTest (@p_culture nvarchar(400))
RETURNS nvarchar(400)
AS
EXTERNAL NAME Test.cTest.MainMethod
CREATE ASSEMBLY Test_FR FROM 'C:\temp\meta\Test.resources.dll'
SELECT name FROM sys.assemblies WHERE name LIKE 'Test%'
SELECT dbo.f_SatTest('')
SELECT dbo.f_SatTest('fr')
This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at:'
When my greedy count query from the last post is running, I notice a number of rows like the following in the output:
0x006D9180
RUNNABLE
SQLCLR_QUANTUM_PUNISHMENT
E_TASK_ATTACHED_TO_CLR
2
0x006D89C0
4
0x008CCCA8
1
0x006D8E98
3.
0x008CCAB8
RUNNING
SOS_SCHEDULER_YIELD
0
0x006D9278
0x006D9088.
Perhaps.
There are two important memory considerations you may want to track when using SQL CLR functionality: 1) How much memory is SQL CLR using? And 2) How much memory is SQL CLR allowed to use?
The answer to the first question is pretty easy to answer thanks to the dmv sys.dm_os_memory_clerks. The field single_pages_kb is for memory allocated in the SQL Buffer Pool, multi_pages_kb is for memory allocated by the SQL CLR Host that is outside the SQL Buffer pool, and virtual_memory_committed_kb is the amount of memory allocated by the CLR directly through bulk allocation interface (instead of heap allocation) through SQL server. The memory is mostly used for the managed GC heap and the JIT compiler heap, and it is also stored outside of the SQL Buffer Pool. So, to get the total memory used by SQL CLR, you would run the following query:
select single_pages_kb + multi_pages_kb + virtual_memory_committed_kb from sys.dm_os_memory_clerks where type = 'MEMORYCLERK_SQLCLR'
Now that we know how much memory SQL CLR is using on the server, it would be nice to know how much memory SQL CLR is allowed to use. You may be aware that when there is memory pressure on the server, SQL CLR will try to release memory by explicitly running garbage collection and, if necessary, unloading appdomains.
There are two types of memory pressure to be aware of:
- Physical memory pressure based on the amount of available system memory
- Virtual Address Space memory pressure based on the number of available virtual addresses
Physical memory pressure is pretty clear; if your server is under load and running low on available memory, then Windows issues a LowMemoryResourceNotification which SQL Server listens for and handles as Slava explains in two posts on his blog. Understandably, SQL CLR can’t use so much memory that it causes external physical memory pressure.
Virtual Address Space memory pressure is more interesting and frequently more limiting from the SQL CLR perspective because it might cause memory pressure even when there is enough physical memory available. This might happen because as was noted above most SQL CLR memory allocations occur outside of the SQL Buffer Pool in what is sometimes called the MemToLeave section. The size of this area of memory is set by the –g flag on SQL Server start-up, but by default it is at least 256 MB. I say “at least” because the value is not explicitly defined, it is simply the amount of VAS not reserved already by the SQL Buffer Pool. Since the SQL Buffer Pool will not reserve more than the amount of physical ram, this would result in the case where a machine with less ram would actually have more VAS available in MemToLeave.
For an example of how this might affect SQL CLR, in a discussion with MVP Adam Machanic, it was noted than on his machine with 1 GB of ram, he was able to use more memory in SQL CLR than I was on my machine with 2 GB of ram. Adam’s machine would have 1 GB reserved for the buffer pool and 1 GB left for MemToLeave, whereas my machine had 1792 MB reserved for the buffer pool and therefore SQL CLR was limited to the 256 MB left in MemToLeave.
Thankfully, Virtual Address Space memory pressure is primarily only an issue for x86 because on 64-bit machines, as Ken Henderson mentions, the user-mode VAS is 8 TB, so there is always plenty of VAS space left for SQL CLR. | http://blogs.msdn.com/sqlclr/ | crawl-002 | refinedweb | 1,932 | 53.61 |
Search - "weekly rant rant"
- Yesterday I used a company service account to email over 1,000 internal employees (mostly application managers and the like) about an old OS version their servers are using which must be upgraded in a few months. It's an automated email that will repeat each month until the servers are upgraded.
That is not the part that might get me fired.
The part that might get me fired is an easter egg I left in the html content of the email itself.
In the embedded html of the message, I buried a comment block that contains a full-screen ascii-art drawing of a spooky tree and grim reaper standing beside a tombstone. The tombstone has the OS info and dates on it. Beneath the ascii-art is a bastardized quote in homage to Metallica's "For Whom The Bell Tolls", referring to the OS end-of-life.
The ascii-art is visible in both the html and the internal git repo that contains the email template.
This is a bit of a shoe-horn for this weekly group rant, as I doubt there is any chance I would really be fired over this, as I (sadly) expect that absolutely NO ONE who receives the messages will ever actually see the comments. But it's out there in the corporate network now... and will be sent over and over for the next few months...
There is a better chance someone may catch the easter egg in the git repo, but I kind of doubt that, too - so I wanted to at least share with my devRant friends that it's out there, so at least someone besides me knows. 😝
- First rant, so here goes:
def initialize
@rant = "when the contractor your boss brought in to help, whose weekly rate exceeds your monthly salary, doesn't know the difference between server side and client side technologies."
end
- Wk103 has got to be the best weekly rant; it's like everybody sharing a good quote about programming, and it's useful
- Thanks to “Weekly Rant 119” we all had to clean the place where we've been working, eating, gaming, chilling, having sex, porno watching, bug fighting, duck fucking, etc etc etc.
Salute to @dfox!
- 2008: Made my new PC go Blue Screen within 3 months. Knew I wanted to do "something with computers".
2016: CS Major. ✌(◕‿-)✌
- Dude claimed that he had good practice of DS and problem solving.
My senior gave him a tough one to solve. Couldn't. Started shouting mid-interview that we had tricked him with a wrong question. Senior sat him down, told him how it was a fair question. Dude got pissed. Stormed out of our office. Posted a review on Glassdoor calling our interview process rubbish and unnecessarily difficult.
HAAH!11
- So while I was reading the weekly rant from wk1, I found this bug.
// I am prroud of how productive I am 😐
- !rant && Announcement
The closed beta for the new DEVRANT TOOLBOX is starting for Chrome users.
The Toolbox is an UNOFFICIAL web extension for Chrome and Firefox.
Additional features:
- Compact mode: reduced image height in the feeds
- Extended page navigation controls for feeds
- Timestamps for rants
- Image preview on mouseover
- Autoreload for the recent feed (180 sec)
- Highlighting new rants after a reload (recent feed only, see screenshot)
- Highlighting own rants (inside feeds) and comments (inside rants)
- Hiding personal scores (still visible by mouseover) and share buttons inside rants
- Colored notifs (different colors for the notif types)
- Notifs with clickable usernames: a click will open the rant AND the username (in a different tab)
- 3 additional Themes: Black, Monochrome, Dark blue
(Next themes to come: solarized light and dark)
- Global history.back on rightclick (for faster navigation)
- Increased feed width (see screenshot)
- Plain background (just the feed on screen)
- Weekly rant
All features can be switched on/off.
The weekly rant is a temporary feature. It uses the devrant api.
I will remove it when that feature is added to the original devrant webfeed.
@dfox: If you dont like the use of the api or some of the features please contact me.
Chrome users can join this group to get the beta:...
I NEED SOME FEEDBACK!!!
Therefore, feedback is my term of use.
Please post it as a comment (or in the google group);
- !rant
Do you folks listen to podcasts? What are your favorite ones? The ones I listen to weekly are Geekrant, Linux Gamecast Weekly, and Linux Weekly Daily Wednesdays (done by the LGC group). I'm looking for more tech related ones, though.7
- @dfox can you make the new theme to show faster i woke up at 5:00 am and i wanna to write weekly rant but i cannot4
- Am I the only one that thinks it's annoying when people use the weekly rant in their tags when their rant has nothing to do with the weekly rant?4
- That moment when it's Sunday and you cannot wait for the new Weekly Rant theme to be announced on Monday.
- Recently was in a recruitment hackathon for leading technology company.
So, to test ppls networking, team building skill they grouped ppl into a team.
I was teamed up with noobs, and had very bad experience.
One guy in the team was arguing to use PHP for developing a web app.
Me : What PHP framework are u good at!?
He: what is framework !?
Me : like laravel etc..
He: no I meant we use plain PHP!
Me (mind voice) : go fuck yourself, I am bailing out , I Do not need the job
Me : It's ohk, we only know NodeJs, so I gave a weird smile
He was still arguing, but I gave 0 f***
This is considered as a fight!?
Yeah not the worst though
Apparently the recruitment ppl liked him a lot in my team!2
- Idea for Weekly Rant - What have you developed that you're most proud of?😁
Idea for Dev Rant in General - Ability to add and message friends. Groups or Communities to join and chat. 👫3
- !rant
I’ve worked at six companies over my 10 years of professional programming. All companies say they track internet usage of the employees. This is the first company I have worked for that sends you a weekly summary of sites that you visit.5
- So I am going from the other side for this week's rant:
So in my school we had this teacher who was also our network admin. A great deal of knowledge, always busy, and had to listen to shit people messed up with their configs and network settings, but was never grumpy and always friendly. He did a great job teaching and keeping the network up and running (for which he didn't even get paid in full)
So my greatest respects and tribute to him in this weekly rant
- Please stop tagging rants with the weekly rant tag if they have nothing to do with the weekly rant, it makes you look attention hungry.2
- I swear every one of this week's weekly rants could be remixed together with the sound of silence playing over it all...
EDIT: now that I've said this I really want to do it...5
- Finally a weekly rant where I can learn something!! 🙌
Mine:
-put linux on machine
-buy a gallon of coke
-run the ide
-start skype
-skype with my bud and code
And the thing that makes me The most productive: deadlines2
- Don't get me wrong I love the weekly rants!
...but on mondays I just genuinely don't use devRant because reading about the same topic over and over again becomes boring so quickly.1
- !Rant:
Why did you guys decide to become a developer?
I became a developer after finding out that I loved wrecking my brains on complicated puzzles to keep me from getting depressed. After a while I figured out that I'm the person that needs to be challenged to actually be able to enjoy something and start to overthink the little things.
Here are the things I wreck my brains over on a weekly basis.
- programming
- research on complicated subjects
- magic the gathering9
- TFW you stay up until midnight on a Sunday to see next week's weekly rant question, but it doesn't get changed...2
- :-/19
- I have already accepted another internship offer.
I just received a far better offer.
And somehow I made it to the final interview round for my dream company.6
- @dfox, @trogus, would you consider adding the topic for next week to be "Near miss: how I almost lost all my code/data"?
It should be fun, and people can also learn something, like what not to do, how to recover, and how to prevent this kind of stuff.
- I see a lot of people on this weekly rant saying they go to lunch, read a book, play guitar...
And then i look at me, working in an office, only being able to silently cry in front of the screen.1
- #weeklyroutine Every monday morning wake up and watch new episode of silicon valley.
Loved it. Tell me about your experience3
- Weekly Rant-
My best office prank by far was at my high school. First, I bought a USB rubber ducky and programmed it to backdoor my friends school computer with netcat and a batch file that ran in the background so that I could connect to his computer any time inconspicuously. The next day, I injected his computer with the drive when he went to turn in some papers.
You should've seen the look on his face when his computer started having conversations with the teacher.
- Dealing with people. They take forever to respond on the simplest things...
Today I fixed a big problem. It took me 2wks or more.
If only they let me have admin on all the systems needed, it'd prolly take a few days...
It's sorta ironic I wrote this then saw the weekly rant...
- So I just read the Blink to "Poke the Box" and one question it raised was.
If you had no obstacles and unlimited resources (I guess like if you could retire right now and do anything you want), what would you do?
For myself I couldn't really think of something/a goal/big project that would keep me interested...
(I vaguely remember maybe there was a Weekly Rant about this...
- !rant
Suggestion: Polls? Either user supplied or put up like the weekly rant.
I think it'd be fun/interesting with all these captive devs.
Perhaps earned every 500++ or so to avoid spam.
- Next weekly rant will be about what demotivates us most...
Yea so my rant: REWRITING MY FUCKING NN FOR THE FUCKING THIRD TIME10
- Did we already have a Weekly Group Rant for how many lines there are in the biggest file in the project you are working on?5
- Hi All !!!
Woah this is my first Post after 3 years not opening this website.
i don't know why.
but maybe between 2017-2020 my life got better so i don't think i will have any Rant again. ahahaha *kidding
but today i see email, that i got sticker from devRant, woah i think i will go to devRant again.
wow devRant more cool than before , i don't think this website still open. i just want to check it. i forgot my password too. but luckily still got an access to my email.
So i want to tell a story about this weekly Rant,
Family Support? what the he** is it.
my family only look for money.
during my first job hunt, i was always pushed to find work in Factory/Oil/Government that would give BIG money.
my first reaction to this i tell i won't do that. but overtime i think i will not talk about it again.
i just want to get Dev Job anywhere.
i don't know if this is the meaning of passion or something like that.
but from the first time , i try hard to get job only is software development.
and hey Maybe my Pray Listened by Almighty God.
so i got my first job as Fullstack developer that luckily accept me as self taught software developer. i don't have any formal education.
actually i only learn software dev from Lynda.com(not promotion) .
i learned algorithms and pseudocode. then i passed the pseudocode test.
Then because the money is good in there. my parent just accept my first job. not complaining again till now..
maybe this is what they called ikigai??
i love software development so much....
but still i always have a Rant every day about it.
someday you like it, someday you hate it.
someday yo miss it, someday you regret it.
maybe that what is called Love.Damn...
- Given how much talk there is around security, I think it'd be a grand idea to dedicate a weekly rant to cybersecurity. Could spark an interesting discussion, especially in today's heated climate. Thoughts?
E.g. Best way to increase security/privacy?9
- That feeling when your rant is on top of weekly rant list..
Feeling awesome thanks for the support buds !
- Last week, I didn't come up with something for this. Just now I experienced such a moment and remembered that there was a weekly rant on this topic.
The first bug report for my first ever project got resolved and the client commented with thanks and told to keep it up.
It feels awesome.
(tears of joy all over my eyes)
It's a moment that took me more than a year's effort to get a bug report and a positive feedback post it's fix.
I am all motivated now to work even better and wait for such awesome moments.
- Weekly Group Rant - My biggest dev ambition is to make a product so great that I have enough money to buy back GitHub from Microsoft before they destroy the platform :)4
- Prompt in which you have friends and you can sort them by favorite language or other things.
You can invite friends and they have to confirm, and then you have a friendship with the other guy/girl/other.
I am sorry that this is not for weekly rant but for devRant.😅3
- Delegate!
'nuff said!
( PS : This is my first rant and I like the Weekly Group Rant. I'm starting all the way from the beginning of history )8
- Who chooses the Weekly Rant?
Are we allowed to make suggestions?
I suggest because a lot of people are at home and we need some distractions that every just post pictures of their pets.6
- don't know if I'm allowed to post twice for the same weekly rant, but my favorite reaction was when my senile neighbor responded with "Oh that's nice! Computers are all over the place like stink on shit these days"
- @dfox devRant search seems a tad iffy - if I search for "wk19" I see results for pretty much every weekly rant, not just wk19 ones.3
- Weekly Group Rant topic idea:
Funniest/weirdest/most absurd question you've seen on Stackoverflow?
Just had to delete one of my own questions because it turns out I answered it in the description 💩.4
- FOR CLIENTS AND LINKEDIN LURKERS
The weekly rant has a bunch of ideas that you can steal to earn some tasty money.
Check them out! You can also sell those to a company.
- As a college student I couldn't be happier with this week's weekly rant, learning so much from a great community. Thank you all!3
- I am the bug finder! Notice the two black paddings in the top and in the bottom of the screen. Happens only with the Weekly Rant section. @dfox2
- The timing for this weekly rant is quite perfect, as I have actually just finished rewriting JavaRant! :D I redid a lot of work and tried to make it very easy to use, and I think it has improved a lot.
This was also a nice occasion for me to set up Jenkins on my personal server to use that as a CI server (not that it's really necessary, but it is fun).
If you like it, check it out on github:...
And feedback/help is of course always welcome!
- My biggest dev regret is not starting earlier. I started learning how to code only 5 years ago, when I was 19. God, I wish I started earlier.
- !dev && rant(-ish)
Seriously, what's up with all the different "Toss a Coin to Your Witcher" covers? Artists have created so many of them, that after listening to one or two covers, I get like 8 covers in my Weekly Discover or Release Radar on Spotify. It's getting annoying. I know it became a popular song of the new fans of Witcher, but if you plan to release an (n+1)th cover, just don't do it! It became boring and I will probably ignore your cover. Instead, you can create your own unique song!5
- Replace every other profession eventually, actually screw that remove computing profession too and just chill and let things burn3
- I thought devRant week count would have overflowed and gone back to 1.
What sort of year has more than 52 weeks? Feels like the estimates PMs give ...
- They say cloud is gonna rise up, up above the sky, but there's only the number of unemployed AWS cloudiers that's soaring high in sight
- Fatal problem in weekly rant 4: Segmentation fault
No further messages available
Core was not dumped for reasons unknown
- Bored at work. I'm feeling compelled to go back and post all the weekly rants. I'd like to have them all submitted, am I crazy? Anyone else with me?
Also, this:
[Weekly Rants should have started with 'wk0']1
- How long does a week on devRant last? I mean, shouldn't the weekly rant get changed every week, or what?4
- Are there any off-limits items for weeklys (other than politics)? Like could we get a "bad dev pickup lines" weekly or should i just start a thread?3
- Immersive VR/AR that works - kind of like what Magic Leap is but open, free and now.
Alas ..... unlimited budgets are hard to come by
- This rant is only because I have a rant, but not related to the weekly rant, so I don't fucking like a tag that I didn't choose to come up... Fuck off wk1381
- How do I raise an issue in devRant? The app keeps hanging after I finish one rant and click back. Sometimes it does not load any rants. I have had to click weekly rants and then back to daily rants to reload.2
- @dfox It would be helpful to see the weekly rant topic in-browser, or have a sticky post with the topic when searching by the weekly tag2
Introducing the Tellurium Automated Testing Framework
Introduction
The Tellurium Automated Testing Framework (Tellurium) is a framework for testing web applications, which was started in June 2007 by Jian Fang and became an open source project on Google Code in June 2008. It is released on a regular basis and is currently at version 0.7.0.
The core of the project was started over two years ago and quickly spawned multiple sub-projects including: UDL, Core, Engine, Widget extensions, Maven Archetypes, Trump, Tellurium IDE, TelluriumWorks, and reference projects.
The framework was developed from the Selenium framework, but with a different testing concept. Most existing web testing frameworks, like Selenium, primarily focus on individual UI elements. Tellurium, on the other hand, treats a whole group of UI elements as a single widget, which it calls a UI module.
Taking the Google search UI as an example, it is represented in Tellurium as follows:
ui.Container(uid: "GoogleSearchModule", clocator: [tag: "td"]) {   // locator attribute values are illustrative
    InputBox(uid: "Input", clocator: [title: "Google Search"])
    SubmitButton(uid: "Search", clocator: [name: "btnG", value: "Google Search"])
}
As shown in this example, the UI module is a set of nested UI elements with tags and attributes. The adoption of the UI module makes Tellurium expressive, and robust to changes. It is also easy to represent dynamic web content, and easy to maintain.
The framework comprises the following components:
- Trump - A Firefox plugin (properly the Tellurium UI Module Plugin) that automatically generates the UI module after a user selects the UI elements from the web page being tested.
- Tellurium IDE – A Firefox plugin that records user actions and generates Tellurium test scripts, including UI module definitions, actions, and assertions. The scripts are written in Groovy.
- TelluriumWorks – A standalone Java Swing application used to edit and run Tellurium test scripts. An IDE plugin for IntelliJ IDEA is in development.
- JavaScript Widget Extensions - Extensions for popular JavaScript frameworks such as Dojo and jQuery UI. This allows users to include the published Tellurium jar file and then treat the UI widget as a regular Tellurium object in the UI module definition.
Features
The main features are:
- The UI module clearly represents the UI being tested. In Tellurium's test code, locators are not used directly. The object uids are used to reference UI elements, which are expressive.
For example:
type "GoogleSearchModule.Input", "Tellurium test"
click "GoogleSearchModule.Search"
- UI attributes are used to describe the UI instead of fixed locators. The actual locators are generated at runtime. If the attributes are changed, new runtime locators are generated by the framework. Tellurium then self-adapts to UI changes as necessary.
The Santa algorithm in Tellurium 0.7.0 further improves the test robustness by locating the whole UI module in a single attempt. A UI module partial match mechanism is then used to adapt to attribute changes up to a certain level.
- The Tellurium UI templates and the Tellurium UID Description Language (UDL) are used to represent dynamic web content.
- The framework enforces the separation of UI modules from the test code, allowing easy refactoring.
For example, the UI and the corresponding methods are defined in a separate Groovy class. In this way, the test code is decoupled from the UI module.
In addition the framework:
- Uses abstract UI objects to encapsulate web UI elements
- Supports widgets for re-usability
- Offers a DSL for UI definition, actions, and testing
- Supports Group locating to locate a collection of UI components in one attempt
- Includes CSS selector support to improve test speed in IE
- Has Locator caching and command bundles to improve test speed
- Supports Data-driven test support
Comparing Selenium and Tellurium
The Selenium web testing framework is one of the most popular open source automated web testing frameworks. It is a ground-breaking framework offering many unique features and advantages such as: browser-based testing, Selenium Grid, and "record and replay" of user interactions with the Selenium IDE.
However, Selenium has some issues. Take the following test code for example:
setUp("", "*chrome");
selenium.open("/");
selenium.type("q", "Selenium test");
selenium.click("//input[@value='Google Search' and @type='button']");
If one were not familiar with the Google search page, could one tell what the UI of the page looked like based on that code? What does the locator q mean in this instance?
What if the XPath //input[@value='Google Search' and @type='button'] became invalid due to changes on the web page? More than likely, the test code would have to be reviewed in its entirety to locate the lines that needed to be updated.
What if there are tens or hundreds of locators in the test code? The Selenium IDE may make it easy to create test code initially, but the result is difficult to generalize and refactor.
Refactoring becomes more tedious than generating new test code from scratch, because hard-coded locators are tightly coupled with the test code. Maintaining the code is also difficult because the test code is not structured.
Selenium is a good framework when it acts as a low-level web test driving framework. However, it requires a lot of effort to create robust test code.
Testing Approach
Tellurium takes a new approach to automated web testing through the concept of the UI module. Objects encapsulate web UI elements, so manually generalizing and refactoring UI locators is not required. The UI module is simply a composite UI object consisting of nested basic UI objects.
The framework runs in two modes. The first mode is to work as a wrapper to the Selenium framework. That is to say, the Tellurium core generates the runtime locator based on the UI object's attributes in a UI module. The generated runtime locator is then passed in the Selenium call to the Selenium core with Tellurium extensions.
Tellurium is also developing its own test driving engine, the Tellurium Engine, to better and more efficiently support UI modules.
- First, the Tellurium Core converts the UI module into a JSON representation.
- The JSON representation is then passed to the Tellurium Engine for the first time when the UI module is used.
- The Tellurium Engine then uses the Santa algorithm to locate the whole UI module and put it into a cache.
- For the subsequent calls, the cached UI module is used instead of re-locating them again.
- In addition, the Tellurium core combines multiple commands into one batch called a macro command and then sends the batch to the Tellurium Engine in one call. This reduces round trip latency.
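The command-bundling idea in the last step can be sketched as follows. Note that this is an illustrative sketch only: MacroCommandQueue and its methods are hypothetical names, not the actual Tellurium API. The point is simply that commands issued during one step are queued and flushed to the engine in a single round trip.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of command bundling: instead of one engine round trip
 * per command, commands are queued and flushed as one "macro" call.
 * Class and method names are hypothetical, not the real Tellurium API.
 */
public class MacroCommandQueue {
    private final List<String> pending = new ArrayList<>();

    /** Queue one command (e.g. "type", "click") against a UI object uid. */
    public void queue(String command, String uid, String... args) {
        StringBuilder sb = new StringBuilder(command).append(' ').append(uid);
        for (String a : args) sb.append(' ').append(a);
        pending.add(sb.toString());
    }

    /** One engine round trip for the whole bundle. */
    public String flush() {
        String bundle = String.join("; ", pending);
        pending.clear();
        return bundle;   // in the real engine this would be sent over the wire
    }
}
```

Reducing the number of round trips this way is what lets the engine cut latency when a test performs several actions against the same cached UI module.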
The following example, which uses the issue search UI on the project web site, illustrates the idea.
We start by defining the UI module for the issue search UI:
ui.Form(uid: "issueSearch", clocator: [action: "list", method: "GET"]) {
    Selector(uid: "issueType", clocator: [name: "can", id: "can", direct: "true"])
    TextBox(uid: "searchLabel", clocator: [tag: "span", text: "for"])
    InputBox(uid: "searchBox", clocator: [type: "text", name: "q", id: "q"])
    SubmitButton(uid: "searchButton", clocator: [value: "Search", direct: "true"])
}
The following test method is used:
public void searchIssue(String type, String issue) {
    select "issueSearch.issueType", type
    keyType "issueSearch.searchBox", issue
    click "issueSearch.searchButton"
    waitForPageToLoad 30000
}
If, for some reason, the Selector is changed to an input box, then we just update the UI module accordingly:
ui.Form(uid: "issueSearch", clocator: [action: "list", method: "GET"]) {
    InputBox(uid: "issueType", clocator: [name: "can", direct: "true"])
    TextBox(uid: "searchLabel", clocator: [tag: "span", text: "for"])
    InputBox(uid: "searchBox", clocator: [type: "text", name: "q", id: "q"])
    SubmitButton(uid: "searchButton", clocator: [value: "Search", direct: "true"])
}
then change the command:
select "issueSearch.issueType", type
to:
type "issueSearch.issueType", type
and the rest remains the same.
When there is dynamic web content, taking the Google Books website as an example, the UI includes a list of book categories with a list of books inside each category. The UI module for this UI is surprisingly simple:
ui.Container(uid: "GoogleBooksList", clocator: [tag: "table", id: "hp_table"]) {
List(uid: "subcategory", clocator: [tag: "td", class: "sidebar"], separator:
"div") {
Container(uid: "{all}") {
TextBox(uid: "title", clocator: [tag: "div", class: "sub_cat_title"])
List(uid: "links", separator: "p") {
UrlLink(uid: "{all}", clocator: [:])
}
}
}}
The Tellurium UID Description Language (UDL) provides more flexibility for defining dynamic web content. Let us look at a more complex example:
ui.StandardTable(uid: "GT", clocator: [id: "xyz"], ht: "tbody") {
    TextBox(uid: "{header: first} as One", clocator: [tag: "th", text: "one"], self: true)
    TextBox(uid: "{header: 2} as Two", clocator: [tag: "th", text: "two"], self: true)
    TextBox(uid: "{header: last} as Three", clocator: [tag: "th", text: "three"], self: true)
    TextBox(uid: "{row: 1, column -> One} as A", clocator: [tag: "div", class: "abc"])
    Container(uid: "{row: 1, column -> Two} as B") {
        InputBox(uid: "Input", clocator: [tag: "input", class: "123"])
        Container(uid: "Some", clocator: [tag: "div", class: "someclass"]) {
            Span(uid: "Span", clocator: [tag: "span", class: "x"])
            UrlLink(uid: "Link", clocator: [:])
        }
    }
    TextBox(uid: "{row: 1, column -> Three} as Hello", clocator: [tag: "td"], self: true)
}
In the above example, we use the meta data "first", a number, and "last" to indicate the header positions. The meta data "{row: 1, column -> One} as A" means the UI element, a TextBox in our case, is in row 1 and in the same column as the header "One". The test code is very clean, for example:
getText "GT.A"
keyType "GT.B.Input", input
click "GT.B.Some.Link"
waitForPageToLoad 30000
Future Plans
Tellurium is a young and innovative framework with many novel ideas from both the development team and the user community. There are many areas Tellurium would like to develop:
- Tellurium 0.7.0 has implemented a new test driving engine using jQuery. The main features of the Engine are: UI module group locating, UI module caching, Command bundle processing, Selenium APIs re-implemented in jQuery, and new Tellurium APIs. Tellurium will continue to develop the new Engine to its maturity.
- Tellurium UI Module Plugin 0.8.0 RC1 has just been released and includes many new features. A Tellurium IDE release candidate is also out to record and generate test scripts. These are key to the success of Tellurium and will continue to be improved upon. Besides Trump and the Tellurium IDE, Tellurium plans to improve TelluriumWorks so that it can edit, syntax-check, and run Tellurium DSL test scripts.
- Tellurium as a cloud testing tool is another very important future development. The project team is planning to rethink the architecture to make it more straightforward to execute tests in parallel. It is very challenging to exploit peer-to-peer techniques to make the test servers capable of self-organizing and self-coordinating in the cloud environment with the least management effort.
Other areas of the framework to be developed include:
- The creation of reusable Dojo, ExtJS, and jQuery UI Tellurium widgets. This will allow other people to reuse the widgets simply by including the jar files in their projects.
- Behavior Driven Testing support.
- Testing flow support.
- Web security testing.
- Support for other languages such as Ruby.
About The Author
Jian Fang graduated from Georgia Institute of Technology with a Ph.D. degree in Electrical and Computer Engineering. He works as a senior software engineer in a company in the IT industry and mainly focuses on the design and implementation of enterprise applications. He is the creator of the Tellurium Automated Testing Framework.
tellurium rocks!!!!!
by
Haroon Rasheed
Some Resources for Tellurium
by
John Fang
code.google.com/p/aost/
Tellurium User Group
groups.google.com/group/tellurium-users
Tellurium on Twitter
twitter.com/TelluriumSource
Tellurium IDE
code.google.com/p/aost/wiki/TelluriumIde080RC1
Tellurium
by
Jade Lindquist
Jade
Verbose?
by
Vikas Hazrati
Re: Verbose?
by
John Fang
selenium.click("//input[@value='Google Search' and @type='button']");
In Tellurium, the command is
click "Google.Search"
Which one is verbose?
Responsive community
by
Jonathan Share
Re: Verbose?
by
Behrang Saeedzadeh
Re: Tellurium
by
Behrang Saeedzadeh
we've been able to overcome issues that we had with Selenium tests on web pages with Ajax
Could you please elaborate? In particular, what type of Ajax testing issues are you referring to?
Re: Verbose?
by
John Fang
public class GoogleSearchModule extends DslContext {
    public void defineUi() {
        ui.Container(uid: "GoogleSearchModule", clocator: [tag: "td"]) {   // locator attribute values are illustrative
            InputBox(uid: "Input", clocator: [title: "Google Search"])
            SubmitButton(uid: "Search", clocator: [name: "btnG", value: "Google Search"])
        }
    }
    public void doGoogleSearch(String input) {
        type "GoogleSearchModule.Input", input
        click "GoogleSearchModule.Search"
        waitForPageToLoad 30000
    }
}
Compare full working snippets. Your example is misleading. You haven't shown the code necessary to create the "Google" module in Tellurium.
Re: Verbose?
by
Behrang Saeedzadeh
Re: Verbose?
by
John Fang
the UIs under testing? Once you define it, you always use UIDs to
reference UI elements, which is not verbose.
And isn't that verb | http://www.infoq.com/articles/tellurium_intro | CC-MAIN-2014-41 | refinedweb | 2,069 | 55.44 |
Talk:Waves/Precise Intersection
Wave Surfing
So far I know a few robots use Precise Intersection in their wave surfing. Unfortunately their code is either too messy (Wintermute) or too granular (RougeDC) that I don't know which class to look. So I asked here. (Diamond is too new to look into for this)
Normally you surf waves 'till the wave passed centre of the robot. But with precise wave intersection, in order to gain pixel-perfect surfing, you need to surf the wave until they passed the robot. So if anybody does this or the other way? --Nat Pavasant 11:20, 20 October 2009 (UTC)
I was planning to add all 3 to Wintermute, ie, branch when the wave first strikes, branch when the wave passes midway, and branch when the wave exits, but after adding in a stop branch for every tick on the second wave it was too slow to do that without skipping turns. I think it now branches at the mid-point only...--Skilgannon 13:21, 20 October 2009 (UTC)
My experience, precise intersect or not, is that I've seen a measurable decrease in performance any time I try surfing waves any longer than I do now. I surf until they are less than one bullet velocity away, meaning they will pass my center before I can move again. I have some thoughts on this. If you weight by distance or by time to impact, the weight of the wave after it passes your center will be very high, while the chance of it hitting you if it hasn't already is very low. Giving it less weight somehow might help. Also, be careful not to give it negative weight, since your time to impact or distance calculation may give a value less than zero. --Voidious 13:32, 20 October 2009 (UTC)
If you guys surf the wave only half-way, then how are you calculating precise botwidth in the danger calculation? --Nat Pavasant 14:33, 20 October 2009 (UTC)
- In Diamond 1.47's bot width calculation, I start with the first tick the bullet will cross my front bumper and go until the tick where it will cross my center. I tried until it crossed the rear bumper and it performed worse. Again, it's probably because it's so rare you will be hit there that weighting all the ticks evenly is just inaccurate, probabilistically. The next iteration of my precise intersect experiments doesn't have that issue, and I will again try considering all possible intersections. --Voidious 14:52, 20 October 2009 (UTC)
- In Wintermute I take any additional danger that may happen after branching into account with my second-wave surfing - the method returns a range of hits over both waves, then adds them all together. --Skilgannon 08:02, 21 October 2009 (UTC)
- And what about when you surf only single wave? --Nat Pavasant 08:33, 21 October 2009 (UTC)
- Then it just continues through until the wave is completely past =) There's no reason to reverse unless there's another wave coming, in which case I would be branching and my statement above would apply =) --Skilgannon 09:54, 21 October 2009 (UTC)
- In DrussGT I take a bunch of points and predict what GF would be hit if I fed it to my GoTo method. If you're only dealing with one wave then it's quite easy, simply keep predicting until the wave has passed your future bot. I found it is best to start surfing the next wave as the old one passes the center of your bot. --Skilgannon 18:44, 21 October 2009 (UTC)
- Slightly offtopic: Nat, you should take a look at the conversation I had with Voidious about how to surf multiple waves with goto surfing. You can cut down on the combinatorial explosion by not recursing if the danger for the first wave is already higher than the current minimum total. « AaronR « Talk « 21:12, 22 October 2009 (UTC)
In YersiniaPestis I surf the wave until it passes completely; if I don't, it gets hit more by simple targeters. I think it is because YP doesn't go that far from the danger spots, so when it starts surfing the second wave it runs towards the bullet near it. But as far as having a precise window calculated, you can keep track of a wave until it passes you even if you don't consider that wave for surfing anymore. --zyx 20:51, 20 October 2009 (UTC)
I looked it up and I use Precise Intersection since GresSuffurd 0.1.1 (october 2006), but only in simplified form and only in my gun (e.g. almost Precise Intersection). Reading the reactions above, I already know where my focus for the first few weeks will lie: introducing PI in my surfing. --GrubbmGait 00:19, 21 October 2009 (UTC)
Psudocode
I really like Precise Intersection. Here is some Java-like pseudocode to help explain this to those who see things better in code.
- IntersectRectCircle(Rect rect, Point center, double radius) is a function that returns an array of the points where the circle defined by center and radius intersects the Rect rect. If there is no intersection, it returns an array of length 0.
- GetRectCorners(Rect rect) is a function that returns an array of 4 points, the corners of Rect rect.
- Rect and Point are generic classes that define exactly what their names say. Point is equivalent to a Point2D.Double and a Rect is equivalent to a Rectangle2D.Double.
class Wave {
    Point center;
    public long fireTime;
    public double bearing;
    public double speed;

    /**
     * You will need to divide these min and max angles by the maximum escape
     * angle to get the guess factor; however, since they are all part of this
     * wave, you do not need to convert them immediately to guess factors.
     */
    public double min; /* Set these to a good initial value, */
    public double max; /* like positive and negative infinity */

    boolean intersectStarted = false;

    /**
     * Returns true if the intersection is complete.
     */
    public boolean intersect(Point target, long time) {
        Rect boundingbox = new Rect(target.x - 18, target.y - 18, 36, 36);
        Point[] current = IntersectRectCircle(boundingbox, center, getDistance(time));
        Point[] last = IntersectRectCircle(boundingbox, center, getDistance(time - 1));
        if (current.length > 0 || last.length > 0) {
            Point[] corners = GetRectCorners(boundingbox);
            /* Check the bearings on all the current points */
            for (Point p : current) {
                double angle = Utils.normalRelativeAngle(bearingFromTo(center, p) - bearing);
                if (angle < min) min = angle;
                if (angle > max) max = angle;
            }
            /* Check the bearings on all the last points */
            for (Point p : last) {
                double angle = Utils.normalRelativeAngle(bearingFromTo(center, p) - bearing);
                if (angle < min) min = angle;
                if (angle > max) max = angle;
            }
            /* Check the bearings on the bounding box's corners */
            for (Point p : corners) {
                /* Only corners between the last and current wave radius count */
                if (center.distance(p) <= getDistance(time)
                        && center.distance(p) >= getDistance(time - 1)) {
                    double angle = Utils.normalRelativeAngle(bearingFromTo(center, p) - bearing);
                    if (angle < min) min = angle;
                    if (angle > max) max = angle;
                }
            }
            intersectStarted = true;
        } else if (intersectStarted) {
            return true;
        }
        return false;
    }

    public double getDistance(long time) {
        return (time - fireTime) * speed;
    }
}
— Chase-san 02:20, 17 July 2010 (UTC)
Smashing it down to bins
Although the concept is very clear to me, calculating the precise intersection between a bot and a wave is one step too far for my mind. Also, why bother calculating it to six decimal places and then forcing it with a sledgehammer into those coarse bins? With some straight-forward programming and some brute CPU power you can get the same results. I must give a warning though: this looks a lot like virtual bullets, firing one bullet per bin.
When the wave is less than 2 ticks away from the opponent, the calculation should start. Starting with the bin where the center of the opponent is, the bullet is checked to see if it lies in the robot bounding box. If it does, this bin is noted as 'left bin' and 'right bin', and then the bullet (bin) on the left is checked. If this is still in the bounding box, then this bin is noted as left and the next left bin is checked, and so on. The same for the bins on the right side. On the next tick, the same goes again, starting from the 'left bin' to the left and the 'right bin' to the right. This is repeated every tick until the wave has passed the opponent. Now you have the bins that would have noted a 'hit' if you indeed fired at one of them. In a slightly different form, this has been present in the gun of GresSuffurd since October 2006. But this only addressed the first point mentioned on the Precise Intersection page, so it is better called 'not so precise intersection'.
To have a precise intersection, the movement of the bullet and the movement of the bot must also be taken into account. The movement of the bullet can be calculated quite simply: just repeat the above with the distance increased by 'bulletvelocity' (point of T+1). The movement of the bot seems more difficult to calculate, but think about this situation: the bullet has passed a corner of the bot, and the bot moves towards the bullet, intersecting the calculated angle behind the bullet. It just means you have to look backwards to check if the bot was hit in the back. So repeating the same steps as above with the distance decreased by 'bulletvelocity' (point of T-1) would do the trick. This addresses the second point mentioned on the master page.
How can the third point be taken care of? Brute-force calculation! Just repeat the above with several points between the 'T-1' point and the 'T+1' point and we have an 'almost precise intersection' ready. Better would be to check whether the 'bulletline' intersects the robot bounding box, but that function I could not find.
The routine implementing this will be part of a next version of GresSuffurd. If I made wrong assumptions, please let me know, I'm still not too old to learn. --GrubbmGait 23:02, 28 January 2011 (UTC)
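A note on the 'bulletline vs. bounding box' test mentioned above: in plain code it is a segment-versus-axis-aligned-rectangle check. The sketch below is standalone Python with made-up names — it is not code from GresSuffurd or any other bot discussed here — using Liang-Barsky parametric clipping. (In Java, java.awt.geom.Line2D.intersects(Rectangle2D) performs the same job.)

```python
def segment_intersects_rect(p1, p2, rect):
    """Return True if segment p1-p2 touches the axis-aligned rect.

    rect is (xmin, ymin, xmax, ymax); p1 and p2 are (x, y) tuples.
    Uses the Liang-Barsky parametric clipping test.
    """
    xmin, ymin, xmax, ymax = rect
    x1, y1 = p1
    x2, y2 = p2
    dx, dy = x2 - x1, y2 - y1
    t0, t1 = 0.0, 1.0
    # Clip the parametric segment against each of the four rect edges.
    for p, q in ((-dx, x1 - xmin), (dx, xmax - x1),
                 (-dy, y1 - ymin), (dy, ymax - y1)):
        if p == 0:
            if q < 0:           # segment parallel to and outside this edge
                return False
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t) # entering the half-plane
            else:
                t1 = min(t1, t) # leaving the half-plane
            if t0 > t1:
                return False    # clipped away entirely
    return True

# A bullet line crossing a 36x36 bot box centered at (5, 5):
print(segment_intersects_rect((0, 0), (10, 10), (4, 4, 6, 6)))  # → True
```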
Neat! I think this approach is a sensible one for bots that use bins (all my uses of precise intersection have never been with any form of bins in the same robot). One note though, is that when checking if the bullet is in the bounding box, be sure to treat the bullet as a line, not a point. The graphics of robocode are rather misleading, since bullets are treated as continuous line segments for purposes of collisions. You may be accounting for this, but just thought I'd mention it since it's not clear whether you are. --Rednaxela 00:37, 29 January 2011 (UTC)
Hey, I also think this is pretty cool! I do have some corrections:
- To really be precise, you should start 3 or even 4 ticks ahead in some situations. A min speed bullet (11) could be a distance of 35 from the enemy bot's center, making it > 3 ticks away. But before the enemy bot moves again, this bullet will move and check for collisions, so in effect the wave's closest point (with respect to collisions) is already 24 from the bot's center. The distance from a bot's center to its corner is ~25, so it could intersect.
- If a bot moves into the bullet line, it is not a collision. Each tick, the bullet moves forward, forming a line segment, and it is checked if this line segment intersects the bot bounding box, then the enemy bot moves. So you don't really need to check the T-1 thing. Just check T, T+1, and some points in between, checking if any of them lie within the current enemy bounding box. (And do that each tick, like you said.)
- This page is kind of confusing - in my opinion, it should talk about "this tick" and "next tick", never "last tick"...
I also have only done this in a bot without bins (Diamond), and only in the movement. I was surprised it helped as much as it did... Good luck!
--Voidious 01:36, 29 January 2011 (UTC)
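As a quick sanity check of the arithmetic in the first correction above (a sketch, not Robocode code — the bullet speed formula and 36x36 bot size are taken from the Robocode rules):

```python
import math

bullet_speed = 20 - 3 * 3.0        # slowest bullet in Robocode: 11.0 px/tick
distance_to_center = 35.0          # > 3 ticks away at face value

# The bullet advances (and checks collisions) before the enemy moves again,
# so the wave's closest point for collision purposes is one tick ahead:
effective_distance = distance_to_center - bullet_speed   # 24.0

# Center-to-corner distance of a 36x36 bot:
corner_distance = math.hypot(18, 18)                     # ~25.46

print(effective_distance < corner_distance)              # → True: it can intersect
```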
Oh yeah, and this might save you some brute forcing. :-) java.awt.geom.Line2D.intersects(...) --Voidious 01:56, 29 January 2011 (UTC)
- Sorry for triple post - clearly you've got my brain churning over this. :-) Note that because Robocode's coordinates start from bottom left, you should use x/y of bottom left corner, not top left like the API says... Same with how we use Rectangle2D's. Or you could just brute force... :-P --Voidious 02:05, 29 January 2011 (UTC)
Thanx for the comments, now the implementation can be quite small and fast.
@Rednaxela: I know, but I couldn't find Rectangle2D.intersects( line2D). The other way round is just so obvious I hadn't thought of it . . . duh
@Voidious: Regarding point 1, you're absolutely right. On the second point you are probably right, it seems logical to me, but I found the page also a bit unclear regarding this. Third point, well, you're right (again)
Now let's see how many points it will gain me in the rumble . . . --GrubbmGait 09:51, 29 January 2011 (UTC)
Yes, thanks for your comments :) They've nicely followed my own recent work over the last couple of days :) I also used the Line2D.intersec.... For simplicity I currently just check all the bins, instead of the whole center, left, right thing.. Being that I won't be firing waves and will only be surfing, my "sim" starts with onHitByBulletEvent, logs the bulletLine.intersect/contains in a buffer (this and 2 more ticks), and adds those binsHIT to my stats, all in the same loop.. The code is surprisingly simple and short, and works great -Jlm0924 22:55, 29 January 2011 (UTC)
Ok, just assembled some Java (pseudo)code, so it can be used. I think it is self-explanatory; most of the variables you just need in a Wave. The comments above have been processed. When wavepasscheck returns true, the minimum and maximum bin should be processed and the wave can be removed. --GrubbmGait 11:32, 4 April 2011 (UTC)
class Wave {
    public long fireTime;
    public double bulletVelocity;
    public double HOTAngle;
    public double maxEscAngle;
    public int minimumBin;
    public int maximumBin;
    public Point2D.Double fireLocation = new Point2D.Double();

    public Wave( double bulletpower, Point2D.Double myPos, Point2D.Double enemyPos) {
        this.fireTime = robot.getTime();
        this.bulletVelocity = Rules.getBulletSpeed( bulletpower);
        this.fireLocation.setLocation( myPos);
        this.HOTAngle = doGetAngle( myPos, enemyPos);
        this.maxEscAngle = Math.asin( 8.0 / bulletVelocity);
        this.minimumBin = BINS;
        this.maximumBin = 0;
    }

    public boolean wavepasscheck( Point2D.Double enemyPos, long currTime) {
        double waveDistance = bulletVelocity * ( currTime - fireTime);
        if (waveDistance > fireLocation.distance( enemyPos) + 40) {
            return true;    // wave has passed, process min/max in stats
        }
        if (waveDistance > fireLocation.distance( enemyPos) - 3 * bulletVelocity) {
            Rectangle2D.Double enemySpace = new Rectangle2D.Double( enemyPos.x - 18, enemyPos.y - 18, 36, 36);
            Line2D.Double bullet = new Line2D.Double();
            for (int tbin = 0; tbin < BINS; tbin++) {
                double tguessfactor = (double)(tbin - MIDBIN) / MIDBIN;
                double tangleOffset = enemyDirection * tguessfactor * maxEscAngle;
                double thead = Utils.normalRelativeAngle( HOTAngle + tangleOffset);
                bullet.setLine( doProjectPos( fireLocation, thead, waveDistance),
                                doProjectPos( fireLocation, thead, waveDistance + bulletVelocity));
                if (bullet.intersects( enemySpace)) {
                    if (tbin < minimumBin) minimumBin = tbin;
                    if (tbin > maximumBin) maximumBin = tbin;
                }
            }
        }
        return false;
    }
}
Targeting
I was wondering how much, if any, improvement people have gained by using this in their targeting. It seems to me that this would be unlikely to cause anything more than a 0.05 APS increase, and because of the extra processing needed for precise intersection combined with 2 to 4 virtual waves hitting every tick, you could, for example, use a larger k in a kNN algorithm, which would help your gun more than the added precision. Any thoughts?--AW 15:02, 15 August 2011 (UTC)
I agree it's not worth much in the gun. I added it from Diamond 1.5.29 to 1.5.30, and here's the comparison: [1]. Before that, I made waves break (bullet velocity / 2) before they passed the center of the enemy bot, so on average, the center of the bullet line segment is at the enemy's center. You have so much data in targeting that it should average out pretty quickly. That said, I like precision and err on the side of what feels right when there's no data to prove which way to go, so I left it in. --Voidious 16:41, 15 August 2011 (UTC)
I've also found it's not worth much in targeting. Here's how I see it... If you have perfect knowledge of what the enemy will do, it doesn't matter if you use precise intersection because your bullet will be well within the intersection range anyway. For surfing however, even if you have perfect knowledge of what they do, precise intersection can allow dodging close calls that may otherwise have hit. Essentially, targeting is more forgiving of small inaccuracies in angles.
Regarding the extra processing needed, how much overhead it is depends highly on the specific targeting method you're using it in. If you're applying it to guessfactor targeting, you only need to do the precise intersection calculation 3 or 4 times per tick (once for each wave intersection occurring on that tick). In contrast, if you're applying it to a play-it-forward targeting system (i.e. PM), then you'd have to compute 3 or 4 precise intersection calculations for each possible future you're projecting (for some kNN-PIF implementations this means several hundred precise intersection calculations). The overhead is pretty tiny really for guessfactor targeting, but can be huge with some kNN-PIF targeting. --Rednaxela 22:55, 15 August 2011 (UTC)
When GuessFactor normalizes angles based on MEA, doesn't it distort the edges of the bot?
It looks like bots at distances farther away than when the wave was collected are calculated as being wider than they really are. Similar distortion happens when bullet power changes.
Am I missing something?
In the gun, I only ever collect/project the center angle. I only use precise bot width in my gun to find the exact center angle in the range. I use an imprecise bot width as the bandwidth in my kernel density when aiming.
In movement, the precisely predicted bot width is also the bandwidth in my kernel density. I still ignore it whenever collecting angles.
Using only the center of the range makes sense.
I wonder how much improvement is due to precise calculation, and how much is due to distortion (imprecise/unpredictable calculation). | https://robowiki.net/wiki/Talk:Waves/Precise_Intersection | CC-MAIN-2021-10 | refinedweb | 3,103 | 61.67 |
I want to define an infinite list where each element is a function of all the previous elements.
So, the (n+1)th element of the list would be f [x1, x2, ..., xn].
This seems simple, but I cannot seem to get my head around how to do it. Can anyone help?
gen f = xs where xs = map f $ inits xs
Or
gen f = fix $ map f . inits
As an alternative to the other answer, hopefully a little more readable but less laconic:
-- "heads f" will generate all of the prefixes of the inflist
heads f = map ( (flip take) (inflist f) ) [1..]

-- inflist will generate the infinite list
inflist f = ( f [] ) : map f (heads f)

-- test function
sum1 s = 1 + sum s

-- test run
>> take 5 (inflist sum1)
[1,2,4,8,16]
Upd: As pointed out above, the heads function can be replaced with inits, which I wasn't aware existed.
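For readers who don't speak Haskell, the same construction — each new element computed from the list of everything produced so far — can be sketched as a Python generator (hypothetical helper name, not from any of the answers above):

```python
from itertools import islice

def gen(f):
    """Yield an infinite stream where each element is f(all previous elements)."""
    prefix = []
    while True:
        x = f(prefix)     # apply f to everything produced so far
        prefix.append(x)
        yield x

# With f = 1 + sum of the prefix, this reproduces the Haskell test run:
print(list(islice(gen(lambda xs: 1 + sum(xs)), 5)))   # → [1, 2, 4, 8, 16]
```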
You can use unfoldr:
import Data.List

gen :: ([a] -> a) -> a -> [a]
gen f init = unfoldr (\l -> let n = f (reverse l) in (Just (n, n:l))) [init]
note this has to reverse the input list each time. You could use Data.Sequence instead:
import Data.Sequence

genSeq :: (Seq a -> a) -> a -> Seq a
genSeq f init = unfoldr (\s -> let n = f s in (Just (n, s |> n))) (singleton init)
public class Event extends Object
Note: For a given event, only the fields which are appropriate will be filled in. The contents of the fields which are not used by the event are unspecified.
See Also: Listener, TypedEvent, SWT, Example: ControlExample, Listeners, Sample code and further information
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

public int keyLocation
depending on the event, the location of the key that generated it. For example, a key down event with a key code equal to SWT.SHIFT can be generated by either the left or the right shift key on the keyboard. The location field can only be used to determine the location of the key code or character in the current event. It does not include information about the location of modifiers in the state mask.
See Also: SWT.LEFT, SWT.RIGHT, SWT.KEYPAD
public int stateMask
See Also: SWT.MODIFIER_MASK, SWT.BUTTON_MASK
public int start
public int end
public String text
public int[] segments
public char[] segmentsChars
public boolean doit
public Object data
public Touch[] touches
public int xDirection
public int yDirection
public double magnification
public double rotation
public Event()
public Rectangle getBounds()
public void setBounds(Rectangle rect)
Parameters: rect - the new rectangle
public String toString()
Overrides: toString in class Object
Copyright (c) 2000, 2016 Eclipse Contributors and others. All rights reserved.
Guidelines for using Eclipse APIs.
Clojure has a rich set of data structures. They share a set of properties:
They are immutable
They are read-able
They support proper value equality semantics in their implementation of equals
They provide good hash values
In addition, the collections:
Are manipulated via interfaces.
Support sequencing
Support persistent manipulation.
Support metadata
Implement java.lang.Iterable
Implement the non-optional (read-only) portion of java.util.Collection
nil is a possible value of any data type in Clojure. It has the same value as Java null.
Computation: + - * /
user=> (map (fn [x] (.toUpperCase x)) (.split "Dasher Dancer Prancer" " ")) ("DASHER" "DANCER" "PRANCER")
Clojure characters are Java Characters.
Keywords are symbolic identifiers that evaluate to themselves. They provide very fast equality tests.
Symbols are identifiers that are normally used to refer to something else. They can be used in program forms to refer to function parameters, let bindings, class names and global vars. They have names and optional namespaces, both of which are strings. Symbols can have metadata (see with-meta).
Symbols, just like Keywords, implement IFn for invoke() of one argument (a map) with an optional second argument (a default value). For example
('mysym my-hash-map :none) means the same as (get my-hash-map 'mysym :none). See get.
(defn hash-ordered [collection] (-> (reduce (fn [acc e] (unchecked-add-int (unchecked-multiply-int 31 acc) (hash e))) 1 collection) (mix-collection-hash (count collection))))
Unordered collections (maps, sets) must use the following algorithm for calculating hasheq. A map entry is treated as an ordered collection of key and value. Note that unchecked-add-int is used to get integer overflow calculations.
(defn hash-unordered [collection] (-> (reduce unchecked-add-int 0 (map hash collection)) (mix-collection-hash (count collection))))
The mix-collection-hash algorithm is an implementation detail subject to change.
Create a new map: hash-map sorted-map sorted-map-by
'change' a map: assoc dissoc select-keys merge merge-with zipmap
Examine a map: get contains? find keys vals map?
Examine a map entry: key val
Often many map instances have the same base set of keys, for instance when maps are used as structs or objects would be in other languages. StructMaps support this use case by efficiently sharing the key information, while also providing optional enhanced-performance accessors to those keys. StructMaps are in all ways maps, supporting the same set of functions, are interoperable with all other maps, and are persistently extensible (i.e. struct maps are not limited to their base keys). The only restriction is that you cannot dissociate a struct map from one of its base keys. A struct map will retain its base keys in order.
StructMaps are created by first creating a structure basis object using create-struct or defstruct, then creating instances with struct-map or struct.
StructMap setup: create-struct defstruct accessor
Create individual struct: struct-map struct.
Sets are collections of unique values.
There is literal support for hash-sets:
#{:a :b :c :d} -> #{:d :a :b :c}
You can create sets with the hash-set and sorted-set functions:
(hash-set :a :b :c :d) -> #{:d :a :b :c} (sorted-set :a :b :c :d) -> #{:a :b :c :d}
(set [1 2 3 2 1 2 3]) -> #{1 2 3}
Sets are collections:
(def s #{:a :b :c :d}) (conj s :e) -> #{:d :a :b :e :c} (count s) -> 4 (seq s) -> (:d :a :b :c) (= (conj s :e) #{:a :b :c :d :e}) -> true
Sets support 'removal' with disj, as well as contains? and get, the latter returning the object that is held in the set which compares equal to the key, if found:
(disj s :d) -> #{:a :b :c} (contains? s :b) -> true (get s :a) -> :a
Sets are functions of their members, using get:
(s :b) -> :b (s :k) -> nil
Clojure provides basic set operations like union / difference / intersection, as well as some pseudo-relational algebra support for 'relations', which are simply sets of maps - select / index / rename / join. | https://clojure.org/reference/data_structures | CC-MAIN-2017-39 | refinedweb | 646 | 60.85 |
The Wire.h library and the I2C protocol were already discussed in previous articles (here and here) and therefore will not be addressed in this tutorial.
To start our sketch, add the abovementioned libraries to our code using the #include keyword. We will also initialize two objects, lcd and rtc, to be used for communicating with the LCD and the DS3231 respectively.
#include <Wire.h>              // for I2C communication
#include <LiquidCrystal_I2C.h> // for LCD
#include <RTClib.h>            // for RTC

LiquidCrystal_I2C lcd(0x27, 16, 2); // create LCD with I2C address 0x27, 16 characters per line, 2 lines
RTC_DS3231 rtc;                     // create rtc for the DS3231 RTC module, address is fixed at 0x68
Custom Functions: updateRTC() and updateLCD()
To make our code easier to manage, we will create two custom functions.
The first function we will code is updateRTC(). This function will be responsible for asking the user for the date and time and updating the RTC's internal clock with the user's input data. After getting the user input, we can update the RTC's internal clock by using the rtc.adjust() function from the RTClib.h library. The rtc.adjust() function receives a parameter of type DateTime which it uses to update the rtc's internal time and date.
/* function to update RTC time using user input */
void updateRTC() {

  lcd.clear();  // clear LCD display
  lcd.setCursor(0, 0);
  lcd.print("Edit Mode...");

  // ask user to enter new date and time
  const char txt[6][15] = { "year [4-digit]", "month [1~12]", "day [1~31]",
                            "hours [0~23]", "minutes [0~59]", "seconds [0~59]"};
  String str = "";
  long newDate[6];

  while (Serial.available()) {
    Serial.read();  // clear serial buffer
  }

  for (int i = 0; i < 6; i++) {
    Serial.print("Enter ");
    Serial.print(txt[i]);
    Serial.print(": ");
    while (!Serial.available()) {
      ; // wait for user input
    }
    str = Serial.readString();  // read user input
    newDate[i] = str.toInt();   // convert user input to number and save to array
    Serial.println(newDate[i]); // show user input
  }

  // update RTC
  rtc.adjust(DateTime(newDate[0], newDate[1], newDate[2], newDate[3], newDate[4], newDate[5]));
  Serial.println("RTC Updated!");
}
The second custom function we will create is updateLCD(). This function will update or refresh the text displayed on the LCD. Inside this function, we will first get the time and date from the RTC. This is done by calling the rtc.now() function, which is included in the RTClib.h library.
The rtc.now() function in our code returns a DateTime data type that contains the current date and time of the rtc. We then assign the data to different variables for additional formatting on the LCD. After assigning the variables, we use the lcd.setCursor() and lcd.print() functions from the LiquidCrystal_I2C.h library to position the cursor and to display the text on the LCD, respectively. The code below shows how these functions come together to get the rtc time, format the text and display it on the LCD.
/* function to update LCD text */
void updateLCD() {

  /*
     create array to convert digit days to words:
     0 = Sunday    | 4 = Thursday
     1 = Monday    | 5 = Friday
     2 = Tuesday   | 6 = Saturday
     3 = Wednesday |
  */
  const char dayInWords[7][4] = {"SUN", "MON", "TUE", "WED", "THU", "FRI", "SAT"};

  /*
     create array to convert digit months to words:
     0 = [no use]  |
     1 = January   | 7 = July
     2 = February  | 8 = August
     3 = March     | 9 = September
     4 = April     | 10 = October
     5 = May       | 11 = November
     6 = June      | 12 = December
  */
  const char monthInWords[13][4] = {" ", "JAN", "FEB", "MAR", "APR", "MAY", "JUN",
                                    "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"};

  // get time and date from RTC and save in variables
  DateTime rtcTime = rtc.now();

  int ss = rtcTime.second();
  int mm = rtcTime.minute();
  int hh = rtcTime.twelveHour();
  int DD = rtcTime.dayOfTheWeek();
  int dd = rtcTime.day();
  int MM = rtcTime.month();
  int yyyy = rtcTime.year();

  // move LCD cursor to upper-left position
  lcd.setCursor(0, 0);

  // print date in dd-MMM-yyyy format and day of week
  if (dd < 10) lcd.print("0");  // add preceding '0' if number is less than 10
  lcd.print(dd);
  lcd.print("-");
  lcd.print(monthInWords[MM]);
  lcd.print("-");
  lcd.print(yyyy);
  lcd.print("  ");
  lcd.print(dayInWords[DD]);

  // move LCD cursor to lower-left position
  lcd.setCursor(0, 1);

  // print time in 12H format
  if (hh < 10) lcd.print("0");
  lcd.print(hh);
  lcd.print(':');
  if (mm < 10) lcd.print("0");
  lcd.print(mm);
  lcd.print(':');
  if (ss < 10) lcd.print("0");
  lcd.print(ss);
  if (rtcTime.isPM()) lcd.print(" PM");  // print AM/PM indication
  else lcd.print(" AM");
}
Standard Functions: setup() and loop()
The last phase in completing our code for an Arduino Calendar Clock is to add the standard Arduino functions setup() and loop().
Inside setup(), we will initialize the serial interface, the lcd and the rtc objects. To initialize the serial interface with a baud rate of 9600 bps, we use Serial.begin(9600);. For the LCD, we need to initialize the LCD object and switch on the backlight of the display; this is achieved by lcd.init(); and lcd.backlight();. And finally, we add rtc.begin(); to initialize the rtc object.
void setup() {
  Serial.begin(9600); // initialize serial
  lcd.init();         // initialize lcd
  lcd.backlight();    // switch-on lcd backlight
  rtc.begin();        // initialize rtc
}
For the loop() function, we will update the text displayed on the LCD by calling updateLCD();. We will also add the capability to accept user input to update the RTC's internal clock. If the user sends the char 'u' via the serial monitor, it means the user wants to modify the set time and date of the rtc. If this is the case, then we call updateRTC(); to handle user input and update the RTC internal clock.
void loop() {
  updateLCD();  // update LCD text

  if (Serial.available()) {
    char input = Serial.read();
    if (input == 'u') updateRTC();  // update RTC time
  }
}
Our sketch is now complete. Save the sketch as arduino-rtc-tutorial.ino and upload it to your Arduino Uno.
Project Test
After uploading the sketch, your Arduino Uno should display the date and time on the LCD as shown in Figure 6.
November 12, 2019
I wonder if I might make a small change to the updateLCD() function, as follows:
void updateLCD()
{
    // get time and date from RTC and save in variables
    DateTime rtcTime = rtc.now();

    /*
     * Buffers to format the date and time (on separate lines of the LCD).
     *
     * Format specifiers are:
     *
     * | specifier | output                                                |
     * |-----------|-------------------------------------------------------|
     * | YYYY      | the year as a 4-digit number (2000-2099)              |
     * | YY        | the year as a 2-digit number (00-99)                  |
     * | MM        | the month as a 2-digit number (01-12)                 |
     * | MMM       | the abbreviated English month name ("Jan"-"Dec")      |
     * | DD        | the day as a 2-digit number (01-31)                   |
     * | DDD       | the abbreviated English day of the week ("Mon"-"Sun") |
     * | AP        | either "AM" or "PM"                                   |
     * | ap        | either "am" or "pm"                                   |
     * | hh        | the hour as a 2-digit number (00-23 or 01-12)         |
     * | mm        | the minute as a 2-digit number (00-59)                |
     * | ss        | the second as a 2-digit number (00-59)                |
     *
     * If either "AP" or "ap" is used, the "hh" specifier uses 12-hour mode
     * (range: 01-12). Otherwise it works in 24-hour mode (range: 00-23).
     *
     * The specifiers within the buffer will be overwritten with the
     * appropriate values from the DateTime. Any characters not belonging
     * to one of the above specifiers are left as-is.
     */
    char dateBuffer[] = "DD-MMM-YYYY DDD";
    char timeBuffer[] = "hh:mm:ss AP";

    // move LCD cursor to upper-left position
    lcd.setCursor(0, 0);
    lcd.print(rtcTime.toString(dateBuffer));

    // move LCD cursor to lower-left position
    lcd.setCursor(0, 1);
    lcd.print(rtcTime.toString(timeBuffer));
}
As you are already using RTClib, you might as well use some of its built-in functionality to format the date and time! It also makes the compiled sketch smaller, which could be the difference between a sketch fitting into memory or not if the application contained a lot of other code.
Keep up the good work.
Cheers,
Norm.
Thanks Norm for the improved code. :)
Hi Jan,
sorry! I've been messing with your code again! I won't post it here as the formatting gets weird in the comments section, but help yourself to it if you wish.
I have amended the updateRTC() function to:
* Validate all user input against minimum and maximum values;
* Allow the user to abort changing the date and time, if necessary;
* Validate the resulting DateTime to ensure it is a real calendar date, accounting for leap years, days in the month, etc. (using RTClib);
* Etc.
Thanks for your original code which gave me the impetus to get on and do something!
Cheers,
Norm.
Good job Norm for making an alternative/improved code, our readers will surely appreciate it. :)
Regards,
Jan | https://www.circuitbasics.com/how-to-use-a-real-time-clock-module-with-the-arduino/ | CC-MAIN-2020-29 | refinedweb | 1,456 | 65.83 |
Alright. So I looked at this page:.... It mentions that canonical functions for .NET 4.0 are supported under the "Teradata" namespace. Where in the world is this namespace? I have referenced Teradata.Client.Entity.Dll, Teradata.Client.Provider.Dll, Teradata.Client.Provider.Resources.Dll. None of these is helping me find the elusive "Teradata" namespace. Please help. I have already wasted countless hours on this!
Thanks,
Sundar
Hi Sundar,
i'm not a .NET programmer, but when i looked at the link you provided i found
For example, to execute a .NET Framework 4.0 supported EndsWith(SearchString, TargetString) function, prepend Teradata to the command (Entity SQL command below)
plus an example how to use it
SELECT e.CompanyName, Teradata.EndsWith(e.CompanyName, "d")
FROM NorthwindEntities.Customers AS e order by e.customerId
So, did you try to add "Teradata." in front of the name?
Dieter
Yes. Of course, the query won't compile in that case. Error message: The type or namespace name 'DiffDays' does not exist in the namespace 'Teradata'.
Hi Sundar,
sorry that i asked such a basic thing, but some people don't really read documentation :-)
IMHO it should be in Teradata.Client.Provider, what if you try any other function from that namespace?
Of course, you could replace that all those .NET function with a generic TD function.
Dieter
>>>Of course, you could replace that all those .NET function with a generic TD function.
What do you mean by that? The canonical functions are expressly provided by Teradata to interoperate with the Entity Framework. I cannot replace that with any function of my own.
This is so frustrating...... I worked so hard over the past few weeks to get LINQPad to query a Teradata data source only to learn that the canonical functions can only be used in Entity SQL and not in "LINQ to entities". What a lousy artificial limitation....
Hi Sundar,
sorry for the late answer.
With "replace" i ment you could write it directly using Teradata SQL, afaik the .NET provider simply translates all those functions to valid Teradata SQL
e.g. DiffDays(date1, date2) = date2 - date1
Of course this means database specific code. | http://community.teradata.com/t5/Connectivity/NET-Data-provider-Canonical-functions-support/td-p/32258 | CC-MAIN-2017-34 | refinedweb | 363 | 61.12 |
At Thu, 9 Feb 2006 13:46:47 +0100,
Bas Wijnen <address@hidden> wrote:
>
> On Thu, Feb 09, 2006 at 12:30:38PM +0100, Marcus Brinkmann wrote:
> > At Thu, 9 Feb 2006 10:28:47 +0100,
> > Bas Wijnen <address@hidden> wrote:
> > > I don't like hard links to directories, but of course that would be the
> > > most logical equivalent of two files binding the same
> > > (directory-implementing) facet.
> >
> > Why don't you like them? I would like to hear your concerns.
>
> With two hard links, none of them is more important than the other. That is,
> there is not one "real" file, and an extra link to it.

They are co-equal, yes.

> If you're lucky, that means a recursive search (such as find does) becomes
> very long because of that (because it traverses the directory several times
> instead of just once). If you aren't lucky, there's a loop and find will
> never stop.

"Find" will have to detect loops, yes. There is a way to do that (naively, by
comparing device and inode numbers). There is a similar problem with symbolic
links, of course.

> Of course there are remedies for these problems. Find could be changed to
> not go into a directory where it has been before. But that would result in
> directories coupled and arbitrary links, for example in an ls -R.

I don't understand this last sentence, can you rephrase?

> That would make such output much less useful.
>
> Then again, in some cases there really is not one "real" place for the thing
> to be bound. In that case, whatever you do will have these problems. But I
> think these cases are usually rare.

Yes, but are they rare because they are not useful, or are they rare because
we have learned to avoid them (because Unix does not offer this feature)?

> > > > > Anyway, find will get in trouble if it follows translators (which are
> > > > > now called "facets", and are limited to types which do something to
> > > > > some related node) anyway.
> > > >
> > > > Translators are not called facets.
> > > > They are two quite different things. Translators are simply object
> > > > servers, preferably ones that speak the directory and/or file
> > > > protocols.
> > >
> > > Right, sorry about being unclear. What I meant was that for this
> > > specific case, a binding of a facet to a filename can very well be
> > > implemented as a translator. Thus the problems that find has with
> > > following translators are also present when find goes "into" facets.
> >
> > Ok. I think you still have a misconception: _Everything_ is going to
> > be a translator :) "Files" are objects provided by the "file"
> > translator. We want to remove the difference between files and
> > translated files.
>
> That's not a misconception, that's a difference in opinion. ;-) I think it
> would be a good idea if "normal" files are not translated (other than by
> their filesystem, as is the case in the current Hurd on Mach).

But there is no file system any more ;)

> The reason for this is that without it, things like tar become a nightmare:
> "But I tarred the whole directory, what do you mean `that facet doesn't
> include the information you need'?"
>
> I think it is useful to have a "raw" version of every file, for which you
> are certain that it contains all information that a translator can show
> about it. That is, if you copy the file to a different machine, then you
> get the same result. This should particularly also be true for translators
> which the copying person didn't know existed.

This is a matter of policy, not of mechanism.

> > The reason this works is that in a persistent system with a directory
> > server, you don't need to create any "inode" in the "filesystem" with
> > which a translator is associated. This is because there is neither a
> > filesystem nor inodes.
>
> Of course there is a filesystem. What you mean is that most things don't
> happen there (as opposed to UNIX, where everything happens there).
There is no file system :)

> And in fact, the backing store of the persistent system can very well be
> considered a file system as well.

Ok, you are free to call it a file system, but then we are talking about a
very different file system from what we call a file system in Unix, or Hurd
on Mach.

> We need a namespace anyway, and I think it makes sense to call it a
> filesystem.

Now you are calling the name space the file system. At one point, I will ask
you to make a decision what you want to call a file system :)

For now, let's shift focus and look at name spaces. There is not a single
name space. This is true even in Unix, where you have different name spaces,
one for each partition (file system) and potentially even different name
spaces per application (chroot). The namespace is partially implemented in
the application, and partially in the directory servers.

We start with a simple setup where every directory is implemented by its own
server process (translator). Then we can optimize it by putting several
directories into a single server, but note that then we probably state a
policy about resolution of directory traversal into the server (if we
shortcut lookup of names like "a/b/c"). However, the actual file objects are
probably implemented by a different server. At least conceptually there is a
clear separation.

> > > "Within that program" means that this default is not valid anywhere
> > > else. If I'm in bash, I need to do the explicit binding. Of course once
> > > the binding is made, it stays there until it is removed.
> >
> > Then there is a difference. In my model, the shell would use
> > something like ~/.shell/cache/... to create its "private" name space.
>
> That is one option for the "default", which is indeed private. A public
> version would be an other option, but it might not be a good idea.
>
> I still don't think there is a difference, though. ;-)

Maybe I don't understand what you mean by private and public.
In which namespaces are the names visible?

Thanks,
Marcus
This is another member of a suite of lectures; it describes several complete market economies having a common linear-quadratic-Gaussian structure.
Three examples of such economies show how the DLE class can be used to compute equilibria of such economies in Python and to illustrate how different versions of these economies can or cannot generate sustained growth.
We require the following imports
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from quantecon import LQ, DLE
Common Structure
Our example economies have the following features
- Information flows are governed by an exogenous stochastic process $ z_t $ that follows
$$ z_{t+1} = A_{22}z_t + C_2w_{t+1} $$ where $ w_{t+1} $ is a martingale difference sequence.
- Preference shocks $ b_t $ and technology shocks $ d_t $ are linear functions of $ z_t $
$$ b_t = U_bz_t $$ $$ d_t = U_dz_t $$
- Consumption and physical investment goods are produced using the following technology
$$ \Phi_c c_t + \Phi_g g_t + \Phi_i i_t = \Gamma k_{t-1} + d_t $$ $$ k_t = \Delta_k k_{t-1} + \Theta_k i_t $$ $$ g_t \cdot g_t = l_t^2 $$ where $ c_t $ is a vector of consumption goods, $ g_t $ is a vector of intermediate goods, $ i_t $ is a vector of investment goods, $ k_t $ is a vector of physical capital goods, and $ l_t $ is the amount of labor supplied by the representative household.
- Preferences of a representative household are described by
$$ -\frac{1}{2}\mathbb{E}\sum_{t=0}^\infty \beta^t [(s_t-b_t)\cdot(s_t - b_t) + l_t^2], 0 < \beta < 1 $$ $$ s_t = \Lambda h_{t-1} + \Pi c_t $$ $$ h_t = \Delta_h h_{t-1} + \Theta_h c_t $$
where $ s_t $ is a vector of consumption services, and $ h_t $ is a vector of household capital stocks.
Thus, an instance of this class of economies is described by the matrices

$$ \{ A_{22}, C_2, U_b, U_d, \Phi_c, \Phi_g, \Phi_i, \Gamma, \Delta_k, \Theta_k, \Lambda, \Pi, \Delta_h, \Theta_h \} $$
and the scalar $ \beta $.
A Planning Problem
The first welfare theorem asserts that a competitive equilibrium allocation solves the following planning problem.
Choose $ \{c_t, s_t, i_t, h_t, k_t, g_t\}_{t=0}^\infty $ to maximize
$$ - \frac{1}{2}\mathbb{E}\sum_{t=0}^\infty \beta^t [(s_t-b_t)\cdot(s_t - b_t) + g_t \cdot g_t] $$
subject to the linear constraints

$$ \Phi_c c_t + \Phi_g g_t + \Phi_i i_t = \Gamma k_{t-1} + d_t $$

$$ k_t = \Delta_k k_{t-1} + \Theta_k i_t $$

$$ h_t = \Delta_h h_{t-1} + \Theta_h c_t $$

$$ s_t = \Lambda h_{t-1} + \Pi c_t $$

and

$$ z_{t+1} = A_{22}z_t + C_2w_{t+1} $$

$$ b_t = U_bz_t $$

$$ d_t = U_dz_t $$
The DLE class in Python maps this planning problem into a linear-quadratic dynamic programming problem and then solves it by using QuantEcon’s LQ class.
(See Section 5.5 of Hansen & Sargent (2013) [HS13] for a full description of how to map these economies into an LQ setting, and how to use the solution to the LQ problem to construct the output matrices in order to simulate the economies)
The state for the LQ problem is

$$ x_t = \left[ {\begin{array}{c} h_{t-1} \\ k_{t-1} \\ z_t \end{array} } \right] $$
and the control variable is $ u_t = i_t $.
Once the LQ problem has been solved, the law of motion for the state is

$$ x_{t+1} = (A-BF)x_t + Cw_{t+1} $$
where the optimal control law is $ u_t = -Fx_t $.
Letting $ A^o = A-BF $ we write this law of motion as

$$ x_{t+1} = A^ox_t + Cw_{t+1} $$
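Although the DLE class performs the simulations for us below, this law of motion is easy to iterate directly. Here is a minimal sketch in plain NumPy; the matrices used are illustrative stand-ins, not the $ A^o $ and $ C $ of the economies below:

```python
import numpy as np

def simulate(A_o, C, x0, ts_length, seed=0):
    """Iterate x_{t+1} = A^o x_t + C w_{t+1} with Gaussian shocks w."""
    rng = np.random.default_rng(seed)
    n, k = C.shape
    x = np.empty((n, ts_length))
    x[:, 0] = x0
    for t in range(ts_length - 1):
        w = rng.standard_normal(k)           # martingale difference shocks
        x[:, t + 1] = A_o @ x[:, t] + C @ w
    return x

# Hypothetical stable 2-state example (illustrative, not a DLE economy)
A_o = np.array([[0.9, 0.0],
                [0.1, 0.8]])
C = np.array([[0.1],
              [0.0]])
x = simulate(A_o, C, x0=np.array([1.0, 0.0]), ts_length=100)
```

The compute_sequence method used below does essentially this, plus the bookkeeping needed to recover consumption, investment and capital from the state.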
Example Economies
Each of the example economies shown here will share a number of components. In particular, for each we will consider preferences of the form

$$ - \frac{1}{2}\mathbb{E}\sum_{t=0}^\infty \beta^t [(s_t-b_t)^2 + l_t^2], \quad 0 < \beta < 1 $$

$$ s_t = \lambda h_{t-1} + \pi c_t $$

$$ h_t = \delta_h h_{t-1} + \theta_h c_t $$

$$ b_t = U_bz_t $$

Technology of the form

$$ c_t + i_t = \gamma_1 k_{t-1} + d_{1t} $$

$$ k_t = \delta_k k_{t-1} + i_t $$

$$ g_t = \phi_1 i_t \, , \quad \phi_1 > 0 $$

$$ \left[ {\begin{array}{c} d_{1t} \\ 0 \end{array} } \right] = U_dz_t $$

And information of the form

$$ z_{t+1} = \left[ {\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0.8 & 0 \\ 0 & 0 & 0.5 \end{array} } \right] z_t + \left[ {\begin{array}{cc} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{array} } \right] w_{t+1} $$

$$ U_b = \left[ {\begin{array}{ccc} 30 & 0 & 0 \end{array} } \right] $$

$$ U_d = \left[ {\begin{array}{ccc} 5 & 1 & 0 \\ 0 & 0 & 0 \end{array} } \right] $$
We shall vary $ \{\lambda, \pi, \delta_h, \theta_h, \gamma_1, \delta_k, \phi_1\} $ and the initial state $ x_0 $ across the three economies.
Example 1: Hall (1978)
First, we set parameters such that consumption follows a random walk. In particular, we set

$$ \lambda = 0, \pi = 1, \gamma_1 = 0.1, \phi_1 = 0.00001, \delta_k = 0.95, \beta = \frac{1}{1.05} $$
(In this economy $ \delta_h $ and $ \theta_h $ are arbitrary, since with $ \lambda = 0 $ household capital does not affect consumption services; we give them values that will become useful in Example 3.)

Note that these parameter values satisfy $ \beta(\gamma_1 + \delta_k) = 1 $, the condition referred to below as the growth condition.

For simulations of this economy, we choose an initial condition of

$$ x_0 = \left[ {\begin{array}{ccccc} 5 & 150 & 1 & 0 & 0 \end{array} } \right]' $$
# Parameter Matrices
γ_1 = 0.1
ϕ_1 = 1e-5

ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k = (np.array([[1], [0]]),
                               np.array([[0], [1]]),
                               np.array([[1], [-ϕ_1]]),
                               np.array([[γ_1], [0]]),
                               np.array([[.95]]),
                               np.array([[1]]))

β, l_λ, π_h, δ_h, θ_h = (np.array([[1 / 1.05]]),
                         np.array([[0]]),
                         np.array([[1]]),
                         np.array([[.9]]),
                         np.array([[1]]) - np.array([[.9]]))

a22, c2, ub, ud = (np.array([[1, 0,   0],
                             [0, 0.8, 0],
                             [0, 0,   0.5]]),
                   np.array([[0, 0],
                             [1, 0],
                             [0, 1]]),
                   np.array([[30, 0, 0]]),
                   np.array([[5, 1, 0],
                             [0, 0, 0]]))

# Initial condition
x0 = np.array([[5], [150], [1], [0], [0]])

info1 = (a22, c2, ub, ud)
tech1 = (ϕ_c, ϕ_g, ϕ_i, γ, δ_k, θ_k)
pref1 = (β, l_λ, π_h, δ_h, θ_h)
econ1 = DLE(info1, tech1, pref1)
We can then simulate the economy for a chosen length of time, from our initial state vector $ x_0 $
econ1.compute_sequence(x0, ts_length=300)
The economy stores the simulated values for each variable. Below we plot consumption and investment
# This is the right panel of Fig 5.7.1 from p.105 of HS2013
plt.plot(econ1.c[0], label='Cons.')
plt.plot(econ1.i[0], label='Inv.')
plt.legend()
plt.show()
Inspection of the plot shows that the sample paths of consumption and investment drift in ways that suggest that each has or nearly has a random walk or unit root component.
This is confirmed by checking the eigenvalues of $ A^o $
econ1.endo, econ1.exo
(array([0.9, 1. ]), array([1. , 0.8, 0.5]))
The endogenous eigenvalue that appears to be unity reflects the random walk character of consumption in Hall’s model.
- Actually, the largest endogenous eigenvalue is very slightly below 1.
- This outcome comes from the small adjustment cost $ \phi_1 $.
econ1.endo[1]
0.9999999999904767
The fact that the largest endogenous eigenvalue is strictly less than unity in modulus means that it is possible to compute the non-stochastic steady state of consumption, investment and capital.
econ1.compute_steadystate() np.set_printoptions(precision=3, suppress=True) print(econ1.css, econ1.iss, econ1.kss)
[[4.999]] [[-0.001]] [[-0.021]]
However, the near-unity endogenous eigenvalue means that these steady state values are of little relevance.
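A non-stochastic steady state is a fixed point $ \bar x = A^o \bar x $; when the exogenous state contains a constant, it can be computed (up to normalization) as the eigenvector of $ A^o $ associated with the unit eigenvalue. As a sketch, with an illustrative matrix rather than an actual DLE $ A^o $:

```python
import numpy as np

def steady_state(A_o, const_index=0):
    """Fixed point x = A_o x, normalized so the 'constant' state equals 1.

    Assumes A_o has a single eigenvalue equal to 1 whose eigenvector
    has a nonzero entry at const_index.
    """
    vals, vecs = np.linalg.eig(A_o)
    i = np.argmin(np.abs(vals - 1))   # eigenvalue closest to 1
    v = np.real(vecs[:, i])
    return v / v[const_index]

# Illustrative matrix with a unit eigenvalue (not a DLE A^o)
A_o = np.array([[1.0, 0.0],
                [0.5, 0.5]])
xbar = steady_state(A_o)
```

The compute_steadystate method reports the implied steady-state values of consumption, investment and capital rather than the raw state vector, but the underlying fixed-point idea is the same.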
Example 2: Altered Growth Condition
We generate our next economy by making two alterations to the parameters of Example 1.
- First, we raise $ \phi_1 $ from 0.00001 to 1.
- This will lower the endogenous eigenvalue that is close to 1, causing the economy to head more quickly to the vicinity of its non-stochastic steady-state.
- Second, we raise $ \gamma_1 $ from 0.1 to 0.15.
- This has the effect of raising the optimal steady-state value of capital.
We also start the economy off from an initial condition with a lower capital stock

$$ x_0 = \left[ {\begin{array}{ccccc} 5 & 20 & 1 & 0 & 0 \end{array} } \right]' $$
Therefore, we need to define the following new parameters
γ2 = 0.15
γ22 = np.array([[γ2], [0]])
ϕ_12 = 1
ϕ_i2 = np.array([[1], [-ϕ_12]])
tech2 = (ϕ_c, ϕ_g, ϕ_i2, γ22, δ_k, θ_k)

x02 = np.array([[5], [20], [1], [0], [0]])
Creating the DLE class and then simulating gives the following plot for consumption and investment
econ2 = DLE(info1, tech2, pref1)
econ2.compute_sequence(x02, ts_length=300)

plt.plot(econ2.c[0], label='Cons.')
plt.plot(econ2.i[0], label='Inv.')
plt.legend()
plt.show()
Simulating our new economy shows that consumption grows quickly in the early stages of the sample.
However, it then settles down around the new non-stochastic steady-state level of consumption of 17.5, which we find as follows
econ2.compute_steadystate() print(econ2.css, econ2.iss, econ2.kss)
[[17.5]] [[6.25]] [[125.]]
The economy converges faster to this level than in Example 1 because the largest endogenous eigenvalue of $ A^o $ is now significantly lower than 1.
econ2.endo, econ2.exo
(array([0.9 , 0.952]), array([1. , 0.8, 0.5]))
Example 3: A Jones-Manuelli (1990) Economy
For our third economy, we choose parameter values with the aim of generating sustained growth in consumption, investment and capital.
To do this, we set parameters so that Jones and Manuelli’s “growth condition” is just satisfied.
In our notation, just satisfying the growth condition is actually equivalent to setting $ \beta(\gamma_1 + \delta_k) = 1 $, the condition that was necessary for consumption to be a random walk in Hall’s model.
Thus, we lower $ \gamma_1 $ back to 0.1.
In our model, this is a necessary but not sufficient condition for growth.
To generate growth we set preference parameters to reflect habit persistence.
In particular, we set $ \lambda = -1 $, $ \delta_h = 0.9 $ and $ \theta_h = 1 - \delta_h = 0.1 $.
This makes preferences assume the form

$$ - \frac{1}{2}\mathbb{E}\sum_{t=0}^\infty \beta^t [(c_t-b_t - (1-\delta_h)\sum_{j=0}^\infty \delta_h^jc_{t-j-1})^2 + l_t^2] $$
These preferences reflect habit persistence
- the effective “bliss point” $ b_t + (1-\delta_h)\sum_{j=0}^\infty \delta_h^jc_{t-j-1} $ now shifts in response to a moving average of past consumption
Since $ \delta_h $ and $ \theta_h $ were defined earlier, the only change we need to make from the parameters of Example 1 is to define the new value of $ \lambda $.
l_λ2 = np.array([[-1]])
pref2 = (β, l_λ2, π_h, δ_h, θ_h)
econ3 = DLE(info1, tech1, pref2)
We simulate this economy from the original state vector
econ3.compute_sequence(x0, ts_length=300)

# This is the right panel of Fig 5.10.1 from p.110 of HS2013
plt.plot(econ3.c[0], label='Cons.')
plt.plot(econ3.i[0], label='Inv.')
plt.legend()
plt.show()
Thus, adding habit persistence to the Hall model of Example 1 is enough to generate sustained growth in our economy.
The eigenvalues of $ A^o $ in this new economy are
econ3.endo, econ3.exo
(array([1.+0.j, 1.-0.j]), array([1. , 0.8, 0.5]))
We now have two unit endogenous eigenvalues. One stems from satisfying the growth condition (as in Example 1).
The other unit eigenvalue results from setting $ \lambda = -1 $.
To show the importance of both of these for generating growth, we consider the following experiments.
l_λ3 = np.array([[-0.7]])
pref3 = (β, l_λ3, π_h, δ_h, θ_h)
econ4 = DLE(info1, tech1, pref3)
econ4.compute_sequence(x0, ts_length=300)

plt.plot(econ4.c[0], label='Cons.')
plt.plot(econ4.i[0], label='Inv.')
plt.legend()
plt.show()
We no longer achieve sustained growth if $ \lambda $ is raised from -1 to -0.7.
This is related to the fact that one of the endogenous eigenvalues is now less than 1.
econ4.endo, econ4.exo
(array([0.97, 1. ]), array([1. , 0.8, 0.5]))
β_2 = np.array([[0.94]])
pref4 = (β_2, l_λ, π_h, δ_h, θ_h)
econ5 = DLE(info1, tech1, pref4)
econ5.compute_sequence(x0, ts_length=300)

plt.plot(econ5.c[0], label='Cons.')
plt.plot(econ5.i[0], label='Inv.')
plt.legend()
plt.show()
Growth also fails if we lower $ \beta $, since we now have $ \beta(\gamma_1 + \delta_k) < 1 $.
Consumption and investment explode downwards, as a lower value of $ \beta $ causes the representative consumer to front-load consumption.
This explosive path shows up in the second endogenous eigenvalue now being larger than one.
econ5.endo, econ5.exo
(array([0.9 , 1.013]), array([1. , 0.8, 0.5])) | https://python-advanced.quantecon.org/growth_in_dles.html | CC-MAIN-2020-40 | refinedweb | 1,888 | 54.63 |
import "golang.org/x/text/message/catalog"
Package catalog defines collections of translated format strings.
This package mostly defines types for populating catalogs with messages. The catmsg package contains further definitions for creating custom message and dictionary types as well as packages that use Catalogs.
Package catalog defines various interfaces: Dictionary, Loader, and Message. A Dictionary maintains a set of translations of format strings for a single language. The Loader interface defines a source of dictionaries. A translation of a format string is represented by a Message.
A Catalog defines a programmatic interface for setting message translations. It maintains a set of per-language dictionaries with translations for a set of keys. For message translation to function properly, a translation should be defined for each key for each supported language. A dictionary may be underspecified, though, if there is a parent language that already defines the key. For example, a Dictionary for "en-GB" could leave out entries that are identical to those in a dictionary for "en".
A Message is a format string which varies on the value of substitution variables. For instance, to indicate the number of results one could want "no results" if there are none, "1 result" if there is 1, and "%d results" for any other number. Catalog is agnostic to the kind of format strings that are used: for instance, messages can follow either the printf-style substitution from package fmt or use templates.
A Message does not substitute arguments in the format string. This job is reserved for packages that render strings, such as message, that use Catalogs to selected string. This separation of concerns allows Catalog to be used to store any kind of formatting strings.
Messages may vary based on any linguistic features of the argument values. The most common one is plural form, but others exist.
Selection messages are provided in packages that provide support for a specific linguistic feature. The following snippet uses plural.Select:
catalog.Set(language.English, "You are %d minute(s) late.", plural.Select(1, "one", "You are 1 minute late.", "other", "You are %d minutes late."))
In this example, a message is stored in the Catalog where one of two messages is selected based on the first argument, a number. The first message is selected if the argument is singular (identified by the selector "one") and the second message is selected in all other cases. The selectors are defined by the plural rules defined in CLDR. The selector "other" is special and will always match. Each language always defines one of the linguistic categories to be "other." For English, singular is "one" and plural is "other".
Selects can be nested. This allows selecting sentences based on features of multiple arguments or multiple linguistic properties of a single argument.
There is often a lot of commonality between the possible variants of a message. For instance, in the example above the word "minute" varies based on the plural category of the argument, but the rest of the sentence is identical. Using interpolation the above message can be rewritten as:
catalog.Set(language.English, "You are %d minute(s) late.", catalog.Var("minutes", plural.Select(1, "one", "minute", "other", "minutes")), catalog.String("You are %[1]d ${minutes} late."))
Var is defined to return the variable name if the message does not yield a match. This allows us to further simplify this snippet to
catalog.Set(language.English, "You are %d minute(s) late.", catalog.Var("minutes", plural.Select(1, "one", "minute")), catalog.String("You are %d ${minutes} late."))
Overall this is still only a minor improvement, but things can get a lot more unwieldy if more than one linguistic feature is used to determine a message variant. Consider the following example:
// argument 1: list of hosts, argument 2: list of guests catalog.Set(language.English, "%[1]v invite(s) %[2]v to their party.", catalog.Var("their", plural.Select(1, "one", gender.Select(1, "female", "her", "other", "his"))), catalog.Var("invites", plural.Select(1, "one", "invite")) catalog.String("%[1]v ${invites} %[2]v to ${their} party.")),
Without variable substitution, this would have to be written as
// argument 1: list of hosts, argument 2: list of guests catalog.Set(language.English, "%[1]v invite(s) %[2]v to their party.", plural.Select(1, "one", gender.Select(1, "female", "%[1]v invites %[2]v to her party." "other", "%[1]v invites %[2]v to his party."), "other", "%[1]v invites %[2]v to their party.")
Not necessarily shorter, but using variables there is less duplication and the messages are more maintenance friendly. Moreover, languages may have up to six plural forms. This makes the use of variables more welcome.
Different messages using the same inflections can reuse variables by moving them to macros. Using macros we can rewrite the message as:
// argument 1: list of hosts, argument 2: list of guests catalog.SetString(language.English, "%[1]v invite(s) %[2]v to their party.", "%[1]v ${invites(1)} %[2]v to ${their(1)} party.")
Where the following macros were defined separately.
catalog.SetMacro(language.English, "invites", plural.Select(1, "one", "invite")) catalog.SetMacro(language.English, "their", plural.Select(1, "one", gender.Select(1, "female", "her", "other", "his"))),
Placeholders use parentheses and the arguments to invoke a macro.
Message lookup using Catalogs is typically only done by specialized packages and is not something the user should be concerned with. For instance, to express the tardiness of a user using the related message we defined earlier, the user may use the package message like so:
p := message.NewPrinter(language.English) p.Printf("You are %d minute(s) late.", 5)
Which would print:
You are 5 minutes late.
This package is UNDER CONSTRUCTION and its API may change.
ErrNotFound indicates there was no message for the given key.
A Catalog holds translations for messages for supported languages.
New returns a new Catalog.
Context returns a Context for formatting messages. Only one Message may be formatted per context at any given time.
Languages returns all languages for which the Catalog contains variants.
Set sets the translation for the given language and key.
When evaluating this message, the first Message in the sequence of msgs to evaluate to a string will be the message returned.
SetMacro defines a Message that may be substituted in another message. The arguments to a macro Message are passed as arguments in a placeholder of the form "${foo(arg1, arg2)}".
SetString is shorthand for Set(tag, key, String(msg)).
A Context is used for evaluating Messages. Only one Message may be formatted per context at any given time.
Execute looks up and executes the message with the given key. It returns ErrNotFound if no message could be found in the index.
A Message holds a collection of translations for the same phrase that may vary based on the values of substitution arguments.
String specifies a plain message string. It can be used as fallback if no other strings match or as a simple standalone message.
It is an error to pass more than one String in a message sequence.
Var sets a variable that may be substituted in formatting patterns using named substitution of the form "${name}". The name argument is used as a fallback if the statements do not produce a match. The statement sequence may not contain any Var calls.
The name passed to a Var must be unique within message sequence.
An Option configures Catalog behavior.
Package catalog imports 6 packages and is imported by 2 packages. Updated 2017-08-14.
Hi,
I was just curious to see how I would implement a friend function in template class. It was purely as an excercise, as I have never really found a use for it in real life. Anyway, I came up with the following code. This works fine in MSVC 2003. However it doesn't compile in g++.
You get the error:
main.cpp:13: declaration of `class T'
main.cpp:5: shadows template parm `class T'

Does anyone know which one is correct?
Code:
#include <iostream>
using namespace std;

template <typename T>
class CFoo
{
private:
    T val;
public:
    CFoo(T d) : val(d) { }

    template<typename T>
    friend ostream & operator<<(ostream & os, const CFoo<T> & cf);
};

template <typename T>
ostream & operator<<(ostream & os, const CFoo<T> & cf)
{
    os << cf.val << endl;
    return os;
}

int main()
{
    CFoo<int> foo(10);
    cout << foo << endl;
    cin.get();
}
18 June 2012 17:49 [Source: ICIS news]
WASHINGTON (ICIS)--
The National Association of Home Builders (NAHB) said that its survey of member contractors, called the housing market index (HMI), rose one point in June to 29.
That is the highest reading since the 30 score recorded in May 2007 when the
At the bottom of the
The
Since the US recession ended in June 2009, the HMI had remained at or below the 20 level until December last year when it edged up to 21, climbed to 25 in January this year and advanced to 28 for both February and March.
The index dipped to 24 in April, then bounced back to 28 in May before inching up again in June to its current 29 reading.
The modest June gain in the index “is reflective of the continued, gradual improvement we are seeing in many individual housing markets”, said Barry Rutenberg, NAHB chairman.
Rutenberg said more prospective home buyers are taking advantage of record low home prices and mortgage interest rates.
NAHB chief economist David Crowe said that recent trends in HMI data suggest “gradually improving single-family home sales this year”.
However, he cautioned that “recent economic reports have shown some weakening in the pace of [the
“In addition, builders across the country continue to report that overly tight lending conditions and inaccurate appraisals are major obstacles to completing sales at this time,” Crowe added.
With bankers’ memories of the subprime collapse, the resulting banking crisis and recession still strong, mortgage lenders are making loans only to top-credit buyers.
And, even if a buyer and seller agree on a price and the lender authorises the loan, the deal can fall apart when the property appraisal assigns a value to the home that is less than the agreed selling price. The bank will then decline to fund the loan. | http://www.icis.com/Articles/2012/06/18/9570610/us-home-builders-gain-confidence-in-june-highest-since-may-2007.html | CC-MAIN-2014-15 | refinedweb | 313 | 51.52 |
'Open Source Media' vs 'Open Source Media, Inc' 136
Karl writes "Last week OSM (Open Source Media) launched to what some are calling an odd start. Most notably naming a controversy has ensued with Christopher Lydon's public radio show Open Source, a production of Open Source Media, Inc.."
"Open Source" buzzword (Score:5, Insightful)
Re:"Open Source" buzzword (Score:3, Informative)
Re:"Open Source" buzzword (Score:1)
Re:"Open Source" buzzword (Score:3, Interesting)
But yup, when the PHBs start to redefine the term, its now a buzzword.
Re:"Open Source" buzzword (Score:2)
If they openly accept and even invite the confusion, then they should also accept and invite the consequences.
Oh, I posted all of our "open source" onto the Internet to help it get more widespread distribution.
even SUN figured this out before you.. (Score:2)
Re:"Open Source" buzzword (Score:2)
At least this is the way ES
Re:"Open Source" buzzword (Score:2)
For free software, you can just read fsf.org to catch up on how that's being redefined on a regular basis.
just in time (Score:5, Funny)
Re:just in time (Score:1, Redundant)
If you don't respect my Authoritah (Score:1)
Communist Propaganda Media (Score:5, Interesting)
Odd start indeed...
Re:Communist Propaganda Media (Score:2, Interesting)
Re:Communist Propaganda Media (Score:4, Interesting)
Or maybe the whole outfit is nothing but a front to promote wingnut propaganda for some corporate interests that have reasons for making nice with Bejing.
It might just be a mistake in configuring their moreover feed, but their terms of use which try to prohibit quoting or satire are not.
The site appears to be a carbon copy of the Huffington Post, only with right wing pundits instead of left and minus the reader comments. They have missed their moment for that, there is no shortage of right wing portal blogs without comments. What there is a growing shortage of is right wing fanatics wanting to endlessly debate why George W. Bush is absolutely right on everything.
What would make a lot more sense would be to set up a straight news and politics blog which does not have an eggregious tilt to either side. The right wing blogs play the Fox news game of pretending to be straight while delivering GOP talking points of the day verbatim. The left wing blogs make no bones about being partisan, the stated purpose of DailyKos is to campaign for Democratic candidates, Americablog makes no bones about being gay rights activism.
If you have any doubt about the right wingnut slant here just read the blogroll. Americablog? Kos? Huffington Post? Crooks and Liars? Nope. How about the commercial blogs, Salon? OK Slate, official blog of the WaPo? Nope, Nope. But pretty much every right wingnut blog you can imagine.
The cleverest thing Matt Drudge did was to put links to right and left wing media and blogs onto his home page. A lot of people still use him as a portal because the links are comprehensive. Of course that started back in the days when Drudge thought he could be a bipartisan bottomfeeder
So given the rest of the nonsense I don't see anything suprising about the deliberately misleading use of 'open source'. Clearly OSM is not open source, they don't even allow fair use of their stuff! (Like they have a choice).
Christopher Lydon appears to be refering to a different, older definition of 'open source', a term used by journalists that means publicly available information, like minutes of congress, stuff published in other media, etc. But the wingnuts are clearly using the term in the geek sense.
Re:Communist Propaganda Media (Score:1)
Damned Communist Propaganda!
Re:Communist Propaganda Media (Score:4, Interesting)
It's funny because they're right-wing and presumably anti-communist, but I expect this is simply lack of competence on their part. Xinhua is available with a lot of newsfeed packages and is very, very cheap. Might even be free. We used to get Xinhua when my company subscribed to a newsfeed a few years ago.
Still, if they doing any filtering of their newsfeeds I wouldn't expect they'd let Xinhua flood everything like that.
Re:Communist Propaganda Media (Score:5, Funny)
Re:Communist Propaganda Media (Score:2, Troll)
Re:Communist Propaganda Media (Score:2)
Re:Communist Propaganda Media (Score:3, Informative)
Re:Communist Propaganda Media (Score:2)
Re:Communist Propaganda Media (Score:1)
Re:Communist Propaganda Media (Score:2)
to finish the phrase (Score:3, Insightful)
Political revolutions (and elections) are similar. During the revolution (and campaigning), any faction trying to gain market share does lots of things to convince people they're the "good guys." After the revolution (or election), the new holders of power stop trying to please, unless they're convinced that they could lose their position if people aren't satisfied.
I don't fully agree myself in this post, but I thought this
Who is Christopher Lydon? (Score:4, Insightful)
Re:Who is Christopher Lydon? (Score:4, Informative)
Re:Who is Christopher Lydon? (Score:5, Informative)
Re:Who is Christopher Lydon? (Score:2)
Re:Who is Christopher Lydon? (Score:5, Informative)
His company, Open Source Media, and the radio show are both very much inspired by open source values (e.g., openness, cooperation and sharing):
- All content is Creative Commons licensed (compare to OSM's obnoxious TOS [osm.org]).
- They actively interact with their audience through blogging.
- They involving the audience in show production (read How this works [radioopensource.org]).
It doesn't seem like an unreasonable translation of the open source ethos to radio and media production within what's feasible.
I think his trademark case is pretty solid; he has a live registered mark (meaning the examiners have accepted it so they have the benefit of the doubt if someone claims it's not trademarkable) on Open Source as applied to a radio show and commentary website, and prior use of the trade name Open Source Media. The potential for confusion (the big criteria in TM issues) is substantial. OSM LLC, meanwhile uses all kinds [osm.org] of weaselly wording [osm.org] to handwave around the fact that they use the phrase "Open Source Media" as an alternate name for the operation everywhere while implying they're just "OSM" so that makes them not really infringing (if I started RH LLC but had the name "Red Hat" plastered all over my site and press releases, do you think I could be in a bit of a bind?).
I have no dog in this fight (except as a longtime fan of The Connection, which is not the same without Lydon), but there is really no contest IMO.
Really? (Score:2)
Things must be really bad in Boston.
They hired him here in the Twin Cities to replace Katherine Lampert when she left to go work with Al Franken. He was horrible. He lasted maybe a month before he was handed his walking papers.
I say he was horrible, because he was clearly leading his interviews. That is, not just asking questions, but blatantly pushing the answers in a certain direction.
That s
Re:Really? (Score:2)
Bah (Score:2)
Agreed, to a point. (Score:2)
This seems to be an approriate characterization of Christopher Lydon... (Albeit my characterization was that he is a blatant blaring asshole, possibly of the Goatse variety). I have been listening to Open Source (which I also thought would be a lwn.net-like publication) for the past few weeks.
Not only does he lead his questions, he also uses
Christopher Lydon =know it all (Score:2)
After a show where he had a folk singer on (for apparently no other reason than, it turns out Mr. Lydon was a huge fan!) I n
Re:Who is Christopher Lydon? (Score:1)
Full of themselves (Score:5, Interesting)
Not only did they launch themselves with an anti-open source attitude (prohibitive copyright terms [phillyfuture.org] which they've since removed from their privacy policy), they didn't do a simple google search to make sure that no confusion would occur as a result of their name selection. OSM should have stuck with "Pajamas Media"... there's nothing wrong with that and it pokes reverent fun at those who shrug off bloggers.
Re:Full of themselves (Score:2)
Actually, they have. [osm.org]
It's about the software, stupid (Score:2, Interesting)
Re:It's about the software, stupid (Score:1)
Re:It's about the software, stupid (Score:1)
"Hip" and "Cool"? (Score:3, Insightful)
Oh please. I'm as much a geek as the next guy, but I'm not going to pretend there's anything "hip" or "cool" about open source.
I can see it now. "Hey baby. I'm hip. Check out my apache install. I'm so cool, I'm running linux. Now how about going back to your place? No? What... that guy? What's so great about him? Sure, he knows wines, plays tennis, and can dance, but seriously, isn't it cooler to know all the switches to the g
We should be happy about this. (Score:2)
Decades ago, phrases like "hi fi" (and many decades before that, "electric") were used as meaningless buzzwords. Hi-fi hula hoops! Electric combs! It's a natural cultural response to something that has made a big dent.
X
Where's the money. (Score:2, Insightful)
Re:Where's the money. (Score:3, Informative)
"So the big question is.... who is financing these guys?"
The startup capital is from the founders themselves -- several of them are well off, either from other blogosphere projects or from other media (Roger L. Simon writes novels). Going forward it's an ad-supported model.
Open Source - just good name (Score:1)
Re:Open Source - just good name (Score:2, Interesting)
Re:Open Source - just good name (Score:2)
I have fixed a few bugs in OSS for our business use (and reported them, naturally).
I have used OSS files to read how a protocol works.
I have read OSS files to see what files are being accessed.
I have added a new feature to OSS and had it merged into the distribution.
Meanwhile, I have been frustrated by CSS that said "file not found" or Software that assumed I had a C: drive and was unable to fix it.
Re:Open Source - just good name (Score:2)
Re:Open Source - just good name (Score:3, Insightful)
You can't swing a dead cat without hitting a Gaming Clan Site based on phpBB or *Nuke. In nearly all those cases you could probably describe the webmaster as a "regular" OSS user. They're just the knob that got volunteered to maintain the site. They've likely had to patch their sites and at least install a template of some kind.
Thank you Yoda (Score:3, Informative)
Wouldn't one normally phrase that as: "Most notably a naming controversy has ensued with Christopher Lydon's public radio show"?
open source it is not (Score:2)
Am I understanding this correctly? (Score:2)
I can't see how this complaint has any legal merit at all. They haven't been granted the trademark yet, and given how descriptive it is I doubt that it will be granted anyway; and what they're trying to trademark ("Open Source") is not the same as what they're complaining about ("Open Sourc
Their community is so clueless (Score:2, Insightful)
Background (Score:3, Informative)
Doesn't sound like their principles are very "open source"...
Given how hard my company tried - and failed (Score:2)
OTOH, I still can't figure out how the OSM site differs from many other sites that already exist.
Re:Given how hard my company tried - and failed (Score:2)
Errr... (Score:2)
While that's certainly how it is portrayed (and the prominent members are generally conservative) It does seem to have a *few* left-of-center blogs and some completely non-political ones as well. Also, some of the ones called "conservative" don't exactly fall into the republican mode. The gay libertarian who runs "Classical Values" comes to mind.
But I think you're right about the ads.
Long before "Open Source" meant software.... (Score:3, Interesting)
Re:Long before "Open Source" meant software.... (Score:2)
Interestingly, he has given some speeches at hacker conventions, such as at H2k2 [h2k2.net] and the Fifth HOPE [the-fifth-hope.org]. You can download his speeches if you follow the links.
I believe the press also uses the term "open source" to refer to a
One summer (Score:2)
There was me, the CEO, the VP of marketing, and my own boss. People tossed out ideas and the VP would Google the names right away. A simple and obvious strategy to avoid such a namespace collision.
My own "net savvy" was useful as well. Someone suggested calling the product "stormfront" for example (for some reason, people in the tech sector like badass weather names) and I told 'em that stormfr
Language Log's take (Score:2, Informative)
get your free, complimentary... (Score:1)
Did I say it was free? Did I say gift? all in the same sentence?
What tha f!!!!! heck!!!
I can see open source as being used the same way in the near future. Just like the never-gets-old "buy it for $9.99". Stupid
Have a good one.
Fraud?? (Score:2)
Re:Fraud?? (Score:2)
Or Apple had to sell produce...
Or Sun had to sell solar systems.
And yet neither are actually about Open Source. (Score:2)
Quoth them: "We consider Open Source Media to be a description of what we are and do, not a trade name.", "We chose the name "Open Source" because it signals the way we produce radio and web content."
Apparently "Open Source" now means blogging about politics. Who knew?
Re:And yet neither are actually about Open Source. (Score:2)
Apparently "Open Source" now means blogging about politics. Who knew?
Actually, if half the people here *had* a clue, they'd realize that "open source" is not something owned by hackers or anyone else. It's a *generic* term, for goodness sake! The fact that the crowd that hangs out here associates it with comp
Re:And yet neither are actually about Open Source. (Score:2)
And this story isn't about two different groups trying to own the term?
I'll conced that open source can be a generic term, but I have yet to see a reason why political bloggers, and the companies they form, should co-opt it. As it stands it seems more like shameless coattail riding than a natual choice. If they are offering transparent journalism or reliable reporting, why can
Christopher Lydon's Public Radio Show, Inc. (Score:2)
OSM are a bunch of Chicken poo if you ask me (Score:1)
Re:OSM are a bunch of Chicken poo if you ask me (Score:1)
'.org' and false advertising (Score:4, Informative)
OK, OK, maybe it's a bit strange to post this comment on slashdot.org [slashdot.org] , but the point at which I got really cross about all this was the point at which the pajama party adopted the domain 'osm.org'. The
.org [wikipedia.org] top-level domain is, at least in theory, intended for non-commercial, non-governmental, non-academic use. By describing themselves as osm.org the pajamas are making an implied claim to be non-commercial, which is not true and is consequently false advertising. Yes, I know this applies to slashdot [slashdot.org] as well...
Capitulation (Score:2)?
Re:OSM Is Chinese Communist Party Mouthpiece (Score:4, Interesting)
Betrayed the Revolution a bit, haven't we, comrades?
Re:OSM Is Chinese Communist Party Mouthpiece (Score:5, Insightful)
The Chinese Communists are a militaristic mafia. They have nothing to do with actual collectivism, destroying class structure, universal "ownership" banishing property, equal distribution of surplus labor. They're mafia capitalists, dictating the transformation of China into a moneymaking factory for their benefit and perpetuation of their power. That's rightwing: fascist corporatism government.
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:3, Insightful)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:1)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Actually looking at OSM I find that maybe one out of one hundred stories has a XIN tag. If the Communist Chinese are funding OSM, they're a very minor player behind AP, UPI, Knight Ridder, PRN, Business Wire, etc, etc.
Re:OSM Is Chinese Communist Party Mouthpiece (Score:3, Informative)
BTW, I'm looking at the OSM homepage, which says:
CURRENT HEADLINES
XIN: Xinhua domestic news advisory -- Nov. 2
Re:OSM Is Chinese Communist Party Mouthpiece (Score:1)
Yup! Proof positive that the right wing of US politics has been coopted by the yellow horde...
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Many of them are also shameless racists (Score:5, Informative)
I don't mind conservatives speaking their minds and having opinons, but these people and their ilk are beyond the pale. Mass murder and inprisonment, just because you're afraid of what people who share the same ethnic or religious designation, that's irrational and completely unacceptable in a democratic state like the US. These people are no better than white supremacists - they've merely picked target groups that aren't taboo yet.
Re:Many of them are also shameless racists (Score:4, Insightful)
Re:Many of them are also shameless racists (Score:1)
If someone rants about "African Americans," can he defend himself on the grounds that Africa and America are contintents rather than races?
You don't have to explicitly cite the exact race you are bashing to be racist. In fact, many racists are not exactly clear on taxonomy anyway. And a lot of racist remarks are suggestive rather than explicit.
In this case, and in many cultures, religion is tied to race, language etc. The dispute in Northern Ireland,
Re:Many of them are also shameless racists (Score:1)
I once knew a racist (well by the parent's definition and thinking, anyway) who actually had the nerve to call out some island cannibals for their questionable dining practices (oftentimes on unsuspecting island tourists/intruders). What a jerk! Needless
Re:Many of them are also shameless racists (Score:1)
Re:Many of them are also shameless racists (Score:3, Insightful)
Mine is a controversial view, but I've long ago decided that you can judge a political blog by its reader comments.
There are some politically slanted blogs whose authors claim to have no slant. But reading the comments, you can see what type of people resonate with the content. Since that is out of your control as a blogger, and
MOD PARENT UP (Score:2)
Re:Many of them are also shameless racists (Score:2)
Re:Many of them are also shameless racists (Score:2) [michellemalkin.com]
Re:Ad-hominem attacks are for the logically impair (Score:3, Insightful)
There are many dark and sinister ideologies. I suggest you focus on those existing within your sphere of influence and let the peaceful practitioners of Islam confront theirs. I know you think you live in an enlightened society, but it is my belief that any society whose leaders cond
Re:Many of them are also shameless racists (Score:2)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
70% Overrated
30% Interesting
Hmm, do I detect a Commie astroturf TrollMod campaign?
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
20% Troll
40% Overrated
20% Interesting
The truth about these rightwing fascists, their natural allies and secret backers in foreign tyrants, and how they're hungrily destroying America, really hurts. They can't afford to have people talking about the truth - just like their puppetmasters in Beijing. I hope they can get me started on how they tricked America into fighting Iran's war against Iraq, and Qaeda's war against corrupt, but secular,
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2)
Re:OSM Is Chinese Communist Party Mouthpiece (Score:2) | http://slashdot.org/story/05/11/21/1140210/open-source-media-vs-open-source-media-inc | CC-MAIN-2015-11 | refinedweb | 3,425 | 62.48 |
I provided an overview of the BIM 360 Glue REST API and SDK last Friday and hinted at upcoming further exploration. Well, here it is already.
Due to Autodesk University and the world-wide developer conferences, I had to skip my last education day, but this stuff was too exciting to wait any longer :-)
So, unwilling to go for any length of time without trying out something new, I played a bit with the Glue API anyway.
For fun, I will describe stepping through the exploration of the Glue authentication process completely manually, using the Python programming language and a handy library that provides easier access to the REST API than you might have imagined possible. Here are the steps:
- Python and requests
- Get the Google page
- Access BIM 360 Glue
- Adding authentication
- Timestamp and MD5 digest
- More login credentials
- Successful authentication
Python and Requests
Looking for an easy way to manually interact with REST, I immediately turned to Python and found the requests library, which describes itself as an 'awesome Python HTTP library that's actually usable'. I would agree that is a fair assessment.
Get the Google Page
Here is an example showing how simple it is to issue an HTTP request from scratch, including launching the Python interpreter from the command line; basically, it uses one single line of code, calling the method requests.get with the desired URL:
$ python
Python 2.7.2 (default, Jun 20 2012, 16:23:33)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> r = requests.get('http://www.google.com')
>>> print r
<Response [200]>
>>> print r.content
'... <head> <meta content="Search the world\'s information, including webpages, images, videos and more. Google has many special features..." ...'
The REST response 200 is a typical HTTP status code and means OK.
Accessing BIM 360 Glue
Ok, so requesting the Google home page is simple. Let's try accessing BIM 360 in a similar manner.
Trying to access something, e.g. query the model services, immediately reacts:
>>> r = requests.get(u1)
>>> print r
<Response [400]>
Oops. Response codes in the 400 range indicate client errors. 400 itself stands for bad request.
Yes, of course, we need some authentication!
Time to start looking at the documentation. First, find out where it can be found at all. The starting point bim360.autodesk.com/api redirects us to bim360.autodesk.com/api/doc/index.shtml, providing human readable documentation and a link to the web services API documentation.
Adding Authentication
So, let's authenticate ourselves:
Looking at the Glue web services API documentation on creating a signed request, this requires some interesting bits and pieces besides the basic information, which consists of
- Company id
- API key
- API secret
The API key and secret need to be requested from Autodesk. Currently, there is no official developer program running for Glue. You can however buy a normal user account and ask for additional developer access based on that.
As we can see from the documentation, in addition to the API key and secret, plus the normal user account login credentials, the authentication requires a timestamp, more precisely a Unix epoch timestamp using GMT time, the number of seconds since the Unix epoch, January 1 1970 00:00:00 GMT.
The API key and secret are concatenated with the timestamp and encoded using an MD5 cryptographic hash to create a signature, which also has to be sent with the request.
Timestamp and MD5 Digest
Luckily, Python can easily support us in providing the timestamp and signature components.
The timestamp can be generated like this using the time module:
import time

def expires():
    '''return a UNIX style timestamp representing 5 minutes from now'''
    return int(time.time()+300)
The Python Standard Library Cryptographic Services includes the MD5 message digest algorithm 'md5', so that is also easily taken care of.
Following the example given in the Glue API documentation, I created the concatenation and digest of the following items:
- API key ddbf3f51b3824ecbb824ae4e65d31be4
- API secret 12345678901234567890123456789012
- Timestamp 1305568169
Here is the code doing that by hand, interacting with the interpreted environment:
>>> key='ddbf3f51b3824ecbb824ae4e65d31be4'
>>> secret='12345678901234567890123456789012'
>>> timestamp='1305568169'
>>> s=key+secret+timestamp
>>> s
'ddbf3f51b3824ecbb824ae4e65d31be4123456789012345678901234567890121305568169'
>>> import md5
>>> signature=md5.new(s)
>>> print signature
<md5 HASH object @ 0x10c0b5d30>
>>> print signature.hexdigest()
b3298cf0b4dc88450d00773b4449ba51
The hexadecimal digest exactly matches the signature string listed in the Glue documentation example, so we seem to be on the right track so far.
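As a side note, the md5 module used interactively above has been deprecated since Python 2.5 in favour of hashlib, which produces the same digest and works unchanged in Python 3 as well. Here is the same signature computed with hashlib using the documentation's sample values; the helper name glue_signature is mine for illustration, not part of the API:

```python
import hashlib

def glue_signature(api_key, api_secret, timestamp):
    # Glue request signature: MD5 hex digest of key + secret + timestamp
    payload = (api_key + api_secret + timestamp).encode('ascii')
    return hashlib.md5(payload).hexdigest()

# Sample values from the Glue API documentation example
sig = glue_signature('ddbf3f51b3824ecbb824ae4e65d31be4',
                     '12345678901234567890123456789012',
                     '1305568169')
print(sig)  # 32-character lowercase hex digest
```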
More Login Credentials
Studying the documentation further, we end up at the nitty-gritty internals of the Security Service: Login request, specifying the following full list of required parameters:
- format
- login_name
- company_id
- api_key
- api_secret
- timestamp
- sig
Actually, I intuitively fixed an error or two when transferring this list; e.g. the secret was not mentioned at this point, and the timestamp has a wrong description associated with it. So do what every programmer always has to do: ignore the documentation (but only some of it!), trust your own insight, take everything with a grain of salt, and use your brains, intuition and good taste.
By the way, the user name and password required here are the Autodesk id single sign-on credentials, also known as SSO, formerly Autodesk unique login or AUL.
I initially tried to use a GET request and was kindly informed by a suitable error message that I should be using POST instead.
Successful Authentication
I ran into a couple of other not unexpected issues as well, and finally ended up with this method to construct the authentication POST request:
import time
import md5
import requests

url = ''  # the Glue security service login URL

def bim_360_glue_authenticate(
    login_name, password, company_id, api_key, api_secret ):
  timestamp = str(int(time.time()))
  sig = md5.new(api_key + api_secret + timestamp).hexdigest()
  data = {
    'login_name' : login_name,
    'password' : password,
    'company_id' : company_id,
    'api_key' : api_key,
    'api_secret' : api_secret,
    'timestamp' : timestamp,
    'sig' : sig }
  r = requests.post(url, data=data)
  print r.status_code
  print r.headers['content-type']
  print r.content
This call succeeds and prints:
>>> bim_360_glue_authenticate( ... )
200
application/json; charset=UTF-8
{"auth_token":"b61d3ec10a7042cf884806e4e5a55601","user_id":"b2409a28-08b4-4bd4-a935-a6e33d5b030d"}
Again, 200 means OK, i.e. success. Hooray!
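The response body is JSON, so the token is easy to pull out with the stdlib json module (requests also offers r.json() for the same purpose). A sketch using the response shown above:

```python
import json

# The response body printed by the login call above
body = ('{"auth_token":"b61d3ec10a7042cf884806e4e5a55601",'
        '"user_id":"b2409a28-08b4-4bd4-a935-a6e33d5b030d"}')

session = json.loads(body)
print(session['auth_token'])  # b61d3ec10a7042cf884806e4e5a55601
```

Presumably, the auth_token is then attached to subsequent service requests, as described in the Glue web services documentation.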
Cool, huh?
There may be easier ways to achieve this, but hardly more instructive :-)
And as you can see, the interactive Python environment and rich library support really help a lot!
The resulting code is also pretty succinct, considering we are starting from absolute zero here.
How long would you have needed to explore and implement this in a compiled environment? | https://thebuildingcoder.typepad.com/blog/2012/12/bim-360-glue-rest-api-authentication-using-python.html | CC-MAIN-2020-34 | refinedweb | 1,056 | 53.61 |
SSL in Python 2.7
It has been almost two years since I wrote about the state of SSL in Python 2.6. If you haven’t read that yet, I suggest you read that first and then continue here, since I will mostly just be talking about things that have changed since then, or things that I have discovered since then.
The good news is that things have improved in the stdlib ssl module. The bad news is that it is still missing some critical pieces to make SSL secure.
Python 2.7 enables you to specify ciphers to use explicitly, rather than just relying on what comes by default with the SSL version selection. Additionally, if you compile the ssl module with OpenSSL 1.0 and later, using ssl.PROTOCOL_SSLv23 is safe (as in, it will not pick the insecure SSLv2 protocol) as long as you don't enable SSLv2-specific ciphers (see the ssl module documentation for details).
Servers
With that out of the way, there isn't really much difference in how you would write a simple SSL server with Python 2.7 compared to what I wrote in 2008. If you know your ssl module was compiled with OpenSSL 1.0 you can pick ssl.PROTOCOL_SSLv23 for maximum compatibility. Otherwise you should stick with an explicit version other than v2.
The documentation for the ssl module in 2.7 has improved a lot, and includes good sample code for servers here.
The M2Crypto code hasn't changed. The next M2Crypto release will add support for OpenSSL 1.0.
Clients
Checking the peer certificate's hostname is still the weak point of the ssl module. The SSL version selection situation has improved slightly, as explained above. Otherwise follow the example I wrote in 2008.
Again, the M2Crypto API hasn’t changed.
Lately I have been working with pycurl at Egnyte, so I decided to give a client example using that module.
import pycurl c = pycurl.Curl() c.setopt(pycurl.URL, '') c.setopt(pycurl.HTTPGET, 1) c.setopt(pycurl.SSL_VERIFYPEER, 1) c.setopt(pycurl.CAINFO, 'ca.pem') c.setopt(pycurl.SSL_VERIFYHOST, 2) try: c.perform() finally: c.close()
I am not a big fan of pycurl due to difficulties getting it compiled and the non-Pythonic API. But it is based on the very powerful curl library, so it comes full featured out of the box.
Other Resources
Besides the Python crypto libraries capable of doing SSL that I mentioned in my SSL in Python 2.6 article, I have found pycurl. Another find in the Python crypto front is cryptlib.
Mike Ivanov wrote a great series about crypto in Python: part 2, part 3 (link to part 1 seems to have rotted). Mike also produced a comparison of different Python crypto libraries (PDF).
The future is also looking bright for the ssl module. The upcoming Python 3.2 ssl module will include support for SSLContext objects, letting you set options for multiple SSL connections at once, selectively disable SSL versions, and check the OpenSSL version as well.
Comments

A simple SSL webserver in python 3 — 24, 2010, 12:44 am

Antoine — October 22, 2010, 2:07 pm:
Hello Heikki,
I hadn't read your blog post. Thank you for the feedback!
Best regards,
Antoine.
Line Animation in Java
x is changing. The rang of the
value is -500 to 600. The color of the
line is changing by setColor() method. To remove old color and set new
one or ...;
In this example we are creating an animated line. The color
div background color in html
div background color in html How to change the background color of DIV in HTML
Convert Black & White Image to Color Image
Convert Black & White Image to Color Image
How to convert a black and white photograph to
color?
This is an easy tutorial to learn easily the methods to convert blank
UITextfield Background Color
is ..it's showing a background color. Though my text field color is white.
Can you please suggest.. , how can i remove the background color of text field.
Thanks!
iPhone UITextField Background Color
Set the text field background color
select the foreground color for a label
select the foreground color for a label (Using JScrollBar) Write a program that uses scroll bars to select the foreground color for a label, Three... of the color. Use a title border on the panel that holds the scroll bars
Color Wheel - Java Beginners
Color Wheel Hai to all... Just want to need some code about the GUI, were using the java.lang and java.awt.. Were making a color wheel program..
Thank'z to all who help me...
GOD bless
Post your Comment | http://www.roseindia.net/discussion/33566-How-to-design-a-changing-color-of-the-chameleon-changing-color-of-the-chameleon-color-of-the-chameleon.html | CC-MAIN-2013-20 | refinedweb | 966 | 71.44 |
On Sun, 2 Mar 2003, Jeremy Kloth wrote: [...] > I bring up the bug reporting simply because I don't recall anyone reporting > that PyXML broke their Python installation until now. The cases you mention > are abstract, but without concrete details, I cannot reply otherwise. [...] That argument makes sense only when applied to a feature to be introduced in the future, in which case we can't know what effect it will have without considering concrete details. This discussion is about a feature already introduced (the _xmlplus hack), which we *know* causes problems (unless you don't believe your users). And we're suggesting replacing it with another system that is known to work: the normal Python namespace / import system. Hmm, I'm sounding a bit whiny here... maybe I should add that I appreciate all the work the XML sig has done, and that this import thing is of course a relatively tiny (though important) detail! John | https://mail.python.org/pipermail/xml-sig/2003-March/009123.html | CC-MAIN-2016-22 | refinedweb | 157 | 60.65 |
This blog post is going to go over creating a load test plug-in and show a few different ways you can use a plug-in to modify your load test. These plug-ins are a powerful extensibility point of the load test architecture. They give you the opportunity to do things such as change the selected test, change the userload, add data to the web test context, modify the test environment before the test runs or clean up the environment when the test completes.
1) Here are the following events that you can connect to:
a. Heartbeat – This event is raised every second. If you want to monitor some vairable and then modify user load based on that variable, this would be a good event to connect to.
b. LoadTestStarting – This event is raised when the load test starts. This is a good event for things such as setting up the envoironment or maybe starting logging for a report you might want to create.
c. LoadTestFinished – This event is raised when the load test completes. This is a good event for things such as cleaning up the envoironment or maybe stopping logging for a report you might want to create.
d. LoadTestAborted – This event is raised if a load test is aborted.
e. WarmupComplete – This event is raised if you are using a warmup period and when the warmup period completes.
f. TestSelected – This event is raised when a test has been selected to be executed. You can change what test is about to be executed from this event.
g. TestStarting – This event is raised when a test iteration is about to start. You can add items to the test context from this event.
h. TestFinished – This event is raised when a test completes. You get information about the test that was just executed.
i. ThresholdExceeded – This event is raised when a threshold rule has been exceeded. You are given information about the counter that caused the rule to fire.
2) Creating a LoadTest Plug-in
a. Open the test project that contains your load test.
b. Add a new class file
c. Add a using statement for
using Microsoft.VisualStudio.TestTools.LoadTesting;
d. Have the class implement the ILoadTestPlugin interface.
public class Class2 : ILoadTestPlugin
e. There is one method in the interface that you have to implement:
public void Initialize(LoadTest loadTest)
f. Here is an example plugin:
namespace TestProject3
{
public class PluginExample : ILoadTestPlugin
{
LoadTest m_loadTest;
#region ILoadTestPlugin Members
public void Initialize(LoadTest loadTest)
{
m_loadTest = loadTest;
}
#endregion
}
}
g. To set the plugin on your load test, first compile the plugin. Then right click on the root node of the load test and select “Set Load Test Plug-in…”. In the dialog that appears, select your plug-in.
Now let’s create some examples.
The first example we will create is storing extra information along with your load test. One common question we get is I would also like to store build number or something like that with my load tests. Then the users would like to be able to use this information in a report they are creating.
First we will create a separate table in the load test results store to hold the new info. Part of the key for all load test tables in the LoadTestRunId. Here is a simple table definition:
CREATE TABLE [dbo].[LoadTestBuildInfo](
[LoadTestRunId] [int] NOT NULL,
[Build] [nvarchar](100) NULL,
CONSTRAINT [PK_LoadTestBuildInfo] PRIMARY KEY CLUSTERED
(
[LoadTestRunId] ASC
) ON [PRIMARY]
So we need three things to write to this table. The connection string, the run id and the build. We will add the build information as a Load Test Context parameter. The connection string we will make an option parameter. The run id we will get from the database.
Unfortunately, we do not give you the runid through an API, so we will just query the LoadTestRun table for the max run id. Unless you have multiple runs going at once, this will be the value you need.
First here is a hlper class that contains the sql operations:
using System;
using System.Globalization;
using System.Data.SqlClient;
namespace ApiDemo
class SqlHelper
static string s_connectionString;
public static void SetConnectionString(string connectionString)
s_connectionString = connectionString;
public static int GetRunId()
string sql = "SELECT MAX(LoadTestRunId) FROM LoadTestRun";
using (SqlConnection connection = new SqlConnection(s_connectionString))
{
connection.Open();
using (SqlCommand sqlCommand = new SqlCommand(sql, connection))
{
using (SqlDataReader dataReader = sqlCommand.ExecuteReader())
{
if (dataReader.Read())
{
return dataReader.GetInt32(0);
}
}
}
}
throw new Exception("Unable to get the run id");
public static void StoreBuild(int id, string build)
string sql = string.Format("INSERT INTO LoadTestBuildInfo (LoadTestRunId,Build) Values ({0},'{1}')", id, build);
connection.Open();
sqlCommand.ExecuteNonQuery();
Second, here is the load test plugin:
static string s_connectionString = "Data Source=\".\\SQLEXPRESS\";Initial Catalog=LoadTest;Integrated Security=True";
m_loadTest = loadTest;
m_loadTest.LoadTestStarting += new System.EventHandler(m_loadTest_LoadTestStarting);
void m_loadTest_LoadTestStarting(object sender, System.EventArgs e)
//first get the connection string
if (m_loadTest.Context.ContainsKey("ConnectionString"))
SqlHelper.SetConnectionString(m_loadTest.Context["ConnectionString"].ToString());
else
SqlHelper.SetConnectionString(s_connectionString);
//now we need to get the runid.
int loadTestRunId = SqlHelper.GetRunId();
//now store the build info
if (m_loadTest.Context.ContainsKey("Build"))
SqlHelper.StoreBuild(loadTestRunId, m_loadTest.Context["Build"].ToString());
Now that you have the build stored with the run id, you can easily add this to any custom reports.
This sample will show you how to change the test that the engine has selected. Maybe you have a situation in which your test mix has 4 tests. After a certain amount of time, you no longer want to run Test4. So any time it is selected, you want to replace it with one of the other 3. You can do this from the TestSelected event. We will also use the heartbeat event to keep track of how much time has been executed. In the heartbeat event, we will increase a counter once warm-up is complete. Then in the TestSelected event, after 120 seconds, we will always switch WebTest1 with WebTest2. You can make this more intelligent by selecting a test from the list of tests in the scenario specified in the event args.
using System.Collections.Generic;
using System.Linq;
public class PluginExample2 : ILoadTestPlugin
int count = 0;
m_loadTest.TestSelected += new System.EventHandler<TestSelectedEventArgs>(m_loadTest_TestSelected);
m_loadTest.Heartbeat += new System.EventHandler<HeartbeatEventArgs>(m_loadTest_Heartbeat);
}
void m_loadTest_Heartbeat(object sender, HeartbeatEventArgs e)
if(e.IsWarmupComplete)
count++;
void m_loadTest_TestSelected(object sender, TestSelectedEventArgs e)
if (count > 120 && e.TestName.Equals("WebTest1"))
{
e.TestName = "WebTest2";
}
This sample will show you how you can change the user load of a running load test. You can create your own custom load profiles with this extensibility point. In this example we will create a plugin which reads the load from a text file and then changes the load of the running test to that value.
The file name that stores the user load can be set from a load test context parameter. Then we will read that file at every heartbeat event, and change the load to that value. Another good demo of this would be to create a simple winforms app that writes the new user load to a file or registry location. Then have the plugin read from that location. Then you could turn a knob or enter the user load that way. With the way this sample is written, you would simply add a number to the text file and save it. You can change the value while the test is running.
using System.IO;
public class PluginSample3 : ILoadTestPlugin
string m_fileName;
//initialize the file name
if (m_loadTest.Context.ContainsKey("FileName"))
m_fileName = m_loadTest.Context["FileName"].ToString();
m_fileName = @"c:\Userload.txt";
int load = GetUserLoadFromFile();
if (load != -1)
m_loadTest.Scenarios[0].CurrentLoad = load;
private int GetUserLoadFromFile()
int newLoad = -1;
try
using (StreamReader streamReader = new StreamReader(m_fileName))
string load = streamReader.ReadToEnd();
try
if (!string.IsNullOrEmpty(load))
newLoad = int.Parse(load);
}
catch (FormatException)
//ignore
}
catch (IOException)
//ignore
return newLoad;
Here is a picture of my userload. I start at 25, up to 52, down to 7 and back to 25.
This sample will show you how you can modify the test context that is being passed to a test. You do this from the test starting event. Maybe there is a value you are reading in from a database or some other value that you want to pass into each test. Another problem is that load test context parameters are not copied to test context of unit tests. This plug-in would copy those values:
using System.Text;
namespace Blog
public class CopyParamtersPlugin : ILoadTestPlugin
{
//store the load test object.
LoadTest mLoadTest;
mLoadTest = loadTest;
//connect to the TestStarting event.
mLoadTest.TestStarting += new EventHandler<TestStartingEventArgs>(mLoadTest_TestStarting);
void mLoadTest_TestStarting(object sender, TestStartingEventArgs e)
//When the test starts, copy the load test context parameters to
//the test context parameters
foreach (string key in mLoadTest.Context.Keys)
e.TestContextProperties.Add(key, mLoadTest.Context[key]);
}
I hope these example show some of the power of the load test plugins. Here are a few more helpful links on plugins:
MSDN Help :
Bill Barnett’s Blog: | http://blogs.msdn.com/b/slumley/archive/2009/04/10/load-test-plug-ins.aspx | CC-MAIN-2013-48 | refinedweb | 1,492 | 58.38 |
Using VS2015 Update 2. Here is the basic project structure:
App.iOS.csproj
- Main platform dependent project. Platform dependent service implementations...
App.shproj
- Shared project referenced from App.iOS.csproj defining most of the app. XAML files, Images, code, Service interfaces used via DependencyService
App.Forms.csproj
- Portable Forms library for some custom Forms controls and shared UI logic
App.Data.csproj
- Portable shared business object library also used on the backend
App.Client.csproj
- Portable client access library that calls into the service backend
All my XAML and most app code is defined in the App.shproj shared project. I can't seem to access anything via syntax assist defined in App.Forms.csproj, App.Data.csproj or App.Client.csproj. I can get to classes defined in App.iOS.csproj so for example:
App.iOS.Application (the default container for Main() in the Xamarin Forms templates) is accessible via syntax assist from any cs file in the shared project. When I try to access App.Client.AppClient (a simple wrapper around HttpClient) all I get is
The type or namespace 'Client' does not exist in the namespace 'App'
Same if I add a 'using App.Client' at the top of the cs file. In short VS doesn't seem to be able to see those solution libraries from the shared project even tho they are being referenced from the App.iOS.csproj (which references the shared project).
Now this all compiles just fine. No compile time problems at all. I guess when the Xamarin Mac Agent compiles the solution it properly assembles all the shared files and everything resolves correctly. This also was working fine in Xamarin Studio on my Mac before I moved to VS.
Some hacks I tried:
1) Added ProjectReference elements to the App.projitems file that point at the 3 projects I care to reference from the shared project
2) Added ProjectReference elements to the App.shproj file that point at the 3 projects I care to reference from the shared project
3) Tried restructuring the whole thing converting the shared project to a portable project but there were just too many build errors to overcome especially since the shared project solution builds just fine.
More interesting tidbits:
1) Most of my Page level classes derive from a class called App.Forms.AppPage (which itself derives from Xamarin.Forms.Page) defined in the App.Forms.csproj. When editing the code behind of any pages defined in the shared project VS seems to know that X.F.Page is the ultimate base class and presents members from that class but doesn't make any of the members defined by App.Forms.AppPage available
2) Syntax assist does in fact seem to work properly for a small amount of time after the solution is initially loaded. So all the members defined in all the other solution projects are resolvable and syntax assist presents them to me while typing.
3) During the time after the solution is initially loaded and syntax assist appears to work none of the XAML defined members (InitializeComponent, anything with an x:Name in the XAML file) seems to be resolvable.
4) I think that performing a build seems to be the trigger that switches between syntax assist resolving things properly and not but can't be totally sure that is the only trigger.
Be really, really awesome to get syntax assist working here, I just can't seem to figure out how to make VS do what I expect it to do naturally and not sure if I got myself into this situation or if this is a product of churn due to the MSFT deal. Thanks for any help. Sorry for the essay | https://forums.xamarin.com/discussion/65228/trouble-using-classes-in-a-shared-project-from-portable-library-in-same-solution | CC-MAIN-2019-43 | refinedweb | 622 | 65.32 |
I was reading Coldfused about AdBlock Plus, a Firefox plug-in that blocks ads and other elements of websites while you surf. I remembered that since upgrading to Firefox 2.0.0.11, my regular AdBlock plug-in is apparently not compatible. AdBlock Plus is.
Upon installing and giving her a run, I noticed a video player application I’m writing for work started throwing an exception I hadn’t seen before. We’re using Dart for Publishers via a proxy company for one particular project, which still requires a URL request on the client to DoubleClick’s ad servers, ad.doubleclick.net, with 50 billion parameters on the URL. DoubleClick’s one of the biggest ad services on the net, one of the reasons Google bought ‘em I’m sure. Therefore, they are also one of the filters that AdBlock Plus looks for and actively blocks. Any calls to doublelclick anything fail since it’s stopped at the browser level.
This causes an IOError for URLLoader’s in ActionScript 3. I reckon this’ll throw a FaultEvent for HTTPService in Flex, but if you’re doing boiler-plate coding in either ActionScript only projects or Flash CS3 using URLLoader, you’ll need to write code to handle this in case any of your clients have ad blocking software installed. We already have a lot of exception handling on our own stuff since we develop on development & QA servers first and things can get wonky when the server guys update their stuff and something goes awry… or our client code is coded wrong. The only error’s we had seen in the Ad stuff was when the DoubleClick servers were configured wrong at a client site, and you’d get back a 1 pixel by 1 pixel transparent GIF in HTML instead of the XML you were expecting. In that case, we’d just dispatch a custom error event in the parsing function. This rarely happened and our error code was sound.
…so I thought until I installed AdBlock Plus and re-tested our app. For the record, you should be doing at least this level of exception handling, if not more if you are doing any coding with classes that throw errors. In AS2 it was ok if things blew up; no one knew. Not the user, nor you. In AS3, however, the recovery isn’t always as good; a lot of times the code will just abort the whole stack. Since you can’t do this code on _root:
try { runSWF(); } catch(err:Error) { // something somewhere blew up good in my appz0r showErrorScreenMovieClip(); }
You need catch everything possible if you want to ensure your code has any hope of continuing. The whole point of exceptions, according to Java developers, is to catch the exception, and handle as best you can. Obviously, a lot of exceptions leave you with no options. If you own the code, you can show an error screen, and if your code is going into something else, you can dispatch an error event. That’s about it for some exceptions.
For those creating applications that need to show ads inside their Flash & Flex apps, you need to assume you’re entering a hostile landscape where people are going to do everything they can to block your ads. If your policy is that ads drive the revenue of your business, you can abort everything when an exception is thrown. If your policy is to not trust 3rd party servers that cost 5 figures but don’t even allow you to do a test movie in the Flash IDE, then you log the error, and just dispatch an error so the rest of your code can move on with life.
Below is some pseudo code that shows how you write near exception-proof code that hits some URL. The only thing missing is a timeout error. Writing a timer is complicated and controversial, so I’ll skip that part and assume if you get a timeout, you destroy the URLLoader, remove the listeners, and just dispatch a custom error indicating a timeout.
// create our loader adLoader = new URLLoader(); adLoader.addEventListener(Event.COMPLETE, onLoaded); adLoader.addEventListener(IOErrorEvent.IO_ERROR, onIOError); adLoader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onSecurityError);req = new URLRequest(“ad.doubleclick.net/coldcrazyparams”); // attempt to load try { adLoader.load(req); } catch(err:Error) { // Either HTTP Sniffer like Charles is running, // your VPN is offline, or your interwebs are silly. dispatchEvent(new Event(“adError”)); } // The host is ad-blocked or cannot be reached. protected function onIOError(event:IOErrorEvent):void { dispatchEvent(new Event(“adError”)); } // OT OT OT protected function onSecurityError(event:SecurityErrorEvent):void { dispatchEvent(new Event(“adError”)); } protected function onLoaded(event:Event):void { // all is well in the land of Hannah frikin’ Lee // show ads, make bling, drink & celebrate }
Don’t forget to put [Event] metadata tags at the top, so all people using your class (including you) get code hints in FlexBuilder for addEventListener. Usually, there should be at least 2 events in my opinion; complete and error. Since we don’t have throwable like Java, exceptions break the heck out of encapsulation, and events are both synchronous and asynchronous; events for the win.
Side Rant
Unfortunately, what the code above can’t do is dodge the dreaded VerifyError, aka invalid register accessed etc etc. You get this if your SWF gets corrupted in some way. I got one of these with Flex Builder 3 Beta 2. At first, I tracked it down do a nested XML namespace. Later, it turned out there was “magic funk” around the error function itself I routed exceptions to. I noticed that older CSS files had all of these errors for every style from a Flex 2 project. If I re-wrote them the exact same, letter for letter, they were fine. So, copied and pasted the whole thing to Notepad, and then back… and everything was fine. WTF? UTF-8 demon infestation? Anyway, tried same theory here; re-wrote function a good 5 blank lines away and low and behold, no more VerifyError. I’m hoping it was just a Beta 2 problem.
Anyway, Flex Builder 3 Beta 3 rules so far. | http://jessewarden.com/2007/12/exception-handling-in-actionscript-3-for-adblock-plus.html/trackback | crawl-001 | refinedweb | 1,028 | 61.97 |
DAG Runs¶
A DAG Run is an object representing an instantiation of the DAG in time.
Each DAG may or may not have a schedule, which informs how DAG Runs are
created.
schedule_interval is defined as a DAG argument, and receives
preferably a
cron expression as
a
str, or a
datetime.timedelta object.
Tip
You can use an online editor for CRON expressions such as Crontab guru
Alternatively, you can also use one of these cron “presets”.
Cron Presets¶
Your DAG will be instantiated for each schedule along with a corresponding DAG Run entry in the database backend.
Note
If you run a DAG on a schedule_interval of one day, the run stamped 2020-01-01
will be triggered soon after 2020-01-01T23:59. In other words, the job instance is
started once the period it covers has ended. The
execution_date available in the context
will also be 2020-01-01.
The first DAG Run is created based on the minimum
start_date for the tasks in your DAG.
Subsequent DAG Runs are created by the scheduler process, based on your DAG’s
schedule_interval,
sequentially. If your start_date is 2020-01-01 and schedule_interval is @daily, the first run
will be created on 2020-01-02 i.e., after your start date has passed. interval that has not been run since the last execution date (or has been cleared). This concept is called Catchup.
If your DAG import DAG from airflow.operators.bash_operator import BashOperator from datetime import datetime, timedelta default_args = { 'owner': 'Airflow', 'depends_on_past': False, 'email': ['airflow@example.com'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=5) } dag = DAG( 'tutorial', default_args=default_args, start_date=datetime(2015, 12, 1), description='A simple tutorial DAG', schedule_interval='@daily', catchup=False) backfill -s START_DATE -e it for the
scheduled date. Clearing a task instance doesn’t delete the task instance record.
Instead, it updates
max_tries to
0 and set the current task instance state to be
None, this forces current DAG’s execution date
Future - All the instances of the task in the runs after the current DAG’s execution date
Upstream - The upstream tasks in the current DAG
Downstream - The downstream tasks in the current DAG
Recursive - All the tasks in the child DAGs and parent DAGs
Failed - Only the failed tasks in the current DAG
You can also clear the task through CLI using the command:
airflow clear dag_id -t task_regex -s START_DATE -d END_DATE
For the specified
dag_id and time interval, the command clears all instances of the tasks matching the regex.
For more options, you can check the help of the clear command :
airflow clear -h
External Triggers¶
Note that DAG Runs can also be created manually through the CLI. Just run the command -
airflow trigger_dag -e execution_date run_id
The DAG Runs created externally to the scheduler get associated with the trigger’s timestamp and are displayed
in the UI alongside scheduled DAG runs. The executionRun as a JSON blob.
Example of a parameterized DAG:
from airflow import DAG from airflow.operators.bash_operator import BashOperator from airflow.utils.dates import days_ago dag = DAG("example_parametrized_dag", schedule_interval=None, start_date=days_ago(2)) parameterized_task = BashOperator( task_id='parameterized_task', bash_command="echo value: {{ dag_run.conf['conf1'] }}", dag=dag, )
Note: The parameters from
dag_run.conf can only be used in a template field of an operator. | https://airflow.apache.org/docs/apache-airflow/1.10.13/dag-run.html | CC-MAIN-2022-21 | refinedweb | 549 | 52.49 |
Start Lecture #01
I start at 0 so that when we get to chapter 1, the numbering will agree with the text.
There is a web site for the course. You can find it from my home page, which is listed above, or from the department's home page.
The "Start Lecture #01" marker above can be thought of as "End Lecture #00".
The course has several texts. The material on C is standard, but the order of presentation is not. A difference between this book and K&R is that the latter starts with low-level I/O (read and print one character), whereas this book starts with a higher-level approach.
This text covers the "Computer Organization" portion of the course. It is required.
Grades are based on the labs and exams; the weighting will be
approximately
20%*LabAverage + 35%*MidtermExam + 45%*FinalExam (but see homeworks below).
I make a distinction between homeworks and labs.
Labs are
Homeworks are
Homeworks are numbered by the class in which they are assigned. So any homework given today is homework #1. Even if I do not give homework today, any homework assigned next class would be homework #2. So the homework present in the notes for lecture #n is homework #n.
You may develop (i.e., write and test) lab assignments on any system you wish, e.g., your laptop. However, ...
NYU Classes.
This will be covered in the recitations.
I feel it is important for CS students to be familiar with basic
client-server computing (related to
cloud computing) in which
one develops software on a client machine (for us, most likely one's
personal laptop), but runs it on a remote server (for us,
linserv1.cims.nyu.edu).
This requires three steps.
I have supposedly given you each an account on linserv1 (and access), which takes care of step 1. Accessing linserv1 and access is different for different client (laptop) operating systems.
Your laptop cannot "see" linserv1, but can "see" access. So from outside (you play the Duo/MFA game and) log into access; then you are inside.
If you receive a message from linserv).
Good methods for obtaining help include
This course uses (and teaches) the C programming language. You may write your labs in C or C++, but we will not teach the latter. Moreover C, but not C++, will appear on exams.
Incomplete
The rules for incompletes and grade changes are set by the school and not the department or individual faculty member.
The rules set by CAS can be found in <>, which states:
Remark: The chapter/section numbers for the material on C, agree with Kernighan and Plauger. However, the material is quite standard so, as mentioned before, if you already own a C book that you like, it should be fine.
Since Java includes much of C, my treatment can be very brief for the parts in common (e.g., control structures).
You should be reading the first few chapters of K&R or Dive into Systems for the next few lectures.
C programs consist of functions, which contain statements, and variables, the latter store values.
Hello WorldFunction
#include <stdio.h>

main() {
    printf("Hello, world\n");
}
Although this program works, the second line should really be
int main(int argc, char *argv[]) {
I know this looks weird for now but remember how long it took you to really understand
public static void main (String[] args)
Like Java.
Like Java
The program on the right is trivial. However, I wish to use it to introduce lvalues and rvalues. Each variable (in this program x and y) has two values associated with it: its address and the contents of that address. The latter is often called the value of the variable.
main() {
    int x=5, y=8;
    y = x+2;
}
Consider the program's assignment statement. To evaluate the right hand side (RHS) we need to know that the value of x is 5; we are not interested in knowing the address in which this 5 is stored. This value, 5, is called the rvalue of x because it is what is needed when x occurs on the RHS. In contrast the fact that 8 is the rvalue of y is not relevant since y does not occur on the RHS.
The LHS contains just y. But the fact that y has the value (specifically the rvalue) 8, is not relevant. What is relevant is the address of y since that is where the system must store the 7 that results from the addition. The address of y is called its lvalue since it is what is needed when y occurs on the LHS.
#include <stdio.h>

main() {
    int n = 0, *pn;
    pn = &n;
    *pn = 33;
    printf("n = %d\n", n);
}
This idea of addresses is a central theme of CSO because it is one key in understanding how Computer Systems are Organized.
The program on the far right is actually correct and prints "n = 33".
The beginning of an explanation is the diagram on the near right: pn is a pointer to n; the (r)value of pn is the lvalue (aka the address) of n.
#include <stdio.h>

main() {
    int F, C;
    int lo = 0, hi = 300, incr = 20;
    for (F = lo; F <= hi; F += incr) {
        C = 5 * (F - 32) / 9;
        printf("%d\t%d\n", F, C);
    }
}
A width in the conversion specification (e.g., %3d) reserves the right amount of space to print the corresponding argument.
#include <stdio.h>

#define LO   0
#define HI   300
#define INCR 20

main() {
    int F;
    for (F = LO; F <= HI; F += INCR)
        printf("%3d\t%5.1f\n", F, (F-32)*(5.0/9.0));
}
The simplest (i.e., most primitive) form of character I/O is getchar() and putchar(), which read and print a single character.
Both getchar() and putchar() are declared in stdio.h.
#include <stdio.h>

main() {
    int c;
    while ((c = getchar()) != EOF)
        putchar(c);
}
File copy is conceptually trivial: getchar() a char and then putchar() this char until eof. The code is on the right and does require some comment despite its brevity.
Note the extra parens in (c = getchar()) != EOF, which are definitely not extra: != binds more tightly than =, so without them c would be assigned the result of the comparison, not the character read.
Homework: (1-7) Write a (C-language) program to print the value of EOF. (This is 1-7 in the book but I realize not everyone will have the book so I will type the problems into the notes.)
Homework: Write a program to copy its input to its output, replacing each string of one or more blanks by a single blank.
while (getchar() != EOF)
    ++numChars;

for (numChars = 0; getchar() != EOF; ++numChars)
    ;
This is essentially a one-liner, which I have written in two different ways: once with a while loop and once with a for loop.
Now we need two tests: end-of-line and end-of-input. Perhaps the following is really a two-liner, but it does have only one semicolon.
while ((c = getchar()) != EOF)
    if (c == '\n')
        ++numLines;
So if a file has no newlines, it has no lines. Demo this with echo -n "hello" >noEOF
The Unix wc program prints the number of characters, words, and lines in the input. It is clear what the number of characters means. The number of lines is the number of newlines (so if the last line doesn't end in a newline, it doesn't count). The number of words is less clear. In particular, what should be the word separators?
#include <stdio.h>

#define WITHIN  1
#define OUTSIDE 0

main() {
    int c, num_lines, num_words, num_chars;
    int within_or_outside = OUTSIDE;
    num_lines = num_words = num_chars = 0;
    while ((c = getchar()) != EOF) {
        ++num_chars;
        if (c == '\n')
            ++num_lines;
        if (c == ' ' || c == '\n' || c == '\t')
            within_or_outside = OUTSIDE;
        else if (within_or_outside == OUTSIDE) {  // starting a word
            ++num_words;
            within_or_outside = WITHIN;
        }
    }
    printf("%d %d %d\n", num_lines, num_words, num_chars);
}
Homework: (1-12) Write a program that prints its input one word per line.
Start Lecture #02
Remark: Class accounts on linserv1.
The first round of class accounts for 201-003 and 202-002 was created tonight, and students will receive a welcome message if they are getting a new account. Any student who previously had a CIMS account (whether or not it is active) will not get an email, but their account will be adjusted as necessary for use in your class. The password reset link may be useful especially to those students:

We will re-run the class account creation scripts daily until the drop deadline.

Thanks,
Shirley
Remark: The tutor has revised the hours. The hours listed in section 0.10 of these notes have been updated.
We are hindered in our examples because we don't yet know how to input anything other than characters and haven't yet written the program to convert a string of characters into an integer (easy) or (significantly harder) a floating point number.
#include <stdio.h>

#define N 10  // imagine you read in N

main() {
    int i;
    float x, sum = 0, mu;
    for (i = 0; i < N; i++) {
        x = i;  // imagine you read in x
        sum += x;
    }
    mu = sum / N;
    printf("The mean is %f\n", mu);
}
#include <stdio.h>

#define N 10      // imagine you read in N
#define MAXN 1000

main() {
    int i;
    float x[MAXN], sum = 0, mu;
    for (i = 0; i < N; i++) {
        x[i] = i;  // imagine you read in x[i]
    }
    for (i = 0; i < N; i++) {
        sum += x[i];
    }
    mu = sum / N;
    printf("The mean is %f\n", mu);
}
#include <stdio.h>
#include <math.h>

#define N 5       // imagine you read in N
#define MAXN 1000

main() {
    int i;
    double x[MAXN], sum = 0, mu, sigma;
    for (i = 0; i < N; i++) {
        x[i] = i;  // imagine you read in x[i]
        sum += x[i];
    }
    mu = sum / N;
    printf("The mean is %f\n", mu);
    sum = 0;
    for (i = 0; i < N; i++) {
        sum += pow(x[i]-mu, 2);
    }
    sigma = sqrt(sum/N);
    printf("The std dev is %f\n", sigma);
}
I am sure you know the formula for the mean (average) of N numbers: add the numbers and divide by N. The mean is normally written μ. The standard deviation is the RMS (root mean square) of the deviations from the mean; it is normally written σ. Symbolically, we write μ = ΣXi/N and σ = √(Σ(Xi-μ)²/N). (When computing σ we sometimes divide by N-1, not N. Ignore the previous sentence.)
The first program on the right naturally reads N, then reads N numbers, and finally computes the mean of the latter. There is a problem; we don't know how to read numbers.
So I faked it by having N a symbolic constant and making x[i]=i.
I do not like the second version with its gratuitous array. It is (a little) longer, slower, and more complicated. Much worse, it takes space (i.e., requires memory) proportional to N, for no reason. Hence it might not run at all for large N on small machines. However, I have seen students write such programs. Apparently, there is an instinct to use a three-step procedure for all programming assignments: read all the data into an array, then process the array, then print the results.
But that is silly if, as in this example, you no longer need each value after you have read the next one.
The last example is a good use of arrays for computing the standard deviation using the RMS formula above. We do need to keep the values around after computing the mean so that we can compute all the deviations from the mean and, using these deviations, compute the standard deviation.
Note that, unlike Java, no use of new (or the C analogue malloc()) appears.
Arrays declared as in this program have a lifetime of the routine in which they are declared. Specifically sum and x are both allocated when main is called and are both freed when main is finished.
Note the declaration double x[MAXN] in the third version. In C, to declare a complicated variable (i.e., one that is not a primitive type like int or char), you write what has to be done to the variable to get one of the primitive types.
In C if we have int X[10]; then writing X in your program is the same as writing &X[0]. & is the address-of operator. More on this later when we discuss pointers.
There is of course no limit to the useful functions one can write. Indeed, the main() programs we have written above are all functions.
#include <stdio.h>

// Determine letter grade from score
// Demonstration of functions
char letter_grade(int score) {
    if (score >= 90) return 'A';
    else if (score >= 80) return 'B';
    else if (score >= 70) return 'C';
    else if (score >= 60) return 'D';
    else return 'F';
}  // end function letter_grade
main() {
    short quiz;
    char grade;
    quiz = 75;  // should read in quiz
    grade = letter_grade(quiz);
    printf("For a score of %3d the grade is %c\n", quiz, grade);
}  // end main

cc -o grades grades.c; ./grades
For a score of 75 the grade is C
A C program is a collection of functions (and global variables). Exactly one of these functions must be called main and that is the function at which execution begins.
One important issue is type matching. If a function f takes one int argument and f is called with a short, then the short must be converted to an int. Since this conversion is widening, the compiler will automatically coerce the short into an int, providing it knows that an int is required.
It is fairly easy for the compiler to know all this providing f() is defined before it is used, as in the code on the right.
We see on the right a function letter_grade defined. It has one int argument and returns a char.
Finally, we see the main program that calls the function.
The main program uses a short to hold the numerical grade and then calls the function with this short as the argument. The C compiler generates code to coerce this short value to the int required by the function.
// Average and sort array of random numbers
#include <stdio.h>
#include <stdlib.h>  // for rand()

#define NUMELEMENTS 50

void sort(int A[], int n) {  // sorts into decreasing order
    int temp;
    for (int x = 0; x < n-1; x++)
        for (int y = x+1; y < n; y++)
            if (A[x] < A[y]) {
                temp = A[x];
                A[x] = A[y];
                A[y] = temp;
            }
}

double avg(int A[], int n) {
    int sum = 0;
    for (int x = 0; x < n; x++)
        sum = sum + A[x];
    return ((double)sum / n);  // cast avoids integer division
}

main() {
    int table[NUMELEMENTS];
    double average;
    for (int x = 0; x < NUMELEMENTS; x++) {
        table[x] = rand();
        printf("The elt in pos %d is %d\n", x, table[x]);
    }
    average = avg(table, NUMELEMENTS);
    printf("The average is %5.1f\n", average);
    sort(table, NUMELEMENTS);
    for (int x = 0; x < NUMELEMENTS; x++)
        printf("The element in position %3d is %3d\n", x, table[x]);
}
The next example illustrates a function that has an array argument.
Remember that in a C declaration you decorate the item being declared with enough stuff (e.g., [], *) so that the result is a primitive type such as int, double, or char.
The function sort has two parameters; the second one, n, is simply an int. The parameter A, however, is more complicated. It is the kind of thing that when you take an element of it, you get an int. That is, A is an array of ints.
Unlike the array example in section 1.6, A does not have an explicit upper bound on its index. This is because the function can be called with arrays of different sizes. Since the function needs to know the size of the array (look at the for loops), a second parameter n is used for this purpose.
This example has two function calls: main calls both avg and sort. Looking at the call from main to sort we see that table is assigned to A and NUMELEMENTS is assigned to n. Looking at the code in main itself, we see that indeed NUMELEMENTS is the size of the array table and thus in sort, n is the size of A.
All seems well provided the called function appears before the function that calls it. Our examples have followed this convention.
So far so good; but if f calls g and (recursively) g calls f, we are in trouble. How can we have f before g, and also have g before f?
This will be answered very soon.
#include <stdio.h>

int f(int a, int b) {
    a = a + b;
    return a;
}
main() {
    int x = 10;
    int y = 20;
    int ans;
    ans = f(x, y);
}
Arguments in C are passed by value (the same as Java does for arguments that are not objects).
The simple example on the right illustrates a few points. First, some terminology. The variables a and b in f() are called parameters of f() whereas, x and y are called arguments of the call f(x, y).
When main() calls f() the values in the arguments are copied into the corresponding parameters. However, when f() returns, the values now in the parameters are NOT copied back to the arguments. This explains why the value in ans differs from the final value in x.
Try to avoid the fairly common error of assuming Copy-in AND Copy-out semantics.
Unlike Java, C does not have a string datatype. A string in C is an array of chars. String operations like concatenate and copy (assignment) become functions in C. Indeed there are a number of standard library routines that act on strings.
Strings in C are null terminated. That is, a string of length 5 actually contains 6 characters: the 5 characters of the string itself and a sixth character '\0' (called null) indicating the end of the string. This is a big deal.
Our goal is a program that reads lines from the terminal, converts them to C strings by appending '\0', and prints the longest line found. Pseudo code would be
while (more lines)
    read line
    if (line longer than previous longest)
        save line and its length
print the saved line
Thus we need the ability to read in a line and the ability to save a line. We write two functions getLine() and copy() for these tasks (the book uses getline (all lower case), but that doesn't compile for me since there is a library routine in stdio.h with the same name and different signature).
#include <stdio.h>

#define MAXLINE 1000

int getLine(char line[], int maxline);
void copy(char to[], char from[]);
int main() {
    int len, max;
    char line[MAXLINE], longest[MAXLINE];
    max = 0;
    while ((len = getLine(line, MAXLINE)) > 0)
        if (len > max) {
            max = len;
            copy(longest, line);
        }
    if (max > 0)
        printf("%s", longest);
    return 0;
}
int getLine(char s[], int lim) {
    int c, i;
    for (i = 0; i < lim-1 && (c = getchar()) != EOF && c != '\n'; ++i)
        s[i] = c;
    if (c == '\n') {
        s[i] = c;
        ++i;
    }
    s[i] = '\0';
    return i;
}
void copy(char to[], char from[]) {
    int i;
    i = 0;
    while ((to[i] = from[i]) != '\0')
        ++i;
}
Given the two supporting routines, main is fairly simple, needing only a few small comments.
C requires declare (or define) before use, so either main would have to come last or the declarations are needed. Since only main uses the routines, the declarations could have been put in main, but it is common practice to put them outside as shown. Although these routines are not recursive (and hence we could have placed the called routines before the caller), declarations like the ones shown are needed for recursive routines.
This function is discussed further in recitation.
The line is returned in the parameter s[]; the function itself returns the length. The for continuation condition in getLine is rather complex. (Note that the for loop has an empty body; the entire action occurs in the for statement itself.) The condition part of the for tests for 3 situations. Perhaps it would be clearer if the test was simply i<lim-1 and the rest was done with if-break statements inside the loop.
In C, if you write f(x)+g(y)+h(z) you have no guarantee of the order in which the functions will be invoked. (Thus the program would be non-deterministic if g() modified something used by f().) However, the && and || operators do guarantee left-to-right ordering, to enforce short-circuit evaluation of conditions. This ordering is important here since the test for '\n' must be performed after the getchar() has assigned its value to c.
The copy() function is declared and defined to return void.
Homework: Simplify the for condition in getline() as just indicated.
Note: Homework #1 is on NYU Classes.
#include <stdio.h>
#include <math.h>

#define A +1.0  // should read
#define B -3.0  // A,B,C
#define C +2.0  // using scanf()

void solve(float a, float b, float c);

int main() {
    solve(A, B, C);
    return 0;
}

void solve(float a, float b, float c) {
    float d;
    d = b*b - 4*a*c;
    if (d < 0)
        printf("No real roots\n");
    else if (d == 0)
        printf("Double root is %f\n", -b/(2*a));
    else
        printf("Roots are %f and %f\n",
               ((-b)+sqrt(d))/(2*a), ((-b)-sqrt(d))/(2*a));
}
#include <stdio.h>
#include <math.h>

#define A +1.0  // main() should
#define B -3.0  // read A,B,C
#define C +2.0  // using scanf()

void solve(void);          // declaration of solve()
float a, b, c;             // definitions

int main() {               // definition of main()
    extern float a, b, c;  // declarations
    a = A; b = B; c = C;
    solve();
    return 0;
}

void solve() {             // definition of solve()
    extern float a, b, c;  // declarations
    float d;
    d = b*b - 4*a*c;
    if (d < 0)
        printf("No real roots\n");
    else if (d == 0)
        printf("Double root is %f\n", -b/(2*a));
    else
        printf("Roots are %f and %f\n",
               ((-b)+sqrt(d))/(2*a), ((-b)-sqrt(d))/(2*a));
}
The two programs on the right find the real roots (no imaginary numbers) of the quadratic equation
ax² + bx + c = 0
They proceed by using the standard technique of first calculating the discriminant
d = b² - 4ac
Since these programs deal only with real roots, they punt when d<0.
The programs themselves are not of much interest. Indeed, a Java version would be too easy to be a midterm exam question in 101. Our interest is confined to the method by which the coefficients a, b, and c are passed from the main() function to the helper routine solve().
The main() function calls a function solve() passing it as arguments the three coefficients, A,B,C.
There is little to say. Method 1 is a simple program and uses nothing new.
The second main() program communicates with solve() using external variables rather than arguments/parameters.
C requires declare (or define) before use. If you define before using, you don't need to also declare. But if you have recursion (f() calls g() and g() calls f()), you can't have both definitions before the corresponding uses, so you must declare at least one of the functions before it is used.
Similar to Java: A variable name must begin with a letter and then can use letters and numbers. An underscore is a letter, but you shouldn't begin a variable name with one since that is conventionally reserved for library routines. Keywords such as if, while, etc are reserved and cannot be used as variable names.
C has very few primitive types.
An int has the natural size of an integer on the host machine.
There are qualifiers that can be added. One pair is long/short, which are used with int. Typically short int is abbreviated short and long int is abbreviated long.
long must be at least as big as int, which must be at least as big as short.
There is no short float, short double, or long float. The type long double specifies extended precision.
The qualifiers signed or unsigned can be applied to char or any integer type. They basically determine how the sign bit is interpreted. An unsigned char uses all 8 bits for the integer value and thus has a range of 0–255; whereas a signed char has an integer range of -128–127.
Note: We will have much more to say about data types, e.g., signed and unsigned, next month after we finish our treatment of C.
A normal integer constant such as 123 is an int, unless it is too big in which case it is a long. But there are other possibilities.
Although there are no string variables in C, there are string constants, written as zero or more characters surrounded by double quotes. A null character '\0' is automatically appended.
Alternative method of assigning integer values to symbolic names.
enum Boolean {false, true};  // false is zero, true is 1
enum Month {Jan=1, Feb, Mar, Apr, May, Jun,
            Jul, Aug, Sep, Oct, Nov, Dec};
Perhaps they should be called definitions since space is allocated.
Similar to Java for scalars.
int x, y; char c; double q1, q2;
(Stack allocated) arrays are simple since the entire array is allocated not just a reference (no new/malloc required).
int x[10];
Initializations may be given.
int x=5, y[2]={44,6}, z[]={1,2,3};
char str[] = "hello, world\n";
The qualifier const makes the variable read only so it must be initialized in the declaration.
Mostly the same as java.
Please do not call % the mod (or modulo) operator, unless you know that the operands are positive. The correct name for % is the remainder operator.
Again very little difference from Java.
Please remember that && and || are required to be short-circuit operators. That is, they evaluate the right operand only if needed.
There are two kinds of conversions: automatic conversions, called coercions, and explicit conversions, called casts.
C coerces
narrow arithmetic types to wide ones.
{char, short} → int → long
float → double → long double
long → float  // precision can be lost
int atoi(char s[]) {
    int i, n = 0;
    for (i = 0; s[i] >= '0' && s[i] <= '9'; i++)
        n = 10*n + (s[i] - '0');  // assumes ascii
    return n;
}
The program on the right (ascii to integer) converts a character string representing an integer to the integral value.
Unsigned coercions are more complicated; you can read about them in the book or wait a few weeks when we will cover them.
The syntax
(type-name) expression
converts the value to the type specified. Note that e.g., (double) x converts the value of x; it does not change x itself.
Homework: (2.3) Write the function htoi(s), which converts a string of hexadecimal digits (including an optional 0x or 0X) into its equivalent integer value. The allowable digits are 0 through 9, a through f, and A through F.
The same as Java.
Remember that x++ or ++x are not the same as x=x+1 because, with the operators, x is evaluated only once, which becomes important when x is itself an expression with side effects.
x[i++]++           // increments some (which?) element of an array
x[i++] = x[i++]+1  // puts incremented value in ANOTHER slot
In fact the last line's behavior is undefined (in what order are the two increments of i performed?).
Homework: (2-4). Write an alternate version of squeeze(s1,s2) (defined in the text) that deletes each character in the string s1 that matches any character in the string s2.
The same as Java
int bitcount(unsigned x) {
    int b;
    for (b = 0; x != 0; x >>= 1)
        if (x & 01)  // octal (not needed)
            b++;
    return b;
}
The same as Java: += -= *= /= %= <<= >>= &= ^= |=
The program on the right counts how many bits of its argument are 1. Right shifting the unsigned x causes it to be zero-filled. Anding with a 1 gives the LOB (low order bit). Writing 01 indicates an octal constant (any integer beginning with 0; similarly starting with 0x indicates hexadecimal). Both are convenient for specifying specific bits (because both 8 and 16 are powers of 2). Since the constant in this case has value 1, the leading 0 has no effect.
The same as Java:

printf("You enrolled in %d course%s.\n", n, (n==1) ? "" : "s");
Homework: (2-10). Rewrite the function lower(), which converts upper case letters to lower case with a conditional expression instead of if-else.
The table on the right is copied (hopefully correctly) from the book. It includes all operators, even those we haven't learned yet. I certainly don't expect you to memorize the table. Indeed one of the reasons I typed it in was to have an online reference I could refer to since I do not know all the precedences.
Homework: Check the table above for typos and report any you find.
Not everything is specified. For example if a function takes two arguments, the order in which the arguments are evaluated is not specified. Consider f(x++,x++);
Also the order in which operands of a binary operator like + are evaluated is not specified. So f() could be evaluated before or after g() in the expression f()+g(). This becomes important if, for example, f() alters a global variable that g() reads.
#include <stdio.h>

void main(void) {
    int x = 3, y;
    y = + + + + + x;
    y = - + - + + - x;
    y = - ++x;
    y = ++ -x;
    y = ++ x ++;
    y = ++ ++ x;
}
Question: Which of the expressions on the right are illegal?

Answer: The last three. They apply ++ to values, not variables (i.e., to rvalues, not lvalues).
I mention this because at this point in a previous semester there was some discussion about ++ ++. The distinction between lvalues and rvalues will become very relevant when we discuss pointers.
Since pointers have presented difficulties for students in the past, I use every opportunity to give ways of looking at the problem.
Since ++ does an assignment (as well as an addition) it needs a place to put the result, i.e., an lvalue.
Start Lecture #03
int t[] = {1, 2};

int main() {
    22;
    return 0;
}
C is an expression language; the constant 22 and the assignment x=33 have values (i.e., rvalues). One simple statement is an expression followed by a semicolon. For example, the program on the right is legal.
As in Java, a group of statements can be enclosed in braces to form a compound statement or block. We will have more to say about blocks later in the course.
Same as Java.
Same as Java.
Same as Java.
Same as Java.
#include <ctype.h>

int atoi(char s[]) {
    int i, n, sign;
    for (i = 0; isspace(s[i]); i++)
        ;
    sign = (s[i] == '-') ? -1 : 1;
    if (s[i] == '+' || s[i] == '-')
        i++;
    for (n = 0; isdigit(s[i]); i++)
        n = 10*n + (s[i] - '0');
    return sign * n;
}
Same as Java. As we shall see, the loops in the book show the hand of a master.
The program on the right (ascii to integer) illustrates several points, as well as being extremely useful in its own right.
Much of the work is done in the termination test.
for (i = 0, j = 0; i+j < n; i++, j += 3)
    printf("i=%d and j=%d\n", i, j);
If two expressions are separated by a comma, they are evaluated left to right and the final value is the value of the one on the right. This operator often proves convenient in for statements when two variables are to be incremented.
Same as Java.
Same as Java.
The syntax is
goto label;
for (...) {
    for (...) {
        while (...) {
            if (...)
                goto out;
        }
    }
}
out: printf("Left 3 loops\n");
The label has the form of a variable name. A label followed by a colon can be attached to any statement in the same function as the goto. The goto transfers control to that statement.
Note that a break in C (or Java) only leaves one level of looping so would not suffice for the example on the right.
The goto statement was deliberately omitted from Java. Poor use of goto can result in code that is hard to understand and hence goto is rarely used in modern practice.
The goto statement was much more commonly used in the past.
Homework: Write a C function escape(char s[], char t[]) that converts the characters newline and tab into two character sequences \n and \t as it copies the string t to the string s. Use the C switch statement. Also write the reverse function unescape(char s[], char t[]).
#include <stdio.h>

#define MAXLINE 100

int getline(char line[], int max);
int strindex(char source[], char searchfor[]);

char pattern[] = "x y";  // "should" be input

int main() {
    char line[MAXLINE];
    int found = 0;
    while (getline(line, MAXLINE) > 0)
        if (strindex(line, pattern) >= 0) {
            printf("%s", line);
            found++;
        }
    return found;
}
int getline(char s[], int lim) {
    int c, i;
    i = 0;
    while (--lim > 0 && (c = getchar()) != EOF && c != '\n')
        s[i++] = c;
    if (c == '\n')
        s[i++] = c;
    s[i] = '\0';
    return i;
}
int strindex(char s[], char t[]) {
    int i, j, k;
    for (i = 0; s[i] != '\0'; i++) {
        for (j = i, k = 0; t[k] != '\0' && s[j] == t[k]; j++, k++)
            ;
        if (k > 0 && t[k] == '\0')
            return i;
    }
    return -1;
}
The Unix utility grep (Global Regular Expression Print) prints all occurrences of a given string (or more generally a regular expression) from standard input. A very simplified version is on the right.
The basic program is
while there is another line if the line contains the string print the line
Getting a line and seeing if there are more is getline(); a slightly revised version is on the right. Note that a length of 0 means EOF was reached; an "empty" line still has a newline char '\n' and hence has length 1.
Printing the line is printf().
Checking to see if the string is present in the line is the new code. The choice made was to define a function strindex() that is given two strings s and t and returns the position in s (i.e., the index in the array) where t occurs. strindex() returns -1 if t does not occur in s.
The program is on the right; further comments follow.
The declarations of getline() and strindex() are C-style, i.e., the code specifies what you do to each parameter in order to get a char or int. These are not definitions of getline() and strindex(), which are given later. The declarations include only the header information and not the body; they describe only how to use the functions, not what the functions do.
Note that a function definition is of the form
return-type function-name(parameters) {
    declarations and statements
}
The default return type is int, but I recommend not utilizing this fact and instead always declaring the return type.
The return statement is like Java.
The book correctly gives all the defaults and explains why they are what they are (compatibility with previous versions of C). I find it much simpler to always write the types explicitly.
A C program consists of external objects, which are either variables or functions.
Variables and functions defined outside any function are called external.
Variables defined inside a function are called internal.
Functions defined inside another function would also be called internal; however, standard C does not have internal functions. That is, you cannot in C define a function inside another function. In this sense C is not a fully block-structured language (see block structure below).
As stated, a variable defined outside functions is external. All subsequent functions in that file will see the definition (unless it is overridden by an internal definition).
External variables can be used, instead of parameters/arguments to pass information between functions. It is sometimes convenient not to repeat a long list of arguments common to several functions. However, using external variables also has problems: It makes the exact information flow harder to deduce when reading the program.
When we solved quadratic equations in section 1.10 our second method used external variables.
Scope rules determine the visibility of names in a program. In C the scope rules are fairly simple.
Since C does not have internal functions, all internal names are variables. Internal variables can be automatic or static. We have seen only automatic internal variables, and this section will discuss only them. Static internal variables are discussed in section 4.6 below.
An automatic variable defined in a function is visible from the definition until the end of the function (but see block structure below).
If the same variable name is defined internal to two functions, the variables are unrelated.
Parameters of a function are the same as local variables in these respects.
int main(...) {...}
int value;
float joe(...) {...}
float sam;
int bob(...) {...}
An external name (function or variable) is visible from the point of its definition (or declaration as we shall see below) until the end of that file. In the example on the right, value is visible in joe() and bob() but not in main(), and sam is visible in bob() but not in main() or joe().
There can be only one definition of a given external name in the entire program (even if the program includes many files). However, there can be multiple declarations of the same name.
A declaration describes a variable (gives its type) but does not allocate space for it. A definition both describes the variable and allocates space for it.
extern int X;
extern double z[];
extern float f(double y);
Thus we can put declarations of a variable X, an array z[], and a function f() at the top of every file and then X and z are visible in every function in the entire program. Declarations of z[] do not give its size since space is not allocated; the size is specified in the definition.
If declarations of joe() and bob() were added at the top of the previous example, then main() would be able to call them.
If an external variable is to be initialized, the initialization must be put with the definition, not with a declaration.
How to tell declarations and definitions apart: an external declaration is marked with the keyword extern and allocates no space; a definition omits extern and allocates space.
#include <stdio.h>

double f(double x);

int main() {
    float y;
    int x = 10;
    printf("x in main is %i\n", x);
    printf("f(x) is %f\n", f(x));
    return 0;
}

double f(double x) {
    printf("x in f is %f\n", x);
    return x;
}

x in main is 10
x in f is 10.000000
f(x) is 10.000000
The code on the right shows how valuable having the types declared can be. The function f() is the identity function. However, main() knows that f() takes a double so the system automatically converts x to a double when calling f().
It would be awkward to have to change every file in a big programming project when a new function was added or had a change of signature (types of arguments and return value). What is done instead is that all the declarations are included in a single header file. The definitions remain scattered over many files. (Each function is naturally defined only once.)
For now assume the entire program is in one directory. Create a file with a name like functions.h containing the declarations of all the functions. Then early in every .c file write the line
#include "functions.h"

Note the quotes, not angle brackets, which indicate that functions.h is located in the current directory rather than in the standard place that is used for <>.
We need to distinguish the lifetime of the value in a variable from the visibility of the variable.
Consider the variable x in the trivial example
void f(void) {
    int x = 5;
    printf("%d\n", x++);
}
No matter how many times f() is called, the value printed will always be 5. This is because each call re-initializes x to 5. We say that the lifetime of x's value is one execution of the function. In contrast an external variable maintains values assigned to it; its lifetime is permanent.
In addition, x, a local variable, is not visible in any other function. That is, the visibility of x is local to the function in which it is defined.
The adjective static has very different meanings when applied to internal and external variables.
int main(...){...} static int b16; void sam(...){...} double beth(...){...}
If an external variable is defined with the static attribute, its visibility is limited to the current file. In the example on the right b16 is naturally visible in sam() and beth(), but not main(). The addition of static means that if another file has a definition or declaration of b16, with or without static, the two b16 variables are not related.
If an internal variable is declared static, its lifetime is the entire execution of the program. This means that if the function containing the variable is called twice, the value of the variable at the start of the second call is the final value of that variable at the end of the first call.
As we know, there are no internal functions in standard C. If an (external) function is defined to be static, its visibility is limited to the current file (as for static external variables).
Ignore this section. Register variables were useful when compilers were primitive. Today, compilers can generally decide, better than programmers, which variables should be put in register.
Standard C does not have internal functions, that is you cannot in C define a function inside another function. In this sense C is not a fully block-structured language.
Of course C does have internal variables; we have used them in almost every example. That is, most functions we have written (and will write) have variables defined inside them.
#include <stdio.h>
int main(void) {
    int x = 5;
    printf("The value of outer x is %d\n", x);
    {
        int x = 10;
        printf("The value of inner x is %d\n", x);
    }
    printf("The value of the outer x is %d\n", x);
    return 0;
}

The value of outer x is 5
The value of inner x is 10
The value of the outer x is 5
Also C does have block structure with respect to variables. This means that inside a block (remember that a block is a bunch of statements surrounded by {}) you can define a new variable with the same name as an existing one. These two variables are distinct: inside the inner block the new variable hides the outer one, and when the block ends the outer variable is visible again. For example, the program on the right produces the output shown.
Remark: The gcc compiler for C does permit one to define a function inside another function. These are called nested functions. Some consider this gcc extension to be evil; we will not use it.
Note that we have used nested blocks many times without calling them out. Specifically, when you use {} to group the body of a for loop or the then portion of an if-then-else these also are blocks since they are enclosed by {}.
Homework: Write a C function int odd (int x) that returns 1 if x is odd and returns 0 if x is even. Can you do it without an if statement?
Static and external variables are, by default, initialized to zero. Automatic i.e., non-static, internal variables (the only kind left) are not initialized by default.
As in Java, you can write int X=5-2;. For external or static scalars, that is all you can do.
int x=4; int y=x-1;
For automatic, internal scalars the initialization expression can involve previously defined values as shown on the right (even function calls are permitted).
int BB[8] = {4,9,2}; int AA[] = {3,5,12,7}; char str[] = "hello"; char str[] = {'h','e','l','l','o','\0'};
You can initialize an array by giving a list of initializers as shown on the right.
The same as Java.
Normally, before the compiler proper sees your program, a utility called the C preprocessor is invoked to include files and perform macro substitutions.
#include <filename>
#include "filename"
We have already discussed both forms of file inclusion.
In both cases the file mentioned is textually inserted at the point
of inclusion.
The difference between the two is that the first form looks for
filename in a system-defined
standard place;
whereas, the second form first looks in the current directory.
#define MAXLINE 20
#define MULT(A, B) ((A) * (B))
#define MAX(X, Y) ((X) > (Y) ? (X) : (Y))
#undef getchar
We have already used examples of macro substitution similar to the first line on the right. The second line, which illustrates a macro with arguments is more interesting.
Without all the parentheses on the RHS, the macro would be legal,
but would (sometimes) give the wrong answers.
Question: Why?
Answer: Consider MULT(x+4, y+3)
Note that macro substitution is not the same as a function call (with standard call-by-value or call-by-reference semantics). Even with all the parentheses in the third example you can get into trouble since MAX(a++,5) can increment a twice. If you know call-by-name from algol 60 fame, this will seem familiar.
We probably will not use the fourth form. It is used to un-define a macro from a library so that you can write another version.
There is some fancy stuff involving # in the RHS of the macro definition. See the book for details; I do not intend to use it.
#if integer-expr
...
#elif integer-expr
...
#else
...
#endif
The C-preprocessor has a very limited set of control flow items. On the right we see how the C
if (cond1) ... else if (cond2) ... else .. end if
construct is written. The individual conditions are simple integer expressions consisting of integers, some basic operators and little else. Perhaps the most useful additions are the preprocessor function defined(name), which evaluates to 1 (true) if name has been #define'd, and the ! operator, which converts true to false and vice versa.
#if !defined(HEADER22)
#define HEADER22
// The contents of header22.h
// goes here
#endif
We can use defined(name) as shown on the right to ensure that a header file, in this case header22.h, is included only once.
Question: How could a header file be included
twice unless a programmer foolishly wrote the same #include
twice?
Answer: One possibility is that a user might include two systems headers h1.h and h2.h each of which includes h3.h.
Two other directives, #ifdef and #ifndef, test whether a name has or has not been defined. Thus the first line of the previous example could have been written #ifndef HEADER22.
#if SYSTEM == MACOS
#define HDR "macos.h"
#elif SYSTEM == WINDOWS
#define HDR "windows.h"
#elif SYSTEM == LINUX
#define HDR "linux.h"
#else
#define HDR "empty.h"
#endif
#include HDR
On the right we see a slightly longer example of the use of preprocessor directives. Assume that the name SYSTEM has been set to the name of the system on which the current program is to be run (not compiled). Assume also that individual header files have been written for macos, windows, and linux systems. Then the code shown will include the appropriate header file.
Note: The quotes used in the various #defines for HDR are not required by #define, but instead are needed by the final #include.
public class X {
    int a;
    public static void main(String args[]) {
        int i1;
        int i2;
        i1 = 1;
        i2 = i1;
        i1 = 3;
        System.out.println("i2 is " + i2);
        X x1 = new X();
        X x2 = new X();
        x1.a = 1;
        x2 = x1;   // NOT x2.a = x1.a
        x1.a = 3;
        System.out.println("x2.a is " + x2.a);
    }
}
Pointers are a big difference between Java and C. You can read chapter 2 of Dive into Systems for another account of C pointers.
Much of the material on pointers has no explicit analogue in Java; there it is kept under the covers. If in Java you have an Object obj, then obj is actually what C would call a pointer. The technical term is that Java has reference semantics for all objects. In C this will all be quite explicit.
To give a Java example, look at the snippet on the right. The first part works with integers. We define 2 integer variables; initialize the first; set the second to the first; change the first; and print the second. Naturally, the second has the initial value of the first, namely 1.
The second part deals with X, a trivial class, whose objects have just one data component, an integer called a. We mimic the above algorithm. We define two X's and work with their integer field (a). We then proceed as above: initialize the first integer field; set the second to the first; change the first; and print the second. The result is different from the above! In this case the second has the altered value of the first, namely 3.
The key difference between the two parts is that (in Java) simple scalars like i1 have value semantics; whereas objects like x1 have reference semantics. But enough Java, we are interested in C.
You will learn later this semester and again in 202, that the OS finagles memory in ways that would make Bernie Madoff smile. But, in large part thanks to those shenanigans, user programs can have a simple view of memory. For us C programmers, memory is just a large array of consecutively numbered locations.
The machine model we will use in this course is that the fundamental unit of addressing is a byte and a character (a char) exactly fits in a byte. Other types like short, int, double, float, long normally take more than one byte, but always a consecutive range of bytes.
One consequence of our memory model is that associated with int z=5; are two numbers. The first number is the address of the location in which z is stored. The second number is the value stored in that location; in this case that value is 5. The first number, the address, is often called the lvalue; the second number, the contents, is often called the rvalue. Why l and r? I know we did this already; I think it is worth repeating.
Consider
z = z + 1;
To evaluate the right hand side (RHS) we need to add 5 to 1. In particular, we need the value contained in the memory location assigned to z, i.e., we need 5. Since this value is what is needed to evaluate the RHS of an assignment statement it is called an rvalue.
Then we compute 6=5+1. Where should we put the 6? We look at the LHS and see that we put the 6 into z; that is, into the memory location assigned to z. Since it is the location that is needed when evaluating a LHS, the address is called an lvalue.
Start Lecture #04
As we have just seen, when a variable appears on the LHS, its lvalue or address is used. What if we want the address of a variable that appears on the RHS; how do we get it?
In a language like Java the answer is simple; we don't.
In C we use the unary operator & and write p=&x; to assign the address of x to p. After executing this statement we say that p points to x or p is a pointer to x. That is, after execution, the rvalue of p is the lvalue of x. In other words the value of p is the address of x.
int x=13;
Look at the declarations on the right. x is familiar; it is an integer variable initially containing 13. Specifically, the rvalue of x is (initially) 13. What about the lvalue of x, i.e., the location in which the 13 is stored? It is not an int; it is an address into which an int can be stored. Alternately said it is pointer to an int.
The unary prefix operator & produces the address of a variable, i.e., &x gives the lvalue of x, i.e. it gives a pointer to x.
The unary operator * does the reverse action. When * is applied to a pointer, it gives the object (object is used in the English not OO sense) pointed to. The * operator is called the dereferencing or indirection operator.
int x=13; int *p = &x;
Now look at the declaration of p on the right. It says that p is the kind of thing, that when you apply * to it you get an int, i.e., p is a pointer to an int. That is why we can initialize p to &x.
Note: Try to avoid the common error of thinking the second line on the right initializes *p to &x. It doesn't. It declares and initializes p not *p.
On the right we show how p and x might be stored in memory. After we finish with C we will study the memory model in more detail. Here I just give enough to understand that pointers like p are also variables that are stored just like ints, floats, and chars.
The basic storage unit on modern computers is a byte. We shall assume that a char fits perfectly in a byte. However, ints, floats, and pointers are bigger. Each requires several bytes. For today assume each is 4 bytes.
In the diagram on the right x happens to be stored in locations 5000-5003 (i.e., each box is 4 bytes). x has value 13; more precisely its rvalue is 13. Since the address of x is 5000, the lvalue of x is 5000.
The integer pointer p happens to be stored in 8040-8043; i.e., its address or lvalue happens to be 8040. Since p points to x, the rvalue of p equals the lvalue of x, which is 5000.
// part one of three
int x=1;
int y=2;
int z[10];
int *ip;
int *jp;
ip = &x;
Consider the code sequence on the right (part one). The first 3 lines we have seen many times before; the next three are new. Recall that in a C declaration, all the doodads around a variable name tell you what you must do to the variable to get the base type at the beginning of the line. Thus the fourth line says that if you dereference ip you get an integer. Common parlance is to call ip an integer pointer (which is why I named it ip). Similarly, jp is another integer pointer.
At this point both ip and jp are uninitialized. The last line sets ip to the address of x. Note that the types match: both ip and &x are pointers to an int.
// part two of three
y = *ip;     // L1
*ip = 0;     // L2
ip = &z[0];  // L3
*ip = 0;     // L4
jp = ip;     // L5
*jp = 1;     // L6
In part two, L1 sets y=1 as follows: ip now points to x, * does the dereference so *ip is x. Since we are evaluating the RHS, we take the contents not the address of x and get 1.
L2 sets x=0;. The RHS is clearly 0. Where do we put this zero? Look at the LHS: ip currently points to x, * does a dereference so *ip is x. Since we are on the LHS, we take the address and not the contents of x and hence we put 0 into x.
L3 changes ip; it now points to z[0]. So L4 sets z[0]=0;
Pointers can be used without the dereferencing operator. L5 sets jp to ip. Since ip currently points to z[0], jp now does as well. Hence L6 sets z[0]=1;
// part three of three
ip = &x;         // L1
*ip = *ip + 10;  // L2
y = *ip + 1;     // L3
*ip += 1;        // L4
++*ip;           // L5
(*ip)++;         // L6
*ip++;           // L7
Part three begins by re-establishing ip as a pointer to x so L2 increments x by 10 and L3 sets y=x+1;.
L4 increments x by 1 as does L5 (because the unary operators ++ and * are right associative).
L6 also increments x, but L7 does not.
By right associativity we see that the increment precedes the
dereference, so the pointer is incremented (not the
pointee).
The full story awaits section 5.4
below.
void bad_swap(int x, int y) { int temp; temp = x; x = y; y = temp; }
The program on the right is what a novice programmer just learning C (or Java) would write. It is supposed to swap the two arguments it is called with. However, it fails due to the call-by-value semantics of function calls in C.
When another function calls bad_swap(a,b), the values of the arguments a and b are transmitted to the parameters x and y and then bad_swap() interchanges the values in x and y. But when bad_swap() returns, the final values in x and y are NOT transmitted back to the arguments: a and b are unchanged.
But functions that change the values of their arguments are useful! We won't give them up without a fight.
Actually, what is needed is to be able to change the value of
variables used in the caller (even if some
related variables
become the arguments) and that distinction is the key.
Just because we want to swap the values of a
and b, doesn't mean the arguments have to be literally
a and b.
void swap(int *px, int *py) { int temp; temp = *px; *px = *py; *py = temp; }
The program on the right has two parameters px and py each of which is a pointer to an integer (*px and *py are the integers). Since C is a call-by-value language, changes to the parameters, which are the pointers px and py would not result in changes to the corresponding arguments. But the program on the right doesn't change the pointers at all, instead it changes the values they point to.
Since the parameters are pointers to integers, so must be the
arguments.
A typical call to this function would be
int A=10,B=20; swap(&A,&B);
It is crucial to understand how this call results in A becoming 20, the value previously in B, and B becoming 10, the value previously in A.
On the right is a pictorial explanation.
A has a certain address.
&A
equals that address (more precisely the
rvalue of &A = the lvalue of A).
Similarly for B and &B.
These are shown by the solid arrows in the diagram.
The call swap(&A,&B) copies (the rvalue of) &A into (the rvalue of) the first parameter, which is px. Similarly for &B and the second parameter, py. These are shown by the dotted arrows. Thus the value of px is the address of A, which is indicated by the arrow. Again, to be pedantic, the rvalue of px equals the rvalue of &A, which equals the lvalue of A. Similarly for py, &B, and B.
Swapping px with py would change the dotted arrows, but would not change anything in the caller. However, we don't swap px with py; instead we swap *px with *py. That is, we dereference the pointers and swap the things pointed to! This subtlety is the key to understanding the effect of many C functions. It is crucial.
Homework: Write rotate3(A,B,C) that sets A to the old value of B, sets B to old C, and C to old A.
Homework: Write plusminus(x,y) that sets x to old x + old y and sets y to old x - old y.
Start Lecture #05
#include <stdio.h>
#define BUFSIZE 100
char buf[BUFSIZE];
int bufp = 0;
int getch(void);
void ungetch(int);
int getint(int *pn);
int getch(void) { return (bufp>0) ? buf[--bufp] : getchar(); }
void ungetch(int c) { if (bufp >= BUFSIZE) printf("ungetch: too many chars\n"); else buf[bufp++] = c; }
#include <stdio.h>
#include <ctype.h>

int getint(int *pn) {
    int c, sign;
    while (isspace(c=getch()))
        ;
    if (!isdigit(c) && c!=EOF && c!='+' && c!='-') {
        ungetch(c);
        return 0;
    }
    sign = (c=='-') ? -1 : 1;
    if (c=='+' || c=='-')
        c = getch();
    for (*pn = 0; isdigit(c); c=getch())
        *pn = 10 * *pn + (c-'0');
    *pn *= sign;
    if (c != EOF)
        ungetch(c);
    return c;
}
The program pair getch() and ungetch() generalize getchar() by supporting the notion of unreading a character, i.e., having the effect of pushing back several already read characters.
Note that ungetch() is careful not to exceed the size of the buffer used to store the pushed-back characters. Remember that the C compiler does not generate run-time checks to prevent you from accessing an array beyond its bounds. As mentioned previously, a number of break-ins have been enabled by the lack of such checks in library programs.
Also shown is getint(), which reads an integer from standard input (stdin) using getch() and ungetch().
getint() returns the integer read via a parameter. As we have seen the new value of a parameter is not passed back to the caller. Hence, getint() uses the pointer/address business we just saw with swap().
Specifically any change made to pn by getint() would be invisible to the caller. However, getint() changes *pn; a change the caller does see.
The value returned by the function itself is the status: zero means the next characters do not form an integer, EOF (which is negative) means we are at the end of file, positive means an integer has been found.
Briefly the program works as follows.
Skip blanks
Check for legality
Determine sign
Evaluate number one digit at a time
Although short, the program is not trivial. Indeed, there are some details to note.
For example, if the input is 123 with no newline at the end, getint() will set *pn=123 as desired but will return EOF. I suspect that most programs using getint() will, in this case, ignore *pn and just treat it as EOF.
If you were asked to produce a getint() function you would have three tasks.
The third is clearly the easiest task. I suspect that the first is the hardest.
Homework: 5-1. As written, getint() treats a + or - not followed by a digit as a valid representation of zero. Fix it to push such a character back on the input.
In C pointers and arrays are closely related. As the book says
Any operation that can be achieved by array subscripting can also be done with pointers.
The authors go on to say
The pointer version will in general be faster but, at least to the uninitiated, somewhat harder to understand.
The second clause is doubtless correct; but perhaps not the first. Remember that the 2e was written in 1988 (1e in 1978). Compilers have improved considerably in the past 30+ years and, I suspect, would turn out nearly as fast code for many of the array versions.
The next few sections present some simple examples using pointers.
int a[5], *pa; pa = &a[0];
int x = *pa; x = *(pa+1);
x = a[0]; x = *a;
int i; x = a[i]; x = *(a+i);
On the far right we see some code involving pointers and arrays. After the first two lines are executed we get the diagram shown on the near right. pa is a pointer to the first element of the array a. Remember that, as in Java, the first element of a C array a is a[0]. Similarly, pa+3 is a pointer to the fourth element of the array.
But note that pa+3 is just an expression and not a container (no lvalue): you can't put another pointer into pa+3 just like you can't put another integer into i+3.
The next line sets x (which is a container) equal to (the rvalue of) a[0]; the line after that sets x=a[1].
Then we explicitly set x=a[0].
The line after that has the same effect! That is because in C the value of array name equals the address of its first element. (The rvalue of a = the rvalue of &a[0] = the address of a[0] = the lvalue of a[0].) Again note that a (i.e., &a[0]) is an expression, not a variable, and hence is not a container.
Said yet another way a and pa have the same value
(rvalue) but are not the same
thing!
Similarly, the last two lines each have the same effect, this time for a general element of the array a[i].
int a[5], *pa;
pa = &a[0];
pa = a;
a = pa;      // illegal
&a[0] = pa;  // illegal
Both pa and a are pointers to ints. In particular a is defined to be &a[0]. Although pa and a have much in common, there is an important difference: pa is a variable, its value can be changed; whereas &a[0] (and hence a) is an expression and not a variable. In particular the last two lines on the right are illegal.
Another way to say this is that &a[0] is not an lvalue.
This is similar to the legality of
x=y+5; versus the
illegality of
y+5=x;
int mystrlen(char *s) { int n; for (n=0; *s!='\0'; s++,n++) ; return n; }
The code on the right illustrates how well C pointers, arrays, and strings work together. What a tiny program to find the length of an arbitrary string!
Note that the body of the for loop is null; all the work is done in the for statement itself.
char str[50], *pc;
// calculate str and pc
mystrlen(pc);
mystrlen(str);
mystrlen("Hello, world.");
Note the various ways in which mystrlen() can be called.
Recall that in a C declaration we decorate a variable with enough stuff to obtain one of the primitive types.
#include <stdio.h>
int x, *p;
int main () {
    p = &x;
    x = 12;
    printf("p = %p\n", p);
    printf("*p = %d\n", *p);
    p++;
    printf("p = %p\n", p);
    printf("*p = %d\n", *p);
}
The example on the right below illustrates well the difference between a variable, in this case x, and its address &x. The first value printed is the address of x. This is not 12. Instead, it is some (probably large) number that happens to be the address of x.
In fact when run on my laptop the program produced the following output.
p = 0x7fc41fc78040
*p = 12
p = 0x7fc41fc78044
*p = 0
Let's go over this 7-line main() function line by line.
After executing p++, the next integer after x is printed. But there is no integer after x. Hence the program is erroneous! Its output is unpredictable!
Note: Incrementing p does not increment x. Instead, the result is that p points to the next integer after x. In this program there is no further integer after x, so the result is unpredictable and the program is erroneous. Specifically, the value of *p is now unpredictable. On my system the value of *p was 0, but that can NOT be counted on. If, instead of pointing to x, we had p point to A[7] for some large int array A, then the last line would have printed the value of A[8] and the penultimate line would have printed the address of A[8].
Remarks:
#include <stdio.h>
int mystrlen (char *s);
int main () {
    char stg[] = "hello";
    printf ("The string %s has %d characters\n", stg, mystrlen(stg));
}
int mystrlen (char s[]) { int i; for (i = 0; s[i] != '\0'; i++) ; return i; }
int mystrlen (char *s) { int i = 0; while (*s++ != '\0') i++; return i; }
On the right we show two versions of a string length function. The first version uses array notation for the string; the second uses pointer notation. The main() program is identical for the two versions so is shown only once.
Note how very close the two string length functions are. This is another illustration of the similarity of arrays and pointers in C.
Note the two declarations
int mystrlen (char *s); int mystrlen (char s[]);
They are used 3 times in the code on the right. In C these two declarations are equivalent. Changing any or all of them to the other form does not change the meaning of the program.
I realize an array does not at first seem the same as a pointer. Remember that the array name itself is equal to a pointer to the first element of the array. Hence declaring
float a[5], *b;
results in a and b having the same type (pointer to float). But the array a has additionally been defined; that is, space for 5 floats has been allocated. Hence a[3] = 5; is legal. b[3] = 5; is syntactically legal, but may be semantically invalid and abort at run time, unless b has previously been set to point to sufficient space.
In the pointer version of mystrlen() we encounter a common C idiom *s++. First note that the precedence of the operators is such that *s++ means *(s++). That is, we are moving (incrementing) the pointer and examining what it used to point at. We are not incrementing a part of the string. Specifically, we are not executing (*s)++;
void changeltox (char *s) { while (*s != '\0') { if (*s == 'l') *s = 'x'; s++; } }
The program on the right loops through the input string and replaces each occurrence of l with x.
The while loop and increment of s could have been combined into a for loop.
This version is written in pointer style.
Homework: Rewrite changeltox() to use array style and a for loop.
void mystrcpy (char *t, char *s) { while ((*t++ = *s++) != '\0') ; }
Check out the ONE-liner on the right. Note especially the use of standard idioms for marching through strings and for finding the end of the string.
Slick, very slick!
Even slicker is to note that '\0' has value 0 and that testing != 0 is therefore redundant, so the while statement is equivalent to
while (*t++ = *s++);
But the program is scary, very scary!
Question: Why is it scary?
Answer: Because there is no length check.
If the character array t (or equivalently the block of characters t points to) is smaller than the character array s, then the copy will overwrite whatever happens to be located right after the array t.
The lack of such length checks has permitted a number of security breaches.
double f(int *a); double f(int a[]);
The two lines on the right are equivalent when used as a function declaration (or, with the semicolon replaced by a {, as the head line of a function definition). The authors say they prefer the first. For me it is not so clear cut. In mystrlen() above I would indeed prefer char *s as written since I think of a string as a block of chars with a pointer to the beginning.
double dotprod(double A[], double B[]);
However, if I were writing an inner product routine (a.k.a. dot product), I would prefer the array form as on the right since I think of dot product as operating on vectors.
But of course, more important than which one I prefer or the authors prefer, is the fact that they are equivalent in C.
Note: The definition
int a[10]; reserves space for 10 ints and no
pointers; whereas the definition
int *a; reserves space for no ints and 1
pointer.
#include <stdio.h>
void f(int *p);
int main() {
    int A[20];
    // initialize all of A
    f(A+6);
    return 0;
}
void f(int *p) { printf("legal? %d\n", p[-2]); printf("legal? %d\n", *(p-2)); }
In the code on the right, main() first declares an integer array A[] of size 20 and initializes all its members (how the initialization is done is not important). Then main(), in an effort to protect the beginning of A[], passes only part of the array to f(). Remembering that A+6 means (&A[0])+6, which is &A[6], we see that f() receives a pointer to the 7th element of the array A.
The author of main() mistakenly believed that A[0],..,A[5] are hidden from f(). Let's hope this author is not on the security team for the board of elections.
Since C uses call by value, we know that f() cannot change the value of the pointer A+6 in main(). But f() can use its copy of this pointer to reference or change all the values of A, including those before A[6]. On the right, f() successfully references A[4].
It naturally would be illegal for f() to reference (or worse change) p[-9].
Start Lecture #06
An important point is that, given the declaration int *pa; the increment pa+=3 does not simply add three to the address stored in pa. Instead, it increments pa so that it points 3 integers further forward (since pa is a pointer to an integer). If pc is a pointer to a double, then pc+=3 increments pc so that it points 3 doubles forward.
#include <stdio.h>
int main(void) {
    int q[] = {11, 13, 15, 19};
    int *p = q;   // initializes p NOT *p
    printf("*p = %d\n", *p);
    printf("*p++ = %d\n", *p++);
    printf("*p = %d\n", *p);
    printf("*++p = %d\n", *++p);
    printf("*p = %d\n", *p);
    printf("++*p = %d\n", ++*p);
    return 0;
}
To better understand pointers, arrays, ++, and *, let's go over the code on the right line by line. For reference the precedence table is here. The output produced is
*p = 11
*p++ = 11
*p = 13
*++p = 15
*p = 15
++*p = 16
#define ALLOCSIZE 15000
static char allocbuf[ALLOCSIZE];
static char *allocp = allocbuf;
char *alloc(int n) {
    if (allocp+n <= allocbuf+ALLOCSIZE) {
        allocp += n;
        return allocp-n;   // previous value
    } else                 // not enough space
        return 0;
}
void afree (char *p) { if (p>=allocbuf && p<allocbuf+ALLOCSIZE) allocp = p; }
On the right is a primitive storage allocator and freer, alloc() and afree(). This pair of routines distributes and reclaims memory from a buffer allocbuf. The internal pointer allocp points to the boundary between already allocated memory (on the left of allocp in the diagrams) and memory still available for allocation (on the right).
The top picture shows the initial state: nothing is allocated; everything is free.
Looking at the middle (before) diagram we see four blocks that have been allocated and a large free region on the right. The routines alloc() and afree() control the internal pointer allocp.
When alloc(n) is called, with a non-negative integer argument, it returns a pointer to a block of n characters and then moves allocp to the right, indicating that these n characters are no longer available.
When afree(p) is called with the pointer returned by alloc(), it resets the state of alloc()/afree() to what it was before the call to alloc().
A very strong assumption is being made that calls to alloc()/afree() are executed in a stack-like manner, i.e., the routines assume that a block being freed is the most recently allocated block that has not yet been freed.
These routines would be useful for managing storage for C automatic, local variables. They are far from general. The standard library routines malloc()/free() do not make this assumption and as a result are considerably more complicated.
Since pointers, not array positions are communicated to users of alloc()/afree(), these users do not need to know the name of the array, which is kept under the covers via static.
Notes:
A return value of 0 from alloc() signals failure, since 0 is never the address of an object. Although a literal 0 is permitted, most programmers use NULL.
Homework: What is wrong with the following calls to alloc() and afree()? Assume that ALLOCSIZE is big enough.
char *p1, *p2, *p3; p1 = alloc(10); p2 = alloc(20); p3 = alloc(15); afree(p3); afree(p1); afree(p2);
If pointers p and q point to elements of the same array (or string), then comparisons between the pointers using <, <=, ==, !=, >, and >= all work as expected.
If pointers p and q do not point to members of the same array, the value returned by comparisons is undefined, with one exception: p pointing to an element of an array and q pointing to the first element past the array.
Any pointer can be compared to 0 via == and !=.
Normally, pointer subtraction also requires that p and q point to elements of the same array. In that case, if p<=q, then q-p+1 equals the number of elements from p to q (including the elements pointed to by p and q).
#include <stdio.h>
void changeltox(char *z);
void mystrcpy(char *s, char *t);
char *alloc(int n);
int main() {
  char string[] = "hello";
  char *string2 = alloc(6);
  mystrcpy(string2, string);
  changeltox(string);
  printf("String is now %s\n", string);
  printf("String2 is now %s\n", string2);
  return 0;
}
These examples are interesting in their own right, beyond showing how to use the allocator.
We have already written a program changeltox() that changes one character to another in a given string.
After initializing the string to "hello", the code on the right first copies it (using mystrcpy(), a one liner presented above) and then makes changes in the original. Thus, at the end, we have two versions of the string: the before and the after.
As expected the output is
String is now hexxo String2 is now hello
So far, so good. Let's try something fancier.
Recall the danger warning given with the code for mystrcpy(char *x, char *y): The code copies all the characters in y (i.e., up to and including '\0') to x ignoring the current length of x. Thus, if y is longer than the space allocated for x, the copy will overwrite whatever happens to be stored right after x.
#include <stdio.h>
void changeltox (char*);
void mystrcpy (char *s, char *t);
char *alloc(int n);
int main () {
  char string[] = "hello";
  char *string2 = alloc(2);
  char *string3 = alloc(6);
  mystrcpy (string2, string);
  printf ("String2 is now %s\n", string2);
  printf ("String3 is now %s\n", string3);
  mystrcpy (string3, string);
  changeltox (string);
  printf ("The string is now %s\n", string);
  printf ("String2 is now %s\n", string2);
  printf ("String3 is now %s\n", string3);
  return 0;
}

string contains the 5 characters in the word hello plus the ascii null '\0' to end the string. (The array string has 6 elements so the string fits perfectly.)
The major problem occurs with the first execution of
mystrcpy() because we are copying 6 characters into a
string that has room for only 2 characters (including the ascii
null).
This executes
flawlessly copying the 6 characters to an area
of size 6 starting where string2 points.
These 6 locations include the 2 slots allocated to string2
and then the next four locations.
Normally it is very hard to tell what has been overwritten, and the
resulting bugs can be very difficult to find and fix.
In this situation it is not hard to see what was overwritten since
we know how alloc() works.
The
excess 6-2=4 characters are written into the first 4
slots of string3.
When we print string2 the first time we see no problem!
A string pointer just tells where the string starts, it continues up
to the ascii null.
So string2 does have all of
hello (and the
terminating null).
Since string3 points 2 characters after string2,
the string string3 is just the substring
of string2 starting at the third character.
The second mystrcpy copies the six(!) characters in the
string
hello to the 6 bytes starting at the location pointed
to by string3.
Since the string string2 includes the location pointed to by
string3, both string2 and string3 are
changed.
The changeltox() execution works as expected.
As we know, C does not have string variables, but does have string constants. This arrangement sometimes requires care to avoid errors.
char amsg[] = "hello"; char *msgp = "hello"; int main () {...}
Let's see if we can understand the following rules, which can appear strange at first glance.
Perhaps the following will help.
void mystrcpy (char *s, char *t) { while (*s++ = *t++) ; }
Our first version of this function explicitly tested whether the assignment returned the character '\0'; that test is redundant, since '\0' is zero and hence false, so the value of the assignment itself serves as the loop condition.
If you have been trembling with fright over this scary function, rest assured and see the following homework problem.
Homework: 5-5 (first part). Write a version of the library function
  char *strncpy(char *s, char *t, int n)
This copies at most n characters from t to s. This code is not scary like other copies since a user of the routine can simply declare s to have space for n characters.
int mystrlen(char *s) { char *p = s; while (*p) p++; return p-s; }
The code on the right applies the technique used to get the slick string copy to the related function string length. In addition it use pointer subtraction. Note that when the return is executed, p points just after the string (i.e., to the terminating null) and s points to its beginning. Thus the difference gives the length.
Normally, pointer subtraction is defined only when both pointers point to the same array or string (or some other objects we haven't studied yet). The point is that you cannot meaningfully subtract two pointers pointing to different objects (say both point to different integer variables). One exception is that subtraction is guaranteed to work if one points to an element of an array and the other points one element past that same array. The function mystrlen() does not utilize this exception since the terminating null is part of the string.
int mystrcmp(char *s, char *t) { for (; *s == *t; s++,t++) if (*s == '\0') return 0; return *s - *t; }
We next produce a string comparison routine that returns a negative integer if the string s is lexicographically before t, zero if they are equal, and a positive integer if s is lexicographically after t.
The loop takes care of equal characters. The function returns 0 if we reached the end of the equal strings.
If the loop concludes early, we have found the first difference.
A key point is that if exactly one string has ended, its character ('\0') is smaller than the other string's character. This is another ascii fact (ascii null is zero; the rest are positive).
I tried to produce a version using while(*s++ == *t++), but I failed since the loop body and the post loop code was dealing with the subsequent character. I suppose it could have been forced to work if I used a bunch of constructions like *(s-1), but that would have been ugly.
For the moment forget that C treats pointers and arrays almost the same. For now just think of a character pointer as another data type.
So we can have an array of 9 character pointers, e.g., char *A[9]. We shall see fairly soon that this is exactly how some systems (e.g. Unix) transmit command-line arguments to the main() function.
#include <stdio.h> int main() { char *STG[3] = { "Goodbye", "cruel", "world" }; printf ("%s %s %s.\n", STG[0], STG[1], STG[2]); STG[1] = STG[2] = STG[0]; printf ("%s %s %s.", STG[0], STG[1], STG[2]); return 0; }
Goodbye cruel world. Goodbye Goodbye Goodbye.
The code on the right defines an array of 3 character pointers, each of which is initialized to (point to) a string. The first printf() has no surprises. But the assignment statement looks as though it should fail, since we allocated space for three strings of sizes 8, 6, and 6 and now want to wind up with three strings each of size 8, and we didn't allocate any additional space.
However, it works perfectly and the resulting output is shown as well.
Question: What happened?
How can space for 8+6+6 characters be enough for 8+8+8?
Answer: We do not have three strings of size 8. Instead, we have one string of size 8, with three character pointers pointing to it.
The picture on the right shows a before and after view of the array and the strings.
This suggests an interesting possibility. Imagine we wanted to sort long strings alphabetically (really lexicographically). Let's not get bogged down in the sort itself and assume it is a simple interchange sort that loops and, if a pair is out of order, executes a swap, which is something like
temp = x; x = y; y = temp;
If x, y, and temp are (long but varying size) strings then we have some issues to deal with.
Both of these issues go away if we maintain an array of pointers to the strings. If the string pointed to by A[i] is out of order with respect to the string pointed to by A[j], we swap the (fixed size, short) pointers not the strings that they point to.
This idea is illustrated on the right.
The code on the right below, plus the mystrcmp() function above, produces the output on the left.
#include <stdio.h>
int mystrcmp(char *s, char *t);
void sort(int n, char *C[]) {
  int i,j;
  char *temp;
  for (i=0; i<n-1; i++)
    for (j=i+1; j<n; j++)
      if (mystrcmp(C[i],C[j]) > 0) {
        temp = C[i];
        C[i] = C[j];
        C[j] = temp;
      }
}
int main() {
  char *STG[] = {"Hello","99","3","zz","best"};
  int i;
  for (i=0; i<5; i++)
    printf ("STG[%i] = \"%s\"\n", i, STG[i]);
  sort(5,STG);
  for (i=0; i<5; i++)
    printf ("STG[%i] = \"%s\"\n", i, STG[i]);
  return 0;
}
STG[0] = "Hello" STG[1] = "99" STG[2] = "3" STG[3] = "zz" STG[4] = "best" STG[0] = "3" STG[1] = "99" STG[2] = "Hello" STG[3] = "best" STG[4] = "zz"
You might feel that the sort fails due to call-by-value the same way bad_swap failed previously: since call-by-value copies the arguments into the parameters but does not, at the end, copy the parameters back to the arguments, swapping C[i] with C[j] should have no lasting effect. But no, C[i] is not a parameter; the array C is the parameter, and C[i] is an element of the caller's array that C points to. Yes, this is subtle; but it is also crucial!
You might question if the output is indeed sorted. For example, we remember that ascii '3' is less than ascii '9', and we know that in ascii 'b'<'h'<'z', but why is '9'<'b' and why is 'H'<'b'?
Well, I don't know why they are, but they are. That is, in ascii the digits come before the capital letters, which in turn come before the lower-case letters.
#include <stdio.h> int main(int argc, char *argv[]) { char c1 = '1', c2 = '2';
char ac[10] = "wxyXYZ"; // ac = Array of Chars ac[1] = c1; ac[2] = c2; printf("ac[1]=%c ac[2]=%c\n", ac[1], ac[2]);
char *pc1, *pc2; // pc = Pointer to Char pc1 = &ac[3]; pc2 = pc1+1; printf("*pc1=%c *pc2=%c\n", *pc1, *pc2);
char *apc[10]; // Array of Pointers to Char apc[3] = pc1; // Points at ac[3] apc[4] = pc2-2; // Points at ac[2] printf("*apc[3]=%c *apc[4]=%c\n", *apc[3], *apc[4]); return 0; }
The program on the right includes several types of variables. In particular we find chars, an array of chars, pointers to chars, and an array of pointers to chars.
The program, when run, produces the following output.
ac[1]=1 ac[2]=2 *pc1=X *pc2=Y *apc[3]=X *apc[4]=2
You should first confirm that the types are correct. For example, is * always applied to a pointer? Since all the prints use %c for the values printed, all those values must be chars. Are they?
Then confirm that you agree with the values produced.
At one point the program adds 1 to the char pointer pc1. At another point it subtracts 2 from another char pointer. This is valid only if the final value of the pointer is pointing inside the same array as the initial value. Is this the case?
Start Lecture #07
void matmul(int n, int k, int m, double A[n][k], double B[k][m], double C[n][m]) { int i,j,l; for (i=0; i<n; i++) for (j=0; j<m; j++) { C[i][j] = 0.0; for (l=0; l< k; l++) C[i][j] += A[i][l]*B[l][j]; } }
C does have normal multidimensional arrays. For example, the code on the right multiplies two matrices. Matrices (the simple 2-dimensional type in linear algebra) are rectangular: all rows have the same number of columns.
In some sense C, like Java, has only one-dimensional arrays. However, a one-dimensional array of one-dimensional arrays of doubles is close to a two-dimensional array of doubles. One difference is the notation: C/Java uses A[][], indicating a 1D array of 1D arrays, rather than the A[,] of algebra. Another difference is that, in the example on the right, A[i] (for i<n) is a legal (one-dimensional) array.
The biggest difference is that a C/Java 2D array need not be rectangular, that is the rows need not be the same length. This will become clear in the next few sections.
int A[2][3] = { {5,4,3}, {4,4,4} }; int B[2][3][2] = { { {1,2}, {2,2}, {4,1} }, { {5,5}, {2,3}, {3,1} } };
Multidimensional arrays can be initialized. Once you remember that a two-dimensional array is a one-dimensional array of one-dimensional arrays, the syntax for initialization exemplified on the right is not surprising.
(C, like most modern languages uses row-major ordering so the last subscript varies the most rapidly.)
#include <stdio.h> int main(int argc, char *argv[]) { int A[3][3] = { {1,2,3}, {4,5,6}, {7,8,9} }; printf("*(A[1]+1)=%d\n", *(A[1]+1) ); return 0; }
Looking at the code on the right we see that A[1] can be thought of as a 1D array, so, when written without the second subscript, A[1] acts as a pointer to the first element of row 1. Hence *(A[1]+1) is the same as A[1][1], and the program prints 5.
char amsg[] = "hello"; int main(int argc, char *argv[]) { printf("%c\n", amsg[100]); }
Note that an array of size 1 is treated much like an array of size 10 and that a pointer to X is very similar to an array of X. For example, the code on the right compiles and runs (it is illegal but not caught by the compiler) in part because the types match.
A related comment is that a pointer to a character is the same as a pointer to 10 characters as far as the C compiler is concerned.
char *monthName(int n) { static char *name[] = {"Illegal", "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"}; return (n<1 || n>12) ? name[0] : name[n]; }
The initialization syntax for an array of pointers follows the general rule for initializing an array: Enclose the initial values inside braces.
Looking at the code on the right we see this principle in action. I believe the most common usage of pointer arrays is for an array of character pointers as in this example.
Question: How are those initializers pointers? They look like string constants.
Answer: A string constant is a pointer to its first character.
int A[3][4]; int *B[3];
Consider the two definitions on the right. They look different, but both A[2][3] and B[2][3] are legal (at least syntactically). The real story is that the two definitions most definitely are different. (In fact Java arrays have a great deal in common with the 2nd form in C.)
The declaration int A[3][4]; allocates space for 12 integers (really 12 containers each of which can hold an integer), which are stored consecutively so that A[i][j] is (a container holding) the (4*i+j)th integer stored (counting from zero). With the simple declaration written, none of the integers is initialized, but we have seen how to initialize them.
The declaration int *B[3]; allocates space for
NO integers.
It does allocate space for 3 pointers (to
integers).
The pointers are not initialized so they currently point to junk.
The program must somehow arrange for each of them to point to a
group of integers (and must figure out when the group ends).
An important point is that the groups may have different lengths.
The technical jargon is that we can have a
ragged array as
shown in the bottom of the picture.
#include <stdio.h>
int main(int argc, char *argv[]) {
  int A[3][3] = { {1,2,3}, {4,5,6}, {7,8,9} };
  int B0[1] = {2}, B1[3] = {5,14,5}, B2[2] = {11,4};
  int *B[3] = {B0, B1, B2};
  printf("*(A[1]+1)=%d\n", *(A[1]+1) );
  printf("A[1][1]=%d\n", A[1][1] );
  printf("*(B[1]+1)=%d\n", *(B[1]+1) );
  printf("B[1][1]=%d\n", B[1][1]);
  return 0;
}
The code sequence on the right shows the comparison between
initializing a 2-D array of integers and initializing a 1-D array of
pointers to integers.
Note how B is initialized to a 1D array of integer pointers.
The example also illustrates that C supports
ragged arrays.
When the program is run the output produced is
*(A[1]+1)=5 A[1][1]=5 *(B[1]+1)=14 B[1][1]=14
Although ragged arrays of Integers (and Floats) are used in C, you are more likely to see a ragged array of chars, that is a 1-D array of pointers to (varying length) strings.
We have already seen two examples of this: The monthName program just above and the Goodbye Cruel World diagrams in section 5.6. We next illustrate that every C main() program on Unix (e.g., on Linux) also uses a (ragged) array of strings, i.e. an array of character pointers.
On the right is a picture of how arguments are passed to a (Unix) command. In this case the command executed was
  ./cmdline xx y
The arguments generated by the system are shown on the left. The green arrows show those arguments being copied into the parameters of the main program. (The black arrows are simply pointers, as before.) Each main() program has two parameters: an integer, normally called argc for argument count, and an array of character pointers, normally called argv for argument vector.
As always, a naked array name is a pointer to the first element. argv in the main() program is best thought of as a pointer that has been initialized to point to the first element of the array of pointers. The diagram makes clear that both parameters, argc and argv, are containers. In particular they have lvalues, i.e., they can appear on the LHS of an assignment statement. (Of course, with call-by-value any changes to the parameters are not passed back to the arguments.)
Since the same program can have multiple names (more on that later), argv[0], the first element of the argument vector, is a pointer to a character string containing the name by which the command was invoked. Subsequent elements of argv point to character strings containing the arguments given to the command. Finally, there is a NULL pointer to indicate the end of the pointer array.
The integer argc gives the total number of pointers, including the pointer to the name of the command. Thus, the smallest possible value for argc is 1 and argc=3 for the picture above.
#include <stdio.h> int main(int argc, char *argv[argc]) { int i; printf("My name is %s; ", argv[0]); printf("I was called with %d argument%s.\n", argc-1, (argc==2) ? "" : "s"); for (i=1; i<argc; i++) printf("Argument #%d is %s.\n", i, argv[i]); }
sh-4.0$ cc -o cmdline cmdline.c sh-4.0$ ./cmdline My name is ./cmdline; I was called with 0 arguments. sh-4.0$ ./cmdline x My name is ./cmdline; I was called with 1 argument. Argument #1 is x. sh-4.0$ ./cmdline xx y My name is ./cmdline; I was called with 2 arguments. Argument #1 is xx. Argument #2 is y. sh-4.0$ ./cmdline -o cmdline cmdline.c My name is ./cmdline; I was called with 3 arguments. Argument #1 is -o. Argument #2 is cmdline. Argument #3 is cmdline.c. sh-4.0$ cp cmdline mary-joe sh-4.0$ ./mary-joe -o cmdline cmdline.c My name is ./mary-joe; I was called with 3 arguments. Argument #1 is -o. Argument #2 is cmdline. Argument #3 is cmdline.c.
The code on the right shows how a program can access its name and any arguments it was called with.
Having both a count (argc) and a trailing NULL pointer (argv[argc]==NULL) is redundant, but convenient. The code on the right treats argv as an array. It loops through the array using the count argc as an upper bound, but does not use the trailing NULL. Another style (using NULL but not argc) would look something like
while (*argv) printf("%s\n", *argv++);
which treats argv as a pointer and terminates when argv points to NULL.
The second frame on the right shows a session using the code directly above. We assume the first frame is stored in the file cmdline.c
Now we can get rid of some symbolic constants that should have been specified at run time.
Here are two before and after examples. The code on the left uses symbolic constants; on the right we use command-line arguments.
Before (symbolic constants):

  #include <stdio.h>
  #define LO 0
  #define HI 300
  #define INCR 20
  main() {
    int F;
    for (F=LO; F<=HI; F+=INCR)
      printf("%3d\t%5.1f\n", F, (F-32)*(5.0/9.0));
  }

After (command-line arguments):

  #include <stdlib.h>
  #include <stdio.h>
  int main (int argc, char *argv[argc]) {
    int F;
    for (F=atoi(argv[1]); F<=atoi(argv[2]); F+=atoi(argv[3]))
      printf("%3d\t%5.1f\n", F, (F-32)*(5.0/9.0));
    return 0;
  }
Notes.
The original (before) version terminates abnormally (it doesn't return 0).
Before (symbolic constants):

  #include <stdio.h>
  #include <math.h>
  #define A +1.0  // should read
  #define B -3.0  // A,B,C
  #define C +2.0  // using scanf()
  void solve (float a, float b, float c);
  int main() {
    solve(A,B,C);
    return 0;
  }

After (command-line arguments):

  #include <stdlib.h>
  #include <stdio.h>
  #include <math.h>
  void solve (float a, float b, float c);
  int main(int argc, char *argv[argc]) {
    solve(atof(argv[1]), atof(argv[2]), atof(argv[3]));
    return 0;
  }

The function solve() is identical in both versions:

  void solve (float a, float b, float c) {
    float d;
    d = b*b - 4*a*c;
    if (d < 0)
      printf("No real roots\n");
    else if (d == 0)
      printf("Double root is %f\n", -b/(2*a));
    else
      printf("Roots are %f and %f\n",
             ((-b)+sqrt(d))/(2*a), ((-b)-sqrt(d))/(2*a));
  }
Notes.
We don't check the arguments; the program assumes three numeric command-line arguments are supplied. Now we can specify the coefficients at run time instead of at compile time.
#include <string.h>
#include <stdio.h>
#include <ctype.h>
int main (int argc, char *argv[argc]) {
  int c, makeUpper=0;
  if (argc > 2)
    return -argc;  // error return
  if (argc == 2)
    if (strcmp(argv[1], "-toupper")) {
      printf("Arg %s illegal.\n", argv[1]);
      return -1;
    } else  // -toupper was arg
      makeUpper=1;
  while ((c = getchar()) != EOF)
    if (!isdigit(c)) {
      if (isalpha(c) && makeUpper)
        c = toupper(c);
      putchar(c);
    }
  return 0;
}
Often a leading minus sign (-) is used for command-line arguments that are optional.
The program on the right removes all digits from the input.
If it is given the optional argument
-toupper it also
converts all letters to upper case using the toupper()
library routine.
Notes
The integer makeUpper is used as a Boolean flag.
Demo this function on my laptop. It is the file c-progs/rem-digit.c.
Homework: At the very end of chapter 3 you wrote escape() that converted a tab character into the two characters \t (it also converted newlines, but ignore that). Call this function detab() and call the reverse function entab(). Combine the entab() and detab() functions by writing a function tab that has one command-line argument.
tab -en # performs like entab() tab -de # performs like detab()
#include <stdio.h>
#include <string.h>
#define MAXLINE 1000
int getline(char *line, int max);

// find: print lines matching argv[1]
int main(int argc, char *argv[]) {
  char line[MAXLINE];
  int found = 0;
  if (argc != 2)
    printf("Usage: find pattern\n");
  else
    while (getline(line, MAXLINE) > 0)
      if (strstr(line, argv[1]) != NULL) {
        printf("%s", line);
        found++;
      }
  return found;
}
Each of the programs in this section accepts a command-line argument (call it pattern) and when executed the program echos all input lines that contain the pattern. These programs are useful in their own right. However, our main interest is the pointer/character/string/array manipulations that occur.
This first version, which is shown on the right, simply echos those input lines that contain the command-line argument. This version is fairly simple thanks to the library routine strstr(s1, s2), which checks whether string s2 occurs in s1. The declaration of strstr(s1,s2) is found in string.h.
In fact strstr(s1,s2) indicates the location in s1 where s2 occurs, but we do not use this information as we want to know only if the pattern occurs in the line, not where.
The pattern we are looking for is the first command-line argument so the routine checks each input line to see if argv[1] occurs. If it does occur, the line is printed.
Now we permit two optional command-line arguments.
The first, except, indicates that we are to reverse the sense of the comparison and print those lines that do not contain the pattern.
The second, number, specifies that the line number is printed for all matching lines.
A common convention, which is followed for this example, is to use
a single letter (preceded by
-) for optional command-line
arguments.
In this case we use -x for
except and -n
for
number.
We also follow the convention of allowing these single letter
options to be combined.
Hence the single argument
-nx (or
-xn) can be used
instead of
-n -x (or
-x -n).
In all four cases, we print lines not matching the string given in the required argument, and for each such line we also print its line number.
In summary we want to process all arguments that start
with
- and for each one check every character after
the
-.
#include <stdio.h>
#include <string.h>
#define MAXLINE 1000
int getline(char *line, int max);

// find: print lines matching pattern
int main(int argc, char *argv[]) {
  char line[MAXLINE];
  long lineno = 0;
  int c, except = 0, number = 0, found = 0;

  while (--argc > 0 && (*++argv)[0] == '-')
    while ((c = *++argv[0]))
      switch (c) {
      case 'x':
        except = 1;
        break;
      case 'n':
        number = 1;
        break;
      default:
        argc = 0;
        found = -1;
        printf("find: illegal option %c\n", c);
        break;
      }
  if (argc != 1)
    printf("Usage: find -x -n pattern\n");
  else
    while (getline(line, MAXLINE) > 0) {
      lineno++;
      if ((strstr(line, *argv) != NULL) != except) {
        if (number)
          printf("%ld:", lineno);
        printf("%s", line);
        found++;
      }
    }
  return found;
}
The entire program is quite clever and well done, especially the part that handles the variable number of optional arguments. I strongly suggest you give it careful study. In class we will concentrate on how the program processes the variable number of arguments. In particular we will study the distinction between the pink *(++argv)[0] and the yellow *++argv[0].
In class I want to discuss the pink and yellow highlighted regions, both of which contain *, ++, argv, and [0] in that order when read left to right. The difference between them is a pair of parentheses, that determine the order the operations are applied. Let's start with the pink.
Recall that, when execution begins, argv points to an array of char pointers. Specifically, it initially points at the first entry of the array, argv[0], which itself points at the name of the executable. Hence ++argv initially points at a pointer to the first command-line argument, which is a string (during subsequent iterations it points at subsequent arguments). Hence, *++argv initially points to the first argument and (*++argv)[0] (which can also be written as **++argv) is the first character of the first argument. This character is what would be a '-', if we have an optional argument. Subsequent iterations of this while loop increment argv to point to subsequent arguments.
The () are needed since [] has higher precedence than *. Indeed, it is these () that distinguish the pink from the yellow, which we look at next.
When the yellow is executed, argv points at an argument that begins with a '-'. More precisely, argv points at the pointer to a character string that begins with a '-'. Hence argv[0] is the character pointer, and ++argv[0] (initially) points at the character after the '-', and *++argv[0] is (initially) the character after the '-'.
Since we can have multiple options, each specified by a single character (in this example the max is 2, but the code is more general), the (inner) while loop moves character by character across the argument.
The outer while moves from argument to argument executing the inner loop for each one until it reaches an argument not beginning with a '-' (or runs out of arguments, which is an error).
Start Lecture #08
#include <ctype.h>
#include <string.h>
#include <stdio.h>

// Program to illustrate function pointers
int digitToStar(int c);   // Cvt digit to *
int letterToStar(int c);  // Cvt letter to *

int main (int argc, char *argv[argc]) {
  int c;
  int (*funptr)(int c);
  if (argc != 2)
    return argc;
  if (strcmp(argv[1],"digits") == 0)
    funptr = &digitToStar;
  else if (strcmp(argv[1],"letters") == 0)
    funptr = &letterToStar;
  else
    return -1;
  while ((c=getchar()) != EOF)
    putchar((*funptr)(c));
  return 0;
}
int digitToStar(int c) { if (isdigit(c)) return '*'; return c; }
int letterToStar(int c) { if (isalpha(c)) return '*'; return c; }
In C you can do very little with functions, mostly define them and call them (and take their address, see what follows).
However, pointers to functions (called function pointers) are real values. You can do a lot with function pointers.
One reason the system can do more with function pointers than with functions is that all function pointers (indeed all pointers) are the same length.
The program on the right is a simple demonstration of function pointers. Two very simple functions are defined.
The first function, digitToStar(), accepts an integer (representing a character) and returns an integer. If the argument is a digit, the value returned is (the integer version of) '*'. Otherwise the value returned is just the unchanged value of the argument.
Similarly letterToStar() converts a letter to '*' and leaves all other characters unchanged.
The star of the show is funptr. Read its declaration carefully: The variable funptr is the kind of thing that, once de-referenced, is the kind of thing that, given an integer, produces an integer.
So it is a pointer to something. That something is a function from integers to integers.
The main program checks the (mandatory) argument. If the argument is "digits", funptr is set to the address of digitToStar(). If the argument is "letters", funptr is set to the address of letterToStar(). So funptr is a pointer to one of two functions.
Then we have a standard getchar()/putchar() loop with a slight twist. The character (I know it is an integer) sent to putchar() is not the naked input character, but is instead the input character processed by whatever function funptr points to. Note the "*" in the call to putchar().
Note: C permits abbreviating &function-name to function-name. So in the program above we could write
funptr = digitToStar; funptr = letterToStar;
instead of
funptr = &digitToStar; funptr = &letterToStar;
I don't like that abbreviation so I don't use it. Others do like it and you may use it if you wish.
#include <stdio.h> #include <stdlib.h> int funA(int x) {printf("A x=%d\n", x); return x+10; } int funB(int x) {printf("B x=%d\n", x); return x+20; } int funC(int x) {printf("C x=%d\n", x); return x+30; } int (*funPtrArr[])(int x) = {&funA, &funB, &funC}; int main(int argc, char *argv[]) { int x = atoi(argv[1]); int y = atoi(argv[2]); printf("x=%d\n", x); int z; z = (*funPtrArr[0])(x); printf ("z=%d\n", z); z = (*funPtrArr[1])(y); printf ("z=%d\n", z); z = (*funPtrArr[2])(100); printf ("z=%d\n", z); }
Function pointers are especially useful when there are many functions involved and you have a function pointer array.
On the right is a simple, but rather silly example.
When run with the command-line arguments 4 5, it produces the following output.
x=4 A x=4 z=14 B x=5 z=25 C x=100 z=130
We are basically skipping this section. It gives some examples of more complicated declarations than we have seen (but they are just more of the same; one example is below). The main part of the section presents a program that converts C declarations to/from more-or-less English equivalents.
Here is one example of a complicated declaration. It is basically the last one in the book with function arguments added.
char (*(*f[3])(int x))[5]
Remembering that *f[3] (like *argv[argc]) is an array of 3 pointers to something, not a pointer to an array of 3 somethings, we can unwind the above to:
The variable f is an array of size three of pointers.
Remembering that *(g)(int x) = *g(int x) is a function returning a pointer and not a pointer to a function, we can further unwind the monster to:
The variable f is an array of size three of pointers to functions taking an integer and returning a pointer to an array of size five of characters.
One more (the penultimate from the book).
char (*(f(int x))[5])(float y)
The function f takes an integer and returns a pointer to an array of five pointers to functions, each taking a float and returning a character.
For a start, a Java programmer can think of structures as basically classes and objects without methods.
On the right we see some simple structure declarations for use in a geometry application. They should be familiar from your experience with Java classes in CS101 and CS102.
#include <math.h>

struct point {
  double x;
  double y;
};

struct rectangle {
  struct point ll;
  struct point ur;
} rect1;

double f(struct point pt);
struct point mkPoint(double x, double y);
struct point midPoint(struct point pt1, struct point pt2);
int main(int argc, char *argv[]) {
  struct point pt1 = {40., 20.}, pt2;
  struct rectangle rect1;
  pt2 = pt1;
  rect1.ll = pt2;
  pt1.x += 1.0;
  pt1.y += 1.0;
  rect1.ur = pt1;
  rect1.ur.x += 2.;
  return 0;
}
The top declaration defines the struct point type. This is similar to defining a class without methods.
As with Java classes, structures in C help organize data by permitting you to treat related data as a unit. In the case of a geometric point, the x and y coordinates are closely related mathematically and, as components of the struct point type, they become closely related in the program's data organization.
The next definition defines both a new type struct rectangle and a variable rect1 of this type. Note that we can use struct point, a previously defined struct, in the declaration of struct rectangle.
Recall from plane geometry that a rectangle (we assume its sides are parallel to the axes) is determined by its lower left ll and upper right ur corners.
The next group declares a function f() having a structure parameter, then a function mkPoint() with a structure result, and finally midPoint() with both structure parameters and a structure result.
The definition in main() of pt1 illustrates an initialization. C does not support structure constants in ordinary assignments. Hence you could not in main() have the assignment statement

pt1 = {40., 20.};

(C99 does permit this via a compound literal: pt1 = (struct point){40., 20.};.)
We see in the executable statements of main() that one can assign a point to a point as well as assigning to each component.
Since the rectangle rect1 is composed of points, which are in turn composed of doubles, we can assign a point to a point component of a rectangle and can assign a double to a double component of a point component of a rectangle.
If you wrote Java programs for geometry (we did when I last taught 201/202), they probably had classes like rectangle and point and had objects like pt1, pt2, and rect1. Given these classes, the assignment statements in our C-language main() function would have been more or less legal Java statements as well.
The only legal operations on a structure are copying it, assigning
to it as a unit, taking its address with &, and accessing its
members.
Note that
copying a structure includes passing one as a
parameter to a function or returning the value of a function.
double dist(struct point pt) {
  return sqrt(pt.x*pt.x + pt.y*pt.y);
}

struct point mkPoint(double x, double y) {
  // return {x, y};  invalid in C
  struct point pt;
  pt.x = x;
  pt.y = y;
  return pt;
}

struct point midpoint(struct point pt1, struct point pt2) {
  // return (pt1 + pt2) / 2;  not C
  struct point pt;
  pt.x = (pt1.x + pt2.x) / 2;
  pt.y = (pt1.y + pt2.y) / 2;
  return pt;
}

void mvToOrigin(struct rectangle *r) {
  (*r).ur.x = (*r).ur.x - (*r).ll.x;
  r->ur.y = r->ur.y - r->ll.y;
  r->ll.y = 0;
  r->ll.x = 0;
}
On the right we see four geometry functions. Although all four deal with structs, they do so differently. A function can receive and return structures, but you may prefer to specify the constituent native types instead. A third alternative is to utilize a pointer to a struct.
As we have seen, functions can take structures as parameters, but is that a good idea? Should we instead use the components as parameters, or perhaps pass a pointer to the structure? For example, if main() wishes to pass pt1 (of type struct point) to a function f(), should we write f(pt1), f(pt1.x, pt1.y), or f(&pt1)?
Naturally, the declaration of f() will be different in the three cases. When would each case be appropriate?
Note that mkPoint() is a Java-constructor-like function that produces a structure from its constituents; for example, mkPoint(pt1.x, pt2.y) above would produce a new point having coordinates that are a mixture of pt1 and pt2.
Given a pointer to a structure, a component can be accessed using * followed by the standard component selection operator `.` (due to precedence, the parentheses in (*r).ur are needed). C provides the abbreviation ->, so r->ur means (*r).ur.
Note: The -> abbreviation is employed almost universally. Constructs like ptr1->elt5 are very common; the long form (*ptr1).elt5 is much less common.
Homework: Write two versions of mkRectangle, one that accepts two points, and one that accepts 4 real numbers.
int f(int x) {
  if (x & 1)
    return 3*x + 1;
  return x >> 1;
}

#define MAXVAL 10000
#define ARRAYBOUND (MAXVAL+1)
int G[ARRAYBOUND];
int P[ARRAYBOUND];

struct gameValType {
  int G[ARRAYBOUND];
  int P[ARRAYBOUND];
} gameVal;

struct gameValType {
  int G;
  int P;
} gameVal[ARRAYBOUND];

#define NUMEMPLOYEES 2
struct employeeType {
  int id;
  char gender;
  double salary;
} employee[NUMEMPLOYEES] = {
  { 32, 'M', 1234. },
  { 18, 'F', 1500. }
};
Consider the following game. (The code on the right does one step.)
So, starting with N=5, you get 5 16 8 4 2 1.
Starting with N=7, you get 7 22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1.
And starting with N=27, you get 27 82 41 ... 9232 ... 160 80 40 20 10 5 16 8 4 2 1.
It is an open problem in number theory whether every positive integer eventually gets to 1. This has been checked for MANY numbers. Let G[i] be the number of rounds of the game needed to get from i to 1 (define G[0]=-1). G[1]=0, G[2]=1, G[7]=16, G[27]=111.
Factoring into primes is fun too. So let P[N] be the number of distinct prime factors of N. P[2]=1, P[16]=1, P[12]=2 (define P[0]=P[1]=0).
This leads to two arrays as shown on the right in the second frame.
We might want to group the two arrays into a structure as in the third frame. This version of gameVal is a structure of arrays. In this frame the number of distinct prime factors of 763 would be stored in gameVal.P[763].
In the fourth frame we grouped together the values of G[n] and P[n]. This version of gameVal is an array of structures. In this frame the number of distinct prime factors of 763 would be stored in gameVal[763].P.
If we had a database with employeeID, gender, and salary, we might use the array of structures in the fifth frame. Note the initialization. The inner {} are not needed, but I believe they make the code clearer.
How big is the employee array of structures? How big is employeeType?
C provides the sizeof unary operator, in two forms (sizeof object and sizeof(type name)), to answer these questions.
The answers are not trivial and indeed are system dependent, for two reasons.
Example: Assume char requires 1 byte, int requires 4, and double requires 8. Let us also assume that each type must be aligned on an address that is a multiple of its size and that a struct must be aligned on an address that is a multiple of 8.
So the data in struct employeeType requires 4+1+8=13 bytes. But three bytes of padding are needed between gender and salary so the size of the type is 16.
Homework: What is sizeof(struct gameValType) for each version? What is sizeof employee?
#include <stdio.h>

int main(int argc, char *argv[argc]) {
  struct howBig {
    int n;
    double y;
  } howBigAmI[] = { {26, 18.}, {33, 99.} };
  printf("howBigAmI has %ld entries.\n",
         sizeof howBigAmI / sizeof(struct howBig));
}
In the example above it is easy to look at the initialization and count the array bound for employee. An annoyance is that you need to change the #define for NUMEMPLOYEES if you add or remove an employee from the initialization list.
A more serious problem occurs if the list is long in which case manually counting the number of entries is tedious and, much worse, error prone.
Instead we can use the sizeof operator (in both its forms) to have the compiler compute the number of entries in the array. The code is shown on the right. The output produced is
howBigAmI has 2 entries.
As its name suggests, the purpose of getword() is to get (i.e., read) the next word from the input. Its first parameter is a buffer into which getword() will place the word found. Although declared as a char *, the parameter is viewed as pointing to many characters, not just one. The second parameter throttles getword(), restricting the number of characters it will read. Thus getword() is not scary; the caller need only ensure that the first parameter points to a buffer at least as big as the second parameter specifies.
The definition of a word is technical. (It is chosen to enable programs like the keyword counting example in the next section.) A word is either a string of letters and digits beginning with a letter, or a single non-whitespace character. The return value of the function itself is the first character of the word, or EOF for end of file, or the character itself if it is not alphabetic.
The program has a number of points to note.
man isalnum.
Note that getword() above (which is from the text) requires the use of getch() and ungetch() from the text (and notes). The versions of these two routines in the standard library are slightly different and getword() fails if you use them.
Start Lecture #09
Remark:
#include <stdio.h>
#include <ctype.h>
#include <string.h>

#define MAXWORDLENGTH 50

struct keytblType {
  char *keyword;
  int count;
} keytbl[] = {
  { "break", 0 },
  { "case", 0 },
  { "char", 0 },
  { "continue", 0 },
  // others
  { "while", 0 }
};

#define NUMKEYS (sizeof keytbl / sizeof keytbl[0])

int getword(char *, int);  // no var names given
struct keytblType *binsearch(char *);

int main(int argc, char *argv[argc]) {
  char word[MAXWORDLENGTH];
  struct keytblType *p;
  while (getword(word, MAXWORDLENGTH) != EOF)
    if (isalpha(word[0]) && ((p = binsearch(word)) != NULL))
      p->count++;
  for (p = keytbl; p < keytbl+NUMKEYS; p++)
    if (p->count > 0)
      printf("%4d %s\n", p->count, p->keyword);
  return 0;
}

struct keytblType *binsearch(char *word) {
  int cond;
  struct keytblType *low = &keytbl[0];
  struct keytblType *high = &keytbl[NUMKEYS];
  struct keytblType *mid;
  while (low < high) {
    mid = low + (high-low) / 2;
    if ((cond = strcmp(word, mid->keyword)) < 0)
      high = mid;
    else if (cond > 0)
      low = mid + 1;
    else
      return mid;
  }
  return NULL;
}
The program on the right illustrates well the use of pointers to structures and also serves as a good review of many C concepts. The overall goal is to read text from the console and count the occurrence of C keywords (such as break, if, int, and others.). After reading the input, the program prints a list of all the keywords that were present and how many times each occurred.
Let's examine the code on the right.
Note that pointer arithmetic on struct keytblType pointers is scaled: p++ increments p enough so that it points to the next entry.
In binsearch(), mid is computed as low + (high-low)/2 rather than (low+high)/2, since adding two pointers is illegal in C; the subtraction high-low gives the number of entries between them, so mid lands at the midpoint between high and low. But, other than that oddity, I find it striking how array-like the code looks. That is, the manipulations of the pointers could just as well be manipulating indices.
Note/Suggestion: The code just above won't compile and run by itself. It needs getword(), which needs getch() and ungetch(), which are further back. Some of these are in standard libraries, but the library versions are slightly different and will not work with program above.
I believe it would be instructive for you to put the pieces all together into a single .c file which you then compile and run. As data you can type in (or cut and paste in) any C program and it should work. At least it worked for me.
Consider a basic binary tree. A small example is shown on the near right; one cell is detailed on the far right. Looking at the diagram on the far right suggests a structure with three components: left, right, and value. The first two refer to other tree nodes and the third is an integer.
I am fairly sure you did trees in 101-102 but I will describe the C version as though it is completely new. I will say that in both Java and C the key is the use of pointers. In C this will be made very explicit by the use of *. In Java it was somewhat under the covers.
struct bad {
  struct bad left;
  int value;
  struct bad right;
};

struct treenode_t {
  struct treenode_t *left;
  int value;
  struct treenode_t *right;
};
Since trees are recursive data structures you might expect some sort of recursive structure in the C or Java declaration. Consider struct bad defined on the right. (You might be fancier and have a struct tree, which contains a struct root, which has in turn an int value and two struct tree's).
But struct bad and its fancy friends are infinite
data structures: The left and right components are the same type as
the entire structure.
So the size of a struct bad is the size of
an int plus the size of two struct bad's.
Since the size of an int exceeds zero, the total size must
be infinite.
Some languages permit infinite structures providing you never try to
materialize more than a finite piece.
But C is not one of those languages.
So for us, struct bad is bad!
Instead, we use struct treenode_t as shown on the right (names like treenode_t are a shorter and very commonly used alternative to names like treenodeType).
The key is that a struct treenode_t does not contain an internal struct treenode_t. Instead it contains pointers to two internal struct treenode_t's.
Be sure you understand why struct treenode_t is finite and corresponds exactly to the tree picture above.
struct s {
  int val;
  struct t *pt;
};

struct t {
  double weight;
  struct s *ps;
};
What if you have two structure types that need to reference each other? You cannot have a struct s contain a struct t if struct t contains a struct s. If you did try that, then each struct s would contain a struct t, which would in turn contain a struct s, which would contain ... .
Once again pointers come to the rescue as illustrated on the right. Neither structure is infinite. A struct s contains one integer and one pointer. A struct t contains one double and one pointer. Neither is a subset of the other; instead each references (points at) the other.
struct llnode_t {
  long data;
  struct llnode_t *next;
};
Probably the most familiar 1D unbounded data structure (beyond the 1D array) is the linked list, which is well studied in 101-102. On the near right we have a diagram of a small linked list and on the far right we show the C declaration of a structure corresponding to one node in the diagram. Again we note that a struct llnode_t does not contain a struct llnode_t. Instead, it contains a pointer to such a node.
With one pointer in each node the structure has a natural 1D geometric layout. Trees, in contrast, have two pointers per node and have a natural 2D geometric layout.
Instead of trees, we will investigate a different 2-dimensional structure, a linked list of linked lists. This structure (or something similar) will likely become the subject of the future lab 2.
Although all the actual data are strings (i.e., char *), there are two different types of structures present, the vertical list of struct node2d's (2D nodes) and the many horizontal lists of struct node1d's (1D nodes).
Actually it is a little more complicated. Each horizontal list has a list head that is a node2d, and there must be somewhere (not shown in the diagram) a pointer to the first node2d (i.e., the node with data joe). The three decreasing-length horizontal lines indicate that the pointer in question is null. (I borrow that symbol from electrical engineering, where it is used to represent ground.)
struct node1d {
  struct node1d *next;
  char *name;
};

struct node2d {
  struct node1d *first;
  char *name;
  struct node2d *down;
};
The structure declarations are on the right. Perhaps I should have used struct node1d_t and struct node2d_t.
Be sure you understand why the picture above agrees with the C declarations on the right.
The diagram (and the code) suggests a hierarchy: the nodes in the left-hand column are higher level than the others. You can think of the struct node1d's on a single row as belonging to a list headed by the struct node2d on the left of that same row.
Note that every struct node1d is the same (rather small) size, independent of the length of the name. Similarly, all the struct node2d's are the same size (but bigger than the struct node1d's). In that sense the figure is misleading, since it suggests that alice is larger than joe. The confusion is that the node does not contain the actual 6 characters of alice ('a', 'l', 'i', 'c', 'e', '\0') but rather a (fixed size) pointer to the name.
Said using C terminology the name component is a fixed size pointer. The possibly large string is the object pointed to by name, i.e., it is *name. But *name is a char, which is even smaller than a pointer. It would be better to say that name points to the first character of the string; you must look at the string itself to see where it ends.
2d node name=joe
    1d node name=xy2
    1d node name=sally
    1d node name=e342
2d node name=alice
2d node name=R2D2
    1d node name=cso
    1d node name=c3pO
How should we print the above structure?
I suggest, and probably lab2 will require, that you use the style shown on the right. (You might want quotes around the strings.)
The idea behind this style is the following.
From this printout one can see immediately, for example, that the 2D list has three entries and that the middle 2D node has an empty sublist of 1D nodes.
One question remains.
The string itself can be big.
If the length is a constant, then the compiler can be asked to leave
space
for it.
Question: What if the string is generated at runtime?
Answer: malloc().
As you know, in Java, objects (including arrays) have to be created via the new operator. We have seen that in C this is not always needed: you can declare a struct rectangle and then declare several rectangles.
However, this doesn't work if you want to generate the rectangles during run time. When you are writing a program to process 2D lists, you won't know how many 2d nodes or 1d nodes will be needed. That number will be determined by the data read when the program is run.
In addition the size of the strings that name each node will also not be known until runtime.
So we need a way to create an object during run time. In C this uses the library function malloc(), which takes one argument, the amount of space to be allocated. The function malloc() allocates the requested space and returns a pointer to it. The companion function free() takes as argument a pointer that was obtained from malloc and makes the corresponding space available for future malloc()s.
These two new functions should remind you of the similar pair we studied a few lectures ago. The new functions are considerably more sophisticated than the old ones.
Since malloc() is not part of C, but is instead just a library routine, the compiler does not treat it specially (unlike the situation with new, which is part of Java). Since malloc() is just an ordinary function, and we want it to work for dynamically created objects of any type (e.g., an int, a char *, a struct treenode, etc), and there is no way to pass the name of a type to a function, two questions arise.
The alignment question is easy and can be essentially ignored at this time. This is fortunate since we haven't studied (or even defined) alignment yet, but will do so soon after we finish with C.
The answer to the alignment question (which will become clear when we study alignment) is that we simply have malloc() return space aligned on the most stringent requirement. So, on a system where long doubles and all structures require 16-byte alignment and all other data types require 8-byte, 4-byte, 2-byte, or 1-byte alignment, then malloc() always returns space aligned on a 16-byte boundary (i.e., the address is a multiple of 16).
Ensuring type correctness is not automatic, but not hard. Specifically, malloc() returns a void *, which means that the value returned is a pointer that must be explicitly coerced to the correct type. For example, lab 2 might contain code like
struct node2d *p2d;
p2d = (struct node2d *) malloc(sizeof(struct node2d));
An application calls the library routine free(void *p) to return memory obtained by malloc(). Indeed p must be a pointer returned by a previous call to malloc(). Note, as mentioned above, that the order in which chunks of memory are freed need not match the order in which they were obtained.
It is clearly an error to continue using memory you already freed. Such errors often lead to a crash with very little useful diagnostic information available.
Advice: Try very hard not to make this error.
Note: See, in addition, section 7.8.5 below.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
  int n;
  char *As;
  scanf("%d", &n);
  As = (char *) malloc(1 + n * sizeof(char));
  for (int i = 0; i < n; i++)
    As[i] = 'A';
  As[n] = '\0';
  printf("As is: %s\n", As);
  return 0;
}
The program on the right reads an integer n and then produces a string containing n A's (plus, of course, the trailing null character '\0' to mark the end of the string). Note that n is unbounded; the only limit is the (virtual and physical) memory on your system. malloc() is used to obtain the memory needed for all n A's.
Note that malloc() is declared in the system library stdlib.h, which explains the second #include.
Also note that As, which is a character pointer, is quite comfortable in its dual role as a character array. Of course the declaration char *As does not allocate any space for the characters. That job is handled by malloc().
At various points in lab2, you may need to
create a
node, either a struct node2d or a
struct node1d.
These individual nodes cannot be simply declared since we don't know
until runtime how many there will be of each type and what will be
the individual names.
The situation will be that a
user of your lab has entered a
command such as:
append2d name2
A first call to getword() yields
append2d so you
know you are creating a new struct node2d and placing it at
the end of the existing
vertical list.
A second call to getword() yields
name2 which is
the string you are to place in the newly-created
struct node2d.
Note that the lab provides an upper bound on the length
of
name2.
Since the node and the string must be created, TWO calls to malloc() are used. My code has the following comments.
// create a 2D node with the given name and null 1D sublist
// first malloc() space for the node
// now malloc() space for the name (i.e., the real string)
Skipped
Instead of declaring pointers to trees via

struct treenode *ptree;

we can write

typedef struct treenode *Treeptr;
Treeptr ptree;

Thus Treeptr is a new name for the type struct treenode *. As another example, instead of

char *str1, *str2;

we could write

typedef char *String;
String str1, str2;
Note that this does not give you a new type; it just gives you a new name for an existing type. In particular str1 and str2 are still pointers to characters even if declared as a String above.
A common convention is to capitalize a typedef'ed name.
struct something {
  int x;
  union {
    double y;
    int z;
  };          // anonymous union (legal in C11)
};
Traditionally union was used to save space when memory was expensive. Perhaps with the recent emphasis on very low power devices, this usage will again become popular. Looking at the example on the right, y and z would be assigned to the same memory locations. Since the size allocated is the larger of what is needed, the union takes space max(sizeof(double), sizeof(int)) rather than the sizeof(double)+sizeof(int) that would be needed without the union.
It is up to the programmer to know which variable is actually stored. The union shown cannot be used if y and z are both needed at the same time.
It is risky since there is no checking done by the language.
A union is aligned on the most severe alignment of its constituents. This can be used in a rather clever way to meet a requirement of malloc().
As we mentioned above when discussing malloc(), it is sometimes necessary to force an object to meet the most severe alignment constraint of any type in the system. How can we do this so that if we move to another system where a different type has the most severe constraint, we only have to change one line?
struct something {
  int x;
  struct something *p;
  // others
} obj;

// assume long is most severely aligned
typedef long Align;

union something {
  struct dummyname {
    int x;
    union something *p;
    // others
  } s;
  Align dummy;
};

typedef union something Something;
Say struct something, as shown in the top frame on the right, is the type we want to make most severely aligned.
Assume that on this system the type long has the most severe alignment requirement and look at the bottom frame on the right.
The first typedef captures the assumption that long has the most severe alignment requirement on the system. If we move to a system where double has the most severe alignment requirement, we need change only this one line. The name Align was chosen to remind us of the purpose of this type. It is capitalized since one common convention is to capitalize all typedefs.
The variable dummy is not to be used in the program. Its purpose is just to force the union, and hence s to be most severely aligned.
In the program we declare an object say obj to be of type Something (with a capital S) and use obj.s.x instead of obj.x as in the top frame. The result is that we know the structure containing x is most severely aligned.
See section 8.7 if you are interested.
Skipped
Start Lecture #10
Remark: There was a typo in lab1 problem 2 (the parameters to reorder() needed ** not just *). I added some words and a diagram to the problem to make this clear. I will discuss the revision at the end of this class.
This pair forms the simplest I/O routines.
#include <stdio.h>

int main(int argc, char *argv[argc]) {
  int c;
  while ((c = getchar()) != EOF)
    if (putchar(c) == EOF)
      return EOF;
  return 0;
}
The function getchar() takes no parameters and returns an integer. This integer is the integer value of the character read from stdin or is the value of the symbolic parameter EOF (normally -1), which is guaranteed not to be the integer value of any character.
The function putchar() takes one integer parameter, the integer value of a character. The character is sent to stdout and is returned as the function value (unless there is an error in which case EOF is returned).
The code on the right copies the standard input (stdin), which is usually the keyboard, to the standard output (stdout), which is usually the screen.
We built getch() / ungetch() from getchar().
Homework: 7.1. Write a program that converts upper case to lower or lower case to upper, depending on the name it is invoked with, as found in argv[0]
We have already seen printf(). A surprising characteristic of this function is that it has a variable number of arguments. The first argument, called the format string, is required. The number of remaining arguments depends on the value of the first argument. The function returns the number of characters printed, but the return value is rarely used. Technically the declaration of printf() is
int printf(char *format, ...);
The format string contains regular characters, which are just sent
to stdout unchanged and
conversion specifications,
each of which determines how the value of the next argument is to be
printed.
Each conversion specification begins with a
%, which is
optionally followed by some modifiers, and ends with a conversion
character.
We have not yet seen any modifiers but have seen a few conversion characters, specifically d for an integer (i is also permitted), c for a single character, s for a string, and f for a real number.
There are other conversion characters that can be used, for example, to get real numbers printed using scientific notation. The book gives a full table.
There are a number of modifiers to make the output line up and look
better.
For example, %12.3f means that the real number will be
printed using 12 columns (or more if the number is too big to fit in
12 columns) with 3 digits after the decimal point.
So, if the number was 36.3 it would be printed as
||||||36.300 where I used
| to represent a blank.
Similarly -1000. would be printed as
|||-1000.000.
These two would line up nicely if printed via
printf("%12.3f\n%12.3f\n\n", 36.3, -1000.);
The function
int sprintf(char *string, char *format, ...);
is very similar to printf(). The only difference is that, instead of sending the output to stdout (normally the screen), sprintf() stores it in the string given as the first argument.
char outString[50];
int d = 14;
sprintf(outString, "The value of d is %d\n", d);
For example, the code snippet on the right sets the first 22 characters of outString to "The value of d is 14\n" plus the terminating '\0', while the remaining 28 characters of outString continue to be uninitialized.
Since the system cannot in general check that the first argument is big enough, care is needed by the programmer, for example checking that the returned value is no bigger than the size of the first argument. In summary, sprintf() is scary. A good defense is to use instead snprintf(), which, like strncpy(), guarantees that no more than n bytes will be assigned (n is an additional parameter to snprintf()).
As we mentioned, printf() takes a variable number of arguments. But remember that printf() is not special, it is just a library function, not an object defined by the language or known specially to the compiler. That is, anyone can write a C program with declaration
int myfunction(int x, float y, char *z, ...)
and it will have three named arguments and zero or more unnamed arguments.
There is some magic needed to get the unnamed arguments. However, the magic is needed only by the author of the function; not by a user of the function.
Related to the Java Scanner class is the C function scanf().
The function scanf() is to printf() as getchar() is to putchar(). As with printf(), scanf() accepts one required argument (a format string) and a variable number of additional arguments. Since this is an input function, the additional arguments give the variables into which input data is to be placed.
Consider the code fragment shown on the top frame to the right and assume that the user enters on the console the lines shown on the bottom frame.
int n;
double x;
char str[50];
scanf("%d %lf %s", &n, &x, str);
22 37.5 no-blanks-here
The function
int sscanf(char *string, char *fmt, ...);
is very similar to scanf(). The only difference is that, instead of getting the input from stdin (normally the keyboard), sscanf() gets it from the first argument specified.
So far all our input has been from stdin and all our output has been to stdout (or from/to a string for sscanf()/sprintf()).
What if we want to read or write a file?
In Unix you can use the redirection operators of the command interpreter (the shell), namely < and >, to have stdin and/or stdout refer to a file.
But what if you want input from 2 or more files?
Before we can specify files in our C programs, we need to learn a (very) little about the file pointer.
Before a file can be read or written, it must be opened.
The library function fopen() is given two arguments, the
name of the file and the
mode; it returns a file pointer.
Consider the code snippet on the right. The type FILE is defined in <stdio.h>. We need not worry about how it is defined.
FILE *fp1, *fp2, *fp3, *fp4;
FILE *fopen(char *name, char *mode);

fp1 = fopen("cat.c", "r");
fp2 = fopen("../x", "a");
fp3 = fopen("/tmp/z", "w");
fp4 = fopen("/tmp/q", "r+");
After the file is opened, the file name is no longer used; subsequent commands (reading, writing, closing) use the file pointer.
The function fclose(FILE *fp) breaks the connection established by fopen().
Just as getchar()/putchar() are the basic one-character-at-a-time functions for reading and writing stdin/stdout, getc()/putc() perform the analogous operations for files (really for file pointers). These new functions naturally require an extra argument, a pointer to the file to read from or write to.
Since stdin/stdout are actually file pointers (they are constants not variables) we have the definitions
#define getchar()  getc(stdin)
#define putchar(c) putc((c), stdout)
I think this will be clearer when we do an example, which is our next task.
#include <stdio.h>

int main(int argc, char *argv[argc]) {
  FILE *fp;
  void filecopy(FILE *, FILE *);
  if (argc == 1)                // NO files specified
    filecopy(stdin, stdout);
  else
    while (--argc > 0)          // argc-1 files
      if ((fp = fopen(*++argv, "r")) == NULL) {
        printf("cat: can't open %s\n", *argv);
        return 1;
      } else {
        filecopy(fp, stdout);
        fclose(fp);
      }
  return 0;
}

void filecopy(FILE *ifp, FILE *ofp) {
  int c;
  while ((c = getc(ifp)) != EOF)
    putc(c, ofp);
}
The name cat is short for catenate, which is a synonym of concatenate.
If cat is given no command-line arguments (i.e., if argc=1), then it just copies stdin to stdout. This is not useless: for one thing remember < and >.
If there are command-line arguments, they must all be the names of existing files. In this case, cat concatenates the files and writes the result to stdout. The method used is simply to copy each file to stdout one after the other.
The filecopy() function uses the standard getc()/putc() loop to copy the file specified by its first argument ifp (input file pointer) to the file specified by its second argument. In this application, the second argument is always stdout so filecopy() could have been simplified to take only one argument and to use putchar().
Note the check that the call to fopen() succeeded; a very good idea.
Note also that cat uses very little memory, even if concatenating 100GB files. It would be an unimaginably awful design for cat to read all the files into some ENORMOUS character array and then write the result to stdout.
A problem with cat is that error messages are written to the same place as the normal output. If stdout is the screen, the situation would not be too bad since the error message would occur at the end. But if stdout were redirected to a file via >, we might not notice the message.
Since this situation is common, there are actually three standard file pointers defined: in addition to stdin and stdout, the system defines stderr. Although the name suggests that it is for errors (and that is indeed its primary application), stderr is really just another file pointer, which (like stdout) defaults to the screen.
Even if stdout is redirected by the standard > redirection operator, stderr will still appear on the screen.
There is also syntax to redirect stderr, which can be used if desired.
As mentioned previously a command should return zero if successful and non-zero if not. This is quite easy to do if the error is detected in the main() routine itself.
What should we do if main() has called joe(), which has called f(), which has called g(), and g() detects an error (say fopen() returned NULL)?
It is easy to print an error message (sent to stderr, now that we know about file pointers). But it is a pain to communicate this failure all the way back to main() so that main() can return a non-zero status.
exit() to the rescue. If the library routine exit(n); is called, the effect is the same as if the main() function executed return n. So executing exit(0) terminates the command normally and executing exit(n) with n>0 terminates the command and gives a status value indicating an error.
The library function
int ferror(FILE *fp);
returns non-zero if an error occurred on the stream fp. For example, if you opened a file for writing and sometime during execution the file system became full and a write was unsuccessful, the corresponding call to ferror() would return non-zero.
The standard library routine
char *fgets(char *line, int maxchars, FILE *fp)
reads characters from the file fp and stores them plus a trailing '\0' in the string line. Reading stops when a newline is encountered (it is read and stored) or when maxchars-1 characters have been read (hence, counting the trailing '\0', at most maxchars will be stored).
The value returned by fgets is normally line. If an end of file or error occurs, NULL is returned instead.
The standard library routine
int fputs(char *line, FILE *fp)
writes the string line to the file fp. The trailing '\0' is not written and line need not contain a newline. The return value is non-negative unless an error occurs, in which case EOF is returned.
A laundry list. I typed them all in to act as convenient reference. Let me know if you find any errors.
This subsection represents a technical point; for this class you can replace size_t by int.
Consider the return type of strlen(), which is the length of the string parameter. It is surely some kind of integral type, but should it be short int, int, long int, or one of the unsigned flavors of those three?
Since lengths cannot be negative, the unsigned versions are better since the maximum possible value is twice as large. (On the machines we are using int is at least 32-bits long so even the signed version permits values exceeding two billion, which is good enough for us).
The two main contenders for the type of the return value from strlen() are unsigned int and unsigned long int. Note that long int can be, and usually is, abbreviated as long.
If you make the type too small, there are strings whose length you cannot represent. If you make the type bigger than ever needed, some space is wasted and, in some cases, the code runs slower.
Hence the introduction of size_t, which is defined in stddef.h (and made available by several other standard headers, including stdlib.h).
Each system specifies whether size_t is unsigned int or unsigned long (or something else).
For the same reason that the system-dependent type size_t is used for the return value of strlen, size_t is also used as the return type of the sizeof operator and is used several places below.
These are from string.h, which must be #include'd. The versions with n added to the name limit the operation to n characters. In the following table n is of type size_t and c is an int containing a character; src and dest are strings (i.e., character pointers, char *); and cs and ct are constant strings (const char *).
I indicated which inputs may be modified by writing the string name in red. Remember that a string in C is represented by a character pointer.
These functions are from ctype.h, which must be #include'd. Each of them takes an integer argument (representing a character or the value EOF) and returns an integer.
int ungetc(int c, FILE *fp)

pushes the character c back onto the input stream fp. It returns c, or EOF if an error was encountered.
This function is from stdio.h, which must be #include'd.
Only one character can be pushed back, i.e., it is not safe to call ungetc() twice without a call in between that consumes the first pushed-back character. The function ungetch() found in the book and these notes does not have this restriction.
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[argc])
    {
        int status;
        printf("Hello.\n");
        status = system("dir; date");
        printf("Goodbye: status %d\n", status);
        return 0;
    }
The function system(char *s) runs the command contained in the string s and returns an integer status.
The contents of s and the value of the status are system dependent.
On my system, the program on the right when run in a directory containing only two files x and y produced the following output.
    Hello.
    x  y
    Sun Mar  7 16:05:03 EST 2010
    Goodbye: status 0
This function is in stdlib.h, which must be #include'd.
We have already seen
void *malloc(size_t n)
which returns a pointer to n bytes of uninitialized storage. If the request cannot be satisfied, malloc() returns NULL.
The related function
void *calloc(size_t n, size_t size)

returns a pointer to a block of storage adequate to hold an array of n objects each of size size. The storage is initialized to all zeros.
The function
void free (void *p)
is used to return storage obtained from malloc() or calloc().
When freeing a linked list, the following loop is wrong: it references p->next after p has already been freed.

    for (p = head; p != NULL; p = p->next)
        free(p);

The correct version saves p->next in q before freeing p.

    for (p = head; p != NULL; p = q) {
        q = p->next;
        free(p);
    }
These functions are from math.h, which must be #include'd. In addition (at least on my system and linserv1.nyu.edu) you must specify a linker option to have the math library linked. If your mathematical program consists of A.c and B.c and the executable is to be named prog1, you would write

    cc -o prog1 A.c B.c -lm
All the functions in this section have double's as arguments and as result type. The trigonometric functions express their arguments in radians and the inverse trigonometric functions express their results in radians.
Random number generation (actually pseudo-random number generation) is a complex subject. The function rand() given in the book is an early and not wonderful generator; it dates from when integers were 16 bits. I recommend instead (at least on linux and linserv.nyu.edu)
long int random(void) void srandom(unsigned int seed)
The random() function returns an integer between 0 and RAND_MAX. You can get different pseudo-random sequences by starting with a call to srandom() using a different seed. Both functions are in stdlib.h, which must be #include'd.
On my linux system RAND_MAX (also in stdlib.h) is defined as 2^31-1, which is also INT_MAX, the largest value of an int. It looks like linserv.nyu.edu doesn't define RAND_MAX, but does use the same pseudo-random number generator.
Remark: Let's write some programs/functions.
At the lowest level of abstraction each of these forms of code is just a sequence of bits.
Modern electronics can quickly distinguish 2 states of an electric signal: low voltage and high voltage. Low has always been around 0 volts; high was 5 volts for a long while and now is below 3.5 volts.
Since this is not an EE course we will abstract the situation and say that a signal is in one of two states, low (a.k.a. 0) and high (a.k.a. 1).
On the right we see plots of voltage (vertical) vs time (horizontal). It is fine if you ignore the middle and bottom pictures.
Since (for us) a signal can be in one of two states, it is convenient to use binary (a.k.a. base 2) notation. That way if we have three signals with the first and third high and the middle one low, we can represent the situation using 3 binary digits, specifically 101.
Recall that to calculate the numeric value of an ordinary (base 10, i.e., decimal) number the rightmost digit is multiplied by 10^0=1, the next digit to the left by 10^1=10, the next digit by 10^2=100, etc.

For example 6205 = 6*10^3 + 2*10^2 + 0*10^1 + 5*10^0 = 6*1000 + 2*100 + 0*10 + 5*1.
Binary numbers work the same way so, for example, the binary number 11001 has value (written in decimal)

1*2^4 + 1*2^3 + 0*2^2 + 0*2^1 + 1*2^0 = 1*16 + 1*8 + 0*4 + 0*2 + 1*1 = 16+8+1 = 25.
We normally use decimal (i.e., base 10) notation where each digit is conceptually multiplied by a power of 10. The use of 10 digits is strongly related to our having 10 fingers (aka digits).
We all know about the ten's place, hundred's place, etc.
The feature that the same digit is valued 1/10 as much if it is one place further to the right continues to hold to the right of the decimal point.
Computer hardware uses binary (i.e., base 2) arithmetic so to understand hardware features we could write our numbers in binary. The only problem with this is that binary numbers are long. For example, the number of US senators would be written 1100100 and the number of miles to the sun would need 27 bits (binary digits).
This suggests that decimal notation is more convenient. The problem with relying on decimal notation is that we need binary notation to express multiple electrical signals and it is difficult to convert between decimal and binary because ten is not an integral power of 2.
The table on the right (for now only look at the first two columns) shows how we write the numbers from 0 to 16 in both base 10 and base 2.
Start Lecture #11
Remarks:
Base 10 is familiar to us, which is certainly an enormous advantage, but it is hard to convert base 10 numbers to/from base 2 and we need base 2 to express hardware operation. Base 2 corresponds well to the hardware but is verbose for large numbers.
Let's try a compromise, base 4.
To convert between base four and base two is easy since the four base-4 digits (I hate that expression; for me digit means base 10) correspond exactly to the four possible pairs of bits.
    base 4   bits
      0       00
      1       01
      2       10
      3       11
Look again at the table above but now concentrate on columns two and three.
We see that it is easy to convert back and forth between base 2 and base 4. But base 4 numbers are still a little long for comfort: a number needing n bits would use ⌈n/2⌉ base four digits.
A base 8 number would need ⌈n/3⌉ digits for an n-bit base 2 number because 8=2^3, and a base 16 number would need ⌈n/4⌉. Base 8 (called octal) would be good, and was used when I learned about computers. The C language dates from this time and C has support for octal. Base 16 (called hexadecimal) is used now and C supports it.
Question: Why the switch from base 8 to base 16?
Answer: Words in a 1960s computer had 36 bits and 36 is divisible by 3 so a word consisted of exactly 12 octal digits. Words in modern computers have 32 bits and 32 is divisible by 4 (but not by 3) so a 32-bit word consists of exactly 8 base-16 digits. (Recently the word size has increased to 64 bits, but 64 is also divisible by 4 and a 64-bit word consists of exactly 16 base-16 digits.)
Question: Why were there 36-bit words?
Answer: Characters then were 6 bits so a 36-bit word held six characters.
Base 16 is called hexadecimal.
We need 16 symbols for the 16 possible digits; the first 10 are obvious 0,1,...,9. We need 6 more to represent ten, eleven, ..., fifteen.
We use A, B, C, D, E, F to represent the extra 6 digits. When we write a hexadecimal number we precede it with 0x.
So 0x1234 = 1*16^3 + 2*16^2 + 3*16^1 + 4*16^0 is quite a bit bigger than 1234 = 1*10^3 + 2*10^2 + 3*10^1 + 4*10^0.
You convert a base-16 to/from binary one hexadecimal digit (4 bits) at a time. For example
1011000100101111 = 1011 0001 0010 1111 = B 1 2 F = 0xB12F
Look again at the table above right and notice that groups of four bits do match one hex digit.
You need to learn (or figure out) that 0xA3 + 0x3B = 0xDE and worse 0xFF + 0xBB = 0x1BA and much worse 0xFA * 0xAF = 0xAAE6.
Although fundamentally hardware is based on bits, we will normally think of computers as byte oriented. A byte (aka octet) consists of 8 bits (or two hex characters). As we learned, the primitive types in C (char, int, double, etc) are each a multiple of bytes in size. In fact, the multiples are powers of 2 so individual data items are 1, 2, 4, 8, or 16-bytes long.
    #include <string.h>
    #include <stdio.h>

    void showBytes(unsigned char *start, int len)
    {
        int i;
        for (i = 0; i < len; i++)
            printf("%p %5x %c\n", (void *)(start+i), *(start+i), start[i]);
    }

    int main(int argc, char *argv[])
    {
        showBytes((unsigned char *)argv[1], strlen(argv[1]));
    }
The simple program on the right prints its first argument in hex. Actually it does a little more, it prints the address of each character of the first argument, and then prints the character twice, first as a hex number and then as a character. Remember that in C, a character is an integer type.
    ./a.out jB4k
    0x7ffd3892789d    6a j
    0x7ffd3892789e    42 B
    0x7ffd3892789f    34 4
    0x7ffd389278a0    6b k
Several points to note.
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int idx;
        char c[3];
        short s[3];
        int i[3];
        long l[3];
        float f[3];
        double d[3];

        for (idx = 0; idx < 3; idx++)
            printf("%p %p %p %p\n", &c[idx], &s[idx], &i[idx], &l[idx]);
        printf("\n");
        for (idx = 0; idx < 3; idx++)
            printf("%p %p\n", &f[idx], &d[idx]);
    }
The program on the right produces the following output.
    0x7fff73546565 0x7fff73546502 0x7fff73546508 0x7fff73546520
    0x7fff73546566 0x7fff73546504 0x7fff7354650c 0x7fff73546528
    0x7fff73546567 0x7fff73546506 0x7fff73546510 0x7fff73546530

    0x7fff73546514 0x7fff73546540
    0x7fff73546518 0x7fff73546548
    0x7fff7354651c 0x7fff73546550
Note that the chars are one byte apart, shorts are two bytes apart, ints and floats, are four bytes apart, and longs and doubles are eight bytes apart.
Also note that chars (which are of size 1) can start on any byte, shorts (which are of size 2) can start only on even-numbered bytes, ints and floats (which are of size 4) can start only on addresses that are a multiple of 4, and longs and doubles (which are of size 8) can start only on addresses that are a multiple of 8.
In general data items of size n must be aligned on addresses that are a multiple of n.
This answers a question we posed concerning malloc(), namely malloc() returns addresses that are a multiple of the most severe alignment restriction on the system. Normally this is 16.
We think of memory as composed of 8-bit bytes and the bytes in memory are numbered. So if you could find a memory as small as 1KB (kilobyte) you could address the individual bytes as byte 0, byte 1, ... byte 1023. If you numbered them in hexadecimal it would be byte 0 ... byte 3FF.
As we learned a C-language char takes one byte of storage so its address would be one number.
A 32-bit integer requires 4 bytes. I guess one could imagine storing the 4 bytes spread out in memory, but that isn't done. Instead the integer is stored in 4 consecutive bytes, the lowest of the four byte addresses is the address of the integer.
Normally, integers are aligned i.e, the lowest address is a multiple of 4. On many systems a C-language double occupies 8 consecutive bytes the lowest numbered of which is a multiple of 8.
Let's consider a 4-byte (i.e., 32-bit) integer N that is stored in the four bytes having address 0x100-0x103. The address of N is therefore 0x100, which is a multiple of 4 and hence N is considered aligned.
Let's say the value of N in binary is
0010|1111|1010|0101|0000|1110|0001|1010
which in hex (short for hexadecimal) is 0x2FA50E1A. So the four bytes numbered 0x100, 0x101, 0x102, and 0x103 will contain 2F A5 0E 1A. However, a question still remains: which byte contains which pair of hex digits?
Unfortunately two different schemes are used. In little endian order the least significant byte is put in the lowest address; whereas in big endian order the most significant byte is put in the lowest address.
Consider storing in address 0x1120 our 32-bit (aligned) integer, which contains the value 0x2FA50E1A. A little endian machine would store it this way.
    byte address   0x1120  0x1121  0x1122  0x1123
    contents        0x1A    0x0E    0xA5    0x2F
In contrast a big endian machine would store it this way.
    byte address   0x1120  0x1121  0x1122  0x1123
    contents        0x2F    0xA5    0x0E    0x1A
    int main(int argc, char *argv[])
    {
        int a = 54321;
        showBytes((unsigned char *)&a, sizeof(int));
    }
On the right is an example using the showBytes() routine defined just above that gives (in hex) the four bytes in the integer 54321. The output produced is (ignoring the third column)
    0x7ffd0a0ed8f4    31
    0x7ffd0a0ed8f5    d4
    0x7ffd0a0ed8f6     0
    0x7ffd0a0ed8f7     0
So the four bytes are 0x31, 0xD4, 0x0, and 0x0. If the number in hex is 31 D4 00 00 it would be much bigger than 54321 decimal. Instead the number is 00 00 D4 31 hex which does equal 54321 decimal.
So the processor in my laptop is little endian (as are all x86 processors).
Homework: 2.58.
Remark: Now imagine connecting a little endian machine to a big endian machine and sending an int from one to the other one byte at a time.
As we know a string is a null terminated array of chars; each char occupies one byte. Given the string "tom", the char 't' will occupy one byte, 'o' will occupy the next (higher) byte, 'm' will occupy the next byte and '\0' the next (last) byte.
There is no issue of byte ordering (endian) since each character is stored in one byte and consecutive characters are stored in consecutive bytes.
Compiled code is stored in the same memory as data. However, unlike data, the format of code is not standardized. That is, the same C program when compiled on different systems will result in different bit patterns.
We will see many examples later.
Now we know how to represent integers and characters in terms of bits and how to write each using hexadecimal notation. But what about operations like add, subtract, multiply, and divide?
We will approach this slowly and start with operations on individual bits, operations like AND and OR.
To define addition for integers you need to give a procedure for adding two numbers; you can't simply list all the possible addition problems since there are infinitely many integers. However, there are only 2 possible bits and hence for a binary (i.e., two-operand) operation on bits there are only four possible examples, so we simply list all four possible questions and the corresponding answers. Such a list is often called a truth table.
The following diagram does this for six basic bit-level operations.
Just below each truth table is the symbol used for that operation when drawing a diagram of an electronic circuit (a circuit diagram).
Once you know how to compute A|B for A and B each a single bit, you can define A|B for A and B equal-length bit vectors. You just apply the operator to corresponding bits.
The same applies to &, ~, and ^.
For example 0101 | 0010 = 0111 and 1100 ^ 1010 = 0110.
It turns out that if you have enough chips that compute only NAND, you are able to wire them together to support any Boolean function. We call NAND universal for this reason. This is also true of NOR but it is not true of any other two input primitive.
C directly supports NOT, AND, OR, and XOR as shown on the table to the left and mentioned previously in section 2.9. Note that these operations are bit-wise. That is, bit zero of the result depends only on bit zero of the operand(s), bit one of the result depends only on bit one of the operands, etc.
C does not have explicit support for NAND or for NOR.
Done previously. Be careful not to confuse bit-level AND (&) with logical AND (&&). The logical operators (&&, ||, and !) treat any nonzero value as TRUE, and zero as FALSE. Also the value returned is always 0 or 1.
Note, for example that !0x00 = 0x01; whereas ~0x00=0xFF.
Also remember that C guarantees short-circuit evaluation of && and ||. In particular ptr && *ptr cannot generate a null pointer exception since, when ptr is null, *ptr is not evaluated.
This was introduced in C99 so is not in the text. You may use it, but it is not required for the course.
In C, the expression x<<b shifts x b bits to the left. The leftmost b bits of x are lost and the rightmost b bits of x become 0.
There is a corresponding right shift >> but there is a question on what to do with the high order (sign) bit.
In a logical right shift all the bits move right and the new HOB (high order bit) becomes a zero. The >> operator is always a logical right shift for unsigned values.
In an arithmetic right shift, again all the bits shift right, but the new HOB becomes a copy of the old high order bit. Most (perhaps all) systems perform arithmetic right shifts when the values are signed.
Homework: 2.61, 2.64.
Integers in C come in several sizes and two flavors. A char is a 1-byte integer; a short is a 2-byte integer; and an int is a 4-byte integer. The size of a long is system dependent: it is 4 bytes (32 bits) on a 32-bit system and 8 bytes (64 bits) on a 64-bit system. What about the two flavors? That comes next.
The first flavor of C integers is unsigned. We illustrate only unsigned short; the other sizes are essentially the same (but with a different number of bits). So we have 16 bits in each short integer, representing from right to left 2^0 to 2^15.
If all these 16 bits are 1s, the value is
2^15+2^14+2^13+2^12+2^11+2^10+2^9+2^8+2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0 = 2^16-1 = 65,535.
Question: Why?
Answer: If the number were one bigger, it would be a 1 followed by 16 zero bits so its value would be 2^16.
In a sense these encodings are the most natural. They are used and they are well supported in the C language. Naturally the sum of two very big 16-bit unsigned numbers would need 17 bits; this is called overflow. Nonetheless, the situation is good for unsigned addition.
But there is a problem. Unsigned encodings have no negative numbers. That is why I didn't mention subtracting the bigger from the smaller.
To include negative numbers there must be a way to indicate the sign of the number. Also, since some shorts will be negative and we have the same number of shorts as unsigned shorts (because we still have 16 bits), there will be fewer positive shorts than we had for unsigned shorts.
Before specifying how to represent negative numbers, let's do the easy case of non-negative numbers (i.e., positive and zero). For non-negative numbers set the leftmost bit (called the sign bit) to zero and use the remaining bits as above. Since the left bit (the high order bit or HOB) is for the sign we have one fewer for the number itself so the largest short has a zero HOB and 15 one bits, which equals 2^15-1 = 32,767.
We could do the analogous technique for negative numbers: set the HOB to 1 and use the remaining 15 bits for the magnitude (the absolute value in mathematics). This technique is called the sign-magnitude representation and was used in the past, but is not common now. One annoyance is that you have two representations of zero: 0000000000000000 and 1000000000000000. We will not use this encoding.
Instead of just flipping the leftmost (or sign) bit as above we form the so-called 2s-complement. For simplicity I will do 4-bit two's complement and just talk about the 16-bit analogue (and 32- and 64-bit analogues), which are essentially the same.
With 4 bits, there are 16 possible numbers. Since two's complement notation has only one representation for each number (including 0), there are 15 nonzero values. Since there are an odd number of nonzero values, there cannot be the same number of positive and negative values. In fact 4-bit two's complement notation has 8 negative values (-8..-1) and 7 positive values (1..7). (In sign-magnitude notation there are the same number of positive and negative values, which is convenient; but there are two representations for zero, which is inconvenient.)
For the non-negative values 0-7, the high order bit (HOB), i.e., the leftmost bit, is set to zero and the value is written using the remaining three LOBs (low order bits).

-1, -2, ..., -7 are written by taking the two's complement of the corresponding positive number. The two's complement of a (binary) number is formed by complementing every bit and then adding 1.
Start Lecture #12
Remarks:
Recall that the two's complement of x is ~x + 1. We want the two's complement to be the additive inverse. Let's see if it is. Remember that ~x is the (bitwise) complement of x and -x is the two's complement, which equals ~x+1.
    Two'sComp(x) = ~x + 1

    x + Two'sComp(x) = x + (~x + 1)
                     = (x + ~x) + 1
                     = (111...111) + 1
                     = (-1) + 1
                     = 0
Success!
Amazingly easy (if you ignore overflows).
You could reasonably ask what does this funny notation have to do with negative numbers. Let me make a few comments.
Question: What does -1 mean mathematically?
Answer: It is the unique number that, when added to 1, gives zero.
Our representation of -1 does do this (using regular binary addition and discarding the final carry-out) so we do have -1 correct.
Question: What does negative n mean, for n>0?
Answer: It is the unique number that, when added to n, gives zero.
The 1s complement of n when added to n gives all 1s, which is -1. Thus the 2s complement, which is one larger, will give zero, as desired.
The table on the right shows the extreme values for both unsigned and signed 16-bit integers. In the signed case we also show the representation of -1 (there is no unsigned -1).
Note that the signed values all use the twos-complement representation. In fact I doubt we will use sign/magnitude (or ones'-complement) for integers any further.
The second table on the right shows the max and min values for various sizes of integers (1, 2, 4, and 8 bytes).
General rule:
Be Careful!.
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int i1 = -1, i2 = -2;
        unsigned int u1, u2 = 2;

        u1 = i1;    // implicit cast (unsigned)
        printf("u1=%u\n", u1);
        printf("%s\n", (i2 > u2) ? "yes" : "no");
        return 0;
    }
The code on the right illustrates why we must be careful when mixing unsigned and signed values. The fundamental rule that is applied in C when doing such conversions (actually called casts) is that the bit pattern remains the same, even though this sometimes means that the value changes.
When I ran the code on the right, the output was
    u1=4294967295
    yes
When the code executes u1=i1, the bits in i1 are all ones and this bit pattern remains the same when the value is cast to unsigned and placed in u1. So u1 becomes all 1s which is a huge number, as we see in the output.
When we compare i2>u2, either the -2 in i2 must be converted to unsigned or the 2 in u2 must be converted to signed. The rule in C is that the conversion goes from signed to unsigned so the -2 bit pattern in i2 is reinterpreted as an unsigned value. With that interpretation i2 is indeed much bigger than the 2 in u2.
We have just seen signed/unsigned conversions. How about short to int or int to long? How about unsigned int to unsigned long? I.e., converting when the sizes are different but the signedness is the same.
In summary C converts in the following order. That is, types on the left are converted to types on the right.
int → unsigned int → long → unsigned long → float → double → long double.
What if you want to put an int into a short or put a long into an int?
Bits are simply dropped from the left, which can alter both the value and the sign.
Advice: Don't do it.
Be careful!!
Binary addition (i.e., addition of binary numbers) is performed the same as decimal addition. You can add a column of numbers in binary as with decimal, but we will be content to just add two binary numbers.
You proceed right to left and may have to carry a "1".
The only problem is overflow, i.e., where the sum requires more bits than are available. That means there is a carry out of the HOB. For example if you were using 3-digit decimals, the sum 834+645 does not fit in 3 digits (there is a carry out of the hundreds place into what would be the thousands place). Similarly using 4-bit binary numbers, the sum 0111+1001 does not fit in 4 bits.
When there is no overflow, (computer, i.e., binary) addition is conceptually done right to left one bit at a time with carries just like we do for base 10.
In reality very clever tricks are used to enable multiple bits to be added at once. You could google ripple carry and carry lookahead or see my lecture notes for computer architecture.
The news is very good—you just add as though it were unsigned addition and throw away any carry-out from the HOB (high order bit).
Only overflow is a problem (as it was for unsigned). However, detecting overflow is not the same as for unsigned. Consider 4-bit 2s complement addition; specifically (-1) + (-1). 1111 + 1111 = 11110, which becomes 1110 after dropping the carry-out. But overflow did not occur: 1110 is the correct sum of 1111 + 1111!
The correct rule is that overflow occurs when and only when the carry into the HOB does not equal the carry out of the HOB.
Recall that with two's complement there is one more negative number than positive number. In particular, the most-negative number has no positive counterpart. Specifically, for n-bit twos complement numbers, the range of values is
    most neg = -2^(n-1) ... 2^(n-1)-1 = most pos
For every value except the most neg, the negation is obtained by simply taking the two's complement, independent of whether the original number was positive, negative, or zero.
Multiply the two n-bit numbers, which gives up to 2n-bits and discard the n HOBs. Again, the only problem is overflow.
    3 * (-4):

        11100
      x    11
      -------
        11100
       11100
      -------
      1010100
A surprise occurs. You just multiply the two's complement numbers and truncate the HOBs and ... it works—except for overflow.
On the board do 3 * (-4) using 5 bits.
3 = 00011; 4 = 00100; (~4) = 11011; (-4) = 11100
The multiplication is on the right.
Now truncate 1010100 to 5 bits.
You get y = 10100.
Is this y = -12?
~y = 01011. -y = 01100 = 8+4 = 12
It works!
You can multiply x*2^k (k≥0) by just shifting: x<<k. This is reasonably clear for x≥0, but works for 2s complement as well.
Note that compilers are clever and utilize identities like
x * 24 = x * (32-8) = x*32 - x*8 = (x<<5) - (x<<3)
The reason for doing this is that shift/add/sub are faster than multiplication.
Division is even slower than multiplication, so we note that right shifting by k gives the same result as dividing by 2^k. Actually it gives the floor of the division. If the value being shifted is unsigned, a logical right shift is used; if it is signed, an arithmetic right shift is used.
Addition and multiplication work unless there is an overflow.
Adding two n-bit unsigned numbers gives (up to) an (n+1)-bit result, which we fit into n bits by dropping the HOB of that (n+1)-bit result. So you get an overflow exactly when the dropped bit (the carry out of the original HOB) is 1.
Multiplying two n-bit unsigned numbers gives (up to) a 2n-bit result, which we fit into n bits by dropping the n HOBs. So you get an overflow if any of the n HOBs of the result are 1.
Same idea but detecting overflow is more complicated. For addition of n-bit numbers, which includes subtraction, the non-obvious rule is that an overflow occurs if the carry into the HOB (bit n-1) != the carry-out from that bit.
Homework:
Exactly analogous to decimal numbers with a decimal point. Just as 0.01 in decimal is one-hundredth, 0.01 in binary is one-quarter and 0x0.01 is one two-hundred-fifty-sixth.
If instead of powers of 2, we used powers of 10, the above would be how we write numbers with decimal points.
Fractional binary notation requires considerable space for numbers that are very large in magnitude or very near zero.
5 × 2^100 = 101000000...0  (101 followed by 100 zeros)
2^-100 = 0.00000000...01  (a 1 in the 100th place after the binary point)
(The second example above uses sign-magnitude.)
But numbers like these come up in science all the time, and the solution used is often called scientific notation.
Avogadro's number ≈ 6.02 × 10^23
Light year ≈ 5.88 × 10^12 miles
The coefficient is called the mantissa or significand.
In computing we use IEEE floating point, which is basically the same solution but with an exponent base of 2 not 10. As we shall see there are some technical differences.
Represent a floating number as
(-1)^s × M × 2^E
Where
Naturally, s is stored in one bit.
For single precision (float in C) E is stored in 8 bits and M is stored in 23. Thus, a float in C requires 1+8+23 = 32 bits.
For double precision (double in C) E is stored in 11 bits and M in 52. Thus, a double in C requires 1+11+52 = 64 bits.
Now it gets a little complicated; the values stored are not simply E and M and there are 3 classes of values.
Let's just do single precision; double precision is the same idea, just with more bits. The number of bits used for the exponent is 8.
Although the exponent E itself can be positive, negative, or zero the value stored exp is unsigned. This is accomplished by biasing the E (i.e., adding a constant so the result is never negative).
With 8 bits of exponent, there are 256 possible unsigned values for exp, namely 0...255. We let E = exp-127 so the possible values for E are -127...128.
Stated the other way around, the value stored for the exponent is the true exponent +127.
With scientific notation we write numbers as, for example, 9.4534×10^12. An analogous base-2 example would be 1.1100111×2^10.
Note that in 9.4534 the four digits after the decimal point each distinguish between 10 possibilities, whereas the digit before the decimal point only distinguishes between 9 possibilities (it cannot be 0), so it is not fully used.
Note also that in 1.1100111 the 1 to the left of the binary point distinguishes between only one possibility, i.e., it is useless. IEEE floating point therefore does not store the bit to the left of the binary point because it is always 1 (but see below for the other two classes of values).
Let F = 15213.0 (decimal) = 11101101101101 (binary) = 1.1101101101101 × 2^13
fract stored = 11011011011010000000000
exp stored   = 13+127 = 140 = 10001100
sign stored  = 0
value stored = 0 10001100 11011011011010000000000
Used when the stored exponent is all zeros, i.e., when the exponent is as negative as possible, i.e., when the number is very close to 0.0.
The values of the significand and exponent in terms of the stored bits are slightly different.
Note there are two zeros since IEEE floating point is basically sign-magnitude.
Used when the stored exponent is all ones, i.e., when the exponent is as large as possible.
If the significand stored is all zeros, the value represents infinity (positive or negative), for example the result of overflow when doing 1.0/0.0.
If the significand is not all zero, the value is called NaN for not-a-number. It is used in cases like sqrt(-1.0), infinity - infinity, infinity × 0.
IEEE floating point represents numbers as (-1)^s × M × 2^E. There are extra complications to store the most information in a fixed number of bits.
The book covers the Intel architecture, which dominates laptops, desktops, and data centers. Some of the fastest supercomputers also have (many) Intel CPUs.
It is not used in cell phones and tablets.
The Intel architecture has been remarkably successful from commercial and longevity standpoints.
Modern systems are backwards compatible with the 8086 version introduced in 1978 (more than 40! years ago). In addition to the commercial advantages of backwards compatibility, its implementation is a technological tour-de-force, which has won awards for its engineering.
It has a horrendously complicated instruction set. It is called a CISC (Complex Instruction Set Computer) design. Architectures designed in the last few decades tend to have RISC (Reduced Instruction Set Computer) designs and current implementations of the Intel architecture actually (during execution!) translate many of the complex instructions to a simpler core set.
The book (wisely) only covers a small subset of the possible instructions. For example, we limit arithmetic to operations on 64-bit data, ignoring the 32-, 16-, and 8-bit arithmetic supplied for backwards compatibility. If you use gcc (or cc) on your laptop (or access, or linserv1) you will see these instructions.
We have normally compiled C programs with a simple cc command. Different C compilers can produce different assembly for the same C program. Also, normally the goal of compilation is to generate high-performance output. We instead are interested in simple assembler output. As a result, to compile the program joe.c we will use the command
gcc -Og -S joe.c
On many computers, in particular on linserv1, gcc and cc are the same, but the -Og is needed to generate simple (vs. high-performance) assembly code. The -S tells the compiler to make available the generated assembly language code.
The machine state of any processor has details that are under the covers in C or Java or Python or ... . The state for the Intel architecture includes:
Memory is simply a huge array of bytes. Compiled instructions as well as data reside here. A portion of memory is used as a stack to support procedure calls and returns.
Since the data and the program instructions are stored in memory, the CPU needs to fetch both during execution. The CPU sends an address to memory which responds with the contents of that address.
When the CPU needs to store a computed result into memory, it again sends the address in addition to sending the new value.
In summary
Start Lecture #13
Remarks: Mostly repeated from last class.
Read. We will soon learn what many of these instructions do.
On access.cims.nyu.edu I have written mstore.c from the book.
long mult2(long, long);

void multstore(long x, long y, long *dest) {
    long t = mult2(x,y);
    *dest = t;
}
I compiled mstore.c on crackle2 with gcc -Og -S , which generates mstore.s. I then removed the lines in mstore.s beginning with a dot. This gives the same code as the book. Here it is with line numbers and comments added (as in the book).
// void multstore(long x, long y, long *dest)
// x in %rdi, y in %rsi, dest in %rdx
1 multstore:
2     pushq %rbx         // save %rbx on stack
3     movq %rdx, %rbx    // copy dest to %rbx
4     call mult2         // call mult2(x, y)
5     movq %rax, (%rbx)  // store result at *dest
6     popq %rbx          // restore %rbx
7     ret                // return
Integers (which include pointers, i.e., addresses) can be (on a 64-bit machine) either 1, 2, 4, or 8 bytes in length.
Most operations are performed on the 16 registers (the fastest memory in the system). But memory can be accessed directly as well. Typically, data is moved from memory to registers, then operated on (add, sub, etc) and then put back in memory.
For historical reasons concerning backward compatibility the registers have funny names.
We will look at 3 types of assembly instructions.
As we shall see, most operations have one or two operands. There are three types of operands.
The register names above are for the full 64-bit registers. For each of these registers there are other names for the low-order 32-bit subset, the low-order 16-bit subset, and the low-order 8-bit subset. We will use only the names for the full 64-bit operands.
The basic data movement instruction is called move and is written mov with a suffix to indicate the size of the data item moved. It is somewhat misnamed; it is really a copy not a move.
The src is given first then the destination (the reverse of C). For example the C statement *dest = t; might become movq %rax, (%rbx)
A move instruction cannot have both operands in memory, at least one must be a register (or the source an immediate).
long plus(long x, long y);

void sumstore (long x, long y, long *dest) {
    long t = plus(x, y);
    *dest = t;
}

sumstore:
    pushq %rbx
    movq %rdx, %rbx
    call plus
    movq %rax, (%rbx)
    popq %rbx
    ret
The size of a specified register must match the size of the move itself.
There are variants of move that sign extend or zero extend. These are used when you move a value to a longer format.
For example, movzbl moves from a byte to a doubleword (32-bits) by zero extending (on the left).
Similarly, movsbw moves from a byte to a word (16-bits) by sign extending (replicating the sign bit).
There are other special cases as well; see the book for details. One oddity is that, when the target is a register, movzbl acts the same as movzbq: it moves the byte and then zeros the high-order 7 bytes of the quadword, even though its name suggests it would only zero the 3 high-order bytes of the doubleword.
void swap(int *xp, int *yp) {
    int t0, t1;
    t0 = *xp;
    t1 = *yp;
    *xp = t1;
    *yp = t0;
}
Note: This swap uses two temporaries. The one we did a month ago used only one. However, that algorithm uses memory-to-memory moves so needs to be modified for the Intel machine language.
The most general form is Disp(Rb,Ri,S)
Explain why this (complicated) address is useful for stepping through a C array.
Special Cases
Assume these two registers have been set in advance.
Then the table on the right shows various addresses that can be composed using these two registers and some subsets of the general addressing mode above.
The Intel architecture has support for a stack maintained in memory. The three key components are the two instructions

    pushq Src   and   popq Dest

and the dedicated register %rsp.

Although the Src operand of pushq can be fairly general, we will use only the case where Src is simply a register, for example %rbp. Then the instruction

    pushq %rbp

has the same effect as the two-instruction sequence

    subq $8, %rsp
    movq %rbp, (%rsp)
Analogously,

    popq %rax

has the same effect as the two-instruction sequence

    movq (%rsp), %rax
    addq $8, %rsp
Show how this corresponds to an (upside down) stack.
Recall that a memory address can be complicated, it can involve two registers, a scale factor, and an additive constant. Sometimes you want that arithmetic on some registers but don't want to reference memory at all. There is an instruction to do just that. It is called load effective address: leaq.
leaq Src, Dest
Typical uses
leaq (%rdi,%rdi,2), %rax   # t <-- x + x*2
salq $2, %rax              # t <-- t << 2
Start Lecture #14
Start Lecture #15

Remarks on Midterm
For now we shall ignore possible overflows.
So by the miracle of 2s complement, signed and unsigned are the same!
Instruction   Effect            C Equivalent
incq Dest     Dest = Dest + 1   Dest++;
decq Dest     Dest = Dest - 1   Dest--;
negq Dest     Dest = -Dest      Dest = -Dest;
notq Dest     Dest = ~Dest      Dest = ~Dest;
Instruction      Effect              C Equivalent
addq  Src,Dest   Dest = Dest + Src   Dest += Src;
subq  Src,Dest   Dest = Dest - Src   Dest -= Src;
imulq Src,Dest   Dest = Dest * Src   Dest *= Src;
xorq  Src,Dest   Dest = Dest ^ Src   Dest ^= Src;
orq   Src,Dest   Dest = Dest | Src   Dest |= Src;
andq  Src,Dest   Dest = Dest & Src   Dest &= Src;
There are of course left and right shifts, but remember that there are two kinds of right shift: arithmetic right shift, which sign extends, and logical, which just adds zeros on the left.
For consistency the assembler also has logical and arithmetic left shift commands, but they are just synonyms for the same operation, which adds zeros on the right.
Instruction   Effect             C Equivalent
salq k,Dest   Dest = Dest << k   Dest <<= k;
shlq k,Dest   Dest = Dest << k   Dest <<= k;
sarq k,Dest   Dest = Dest >> k   Dest >>= k;  (arithmetic)
shrq k,Dest   Dest = Dest >> k   Dest >>= k;  (logical)
Although I wrote the two right shifts as having the same C equivalent, they are different. The book writes >>A and >>L to distinguish them. In C they are written the same, but most, if not all, C compilers use arithmetic right shift for signed values and logical right shift for unsigned.
void swap3(long *xp, long *yp, long *zp) {
    long t = *xp;
    *xp = *yp;
    *yp = *zp;
    *zp = t;
}
Homework: Write an assembly language version of swap3().
Notes:
One might view A = B + C as a binary operation. However, it does not fit the examples of binary operators above because, counting the destination, there are three operands.
Below left is a C program (written to look a little like assembler).
On the right is the assembler version, which assumes that initially x is in %rdi, y is in %rsi, and z is in %rdx.
The register usage is in the middle.
arith:
    leaq (%rdi,%rsi), %rax    // t1
    addq %rdx, %rax           // t2
    leaq (%rsi,%rsi,2), %rdx  // 3*y
    salq $4, %rdx             // t4 = 48*y
    leaq 4(%rdi,%rdx), %rcx   // t5
    imulq %rcx, %rax          // ans
    ret
long arith (long x, long y, long z) {
    long t1 = x+y;
    long t2 = z+t1;
    long t3 = x+4;
    long t4 = y*48;
    long t5 = t3 + t4;
    long ans = t2 * t5;
    return ans;
}
Note that
The normal (integer) multiply imulq Src,Dest is the analogue of addq. That is, the 64-bit (quadword) Src is multiplied by the 64-bit Dest and the (low-order 64 bits of the) product becomes the new contents of Dest.
Thanks to the miracle of 2s complement, this one instruction works for both unsigned and signed operands.
As with addq and subq, overflow is possible. Indeed, the true product can require 128 bits. There is a special operation (indeed two operations) that preserve all 128 bits of the product.
The 128-bit multiplies (one for signed and one for unsigned) do not fall into the pattern of the previous operations. Instead only one of the operands is specified in the instruction; the other operand must be %rax.
In addition, the location of the 128-bit result is not given in the instruction. Instead, the high-order 64 bits always go into %rdx and the low-order 64 bits go into %rax.
There is no normal divide or modulus instruction. Instead you first place the 128-bit dividend in %rdx (high order) and %rax (low order) and then issue divq src (for unsigned division) or idivq src (for signed division). In either case the quotient is put into %rax and the remainder into %rdx.
If the dividend is only 64 bits, it naturally is placed in the low-order register %rax, and %rdx should be either all zeros or all ones to act as the sign extension of %rax. The cqto instruction (convert quad to oct) does exactly this.
So far we can do assignment statements and arithmetic. What about if/then/else or while?
The idea is that some arithmetic (or other) operation (e.g., an add) generates a condition (e.g., a negative value) and some subsequent operation (e.g., a conditional jump) needs to know the condition from the add.
The solution employed is to have every add (and other operations) set certain 1-bit condition codes that can be used by a subsequent jump instruction to decide whether to actually jump.
We will consider four condition codes; each 1-bit in size.
In fact we will mostly ignore overflow and will emphasize only ZF and SF.
Remember that the arithmetic instructions (like add and sub) cannot tell if the operands are signed or unsigned so they might set either (or both of) the CF and OF flags.
The condition for setting OF for an addition t=a+b is
(a>0 && b>0 && t<0) || (a<0 && b<0 && t>0)
The lea instruction does not affect the condition codes.
Logical operations set carry and overflow to zero.
Shifts set carry to the last bit shifted out and set overflow to zero.
Inc and Dec set OF and ZF but leave CF unchanged.
cmpq S1,S2 sets the condition codes the same as sub S2-S1 would set them (note S2-S1), but does not store the arithmetic result anywhere.
Similarly testq S1,S2 sets the condition codes the same as andq but does not store the result. So testq %rdx,%rdx sets ZF if %rdx is zero.
You can set a single byte to 0 or 1 based on a certain combination of condition codes. This enables you to save the value of the flags, which change frequently during execution. This uses the so-called setX operation, where X is replaced by an abbreviated name of the comparison desired (e.g., Less-or-Equal). For example:
In all cases the low order byte is set to zero or one and the remaining bytes are unchanged (see below for movzbq that addresses this issue).
Recall that the set instruction sets a byte. We will normally want values in C longs (which are 8 bytes). The solution is to use the 1 byte register that is contained inside the desired 8 byte register. There are names for the low byte of each of the 16 registers. In particular, %al is the low byte of %rax. We then must zero out the other 7 bytes. The movzbq does this and (as mentioned previously) so does (oddly) movzbl. I mention this oddity twice since, in the following example, the compiler on access uses movzbl (not movzbq) which, were it true to its name, would not zero the high-order 4 bytes.
Below left is a C program. On the right is the assembler version and the register usage is in the middle.
cmpq %rsi, %rdi   # compare x and y
setg %al          # set low byte of %rax
movzbq %al, %rax  # zero out the rest
long GT(long x, long y) { return x > y; }
The operation specifies the condition that decides whether you jump, the operand specifies the target to jump to. Elsewhere in the program you have a statement label with that target.
.always:
    ...
    je .goHere    // jumps if ZF is set
    ...
    jge .goThere  // jumps if ~(SF^OF) evaluates true (for us, just ~SF)
    ...
    jmp .always   // unconditional jump
.goHere:
    ...
.goThere:
    ...
In general jmp *operand evaluates the operand and jumps to that location. This usage of * is similar to C's dereference operator.
Skipped.
Below left is a C program (written to look a little like assembler). On the right is the assembler version. The register usage is in the middle.
absdiff:
    cmpq %rsi, %rdi  // compare x, y
    jle .L4
    movq %rdi, %rax  // x > y
    subq %rsi, %rax
    ret
.L4:                 // x <= y
    movq %rsi, %rax
    subq %rdi, %rax
    ret
long absdiff (long x, long y) {
    long ans;
    if (x > y)
        ans = x - y;
    else
        ans = y - x;
    return ans;
}
Instead of jumping to the correct case, it is sometimes faster to evaluate both possibilities and then move the right one to the answer. This is because, in modern machines, pipelining is very important and conditional branches break the pipeline.
Below left is the C program (written to look a little like assembler). On the right is the assembler version. The register usage is in the middle.
absdiff:
    movq %rdi, %rax
    subq %rsi, %rax    // ans = x-y
    movq %rsi, %rdx
    subq %rdi, %rdx    // tmp = y-x
    cmpq %rsi, %rdi    // compare x : y
    cmovle %rdx, %rax  // if x <= y, overwrite ans with tmp
    ret
long absdiff (long x, long y) {
    long ans;
    if (x > y)
        ans = x - y;
    else
        ans = y - x;
    return ans;
}
The assembly language does not include a do-while construct. Hence we re-write the C using an if and a goto.
long count (unsigned long x) {
    long ans = 0;
  loop:
    ans += x & 0x1;
    x >>= 1;
    if (x) goto loop;
    return ans;
}
===>>>
long count (unsigned long x) {
    long ans = 0;
    do {
        ans += x & 0x1;
        x >>= 1;
    } while (x);
    return ans;
}
For this problem we are given a C long and wish to determine how many of its 64 bits are one. The algorithm is clear; check each bit and, if it is 1, increment ans.
First we show the easy conversion from normal C to goto C. We simply replace the do-while by an if plus a goto.
count:
    movq $0, %rax    // ans = 0
.Loop:
    movq %rdi, %rdx
    andq $1, %rdx    // tmp = x & 1
    addq %rdx, %rax  // ans += tmp
    shrq %rdi        // x >>= 1
    jne .Loop        // if (x) goto loop
    ret
Next we generate the assembler from the "goto C". For this simple program, that is an easy task and is shown on the right. Also shown is the usual register assignment table.
Note, however, that even in this easy program, one can stumble. Specifically, it is easy to mistakenly use sarq instead of shrq, thereby introducing an infinite loop.
The idea is to convert a while loop into a do-while loop. All that is needed is to deal with testing the loop condition on entry to the loop.
We will consider two (fairly similar) methods and apply them to the same simple example as above.
Note that for our simple example, the same Body can be used with while and with do-while.
Talk about midterm setup with MOSES
    goto test;
loop:
    Body
test:
    if (Test) goto loop;
done:
===>>>
while (Test) Body
The idea is to keep the Body before the test as in do-while, but jump over Body when you first enter the loop.
On the right we show the C version of a generic while and then its conversion to do-while.
long count (unsigned long x) {
    long ans = 0;
    goto test;
  loop:
    ans += x & 0x1;
    x >>= 1;
  test:
    if (x) goto loop;
    return ans;
}
===>>>
long count (unsigned long x) {
    long ans = 0;
    while (x) {
        ans += x & 0x1;
        x >>= 1;
    }
    return ans;
}
On the right we show the conversion for the specific program used to count 1 bits. You can see how close the while and do-while are.
long count (unsigned long x) {
    long ans = 0;
    if (!x) goto done;
  loop:
    ans += x & 0x1;
    x >>= 1;
    if (x) goto loop;
  done:
    return ans;
}
===>>>
long count (unsigned long x) {
    long ans = 0;
    while (x) {
        ans += x & 0x1;
        x >>= 1;
    }
    return ans;
}
The second conversion method is to introduce an initial test and goto to the do-while version to reproduce the while behavior.
In the example shown on the right the transformation is applied blindly, which results in an obvious inefficiency: Specifically, the first goto jumps to a return.
Any decent compiler would replace this goto with a return. It is normally silly to jump to a jump.
Start Lecture #16
Covered in recitation. Slides on home page.
Slides on home page.
I will start by showing the basic idea. Then I will give an example with actual assembly code.
if (x==13)
    // do thing13;
else if (x==22)
    // do thing22;
else if (x==5)
    // do thing5;
...
else
    // do default;
if (x==1)
    // do thing1;
else if (x==2)
    // do thing2;
else if (x==3)
    // do thing3;
...
else
    // do default;
You can always treat switch(x) as a big if-then-else, something like what is shown on the far right above. A disadvantage is that if you have n different cases you will execute on average about n/2 tests before you are successful.
The time when a switch statement is particularly efficient is when the various cases are selected by a (nearly) contiguous range of integers.
Highlighted in yellow we show the simplest case where the possible values for x are 1,2,3,... .
In this case we construct a jump table, i.e., a table of jumps. The first jump in the table jumps to thing1, the second to thing2, etc. See the diagram above in light green. You use the value in x to jump to the correct jump in the table and from there you jump to the correct thing.
Note that even if there are hundreds of things, you execute only two jumps: one into the table and then one to the correct thing.
// the jump table
    .align 8
.JmpTbl:
    jmp .L10
    jmp .L11
    jmp .L12
    jmp .Ldefault
    jmp .L1415
    jmp .L1415
// The switch code
// preamble, assuming
// s is stored in %rdi
// ans stored in %rax
.switch_eg:
    movq $1, %rax          // ans = 1
    subq $10, %rdi
    cmpq $5, %rdi
    ja .Ldefault
    jmp .JmpTbl(,%rdi,8)
// the "things" go here
Note the possibilities.
long sw_eg (long s, long y, long z) {
    long ans = 1;
    switch (s) {
    case 10:
        ans = y+z;
        break;
    case 11:
        ans = y-z;
        // fall through
    case 12:
        ans += 7;
        break;
    case 14:
    case 15:
        ans = z-y;
        break;
    default:
        ans = 2;
    }
    return ans;
}
Above we first see a simple C case statement. Next to it we note that this simple example covers many possibilities.
Then we show the beginning of the switch code. We first initialize ans = 1; and then handle the out-of-bounds cases. Note that the ja (jump above) uses an UNsigned comparison so catches both %rdi>5 and %rdi<0. The jmp instruction jumps to a location 8*%rdi bytes past the .JmpTbl, making the assumption that each jmp instruction in the table is 8 bytes long.
Finally, we see the jump table. In this simple implementation the jump table is just a table of jumps to labels corresponding to the cases in the switch statement. So the switch statement translates to a jump into the table and then a jump to the specific case. We will soon see a better implementation that reduces these two jumps to just one (indirect) jump. This new implementation will also drop the (perhaps incorrect) assumption that jmp instructions are 8 bytes long.
Later we will show the assembly code for the various cases, which I
referred to as
things above.
    .align 8
.JmpTbl:
    .quad .L10       // s = 10
    .quad .L11       // s = 11
    .quad .L12       // s = 12
    .quad .Ldefault  // s = 13
    .quad .L1415     // s = 14
    .quad .L1415     // s = 15

// replace  jmp .JmpTbl(,%rdi,8)
// with     jmp *.JmpTbl(,%rdi,8)
Instead of jumping to the correct jump, we can use a single jmp *, a so-called indirect jump. The idea is that instead of a table of jumps with the ith jump targeting the ith thing, we have, as shown on the right, a table of addresses, where the ith address is the address of the ith thing. We still have to write the things. When we do, the thing for s==10 will have statement label .L10, etc.
As mentioned above, the indirect jump is quite a powerful instruction. When executed it first does the address calculation specified in the instruction (for us, multiplying %rdi by 8 and adding the address specified by the label .JmpTbl). It then accesses the resulting address and reads its contents. Finally, it jumps to the address just read. Phew.
Another advantage is that we can specify (via .quad) that each address will be 8 bytes in length (as needed by the jmp*).
We will show each case separately. The C code is on the right; the assembler code is in the middle with a yellow background; and the register assignments are in a table on the left.
ans = 1;
switch(s) {
case 10:  // .L10
    ans = y+z;
    break;
...
}
.L10:
    movq %rsi, %rax  # y
    addq %rdx, %rax  # y + z
    ret              # return
We will see very soon that the specific registers chosen were not arbitrary. There are definite conventions that we must follow.
switch(s) {
...
case 11:  // .L11
    ans = y-z;
    // fall through
case 12:  // .L12
    ans += 7;
    break;
.L11:
    movq %rsi, %rax  # y
    subq %rdx, %rax  # y - z
    jmp .Merge1112
.L12:
    movq $1, %rax    # ans = 1
.Merge1112:
    addq $7, %rax    # ans += 7
    ret              # return
Note that when case 12 is selected we don't want the effect of case 11. Hence we re-initialize ans.
There is no case 13 so we use the default.
switch(s) {
...
// multiple cases
case 14:  // .L1415
case 15:
    ans = z-y;
    break;
.L14:
.L15:
    movq %rdx, %rax  # z
    subq %rsi, %rax  # z - y
    ret              # return
Since these cases are identical we just use two labels for the same section.
switch(s) {
...
default:  // .Ldefault
    ans = 2;
    break;
}
.Ldefault:
    movq $2, %rax  # ans = 2
    ret
The default case: ans = 2;
Slides on home page.
right after the call point in f().
We treat these three issues in turn
Memory allocation/de-allocation for variables local to a procedure follows a stack-like discipline. That is
old values they had are not restored). So the call of g() by f() necessitates that space be set aside for g()'s local variables.
give back the space used for them and reuse this space later for other purposes.
The basics from 101/102 will be enough.
When I think of a (non-linked) stack, I visualize a column that gets taller on pushes and shorter on pops. That is, I view the stack as being grounded at a low place and growing and shrinking at its highest address.
The x86-64 run-time stack has the opposite properties: as indicated in the diagram on the upper right, the stack's fixed bottom is at a very high address. It is sort of fixed in the sky; it grows and shrinks by having its other end, its top, get lower and higher.
When implementing a stack, the designer must also decide whether top points to the location containing the last element inserted, or the space where the next element will go.
The x86-64 run-time stack uses the first technique. Hence a pop() first retrieves the value and then increments top, whereas a push() first decrements top and then stores the value.
Another choice made for the Intel stack is to dedicate a register (a precious commodity) to holding the top-of-stack pointer. Specifically, %rsp (register-stack-pointer) is used for this purpose.
So to push onto the stack the value currently in %rdx one could write.
subq $8, %rsp
movq %rdx, (%rsp)   // assume %rdx contains 0x12FA
This results in the picture on the bottom right showing a bigger stack and a smaller value in the stack pointer %rsp. Register %rdx itself is unchanged, but its value is now at the top of the stack. A pop would retrieve that value.
In fact there is a single instruction pushq Src that both decrements %rsp and inserts Src at the top of the stack. In our case

    pushq %rdx

accomplishes the desired stack push.
Naturally there is also a popq. Both pushq and popq require that %rsp be used as the stack pointer. This last comment leads to the following table.
Start Lecture #17
The table on the right gives the conventional use of the 16 registers in the x86-64 architecture. With a very few exceptions, these conventions are not enforced (or used) by the hardware but rather by compilers.
In most cases it would not matter which registers were assigned to which purpose, providing it was done consistently. It would not work if, for example, compilers put the first argument in %r11 but looked for the first parameter to be in %r13.
Two examples where it does matter to the hardware are the 128-bit multiply and divide, and the pushq/popq pair, where specific registers have specific hardware functions.
Assume we are studying the assembler code of g(), which was called by f(). If a register is labeled callee saved, and it is altered by g() (the callee), g() must save the register and restore it before returning since f() (the caller) is permitted to assume this behavior.
In contrast, if the register is labeled caller saved, g() can alter it and not restore the original value since that was the responsibility of f(), the caller.
Note that caller-saves implies callee-(might)-destroy.
In addition to the registers explicitly listed as caller saved, all 6 registers used for arguments and the one register used for the return value may be altered by the callee, g() in our example. Hence these seven should also be considered caller save.
Consider writing the function g() in the situation where f() calls g() and g() calls h(). In this (common) case g() may need to save every register that it modifies, both caller saved and callee saved. Explain why.
long sum2(long x, long y) {
    return x+y;
}

long sum3(long x, long y, long z) {
    return x+y+z;
}
sum2:
    leaq (%rdi,%rsi), %rax
    ret
sum3:
    addq %rsi, %rdi
    leaq (%rdi,%rdx), %rax
    ret
The assembler was obtained via
cc -Og -S simple.c
Without the -Og the assembly language produced would have been much more complicated.
Notes:
long add2(long, long);

void addStore(long x, long y, long *dest) {
    long t = add2(x,y);
    *dest = t;
}
addstore:
    pushq %rbx
    movq %rdx, %rbx
    call add2
    movq %rax, (%rbx)
    popq %rbx
    ret
Assume add2() is compiled separately and, like sum2(), calculates the sum of 2 integers. The trick in addstore() is that it needs to put that sum in the memory location given in dest. This seems easy: dest is given in register %rdx.
The trouble is that, since %rdx is caller-saved, add2() might change its value, hence addstore() must save %rdx before calling add2() and must restore it after that call. The simplest way would be to use the stack. The compiler chose instead to save %rdx in %rbx (a callee-saved register) and save/restore %rbx on the stack.
The third argument (in %rdx, naturally) is an address. Notice how it is enclosed in () to access memory. (Actually, as just mentioned, %rdx is copied into %rbx, which is then placed in parentheses.)
addstore() is an example of a middle function g in the triple f()->g()->h(). See how it treats the caller-saved %rdx and the callee-saved %rbx.
// return sum; set diff
long sumDiff(long a, long b, long *diff) {
    *diff = a - b;
    return a + b;
}
sumDiff:
    movq %rdi, %rax
    subq %rsi, %rax
    movq %rax, (%rdx)
    leaq (%rdi,%rsi), %rax
    ret
We know sumDiff() will need to set %rax (which is caller-saved) to the returned value. But before calculating the returned value, it can use that register as a temporary.
long mult2(long, long);

void multStore(long x, long y, long *dest) {
    long t = mult2(x, y);
    *dest = t;
}

long mult2(long a, long b) {
    long ans = a * b;
    return ans;
}
<multStore>:
    pushq %rdx         # caller save
    callq mult2        # mult2(x,y)
    popq  %rdx         # restore reg
    movq  %rax,(%rdx)  # store at dest
    ret                # return
<mult2>:
    movq  %rdi,%rax    # a
    imulq %rsi,%rax    # a * b
    ret                # return
The C program on the far right is simple and so is the assembly.
One point to note is the first two parameters multStore() receives are the same (and in the same order) as the first two arguments multStore() passes to mult2(). Were they in the reverse order, some movq's would be needed.
Another point is that, since %rdx is caller-saved, mult2() can destroy it and hence multStore() must save and restore it.
A third point is that, since %rdx contains dest (an address), the assembly syntax (%rdx) corresponds to *dest.
Homework:
Assume the C call mult2(x,y) was instead mult2(y,x). What would the assembly language for multStore() look like? Do not make use of the mathematical identity x * y = y * x.
The diagram on the far right shows the run-time stack just before f() calls g(). The diagram on the near right shows the stack after g() has begun execution.
The green region is the portion of the stack associated with the current invocation of f(). (If f is recursive there can be several stack frames for f(), but ignore that for now).
The blue region is for functions that are higher in the call chain leading to f().
When f() actually calls g(), the first thing that happens is that the return address (the address in f() where g() is to return when finished) is pushed on the stack and is momentarily the top-of-stack. The return address is considered part of f()'s stack frame.
The stack frame for g() (or any other function) typically contains three groups of items.
Saved registers: these are callee saved, which means f() can depend on them containing the same values when g() returns as they contained when f() called g(). So, if g() needs to modify any of those registers (perhaps to perform some computation), it needs to save them someplace and restore them when g() returns to f().
The stack does not normally contain instructions to be executed. The instructions are stored in a different part of the memory and do not change during execution.
Note: It is possible for some parts of the stack frame for g() to be empty. Indeed, some functions g() don't need a stack frame at all. For example if g() doesn't call another function, there is no argument build area. If, furthermore, g() is simple, its local variables and computation may fit in the registers designated for its use (we shall discuss these caller-save registers, and their callee-save counterparts soon).
Transfer of control from f() to g() is accomplished by the procedure call
callq target
which pushes the return address (the address of the instruction following the callq) onto the stack and then jumps to target.
Eventually, the called program mult2() returns by executing a retq, which pops the return address off the stack and jumps to it.
Note:
In the examples we have seen, the target has been a label. This is the common situation and is the one we will emphasize. Also possible, however, is an indirect call
callq *operand
where operand is one of the address forms we have seen above (the most complicated being Disp(Rb,Ri,S)). In the callq *operand case, the jump is to the address that is the contents of operand.
Show the animation on slides 11-14, which corresponds to the multStore()/mult2() example just given.
In the x86-64 architecture, the primary method of data transfer between the calling procedure (f() above) and the called procedure (g() above) is via machine registers: registers transmit arguments in the caller to the corresponding parameters in the callee. In the other direction, the return value in the callee is transmitted to the function value in the caller, again using a register. As mentioned above and repeated to the right, specific registers are designated for these purposes.
Thanks to these conventions, if f(), containing a call g(x,y), is compiled on Monday in LA using a California C compiler, the values of x and y will be stored in %rdi and %rsi respectively. Then, if on Wednesday g(long a, long b) is compiled in Newark using a NJ compiler, g() will retrieve the values for a and b from %rdi and %rsi respectively.
The first choice for local variables (in g() say) is to use some of the leftover registers (since registers can be accessed much faster than stack elements). However, if g() is complex, it probably has more local variables than would fit in the available registers.
A second reason for storing local variables in memory rather than a register is that the & operator (in C) may have been used. Remember, that when &var is used in C, the compiler is required to provide the address of var.
A third reason for stack usage is for large objects like arrays and structures.
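The second reason above (the & operator) can be sketched in C. This is a hypothetical illustration (the names add_one and addr_demo are mine, not from the notes): because addr_demo() applies & to its local variable v, the compiler must give v an addressable memory slot, typically on the stack, since a register has no address.

```c
/* Hypothetical sketch: taking &v forces the compiler to place the
 * local variable v in addressable memory (registers have no address). */
static void add_one(long *p) {
    *p = *p + 1;              /* modify v through its address */
}

long addr_demo(void) {
    long v = 41;              /* &v is taken below */
    add_one(&v);              /* pass the (stack) address of v */
    return v;
}
```

Had &v never been taken, the compiler would have been free to keep v entirely in a register.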
long incr(long *p, long val) {
    long x = *p;
    long y = x + val;
    *p = y;
    return x;
}
incr:
    movq (%rdi), %rax
    addq %rax, %rsi
    movq %rsi, (%rdi)
    ret
The incr() function is like x++ in that it increases *p and returns the old (pre-incremented) value.
Note that the C code is not what you would normally write; rather it is there to help understand the assembler. In particular, C programmers would not have the variable y. Instead, they would write simply *p = x+val;
long call_incr() {
    long v1 = 15213;
    long v2 = incr(&v1, 3000);
    return v1 + v2;
}
call_incr:
    subq $8, %rsp
    movq $15213, (%rsp)
    movq $3000, %rsi
    leaq (%rsp), %rdi
    call incr
    addq (%rsp), %rax
    addq $8, %rsp
    ret
See slides 20-24 for diagrams illustrating the execution of call_incr. In those slides the lines in red have just been executed.
manually, i.e., we decrement the stack pointer in one instruction and store the constant in the second.
As mentioned a common method of transferring values from the calling program (say f()) to the called program (say g()) is for f() to put a value in an agreed upon register and for g() to access that register. For constants this works without issues.
But what if we want to share a variable x that both f() and g() use? If f() puts x into a register and calls g(), is g() permitted to say increment the register?
The answer is yes and no.
Consider the situation where f() calls g(x) which in turn calls h(x). g() must answer two questions.
Some registers, e.g. %r12, are designated as callee-save. This means that, answering question 1 above, g() (the callee) must restore the register to the value it had when f() called g(). It also means that, answering question 2, g() can assume that h() restores the register to the value it had when g() called h().
Other registers are designated as caller-save. For these registers, answering question 1 above, g() need not restore the register before returning to f(). It also means that in answering question 2 g() cannot assume that h() restores the register when h() returns to g().
Our table of registers lists 6 registers as callee-save and only 2 as caller-save, but this is misleading. The 6 registers designated for arguments and the one register designated for the return value are also caller-save, for a total of 2+6+1=9 caller-save registers.
long call_incr2(long x) {
    long v1 = 15213;
    long v2 = incr(&v1, 3000);
    return x + v2;
}
call_incr2:
    pushq %rbx
    subq  $8, %rsp
    movq  %rdi, %rbx
    movq  $15213, (%rsp)
    movl  $3000, %esi
    leaq  (%rsp), %rdi
    call  incr
    addq  %rbx, %rax
    addq  $8, %rsp
    popq  %rbx
    ret
Look at the recursive routine pcount() below. It counts the total number of 1 bits in x. I realize that pcount() can be easily written without recursion.
When pcount calls pcount, we have two different x's. In particular the x in the child is a right-shifted version of the x in the parent. We need both and cannot overwrite one with the other.
If all the bits of x are 1, the recursion will go on for 64 levels and we must keep all that information around using only 16 registers.
/* Recursive popcount */
long pcount(unsigned long x) {
    if (x == 0)
        return 0;
    else
        return (x & 1) + pcount(x >> 1);
}
pcount:
    movq  $0, %rax
    testq %rdi, %rdi
    je    .L6
    pushq %rbx
    movq  %rdi, %rbx
    andq  $1, %rbx
    shrq  %rdi
    call  pcount
    addq  %rbx, %rax
    popq  %rbx
.L6:
    ret
The assembly code is only about a dozen instructions and uses only 3 registers.
The stack.
Do the computation on the board with x=00...00101 binary (= 0x0000000000000005).
Imagine redoing it with x=0xFFFFFFFFFFFFFFFF.
There would still be only about a dozen instructions in the program (several executed many times) and still only 3 registers would be used. However, many different values of %rbx would be pushed on to the stack and subsequently popped off and used. At one point (when all the calls are done, but none of the returns) there would be about 64 values on the stack.
The register saving conventions (caller/callee) prevent one invocation of the function from altering registers that another invocation still is using.
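As the notes mention, pcount() is easy to write without recursion. Here is one sketch of an iterative version (the name pcount_iter is mine): it produces the same answer but needs no stack at all, since one loop variable and one accumulator suffice.

```c
/* Iterative popcount: same result as the recursive pcount() above,
 * but nothing is ever pushed on the stack, even for a 64-bit x of
 * all ones. */
long pcount_iter(unsigned long x) {
    long count = 0;
    while (x != 0) {
        count += x & 1;   /* add in the low bit */
        x >>= 1;          /* move on to the next bit */
    }
    return count;
}
```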
long   A[5]; // 8B each
char  *B[5]; // 8B each
double C[5]; // 8B each
int    D[5]; // 4B each
float  E[5]; // 4B each
short  F[5]; // 2B each
char   G[5]; // 1B each
Consider the declarations
long A[10], *p, i;
If we reference A[i] and increment i, we reference the next element of A, which is 8 bytes further in memory.
In C, the same thing occurs with pointers. If we write
p = &A[2]; p++;
again p advances not by one, but by eight, the space (in bytes) used for one long.
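This scaling can be checked directly in C; a small sketch (the helper name is mine). Converting the pointers to char * measures the step in bytes, since char * arithmetic is in units of one byte.

```c
long A[10];

/* Incrementing a long* advances the address by sizeof(long) bytes
 * (8 on x86-64), not by 1 byte. */
long ptr_step_bytes(void) {
    long *p = &A[2];
    long *q = p;
    q++;                              /* the p++ from the text */
    return (char *)q - (char *)p;     /* step measured in bytes */
}
```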
// Array access
long arrElt(long z[], long idx) {
    return z[idx];
}
movq (%rdi,%rsi,8), %rax
ret
Notes:
// Array addition
void arrAdd(long A[], long B[], long C[]) {
    long i;
    for (i=0; i<10; i++)
        A[i] = B[i] + C[i];
}
arrAdd:
    xorl %eax, %eax
.L2:
    movq (%rdx,%rax,8), %r8
    addq (%rsi,%rax,8), %r8
    movq %r8, (%rdi,%rax,8)
    incq %rax
    cmpq $10, %rax
    jne  .L2
    ret
Notes:
Start Lecture #18
Consider the declaration of twoD on the right. It is a two dimensional array of longs. As we saw earlier in the course, it can be viewed as a matrix with 2 rows and three columns. It is stored contiguously as shown in the bottom of the diagram.
In C, as in most if not all modern languages, a 2D array is stored in row-major order; each row is stored contiguously. When viewed as a matrix, it is stored the way a book is read in English: left to right, then top to bottom.
Let's do the general case where twoD has R rows and C columns, and each element of the array requires K bytes. Let A be the address of twoD[0][0]. Then the address of twoD[i][j] is
A + i * (C * K) + j * K = A + (i*C + j) * K
A key to understanding the somewhat cryptic-looking formula is that C * K is the space required to store one complete row of the matrix.
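The formula can be verified in C; this sketch (function names are mine) compares the compiler's own address arithmetic against A + (i*C + j)*K for a 2-row, 3-column array of longs.

```c
long twoD[2][3];    /* R = 2 rows, C = 3 columns, K = sizeof(long) */

/* Byte offset of twoD[i][j] as the compiler computes it. */
long offset_direct(long i, long j) {
    return (char *)&twoD[i][j] - (char *)&twoD[0][0];
}

/* The same offset from the row-major formula in the notes. */
long offset_formula(long i, long j) {
    long C = 3, K = sizeof(long);
    return (i * C + j) * K;   /* A + (i*C + j)*K, minus the base A */
}
```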
long oneD1[3] = {1, 5, 7}, oneD2[3] = {2, 4, 6};
long *twoDv2[2] = {oneD1, oneD2}; // = {&oneD1[0], &oneD2[0]}
We did this when learning C. See section K&R-5.9. C hides much of the details; now they will all come out.
The idea is that, instead of using a 2D matrix of elements, we can implement a 2D array as a 1D array of pointers to 1D arrays of elements.
To access the entry with value 7 in the nested array we would write twoD[0][2]
To access the entry with value 7 in the multi-level array we would write twoDv2[0][2], i.e., we write the same thing.
But they are implemented quite differently in assembler. The first version first does some arithmetic (see above) to calculate the needed address and then accesses that address. The second version accesses one element of twoDv2 to find (a pointer to) the correct oneD array and then accesses the appropriate entry in that array. The summary is that the first requires more arithmetic; the second, an extra memory access.
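A minimal sketch contrasting the two layouts, using the data values from the example above (the accessor names are mine):

```c
/* Nested (true 2D) array: one contiguous block of 6 longs. */
long twoD[2][3] = {{1, 5, 7}, {2, 4, 6}};

/* Multi-level version: a 1D array of pointers to 1D arrays. */
long oneD1[3] = {1, 5, 7}, oneD2[3] = {2, 4, 6};
long *twoDv2[2] = {oneD1, oneD2};

/* Identical source syntax, different machine code: the first does
 * address arithmetic plus one memory access; the second does two
 * memory accesses (fetch the row pointer, then the element). */
long get_nested(void)     { return twoD[0][2];   }
long get_multilevel(void) { return twoDv2[0][2]; }
```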
See slide set machine-level-5.pptx, slides 11-13, for a larger example and more details.
Skipped
Skipped
struct st {
    long a;
    long b[10];
    long c[10];
};

void f(struct st *s) {
    long i;
    for (i=0; i<10; i++)
        s->b[i] = s->c[i];
}
f:
    movq $0, %rax
    jmp  .L2
.L3:
    movq 88(%rdi,%rax,8), %rdx
    movq %rdx, 8(%rdi,%rax,8)
    addq $1, %rax
.L2:
    cmpq $9, %rax
    jle  .L3
    ret
The C code on the right is a simple loop copying one array to another, each of which happens to be part of the same structure. A pointer to this structure is the sole parameter of f().
Note that the address IN s (not of s) is the address of s->a. Also s->b[0] is located 8 bytes after the address in s and s->c[0] is 80 bytes after that.
Admire the last two movq's in the assembly code, which contain the most complicated memory address form.
Notes:
struct rec {
    long a[3];
    long i;
    struct rec *next;
};

void set_val(struct rec *r, long val) {
    while (r) {
        long i = r->i;
        r->a[i] = val;
        r = r->next;
    }
}
.L3:                          # loop:
    movq  24(%rdi), %rax      # i = Mem[r+24]
    movq  %rsi, (%rdi,%rax,8) # Mem[r+8*i] = val
    movq  32(%rdi), %rdi      # r = Mem[r+32]
    testq %rdi, %rdi          # test r
    jne   .L3                 # loop if r != 0
    ret
On the right we see a small program involving a strange data structure. We have a C struct containing an array a[] of three longs and another long i, indicating which of the 3 elements of the array is the active element. These structs are linked together via a next pointer.
The set_val() function is given a pointer to one of these structures (presumably the head of a list) and another long val. The goal is to set all the active elements in the list to val.
Try not to be confused by the pink diagram using hex addresses; whereas, the assembler uses decimal. I used hex for the diagrams since all entries are multiples of 8, which is most readily expressed in hex.
The assembly program is simple but studying the addressing is worthwhile. The first line grabs i (remember decimal 24 is hex 18); the second line updates the ith entry of a; finally we loop if next is not NULL.
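The constants 24 and 32 in the assembly can be confirmed with the standard offsetof macro; a small sketch (the accessor names are mine):

```c
#include <stddef.h>

struct rec {
    long a[3];          /* elements at offsets 0, 8, 16 */
    long i;             /* offset 24 (hex 0x18) */
    struct rec *next;   /* offset 32 (hex 0x20) */
};

size_t offset_of_i(void)    { return offsetof(struct rec, i);    }
size_t offset_of_next(void) { return offsetof(struct rec, next); }
```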
Skipped.
struct stt {
    char c1;
    long l1;
    char c2;
    long l2;
} ss, *pstt;
How do we align ss, which is a struct stt? First we look at the components: c1 and c2 are each 1 byte and can be aligned on any byte. However, l1 and l2 are each 8 bytes and hence must be aligned on an 8-byte boundary. That is the address of each one must be a multiple of 8.
The four components of the structure have 2 different alignment requirements. The rule employed is that the structure itself must be aligned to conform to the strictest alignment of its components, which in this case says that every variable of type struct stt must be aligned on an 8-byte boundary.
So ss begins on an 8-byte boundary. c1 can begin anywhere; so far so good. But l1 must be aligned on an 8-byte boundary and that means we need 7 bytes of (wasted) padding. This repeats for c2 and l2.
Look how much better it lays out if we put first the bigger components (with the more stringent alignment requirements). The compiler is not permitted to change the order of components; the programmer must do it.
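On a typical x86-64 ABI the saving can be measured with sizeof; a sketch (struct stt2 is my reordered version, not from the notes):

```c
/* Declared order from the notes: 7 bytes of padding follow each char. */
struct stt  { char c1; long l1; char c2; long l2; };

/* Programmer-reordered: strictest-alignment members first; only the
 * trailing padding (rounding the size up to a multiple of 8) remains. */
struct stt2 { long l1; long l2; char c1; char c2; };

unsigned long size_orig(void)      { return sizeof(struct stt);  }  /* 32 on x86-64 */
unsigned long size_reordered(void) { return sizeof(struct stt2); }  /* 24 on x86-64 */
```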
skipped
skipped
Most programming is done in high level languages. In this chapter we looked under the covers at the level of instructions that the computer actually executes.
The diagram on the right gives an overview of the memory regions in a running program under Linux. We will study this in more detail next chapter. For now we just want to mention examples of each region type.
These regions can grow during execution, which requires extra attention by the system so that one such region does not overwrite another. They contain read/write data. The heap grows when malloc() is called; the stack grows when register %rsp is decremented, which occurs on most procedure invocations. We will see shared libraries soon.
These regions, which contain executable/non-writable instructions, do not grow and do not change their contents during execution. The text region contains the compiled assembler instructions we have studied. Shared libraries are a memory-saving idea that permits many running programs to share, for example, the library routine printf().
Statically allocated data including global variables, static variables, and string constants. Unlike local variables inside C functions that come and go when procedures are called and return, these variables remain for the lifetime of the execution.
The program cc (or gcc) is technically more than a compiler. It invokes a series of programs that step by step transform your C source program into a form executable by the computer hardware. Since cc controls or drives the compilation process, it is sometimes called a compiler driver.
We have already seen that cc includes a compiler translating C code to assembly language and an assembler translating assembly to actual computer instructions. Those are the first two arrows in the diagram on the right.
Now we want to go further in the diagram.
We first study static linking, which is the vertical line at the right of the diagram. Static linking is performed by a program normally called the linker, which is invoked automatically by the compiler driver cc. In linux/unix the linker program is unfortunately called ld (the name comes from loader: its output is a load file, which can be executed).
Linking is the process of collecting and combining various pieces of code and data into a single file that can be loaded into memory and executed.
Start Lecture #19
Remarks on Midterm Exam and Midterm Grades
file main.c
#include <stdio.h>

int x = 10;
void f(void);

int main(int argc, char *argv[]) {
    printf("main says x is %d\n", x);
    f();
}
file f.c
#include <stdio.h>

extern int x;

void f(void) {
    int y = 20;
    printf("f says x is %d\n", x);
    printf("f says y is %d\n", y);
}
For a simple example of what the linker needs to do, consider the small example on the right consisting of two files main.c and f.c, which are compiled separately.
The diagram on the far right illustrates relocating relative addresses. Specifically, it shows that the relocation constant is calculated as the sum of the lengths of the preceding modules. Once the relocation constant C is known, each absolute address in the module is calculated simply as the corresponding relative address + C.
The diagram on the near right illustrates resolving external references. In this case the reference is to f(). Note that the Base of M4 is the same as its relocation constant, i.e., the sum of the lengths of the preceding modules.
Note from the diagram on the near right that the linker encounters the required address jump f before it knows the necessary relocation constant.
The simplest solution (but not the fastest) is for the linker to make two passes over the modules. During pass 1 the relocation constants for each module are determined and a symbol table is produced giving the absolute address for each global symbol. During pass 2, references to external addresses are resolved using the symbol table constructed during pass 1.
The table on the right tabulates the information contained in an ELF binary. The various fields contain the following information.
The linker symbols (which are stored in the linker symbol table) come in three flavors.
The left figure above shows a very simple C program that nonetheless exercises many of the linker's abilities.
Note the distinction between global references, which define symbols, and external references that need to be resolved by the linker to equate to the corresponding global.
Also note that bufp1, although known only to swap.c, must be given a slot by the linker. Since it is static, bufp1 must maintain its value across multiple calls to swap() and hence, unlike tmp, cannot be stored on the stack.
As an optimization the executable file does not contain space for bufp1, just a count of how much space to reserve when the executable is loaded.
We see in the right figure that the linker combines corresponding segments from each object file when producing the executable that is the linker's output.
Recall that declarations give just the type of an identifier. This tells the compiler how to interpret the identifier, but does not necessarily reserve space for the identifier. Declarations that reserve storage are called definitions.
file f1.c:
int svar1=5;
int sfun1(int x) { code }
file f2.c:
int wvar1;
int wfun1(int z);
int sfun2(void) { int igsym1=3; }
Looking at the code on the right
The linker obeys the following rules.
Multiple strong symbols with the same name produce a multiply defined symbol error. Given one strong symbol and multiple weak symbols with the same name, the strong symbol is chosen. Given multiple weak symbols (and no strong symbol) with the same name, any one of the weak symbols may be chosen.
file1: int x; f1() {...}
file2: int x=7; f2() {...}
Both x's are the same; the second is strong. The second x is chosen.

file1: int x; f1() {...}
file2: f1() {...}
Two strong symbols have the same name, f1. Link time error.

file1: int x; f1() {...}
file2: int x; f2() {...}
Both x's are the same; each is weak. *Either* could become the location for x.

file1: int x=7; int y=5; f1() {...}
file2: double x; f2() {...}
The first x is strong and is chosen. Writes to x in f2() WILL overwrite y! Scary!

file1: int x; int y; f1() {...}
file2: double x; f2() {...}
Both x's weak; either might be chosen. Writes to x in f2() MIGHT overwrite y! Maximum terror!!
The figure in 7.1 contains two kinds of libraries: statically-linked libraries that are processed by the linker and dynamically-linked libraries (DLLs) processed by the loader. How do they differ?
You know well that when your programs run, some functions are executed that you did not write (e.g. printf()). Many common routines are placed in libraries that the linker searches by default.
Too coarse or too fine
One (or a few) big file(s), but (each) with an index stating which functions are contained and where in the file they are located, a so-called static library. A static library has a .a suffix because it is an archive of several .o files.
When linking a program the linker (ld) automatically searches (the index in) the standard C archive libc.a. If a function referenced by the user program is found in the index, the corresponding .o file is extracted from the archive and linked with the program.
So 1500+ functions are available in just one archive file, but only the dozen or so actually referenced are linked in.
Since cc knows to search libc.a, no further user action is needed. Actually, cc knows to search a number of standard archives. For other library routines, the user must tell the linker to search the corresponding archive by giving the appropriate option to the cc command.
Assume you (as system administrator) have compiled all the .c files you want to put in the standard library libc.a. Then you would execute one very long archive (ar) command, which concatenates the files and constructs the index. Specifically, you would write:
ar rs libc.a atoi.o printf.o strcpy.o random.o ... (1500+ entries)
The linker makes one pass over the .a and .o files in the order that the names were given on the command line. During this process, the linker maintains a list of currently unresolved references.
When a .a file is encountered in the list, the linker searches the archive's index and links in any file containing a definition for a currently unresolved reference.
If any references remain after the linker has completed its scan of the files mentioned on the command line, the linker prints an error message and the command fails.
For this reason the compiler drivers cc and gcc search the standard libraries after processing the files you mentioned explicitly on the command line.
// f.c
void f(void) {
    return;
}
// callf.c
void f(void);

int main(int argc, char *argv[]) {
    f();
}
$ cc -c f.c
$ ar rs libf.a f.o
$ cc -c callf.c
$ cc libf.a callf.o
callf.o: In function `main':
callf.c:(.text+0x10): undefined reference to `f'
collect2: error: ld returned 1 exit status
$ cc callf.o libf.a
$ ./a.out
So far so good, we compiled a main program and a utility program and have placed the latter in an archive. You might want to think of f() as printf(), and the archive as libc.a.
Now look at the fourth command, which attempts to link the two functions f() and callf().
This command fails asserting that it cannot resolve the reference to f().
But why? We just put f() in libf.a and included the latter in our link command (remember that the compiler driver cc invokes the linker, ld).
The trouble is that the linker looks at libf.a before it looks at callf.o and thus sees no need to link in f().
When the compiler driver is given the archive at the end of the command, the linker detects the need for f() before processing the archive and hence extracts f.o and all is well.
Now that we have shown how the linker resolves external references, we turn to its other main function, relocation. Recall that the linker receives as input a number of independently produced object files. Each of these object files likely contains a .text section (containing the executable code), a .data section (containing the read/write initialized data), and several others.
The linker combines all the .text sections into one, and similarly for the other sections. We gave an overview of this action when we discussed relocating relative addresses in section O'H-7.2.A. We are skipping the detailed explanation given in O'H-7.7.1 and O'H-7.7.2.
Skipped.
Skipped.
As indicated in section 7.6 the linker combines the .text (compiled code) sections from all the .o files into one such section for the executable file. Similarly all the .rodata sections are combined. The linker also generates a .init section containing system code run to start the program.
The diagram on the right shows all three in yellow.
They are often referred to as read-only code, even though they contain data. The unifying property is that, when the program executes, the yellow section is read-only.
Similarly, the pink region is produced containing all the .data (initialized data) sections as well as all the .bss (uninitialized data) sections. These two sections are in pink and represent data that may be rewritten during execution.
Note: The .bss data are actually initialized to zero by the loader. They are called uninitialized since the initial value itself is not in the elf file (since it is known to be zero).
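A tiny sketch of the .bss convention (the array name is mine): the executable records only the size of big[], and the loader zero-fills the region at program start, so the first read is guaranteed to see 0.

```c
/* An uninitialized global goes in .bss: it occupies essentially no
 * space in the executable file, and the loader zero-fills it when
 * the program is loaded. */
static long big[1000000];

long first_read(void) {
    return big[123456];   /* never assigned, but guaranteed to be 0 */
}
```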
The yellow and pink segments remain their original size during execution. This makes their run-time placement in RAM easy; just put one after the other. But we know programs grow (and shrink) in size during execution: we can call malloc()/free(), and we have seen stack pushes and pops during function invocation and exit. Loading regions that grow comes next.
Unlike the previous diagrams, the figure on the right, does not show the contents of a file, but rather the contents of memory during execution of the user program.
As mentioned the yellow and pink sections are easy to load since they don't change size and don't need to move. We use green to indicate regions that can grow.
We have already met two of the green regions (the stack and heap), and have seen them grow during execution. We will discuss shared libraries next section.
If you ignore the middle green region, we have a great situation: one region is at high addresses, the other is at low addresses, and they grow toward each other. If they grow into each other the program aborts, but for a good reason: it needs more memory than we have.
But with three growing regions, no matter where we place them, a situation can arise where two of them collide, but there is still space available; the free space is in the wrong place.
The previous comment about three regions is correct, but seems of no practical importance since the amount of initially free memory is enormous.
After all, 2^48 = 281,474,976,710,656 is almost 300 terabytes, which vastly exceeds the total RAM on all our laptops combined. Who can afford this much memory?
The answer comes in chapter 9.
If two jobs are running at the same time how can both of their yellow sections start at 0x400000? Why don't their stacks collide?
The answers come in chapter 9.
Start Lecture #20
Again referring to the figure in section 7.1, we have just discussed static libraries (.a files) used by the linker and now wish to go one step further and discuss shared libraries used by the loader.
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int x[2] = {1,2}, y[2] = {3,4}, z[2];

int main(int argc, char *argv[]) {
    void *handle;
    void (*addvec)(int *, int *, int *, int);
    char *error;

    // dynamically load the shared lib containing addvec()
    handle = dlopen("./libvector.so", RTLD_LAZY);
    if (!handle) { fprintf(stderr, ...); exit(1); }

    // get ptr to addvec()
    addvec = dlsym(handle, "addvec");
    if ((error = dlerror()) != NULL) { fprintf(stderr, ...); exit(1); }

    // now addvec can be called like a "normal" function
    (*addvec)(x, y, z, 2);
}
The book gives two real world uses of run-time dynamic linking and loading, specifically software distribution and high performance web servers.
On the right is a skeleton program that dynamically links during run time.
The function addvec() adds two 1D-arrays producing a third array (the 4th argument is the dimensionality of the arrays). We assume addvec() has been previously compiled and placed in the shared library ./libvector.so.
Multiple steps are involved in executing the addvec() function contained in the libvector library.
(With RTLD_LAZY, symbol resolution is deferred until a symbol is first used; this is called lazy linking.)
We have seen that an advantage of dynamic shared libraries is that, when many processes are all using the same library (e.g., most C programs use printf() from the standard C library, libc.so), we only want one version of the code in memory.
A difficulty arises with jump instructions. If the jump instruction includes the target address explicitly, all programs linking that shared function would need to put it in the same place so that the given address will jump to the same instruction in all copies. This is not practical; it would essentially require a fixed starting memory address for every program in all the shared libraries.
Instead PIC is employed, i.e., every instruction can be in any location. Here is a simple example (making the simplifying assumptions that every instruction is 4 bytes in length and that the argument of the jump instruction is the address to jump to).
0x100000  addl %rsi, %rdi
0x100004  subl $5, %rsi
0x100008  cmpl %rsi, %rdi
0x10000C  jg   $0x100000
This program assumes that the addl instruction is loaded into location 0x100000 and would not work if all the instructions were loaded starting in 0x200000. We say the code sequence is not position independent.
Now consider a fictitious instruction picj that jumps to the current location plus the argument. Then the following code
0x100000  addl %rsi, %rdi
0x100004  subl $5, %rsi
0x100008  cmpl %rsi, %rdi
0x10000C  picjg $-0xC
works the same as the above but also works if loaded at 0x200000 or any place else. It is called position independent code and is just what we needed at the end of section 7.10.
There is more to PIC, but the idea remains the same: have the compiler generate code that works correctly no matter where the code is loaded. See the book for more complicated examples.
Skipped
List of useful utilities.
Now we understand all the boxes and arrows in the diagram that began our study of linkers and that is repeated on the right.
However, we will get a better understanding of the advantages of dynamic linking when we study virtual memory in chapter 9.
What do we want from an ideal memory?
security (should not leak data)
We will emphasize the first two, skip the third, and mention the last two.
Laws of Hardware: The Basic Memory Trade-off
We can get/buy/build small and fast and big and slow. Our goal is to mix the two and get a good approximation to the impossible big and fast.
Two varieties: Static RAM (SRAM) and Dynamic RAM (DRAM).
RAM constitutes the memory in most computer systems. Unlike tapes or CDs they are not limited to sequential access. The table on the right compares them.
SRAM is much faster but (for the same cost) has much lower capacity. Specifically, trans per bit gives the number of transistors needed to implement one bit of each memory type. The 4-transistor SRAM is harder to manufacture than the 6-transistor version.
Both SRAM and DRAM are volatile, which means that, if the power is turned off, the memory contents are lost. Due to the volatility of both RAM varieties, when a computer is powered on, its first accesses are to some other memory type (normally a ROM—read-only memory).
DRAM, in addition to needing power, needs to be refreshed. That is, even if power remains steady, DRAM will lose its contents if it is not accessed. Hence there is circuitry to periodically generate dummy accesses to the DRAM, even if the system is otherwise idle.
I have in my office some disks from the 1980s and 90s. Unlike modern disks, these relics are big enough to see the active components. In normal years, I drag some of these monsters to class and show their internals. For today, we will have to settle for some pictures (one picture is from Henry Muhlpfordt).
Disks are huge (~1TB).
and slow (10ms).
Unlike RAM, disks have moving parts.
Unlike tape, disks support random access.
Consider the following characteristics of a disk.
It is important to realize that a disk always transfers (reads or writes) a fixed-size sector.
Current commodity disks have (roughly) the following performance.
The above performance figures are quite extraordinary. For a large sequential transfer, in the first 10ms, no bytes are transmitted; in the next 10ms, 1,000,000 bytes are transmitted.
The OS actually reads blocks, each of which is 2k sectors. The OS sets the number of sectors per block when it creates a filesystem.
This analysis suggests using large disk blocks, 100KB or more. But then much space would be wasted (a small file would still occupy a whole block).
This is flash RAM (the same stuff that is in thumb drives), organized in sector-like blocks as is a disk. Unlike a hard disk, an SSD has no moving parts (and hence is much faster). The blocks in an SSD can be written a large number of times (thousands or tens of thousands). However, this large number is not large enough to be ignored, so the device avoids rewriting the same physical blocks over and over. Instead, frequently written data is moved to previously unused portions of the device (so-called wear leveling).
Summary: Everything is getting better, but the rates of improvement are quite different for different technologies.

Cost (per byte): SRAM: factor of 100; DRAM: factor of 50,000; DISK: factor of 3,000,000.

Speed: SRAM: factor of 100; DRAM: factor of 10; DISK: factor of 25; CPU: factor of 2,000 (includes multiprocessor effect).
The hierarchy is needed to close the processor-memory performance gap, i.e., the gap between processor speed improvement and DRAM speed improvement.
Said another way, it is the gap between the processor's need for data and the (DRAM) memory's ability to supply data.
Remember we want to cleverly mix some small/fast memory with a large pile of big/slow memory and get a result that approximates well the performance of the impossible big/fast memory.
The idea will be to put the important stuff in the small/fast memory and the rest in the big/slow.
But what stuff is important?
The answer is that we want to put into small/fast the data and instructions that are likely to be accessed in the near future and leave the rest in big/slow. Unfortunately this involves knowing the future, which is impossible.
We need heuristics for predicting what memory addresses will likely be accessed in the near future. The heuristic used is the principle of locality: programs will likely access in the near future addresses close to those they accessed in the near past.
The principle of locality is not a law of nature: One can write programs that violate the principle, but normally the principle works very well. Unless you want your programs to run slower, there is no reason to deliberately violate the principle. Indeed, programmers seeking high performance, try hard to increase the locality of their programs.
We often use the term temporal locality for the tendency that referenced locations are likely to be re-referenced soon and use the term spatial locality for the tendency that locations near referenced locations are themselves likely to be referenced soon.
We will have more to say about locality when we study caches.
In fact there is more than just small/fast vs big/slow. We have minuscule/blitz-speed, tiny/super-fast, ..., enormous/tortoise-like. Starting from the fastest/smallest, a modern system will have.
Today a register is typically 8 bytes in size and a computer will have a few dozen of them, all located in the CPU. A register can be accessed in well under a nanosecond and modern processors access at least one register for most operations.
In modern microprocessor designs (think phones, not laptops), arithmetic and many other operations are performed on values currently in registers. Values not in registers must be moved there prior to operating on them.
Registers are a very precious resource and the decision which data to place in registers and when to do so (which normally entails evicting some other data currently in that register) is a difficult and well studied problem. The effective utilization of registers is an important component of compiler design—we will not study it in this course.
For the moment ignore the various levels of caches and think of a single cache as an intermediary between the main memory, which (conceptually, but not in practice) contains the entire program, and the registers, which contains only the currently most important few dozen values.
In this course we will study the high-level design of caches and the performance impact of successful caching.
A memory reference that is satisfied by the cache requires much less time (say one tenth to one hundredth the time) than a reference satisfied by main memory.
Our primary study of the memory hierarchy will be at the
cache/main-memory boundary.
(In 202, we emphasize the main-memory/local-disk boundary.)
In 201 we will see the performance effects of various
hit ratios, i.e., the percentage of memory references
satisfied in the cache vs satisfied by the main memory.
When first introduced, a cache was the small and fast storage class and main memory was the big and slow. Later the performance gap widened between main memory and caches so intermediate memories were introduced to bridge the gap. The original cache became the L1 cache, and the gap bridgers became the L2 and L3.
The fundamental ideas remained the same: if we make it smaller, it can be faster; if we let it be slower, it can be bigger.
For now, we shall pretend that the entire program including its
data resides in main memory.
Later in this course and again in 202, operating systems, we will
study the effect of
demand paging, in which the main memory
acts as a cache for the disk system that actually contains the
program.
We know that the disk subsystem holds all our files and thus is much larger than main memory, which holds only the currently executing programs. It is also much slower: a disk access requires several MILLIseconds; whereas a main memory access is a fraction of a MICROsecond. The time ratio is about 100,000.
One possibility is robot controlled storage, where the robot automatically fetches the requested media and mounts it. Tertiary Storage is sometimes called nearline storage because it is nearly online.
Other possibilities are web servers and local-area-network-accessible disks. We shall not discuss these possibilities.
Requires some human action to mount the device (e.g., inserting a cd). Hence the data is not always available.
In the hierarchy diagram above, we see three levels of caches. But in a sense every level, except the bottom, is a cache of the level below it. The main memory in your laptop is a cache of your disk. Compared to disk, the main memory is small and fast. We will see in chapter 9 another sense in which main memory is a cache of the disk.
If a program chose to reference random locations, the hierarchy would not work since we would have no clue what portion of the big/slow memory we should place in the small/fast memory. But in practice programs do not reference random locations; rather references exhibit locality.
Start Lecture #21
In this chapter, we will concentrate on the cache-to-main-memory interface. That is, for us the cache will be the small/fast memory and the main (DRAM) memory will be big/slow.
Definitions
Let m be the cache hit time. Let M be the miss penalty, i.e., the additional time for a cache miss. Let p be the probability that a memory access is a cache hit.
Then the average time is
Avg access time = p*m + (1-p)(m+M) = m + (1-p) M
The goal is to run fast, i.e., to have a small average access time. So we want to raise the hit probability p, lower the hit time m, and lower the miss penalty M.
Assume the following (somewhat reasonable) data.
What is the average access time?
First note that 0.1μs = 100ns. Then the above equation tells us that the average access time is
1ns + (1-0.9) * 100ns = 1ns + 0.1(100ns) = 11ns
Let's spend more and get double-speed SRAM (i.e., m=0.5ns), but save money and get half-speed DRAM (i.e., M=200ns).
Then the average access time is
0.5ns + (1-0.9) * 200ns = 0.5ns + 0.1(200ns) = 20.5ns
Bad idea.
Let's try again.
Forget the double speed SRAM.
Instead, spend the money saved on half speed DRAM and get a cache
with one quarter the miss rate.
Then the average access time is
1ns + (1-0.975) * 200ns = 1ns + 0.025(200ns) = 6ns.
Good. But how do we lower the miss rate? Stay tuned.

All memories in this section are byte addressed. Thus a 32-bit number references a byte. So far, so good.
We will assume in our study of caches that each word is four bytes. That is, we assume the computer has 32-bit words. This is not always true (many old machines had 16-bit, or smaller, words; and many new machines have 64-bit words), but to repeat, in our study of caches, we will always assume 32-bit words.
Since 32 bits is 4 bytes, each word contains 4 bytes. We assume aligned accesses, which means that the byte address of each word is a multiple of 4. The four consecutive bytes 6-9 do NOT form a word.
Question: What word includes the byte address given above,
10101010_11110000_00001111_11001010?
Answer: 10101010_11110000_00001111_110010, i.e., the address divided by 4.
Question: What are the other bytes in this word?
Answer: 10101010_11110000_00001111_11001000, 10101010_11110000_00001111_11001001, and 10101010_11110000_00001111_11001011
Question: What is the byte offset of the original
byte in its word?
Answer: 10 (i.e., two), the address mod 4.
Question: What are the byte-offsets of the other three bytes in that same word?
Answer: 00, 01, and 11 (i.e., zero, one, and three).
Blocks vary in size. We will not make any assumption about the block size, other than that it is a power of two number of bytes. For the examples in this subsection, assume that each block is 32 bytes.
Since we assume aligned accesses, each 32-byte block has a byte address that is a multiple of 32. So block 0 is bytes 0-31, which is words 0-7. Block n is bytes 32n, 32n+1, ..., 32n+31.
We start with a tiny cache having a very simple organization, one that was used on the DECstation 3100, a 1980s workstation. In this design, cache lines (and hence memory blocks) are one word long.
Also, in this DECstation 3100 design, each memory block can go in only one specific cache line. Specifically, that line (the cache block number) is the memory block number modulo the number of blocks in the cache.

This organization is called direct mapped; we will also study set associative caches.
We shall assume that each memory reference issued by the processor is for a single, complete word.
On the right is a diagram representing a direct mapped cache with C=4 blocks and a memory with M=16 blocks.
Let's assume each cache block and each memory block is one word long, and to keep it simple we assume that each reference is to a single aligned word.
Referring to the diagram we have 16 memory blocks and 4 cache blocks so we will have to assign M/C = 16/4 = 4 memory blocks to each cache block. (In this example M = C², but that is a coincidence.)
In fact we assign memory block N to cache block N mod C.
For example, in the upper diagram to the right all the green blocks in memory are assigned to the one green block in the cache.
Contrast this diagram with bad design immediately below it.
The good design has the important property that consecutive memory blocks are assigned to different cache blocks. Consider an important array. Its elements will be spread out in the cache and will not fight with each other for the same cache slot. For example in the picture any 4 consecutive memory slots will be assigned to 4 different cache slots.
So the first question reduces to:
Is memory block N present in cache block
N mod C?
Referring to the diagram we note that, since only a green memory block can appear in the green cache block, we already know the rightmost bits (N mod C) of any memory block stored there. To determine which green memory block is present, we need only store the quotient N/C, called the tag, and compare it with the corresponding bits of the reference. (Again, don't be confused by the coincidence that in this example N/C = N mod C.)
When the system is first powered on, all the cache blocks are invalid so all the valid bits are off.
On the right is a table giving a larger example, with C=8 (rather than 4, as above) and M=32 (rather than 16).
We still have M/C=4 memory blocks eligible to be stored in each cache block. Thus there are two tag bits for each cache block.
Shown on the right is an eight-entry, direct-mapped cache with block size one word. As usual all references are for a single word (blksize=refsize=1). In order to make the diagram and arithmetic smaller, the machine has only 10-bit addressing (i.e., the memory has only 2¹⁰ = 1024 bytes), instead of more realistic 32- or 64-bit addressing.
Above the cache we see a 10-bit address issued by the processor.
There are several points to note.
The circuitry needed for a simple cache (direct mapped, blksize=refsize=1) is shown on the right. The only difference between this cache and the example above is size. This cache holds 1024 blocks (not just 8) and the memory holds 2³⁰ = (2¹⁰)³ ≈ 1,000,000,000 blocks (not just 256).
The action required for a read hit is clear, namely return to the processor the data found in the cache.
For a read miss, the best action is fairly clear, but requires some thought.
The simplest write policy is write-through, write-allocate (see below for definitions). The DECstation 3100 discussed above adopted these policies and performed the following actions for any write, hit OR miss. (The 3100 was a personal workstation, not a fancy supercomputer costing millions of dollars, so simplicity of the design was important. This desire for simplicity also explains why, for the 3100, block size = reference size = 1 word and the cache is direct mapped.)
Although the above policy has the advantage of simplicity (it performs the same actions for all writes, hits or misses; and simplifies the handling of read misses), it is out of favor due to its poor performance (other designs make fewer requests to main memory).
Divide the memory block number by the number of cache blocks. The quotient is the tag and the remainder is the cache block number.
MBN / NCB = tag        MBN % NCB = CBN

(MBN = memory block number, NCB = number of cache blocks, CBN = cache block number.)
Analogy: If you have N numerical addresses but only n<N mailboxes available, one possibility (the one we use in caches) is to put mail for address M in mailbox M%n. Then to distinguish addresses assigned to the same mailbox you need the quotient M/n. In caches we call the mailbox assigned the cache index (or cache line or cache block number) and we call the quotient needed for disambiguation the tag.
The key principle is the
Fundamental Theorem of Fifth Grade
Dividend = Quotient * Divisor + Remainder
We divide the dividend (the memory block number) by the divisor (the number of cache blocks) and look in the cache slot whose number is the Remainder (the cache index or line number). We check whether the Quotient (the tag) matches the stored value.
Homework: Consider a cache with the following properties, which are essentially the ones we have been using to date:
The cache is initially
empty, i.e. all the valid bits are 0.
Then the references on the right are issued in the order given.
Remind me to do this one in class next time.
The setup we have described does not take much advantage of spatial locality. The idea of having multiword blocks is to bring into the cache words near the referenced word since, by spatial locality, they are likely to be referenced in the near future.
We continue to assume that all references are for one word and that all memory addresses are 32 bits and reference a byte. For a while, we will continue to assume that the cache is direct mapped.
The figure on the right shows a 64KB direct mapped cache with
4-word (16-byte) blocks.
Questions: For this cache, when the memory word referenced is in a given block, where in the cache does the block go, and how do we find that block in the cache?
Answers:
Show from the diagram how this gives the pink valid 154 true.
The cache size or cache capacity is the size of the data portion of the cache (normally measured in bytes).
For the caches we have seen so far this is the block size times the number of entries. For the diagram above this is 16B * 4K = 64KB. For the simpler direct mapped caches block size = word size. So the cache size is the word size times the number of entries. Specifically the cache in the previous diagram has size 4B * 4K = 16KB.
You should not be surprised to hear that a bigger cache has a higher hit rate. The interesting comparison is between the last cache and an enlarged version of the previous cache with 16K entries, i.e., comparing two caches of size 64KB. Experiments have shown that spatial locality does occur and real programs have higher hit rates on caches with 4-word blocks than they do on caches with 1-word blocks.
For a simple example imagine a program that sweeps once through an array of one million entries, each one word in size. For our simple cache, the hit rate is zero! For the last cache, the hit rate is .75.
Note that the total size of the cache includes all
the bits.
Everything except for the data portion is considered overhead since
it is not part of the running program.
For the caches we have seen so far the total size is
(block size + tag size + 1) * the number of entries
We shall not emphasize total size in this class, but we do in 436, computer architecture.
Read miss: Fetch the needed line from memory and return the referenced word to the processor.

Write miss with write allocate (also called store allocate): Read the new line from memory, replacing the old line in the cache, and then perform the write in the cache.

Write hit with write through (also called store through): Write the new value to both the cache and central memory.
Start Lecture #22
The write-back policy is harder but has the advantage that if we update the variable z 100 times before evicting its cache block, we only send the last value of z to memory. Is this reduction of memory accesses worth the extra complexity?
Homework: Consider two 256KB direct-mapped caches (i.e., each cache contains 256KB of data). As always, a memory (byte) address is 32 bits and all references are for a 4-byte word. The first cache has a block size of one word, the second has a block size of 32 words.
Consider the following sad story. Jane's computer has a cache that holds 1000 blocks and Jane has a program that only references 4 (memory) blocks, namely blocks 13, 1013, 113013, and 7013. In fact the references occur in order: 13, 1013, 113013, 7013, 13, 1013, 113013, 7013, 13, 1013, 113013, 7013, 13, 1013, 113013, 7013, etc. Referencing only 4 memory blocks and having room for 1000 blocks in her cache, Jane expected an extremely high hit rate for her program. In fact, the hit rate was zero. She was so sad, she gave up her job as a web-mistress, went to medical school, and is now a brain surgeon at the mayo clinic in Rochester MN.
So far we have studied only direct mapped caches, i.e., those for which the block number in the cache is determined by the memory block number, i.e., there is only one possible location in the cache for any block. In Jane's sad story I picked four memory blocks so that they were all assigned to the same cache block and hence kept evicting each other. The rest of the cache was unused and essentially wasted.
Although this direct-mapped organization is no longer used because it gives poor performance, it does have one performance advantage: To check for a hit we need compare only one tag with the high-order bits of the address.
The direct-mapped organization, in which a given memory block can be placed in only one possible cache block, is one extreme. The other extreme is called a fully associative cache in which a memory block can be placed in any cache block. Since any memory block can be in any cache block, the cache index tells us nothing about which memory block is stored there. Hence the tag must be the entire memory block number. Moreover, we don't know which cache block to check so we must check all cache blocks to see if we have a hit.
Most common for caches is an intermediate configuration called set associative or n-way associative (e.g., 4-way associative). The value of n is typically a small power of 2. If the cache holds B blocks, then memory block K can be placed only in the set with number K mod (the number of sets), which equals K mod (B/n).
The picture on the right shows memory block 12 stored in three caches, each cache having 8 blocks. The left cache is direct mapped; the middle one is 2-way set associative; and the right one is fully associative.
We have already done direct mapped caches but to repeat:
The middle picture shows a 2-way set associative cache also called a set size 2 cache. A set is a group of consecutive cache blocks.
The right picture shows a fully associative cache, i.e. a cache where there is only one set and it is the entire cache.
For a cache holding n blocks, a set-size n cache is fully associative. Any set-size 1 cache is direct mapped.
When the cache was organized by blocks and we wanted to find a given memory word we first converted the word address to the MemoryBlockNumber (by dividing by the #words/block) and then formed the division
MemoryBlockNumber / NumberOfCacheBlocks
The remainder gave the index in the cache and the quotient gave the tag. We then referenced the cache using the index just calculated. If this entry is valid and its tag matches the tag in the memory reference, that means the value in the cache has the right quotient and the right remainder. Hence the cache entry has the right dividend, i.e., the correct memory block.
Recall that for a direct-mapped cache, the cache index is the cache block number (i.e., the cache is indexed by cache block number). For a set-associative cache, the cache index is the set number.
Just as the cache block number for a direct-mapped cache is the memory block number mod the number of blocks in the cache, the set number for a set-associative cache is the (memory) block number mod the number of sets.
Just as the tag for a direct mapped cache is the memory block number divided by the number of blocks in the cache, the tag for a set-associative cache is the memory block number divided by the number of sets in the cache.
Summary: Divide the memory block number by the number of sets in the cache. The quotient is the tag and the remainder is the set number. (The remainder is normally referred to as the memory block number mod the number of sets.)
Do NOT make the mistake of thinking that a set-size-2 cache has 2 sets; it has NCB/2 sets, each set containing 2 blocks.
Ask in class.
Question: Why is set associativity good?
For example, why is 2-way set associativity better than direct
mapped?
Answer: Consider referencing two arrays of size 50KB that start at location 1MB and 2MB.
Question: What is the advantage of
associativity?
Answer: The advantage of increased associativity is normally an increased hit ratio.
Question: What are the disadvantages?
Answer: It is slower, bigger, and uses more energy due to the extra logic.
Remark: Go over the homework from last time.
We know that a cache is smaller than a central memory and realize that at any one time only a subset of the central memory can be cache resident. Given a central memory address A we want to know whether the word at A is currently in the cache and, if so, where in the cache it is.
Actually we answered them in reverse order. We first determined where A must be in the cache if it's there at all, and then we look to see if it is there.
We started with a simple cache (direct mapped, blocksize one word). This cache contained only 4 words (2-bit addresses) and the central memory had only 16 words (4-bit addresses). Given a word in memory, we divided its MBN (here, its memory word address) by the NCB (the number of words in the cache) and examined the quotient and remainder (div and mod). By coincidence 16 = 4², so both the div and mod are 2-bit numbers; in general they are not the same size. We used the remainder (mod) to specify the index, i.e., the cache location for this word (in the diagram the index is the color) and used the quotient (called the tag) to determine, for example, if the green cache entry is the particular green memory block we desire.
The basic idea is to first number the units in the big and small memories, second divide the number given to a unit in the big memory by the number of units in the small. In the simplest example above, the number of a memory unit was its word address and the number of cache units was the number of words in the cache.
In reality, memory is composed of blocks (each several words) and caches are composed of sets (each several blocks). Specifically, given the cache parameters and memory byte address (32-bits) we proceed as follows.
To divide 134782993 by 100, you reach for a pencil not a calculator! You draw a vertical line with the pencil; the left part is the div (aka quotient) and the right part is the mod (aka remainder). You use the same vertical line technique to divide a binary number by 8=23 (or by 4K=212).
Start Lecture #23
Question: How do we find a memory block in a 4KB
4-way set associative cache with block size 1 word?
Answer: This is more complicated than for the simple direct mapped caches we started with. The three macro steps are:
We proceed as follows. (Do on the board an example: address 0x000A0A08 = 00000000_00001010_00001010_00001000.)
Done on The Board
In 2020-21, the class is taught remotely and the zoom whiteboard crashes. So we fake it. The example has address hex 0x000A0A08 = 00000000_00001010_00001010_00001000 in binary. The cache remains 256KB 4-way set associative, with blocksize one word.
This is a fairly simple combination of the two ideas and is illustrated by the diagram on the right.
The data coming out of the multiplexor at the bottom right of the previous diagram is now a block. In the diagram on the right, the block is 4 words.
Our description and picture of multi-word block, direct-mapped caches is here, and our description and picture of single-word block, set-associative caches is just above. It is useful to compare those two picture with the one on the right to see how the concepts are combined.
Below we give a more detailed discussion of which bits of the memory address are used for which purpose in all the various caches.
When an existing block must be replaced, which victim should we choose? The victim must be in the same set (i.e., have the same index) as the new block. With direct mapped (a.k.a 1-way associative) caches, this determines the victim so the question doesn't arise.
With a fully associative cache all resident blocks are candidate victims. For an n-way associative cache there are n candidates. Victim selection in the fully-associative case is covered extensively in 202. We will only mention some possible algorithms.
When you write a C language assignment statement
y = x+1; the processor must first read the value
of x from the memory.
This is called a
load instruction.
The processor also must write the new value of y into memory.
This is called a "store" instruction.
For a direct mapped cache with 1-word blocks we know how to do everything (we assume Store-Allocate and Write-Through).
If a block contains multiple words the only difference for us is that on a miss the rest of the block must be obtained from memory and stored in the cache.
An extra complication arises on a cache miss (either a load or a store). If the set is full (i.e., all blocks are valid) we must replace one of the existing blocks in the set, and we are not studying how to choose which one. As mentioned previously, in 202 you will learn how operating systems deal with a similar problem. However, caches are all hardware and hence must be fast, so they cannot adopt the complicated OS solutions.
We will not deal with this replacement question seriously in 201.
How Big Is a Cache?
There are two notions of size.
Definition: The cache size is the capacity of the cache.
Another size of interest is the total number of bits in the cache, which includes tags and valid bits. For the 4-way associative, 1-word per block cache shown above, this size is computed as follows.
Question: For this cache, what fraction of the
bits are user data?
Answer: 4KB / 55Kb = 32Kb / 55Kb = 32/55.
Calculate in class the equivalent fraction for the last diagrammed cache, having 4-word blocks (and still 4-way set associative).
As always we assume a byte addressed machines with all references to a 4-byte word.
The 2 LOBs are not used (they specify the byte within the word, but all our references are for a complete word). We show these two bits in white. The cache size is 16KB = 2¹⁴ bytes = 2¹² words.
This is the simplest cache.
Modestly increasing the block size is an easy way to take advantage of spatial locality.
Increasing associativity improves the hit rate but only a modest associativity is practical.
The two previous improvements are often combined.
On the board calculate, for each of the four caches, the memory overhead percentage. For all four, the cache size is 16KB.
Homework: Redo the four caches above with the size of the cache increased from 16KB to 64KB determining the number of bits in each portion of the address as well as the overhead percentages.
Given the cache parameters and memory byte address (32-bits).
The block size is 1 word. The cache is 64KB, direct mapped. To which set is each of the following 32-bit memory addresses (given in hex) assigned and what are the associated tags?
Answer. Let's follow the three step procedure above for each address.
The block size is 64B. The cache is 64KB, 2-way set associative. To which set is each of the following 32-bit memory addresses (given in hex) assigned and what are the associated tags?
Answer. Same 3-step procedure.
Homework: Redo the second example just above for a 2MB set-size-16 cache with a block size of 64B (these are the sizes of one of the caches on an Intel i7 processor). What is the total size of this cache?
We have already (briefly) discussed both these choices.
The issue of unified vs split I and D caches is covered in 436, Computer Architecture. We are not covering it in 201.
Similarly we are leaving the analysis of multilevel-caches to 436.
Trade-offs, Trade-offs, and more Trade-offs
For compute-intensive programs with significant run times, the programmer can often speed up execution by making the program cache-friendly, i.e., by increasing locality.
This is a much-studied problem, especially by programmers of numerical algorithms on supercomputers. Often one can reorder operations to improve spatial locality. Specifically (in say C) one can try to reference a 2D matrix by rows (i.e., the second subscript varies faster) rather than by columns.
double A[1024][2048], sum; // by rows
for (int i = 0; i < 1024; i++)
    for (int j = 0; j < 2048; j++)
        sum += A[i][j];
double A[1024][2048], sum; // by columns
for (int j = 0; j < 2048; j++)
    for (int i = 0; i < 1024; i++)
        sum += A[i][j];
For example consider the trivial example on the far right and
assume a cache block is the size of 8 C doubles and the cache holds
128 blocks.
The elements of matrix A are referenced in the order
A[0,0], A[0,1], A[0,2], ... A[0,2047], A[1,0], ..., A[1023,2047]
A[0,0] will be a fault, but (since the elements are stored consecutively), the next 7 references will be to the same block and hence will be hits. This pattern repeats and the hit rate is 7/8.
In contrast the similar example on the near right references the
same elements but in the order
A[0,0], A[1,0], ...
Consecutive references are to far apart memory locations and hence target distinct cache blocks so we do not get any hits at all.
Please don't let the above trivial example give you the
misimpression that improving cache performance just involves
interchanging the order of nested loops.
Just do a google search for
high performance matrix multiply to get an idea of the
serious effort that is involved.
Skipped
The desire for memory to be big and fast (and other properties)
meets the reality that memory is either
big and slow or
small and fast.
Real systems contain a multilevel memory hierarchy where higher
levels are smaller and faster than lower levels.
Successful designs put the important data in higher levels. Choosing which data to put in higher levels is guided by the locality principles.
In this chapter we emphasized the cache (sram) / main memory (dram) boundary. Chapter 9 will look at the lower boundary between main memory (now considered small and fast) and local disk (big and slow).
We have been a little casual about memory addresses. When you write a program you view the memory addresses as starting at a fixed location, probably 0. But there are often several programs running at once. They can't all start at 0! In OS we study this topic extensively.
Way back when (say 1950s), the picture on the right was representative of computer memory. Each tall box is the memory of the system. Three variants of the OS location are shown, but we can just use the one on the left.
Note that there is only one user program in the system, so we can imagine that it starts at a fixed location (we use zero for convenience).
Using the appropriate technical terms we note that the virtual addresses, i.e., the addresses in the program, are equal to the physical addresses, i.e., the addresses in the actual memory (the RAM). The virtual address is also called the logical address and the physical address is also called the real address.
The diagram on the right illustrates the memory layout for multiple jobs running on a very early IBM multiprogramming system called MFT (multiprogramming with a fixed number of tasks).
When the system was booted (which took a number of minutes) the division of the memory into a few partitions was established. One job at a time was run in each partition, so the diagrammed configuration would permit 3 jobs to be running at once. That is it supported a multiprogramming level of 3.
If we ignore the OS or move it to the top of memory instead of the bottom, we can say that the job in partition 1 has its memory starting in location 0 of the RAM, i.e., its logical addresses (the addresses in the program) are equal to its physical addresses (the addresses in the RAM).
However, for the other partitions, this situation does not hold. For example assume two copies of job J are running, one copy in partition 1 and another copy in partition 2. Since the jobs are the same, all the logical addresses are the same. However, every physical address in partition 2 is greater than every physical address in partition 1.
Specifically, equal logical addresses in the two copies have physical addresses that differ by exactly the size of partition 1.
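The relocation arithmetic can be sketched in a few lines; the partition base and size below are invented for illustration. The hardware (or a relocating loader) simply adds the partition's base to every logical address.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical partition layout: partition 1 starts at physical 0,
 * partition 2 starts right where partition 1 ends. */
#define PARTITION1_BASE 0u
#define PARTITION1_SIZE 0x40000u   /* 256 KB, an assumed size */
#define PARTITION2_BASE (PARTITION1_BASE + PARTITION1_SIZE)

/* A logical address maps to base + logical. */
uint32_t physical(uint32_t base, uint32_t logical) {
    return base + logical;
}
```

The same logical address in the two copies of job J therefore maps to physical addresses that differ by exactly the size of partition 1.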
The picture on the right shows a swapping system. Each tall box represents the entire memory at a given point in time. The leftmost box represents boot time when only the OS is resident (blue shading represent free memory). Subsequent boxes represent successively later points in time.
The first snapshot after boot time shows three processes A, B, and
C running.
Then B finishes and D starts.
Note the blue
hole where B used to be.
The system needs to run E but each of the two holes is too small.
In response the system moves C and D so that E can fit.
Then F temporarily preempts C (C is
swapped out then swapped
back in).
Finally D shrinks and E expands.
In summary, not only does each process have its own set of physical addresses, but, even for a given unchanging process, the physical addresses change over time.
However, each process stays consistent, i.e., the physical address space remains contiguous for each process. The processes are not interleaved with each other. When you are seated in a plane that is climbing, your waist stays the same distance below your shoulders.
Now it gets crazy.
Moving a process is an expensive operation. Part of the reason moving is needed at all is that, in a swapping system, each process must be contiguous in physical memory.
As a remedy the (virtual) memory of the process is divided into fixed size regions called virtual pages and the physical memory is divided into fixed size regions called physical pages. Virtual pages are often called simply pages, and physical pages are often called frames.
All pages are the same size; all frames are the same size; and the page size equals the frame size. So every page fits perfectly in any frame.
The pages are indiscriminately placed in frames without trying to keep consecutive pages in consecutive frames. The mapping from pages to frames is indicated in the diagram by the arrows.
But this can't work! Programs are written under the assumption that, in the absence of branches, consecutive instructions are executed consecutively. In particular, after executing the last instruction in page 4, we should execute the first instruction in page 5. But page 4 is in frame 0 and the last instruction in frame 0 is followed immediately by the first instruction in frame 1, which is the first instruction in page 3.
In summary the program needs to be executed in the order given by its pages, not by its frames.
This is where the page table is used. Before fetching the next instruction or data item, its virtual address is converted into the corresponding physical address as follows. Similar to the procedure with caches, we divide the virtual address by the page size and look at the quotient and remainder. (The former is the page number and the latter the offset in the page.) We look up the page number p# in the page table to find the corresponding entry, called the page table entry or PTE. The PTE contains the associated frame number f#. The offset in the frame is the same as the offset in the page. (Since the page size is always a power of 2, the division is done using a pencil).
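The split of a virtual address into page number and offset can be sketched as follows; the 4KB page size is just an assumed common value. Because the page size is a power of 2, the "division" needs no divide hardware at all.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* 4 KB pages, a common (assumed) choice */

/* Quotient = virtual page number; remainder = offset in the page.
 * Since PAGE_SIZE is a power of 2, the compiler turns these into
 * a shift and a mask. */
uint64_t vpn(uint64_t vaddr) { return vaddr / PAGE_SIZE; }
uint64_t vpo(uint64_t vaddr) { return vaddr % PAGE_SIZE; }
```

The PTE found at index vpn() supplies the frame number; the offset vpo() is reused unchanged in the frame.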
Start Lecture #24
We see in the paging example directly above that there are two kinds of addresses: virtual addresses and physical addresses. The set of virtual addresses in the program is called its virtual address space. We also call it the virtual memory of the process. Similarly we have the physical address space (or physical memory), which is composed of all the physical addresses. These two address spaces will help answer some previously raised questions.
The program contains virtual addresses. The real memory is accessed via physical addresses. Hence we must convert (on the fly) each virtual address to a physical address. This requires separating the page# from the offset, which is easy and fast, and reading the page table, which is easy but slow. We are essentially saying that each memory access in the program requires two accesses, first to the page table and then to the memory itself.
We must eliminate this two-to-one slowdown, or the whole idea is doomed.
Translating each virtual address into the corresponding physical address is the job of the Memory Management Unit or MMU. So far this looks to be a simple task.
As noted this simplicity belies the performance penalty of converting each memory reference into two references. We will fix this with a caching-like approach, specifically we shall cache the page table in a structure called a TLB (translation lookaside buffer).
In addition we shall make the scheme more complicated in order to permit modern computers to concurrently execute programs whose total memory requirement exceed the memory present on the system.
In section 9.1 we have seen a series of historical advances enabling computer systems to fit more and more jobs in memory simultaneously.
Modern systems have gone a step beyond the simple paging scheme just described and use instead demand paging, in which it is no longer true that the entire program is in memory all the time it runs. Instead all the program's virtual pages are on disk. Only some pages are, in addition, in physical pages as in the figures above. For other pages the page table simply records that the virtual page is not resident in memory, i.e., there is no physical page containing this virtual page.
A program reference to a non-resident virtual page is called a page fault and triggers much activity. Specifically, an unused physical page must be found (often by evicting the virtual page currently residing in this physical page) and the referenced virtual page must be read from the disk into this newly available physical page.
If the above sounds similar to caching, you are right!
For caching, the SRAM acts as a small/fast
cache of the big/slow DRAM.
For the demand paging scheme just described the DRAM acts as a
small/fast
cache of the big/slow disk.
This question comes up in caching as well (how big should a cache block be?).
Because of the differing speed characteristics of disks and RAM, the typical page size is a few thousand bytes (4K and 8K are common) instead of the tens of bytes common for cache blocks.
Skipped.
The separation of Virtual Memory from Physical Memory offers several advantages.
Our treatment emphasizes one process running on one CPU.
Today, even simple systems have multiple
processors running
many processes.
We will ignore multiple processors in this course,
but must acknowledge multiple processes.
In 202 Operating Systems we consider multiple processes and
multiple processors in more detail.
One change needed is that my pictures indicate one page table, which maps virtual page numbers to physical page numbers. In reality there is a separate page table for each process. For 201, we simply note that when the OS switches from running one process to running another it must determine the location of the new process's page table. Our diagrams ignore this detail.
Since the page table is read for each memory reference, we can use it to hold protection information. (A similar technique is used to have some files belonging to one user not writable by other users.)
The diagram on the right shows the first three entries in the page tables for two processes i and j with three permission bits per virtual page.
Sup (supervisor mode required): must the process be in supervisor mode to access this page? We will not emphasize the Sup protection bit in 201; supervisor mode will be discussed in 202 Operating Systems.
We see that each process can read each of its first three virtual pages.
Process j cannot write physical page 9, perhaps its virtual page 0 contains text or read-only data.
Most interesting is physical page 6, which is shared between the two processes. Each process can read the physical page, but using different virtual addresses in the two processes: Physical page 6 is virtual page 0 in process i, but it is virtual page 1 in process j. Process j can write the shared page, but process i cannot. Perhaps this page contains data produced by process j and consumed by process i, the so called producer-consumer problem, which you will study in 202.
Note that the diagram is definitely not drawn to scale. Each blue box is a physical page, which is a few thousand bytes in size. The protection flags are a single bit each. The physical page number boxes are pointers to physical memory, these pointers need log(N) bits where N is the maximum possible number of physical pages. Since log(N) will probably not exceed 64 in our lifetime, 8 bytes is a good estimate for the size of these boxes.
Recall that in section 9.2 when describing demand paging we wrote
all the program's virtual pages are on disk.
Only some pages are, in addition, in physical
pages.
To distinguish those virtual pages that have corresponding physical pages from those that do not, we define another bit stored in each PTE, the valid bit. When the valid bit is true (the good case) there is a copy of the virtual page in memory and the PTE contains the number of that physical page, as in the example just above in section 9.5. Copying the terminology from caches, we call this case a page hit.
If there is no physical page for a given virtual page, we have the bad case, commonly called a page fault (or a page miss). In this case the physical page number field contains junk.
The diagram on the right illustrates the steps performed in the good case. Although the terminology used is most appropriate for a load, the steps are very similar for a store as well.
When a process runs, the MMU references that process's page table, which we treat as a simple 1-dimensional array of PTEs.
Then to load a data item in the good case involves the 5 steps shown on the right.
The full story for the bad case involves the operating system and we cover it in more detail in 202. The first three steps from the previous diagram remain intact. Steps 4 and 5 differ.
If the virtual address is invalid (for example, outside the range of virtual addresses for this process), the OS kills the process.
If the virtual address is valid, but there is no physical address (the valid bit is off), a page fault has occurred and again the OS (and 202) are involved. Very briefly (and grossly over-simplified): an existing virtual page is evicted from its physical page and this now-empty physical page is used to hold the requested virtual page.
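Putting the good and bad cases together, a lookup can be sketched like this. This is a toy, assuming a 1-level table with made-up sizes; a real MMU does the walk in hardware, and the fault path would trap to the OS rather than just return false.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 16u          /* a tiny, made-up virtual address space */

typedef struct {
    bool     valid;            /* is the virtual page resident?   */
    uint32_t ppn;              /* physical page number, if valid  */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Returns true and fills *paddr on a page hit; returns false on a
 * page fault (or an out-of-range address), where the OS takes over. */
bool translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn = vaddr / PAGE_SIZE;
    uint32_t vpo = vaddr % PAGE_SIZE;
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return false;                              /* page fault */
    *paddr = page_table[vpn].ppn * PAGE_SIZE + vpo;
    return true;
}
```

Note how the offset passes through unchanged and only the page number is translated, exactly as in the diagrams.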
Now that we understand the difference between virtual and physical address, we can discuss the trade-off between caching based on each.
An address from the program itself is a virtual address; the system then translates it to the physical address using the page table, as described above. Thus, with a virtual address based cache, the cache lookup can begin right away; whereas, with a physical address based cache, the cache lookup must be delayed until the translation to physical address has completed.
Many concurrently running processes will have the same virtual addresses (for example, all processes have their stacks starting at the same virtual address). However, all these virtual addresses correspond to different physical addresses and represent parts of different programs. Hence they must be cached separately. But with a straightforward virtual address cache, all the virtual addresses for the base of the stack would be assigned to the same cache slot. Instead, the virtual address caching scheme adds complexity to the cache hardware to distinguish identical virtual addresses issued by different processes.
We have two remaining problems to solve.
Modern systems dedicate a small virtually addressed cache to hold a small number of PTEs. This cache is very fast because it is small and located within the MMU (hence on the CPU chip itself). Thus, when the PTE is found in the TLB (a TLB hit), transmissions numbered 2 and 3 in the above diagram are avoided.
TLB misses (fortunately a rare occurrence) proceed as above: all 5 steps are performed.
Often the TLB has a high degree of associativity, which improves its hit rate.
At first glance the size of a page table does not seem significant: For each physical page, we need 1 PTE. Since each physical page is several KB and each PTE is just several bytes, the latter is merely about 0.1% overhead.
The problem is that, as described so far, the entire page table must be stored in physical memory, even though the vast majority of the entries indicate that the corresponding virtual page is invalid (and hence there is no corresponding physical page).
For the Intel architecture we have used, the virtual address space (see this previous diagram) starts at 0x400000 and ends at 2^48 - 1. Since the machine is byte addressable, the virtual size of the process includes nearly 2^48 bytes.
If the page table were just 0.1% of this number, it would require physical memory (the kind you must buy) of about 2^(48-10) = 2^38 bytes, or 256GB, for each process!
Recall that most of the virtual space in the process diagram is in gray, i.e., is unused.
I hope the diagram on the right is helpful.
Everything in the diagram is in physical memory.
For example, the label
virt page 96 abbreviates
the physical page containing virtual page 96.
Consider a process with exactly 100 = 10^2 virtual pages. Instead of defining a single page table with 100 entries, we partition those 100 entries into 10 groups of 10.
For the moment imagine all 10 of these tables exist: The first table points to the first 10 pages (0-9). The second table points to pages 10-19, etc. These are called level 2 tables. We then need a level 1 table pointing to the level 2 tables. The level 1 table has a blue border in the diagram.
Now look for the physical page corresponding to virtual page 11. Yesterday we would have referenced the single page table, selected its 11th (or 12th) entry, and followed the pointer to the physical page containing virtual page 11.
Today it is harder. We first go to the blue (i.e. level 1) table and follow entry number 1 (because 11 starts with 1), which takes us to the correct level 2 table. From there we follow entry number 1 (because 11 ends with 1) and arrive finally at the physical page corresponding to virtual page 11.
I suggest you try to find the physical page containing virtual page 57.
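The decimal toy example above can be sketched directly in code; the 10-groups-of-10 sizes below mirror the picture, not any real machine.

```c
#include <assert.h>
#include <stddef.h>

/* Toy two-level table for a process with 100 virtual pages,
 * split as 10 groups of 10 (the two decimal digits of the VPN). */
#define GROUPS 10
#define PER_GROUP 10

static int  level2_tables[GROUPS][PER_GROUP]; /* PPNs                      */
static int *level1[GROUPS];                   /* NULL if no level-2 table  */

/* Walk: the first digit indexes the (blue) level 1 table,
 * the second digit indexes the chosen level 2 table. */
int lookup(int vpn) {
    int hi = vpn / PER_GROUP, lo = vpn % PER_GROUP;
    if (level1[hi] == NULL)
        return -1;   /* the whole group of 10 pages is absent */
    return level1[hi][lo];
}
```

The space saving shows up in the NULL entries: a group of 10 non-resident pages costs one null pointer instead of 10 PTEs.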
Pages are commonly 4KB or 8KB as are level 2 tables; let's say each is 8KB to be definite. Each PTE is typically 8B so each level 2 table contains 1K=1024 PTEs. These 1K PTEs refer to 1K pages, or 8MB of the process's virtual address space.
The big advantage occurs because most virtual pages are not in physical memory. For example, none of the virtual pages 20-29 have physical pages. Hence slot 2 in the blue table is null and there is no corresponding level 2 table.
Even including the overhead of the blue table, the diagram shows an improvement: There are only 40 total table entries; whereas yesterday's simple page table would have had 100 entries.
The advantage is greater for bigger examples and especially when you consider 3- and 4-level tables.
Additionally, note that only the level 1 table needs to be permanently memory resident. The level 2 tables can be created as needed and can be paged in and out as needed.
Start Lecture #25
Remark: A practice final is on Classes (resources tab). IMPORTANT: The practice final only covers the material since the midterm. Don't be misled. The real final will be cumulative, i.e., will cover all the material in the course.
The book does a lengthy example including caches and a TLB. We will concentrate on just the paging aspect.
The page table contains one PTE for each virtual page. A PTE contains several components; for simplicity we consider only the valid bit and the PPN.
For simplicity we assume a 1-level page table. Since there are 256 virtual pages, the table has 256 rows. On the right, we show only the first 16 (0-15) in decimal; speaking in hex, we show the first 0x10 (0x00-0x0F).
Question: What are the physical address, the PPN, and the PPO for the virtual address 0x0234?
Answer: 0x0234 = 00_0010_0011_0100 in binary. The low-order 6 bits of the address are the VPO and the high-order 8 bits are the VPN. (Throughout this example binary page numbers are red and binary page offsets are green.) So VPO = 11_0100 = 0x34 = 52.
Similarly, VPN = 0000_1000 = 0x08 = 8. As always PPO = VPO = 11_0100 = 0x34 = 52.
The page table tells us that virtual page number 0x08 is valid and can be found in physical page 0x13 = 01_0011. Hence the physical address is 01_0011_11_0100 = 0100_1111_0100 = 0x4F4.
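Assuming these same example parameters (64-byte pages, hence a 6-bit offset, and a page table mapping VPN 0x08 to PPN 0x13), the worked translation can be checked in code:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_BITS 6u   /* 64-byte pages in the book's example */

/* Concatenate the PPN with the (unchanged) page offset. */
uint32_t phys(uint32_t ppn, uint32_t vaddr) {
    uint32_t ppo = vaddr & ((1u << PAGE_BITS) - 1); /* low 6 bits */
    return (ppn << PAGE_BITS) | ppo;
}
```

The shifts make explicit that "division by the page size" is just slicing the address into its high and low bit fields.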
Skipped
Skipped
For many programs, the (maximum) size of each data structure is known to the programmer at the time the program is written. That is the easy case. Sometimes (in C) we use #define to make explicit these maximums and thereby ease the burden of changing the maximums when needed.
Other times we want to let the running program itself determine the amount of memory used. For example we have malloc() and its variants in C and new in java.
Moreover, sometimes we want to return no-longer-needed memory prior to program termination. This is free() in C. What about Java?
The malloc()/free() team deals with allocating virtual memory from (and returning virtual memory to) a region that grows and shrinks dynamically called the heap (see this previous diagram).
In the diagram on the right (which is from the book) we see malloc()/free() in action. Each small square represents 4B (the size of an int) and we will ensure alignment on an 8B boundary. Initially, malloc's internal pointer P points to the beginning of the heap, which we assume is properly aligned. You should imagine the diagram extending to the right with many, many more white boxes.
wastes 4B (shown in darker green) and gives the user p2, which points to 24 available bytes.
Note that achieving the above semantics is not trivial. That is, malloc() and free() are not trivial programs like the alloc()/afree() pair we did earlier in the semester. In particular, the alloc()/afree() pair required that the user could afree() only the most recent block obtained from alloc(). That is, obtaining and returning blocks required a stack-like LIFO ordering.
When the size of a data structure depends on a parameter known only at run time, the best / most natural response is to first read (or compute) the parameter and then pass it to malloc() to obtain the right-size data structure.
Also you may wish to return some of the memory before termination.
overhead, i.e., memory used but not specifically requested.
Internal fragmentation is wasted space within an allocated region. One example was the padding used to maintain alignment. Some allocators only dispense blocks of certain sizes (e.g., the buddy system with its powers of 2).
p1 = malloc(100);
p2 = malloc(100);
p3 = malloc(100);
free(p1);
free(p3);
p4 = malloc(200);
External fragmentation is wasted space outside any allocated region. It occurs when there is enough free memory in total but not in one piece.
For example, consider the code on the right and assume that after the third malloc(), no space remained. Then the two end blocks are freed giving a total of 200B free, but split in two 100B pieces. Hence the fourth malloc() fails.
This is a difficult problem whose occurrence is impossible to predict, since avoiding it depends on knowing the future. Common heuristics try to keep free memory in a few large pieces rather than a large number of small pieces.
The format of a block on an implicit free list is shown on the right. It consists of a header followed by the payload and any padding needed to ensure proper alignment. The header contains the size of the block (including the header and any padding) as well as a flag indicating whether the block is free or has been allocated. The block size can be thought of as a pointer to the next block, and that is how we show it in the diagrams below.
The name implicit free list is a little funny. It is really a list of all blocks, both free and allocated. All blocks have a header, which contains the block length and status (free or allocated).
In the diagrams that follow, we have four (green) free blocks and one (red) allocated block.
The block given to the user includes the payload and any possible padding. It does not include the size and allocated bit. Indeed, the user must not alter that word.
Note that with an implicit list we must also keep a
color bit
with each block stating if the block is free.
Recall that the green portion includes any padding needed for alignment (or other) purpose.
Finding a Free Block
Three methods have been considered.
For example, best fit chooses the best fitting block (i.e., the smallest free block that is big enough).
Often the free block chosen by the given algorithm is bigger than the user requested. There are two possible continuations.
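A minimal sketch of a first-fit search over an implicit list follows. This toy layout uses one header word per block holding the block size in words, with the low bit borrowed as the allocated flag (sizes are kept even so the bit is free for that use); a zero header marks the end of the heap.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ALLOCATED 1u
#define HEAP_WORDS 16

static uint32_t heap[HEAP_WORDS];

/* First fit: walk header to header until a big-enough free block
 * appears.  The size stored in each header includes the header
 * itself, so it doubles as the "pointer" to the next block. */
uint32_t *find_fit(uint32_t words_needed) {
    uint32_t *p = heap;
    while (p < heap + HEAP_WORDS && *p != 0) {     /* 0 = end mark */
        uint32_t size = *p & ~ALLOCATED;
        if (!(*p & ALLOCATED) && size >= words_needed)
            return p;                              /* found a free fit */
        p += size;                                 /* step to next block */
    }
    return NULL;                                   /* no fit: fail */
}
```

Note how the traversal necessarily visits allocated blocks too: that is exactly why the list is called implicit.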
What happens if malloc() cannot satisfy the current request for a free block of size 5? If the state is as in the first diagram, the only free blocks are of size 4,2,4,2 and we cannot satisfy the request. In this case, the next section will show us how to coalesce the first two free blocks into one of size 7. But if the request was for a size 20 free block, we would fail: there simply isn't enough space available.
On the right is a familiar diagram showing the (virtual) memory allocated for a running process. Note that the green heap section has an arrow indicating that it can grow.
What happens is that malloc() executes an sbrk() system call and poof the line moves up and the heap gets bigger. (You will learn more about system calls in 202).
The simplest implementation is to just change the color of the returned block from red to green. But there is a problem. The previous free list diagram ends with two adjacent green blocks of size 4 and 2, but we cannot satisfy a request for a size 5 block. This is called false fragmentation. Fortunately, a simple coalescing of the last two blocks gives a size 7 free block.
So a linear traversal of the free blocks would enable us to coalesce adjacent free blocks. In an implicit free list we would have to traverse the entire (i.e., free plus allocated) list.
Instead of scanning the list to find blocks to coalesce, we might want to coalesce when we free a block. Since the block header is not given to the user, it still points to the next block. If the latter is free, we can easily coalesce. Therefore, if blocks are freed in reverse order, then checking the successor will accomplish all possible coalescing.
As just noted, when a block is freed, free() can check if the successor is free and, if so, coalesce. The boundary tag method of Knuth extends this to the predecessor as well. The difficulty is that although the block's header points to the successor, there is no pointer to the predecessor. The boundary tag method adds a footer at the end of the block that contains a reverse pointer (making the list doubly linked).
Assume that the first red block is freed by the user and is merged
with the first green block to give a free (i.e., green) block
containing four boxes.
If we were using boundary tags (i.e., double linking the list) the
implicit free list would become:
Remember that the allocated/free bit, which I color-code red/green
is actually stored in both the header and footer.
The block being freed has two neighbors and, assuming an implicit free list, each can be allocated or free. This gives the four cases shown on the upper right.
In all four cases the middle (white) block becomes green and is then merged with any adjacent green block, which can be located above, below, or both.
The four possible results are shown in the lower diagram. The lengths in the header and footer must be updated to reflect any merges that occur.
Largely skipped; just a few comments about an implicit list allocator.
Have multiple free lists each holding free blocks of roughly the same size.
The buddy system uses block sizes that are powers of 2. It is very fast but can have significant internal fragmentation.
As mentioned previously, Java has an analogue of malloc(), namely new, but has no analogue of free(). What is going on?
Java systems (and others) automatically determine when dynamically
allocated memory can no longer be referenced by the user's program
(such memory is called
garbage).
The system then automatically frees such memory.
This procedure is called garbage collection and is a serious subject that we will not treat in depth.
Skipped.
Skipped.
Start Lecture #26
Remark: Two cheat sheets are in the resources tab of classes: one has some C library names the other is for assembler. Both may be used on the final.
The normal (and simplest) instruction ordering is one after the other in order of their address. Now we consider more complex changes to the basic sequential control flow. Much of this involves the OS and you will see more in 202.
Some alterations of control flow are familiar and do not involve the OS, e.g., jump, call, return.
"Exceptional control flow" occurs in reaction to changes in system state: e.g., keyboard Ctrl-C, divide-by-zero.
A running user program can be interrupted or preempted by the OS so that the latter can switch to another user program.
Exceptions are a form of exceptional control flow that are implemented partly in hardware and partly in software (typically in the OS).
Well prior to the exception occurring, the OS sets up a jump table. This is similar to the jump table we saw when implementing a switch statement. Each exception type has a number and the system branches into the jump table indexed by that number and from there branches to the handler for that exception. Again this is similar to our implementation of the switch statement in the assembler section of the course.
The entries are sometimes called interrupt vectors. The table is established during OS boot time.
The memory used for the table is not accessible to user programs, which is important since the jump table is executed in supervisor (privileged) mode.
segmentation fault.
Note: We are drifting into OS from CSO. I cover this material in much more detail when teaching 202.
A process is a program in execution.
Note:
Process is a software concept.
Do not confuse it with the hardware concept
processor.
The OS works hard to provide the following illusions to each running process.
Skipped.
zombie state until its parent reaps it.
To show how the four stars enable much of process management, consider the following highly simplified shell (the Unix command interpreter).
while (true)
    display_prompt()
    read_command(command)
    if (fork() != 0)
        waitpid(...)      <-- Omit this line to get a background job.
    else
        execve(command)
Thus, the parent and child execute different branches of the if-then-else in the code above.
Removing the waitpid(...) lets the child run in the background
while the parent (the
shell) can start another job.
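A hedged C sketch of that loop's core follows (error checking and command parsing omitted; run_command is a made-up helper name). The child becomes the requested program via execv(), while the parent waits; omitting the waitpid() would instead leave the child running in the background.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork, run the given program in the child, wait in the parent,
 * and return the child's exit status (-1 if it did not exit
 * normally). */
int run_command(const char *path, char *const argv[]) {
    pid_t pid = fork();
    if (pid == 0) {                    /* child: becomes the command */
        execv(path, argv);
        _exit(127);                    /* reached only if execv fails */
    }
    int status;                        /* parent: the shell */
    waitpid(pid, &status, 0);          /* omit to get a background job */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

This is the sense in which parent and child "execute different branches": fork() returns 0 in the child and the child's pid in the parent.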
Remark: Next class I shall go over the practice final. I suggest you work on it this weekend. The last class will be devoted to answering any questions you have.
Start Lecture #27
Remarks
Review of practice final
Start Lecture #28
General question answer session
Redo 8.4.A with ipad to draw diagram dynamically.
Remark: End of material eligible for the CSO final exam.
A clock on a computer is an electronic signal. If you plot a clock with the horizontal axis time and the vertical axis voltage, the result is a square wave as shown on the right.
A cycle is the period of the square wave generated by the clock.
You can think of the computer doing one instruction during one cycle. That is not quite correct: the truth is that instructions take several cycles, but they are pipelined, so in the ideal case one instruction finishes each clock cycle.
We shall assume the clock is a perfect square wave with all periods equal.
Note: I added interludes because I realize that CS students have little experience in these performance calculations.
Modern processors have several caches. We shall study just two, the instruction cache and the data cache, normally called the I-Cache and D-Cache.
Every instruction that the computer executes has to be fetched from memory and the I-Cache is used for such references. So the I-cache is accessed once for every instruction.
In contrast only some instructions access the memory for data.
The most common instructions making such accesses are
the load and store instructions.
For example the C assignment statement
y = x + 1;
generates a load to fetch the value of x and a store to update the value of y. There is also an add that does not reference memory. The diagram on the right shows all the possibilities. If both caches miss, the misses are processed one at a time because there is only one central memory.
We assume separate instruction and data caches.
Do the following performance example on the board. It would be an appropriate final exam question.
double speed machine? It would be double speed if there were a 0% miss rate.
A lower base (i.e., miss-free) CPI makes misses appear more expensive, since waiting a fixed number of cycles for the memory corresponds to losing more instructions if the CPI is lower.
A faster CPU (i.e., a faster clock) makes misses appear more expensive since waiting a fixed amount of time for the memory corresponds to more cycles if the clock is faster (and hence more instructions since the base CPI is the same).
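Both observations follow from the usual effective-CPI formula: every instruction probes the I-cache, while only the fraction of instructions that reference data probe the D-cache. The sketch below uses made-up numbers for illustration (it is not a solution to the homework below, which uses different figures).

```c
#include <assert.h>

/* Effective CPI = base CPI
 *               + I-cache miss rate * miss penalty
 *               + (fraction of instructions referencing memory)
 *                 * D-cache miss rate * miss penalty            */
double effective_cpi(double base_cpi,
                     double i_miss_rate, double d_miss_rate,
                     double mem_ref_fraction, double miss_penalty) {
    return base_cpi
         + i_miss_rate * miss_penalty
         + mem_ref_fraction * d_miss_rate * miss_penalty;
}
```

With a base CPI of 1, a 2% I-miss rate, a 5% D-miss rate, half the instructions referencing memory, and a 10-cycle penalty, the effective CPI is 1 + 0.2 + 0.25 = 1.45: the miss terms are fixed cycle counts, which is why they loom larger when the base CPI shrinks or the clock speeds up.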
Homework: Consider a system that has a miss-free CPI of 2, a D-cache miss rate of 5%, an I-cache miss rate of 2%, has 1/3 of the instructions referencing memory, and has a memory that gives a miss penalty of 20 cycles.
Note: Larger caches typically have higher hit rates but longer hit times.
Reviewed caches again and answer students' questions.
As requested I wrote out another example. Here it is.
At the end of the last class I was asked to do another problem with sizes, in particular finding which address bits are the tag and which are the cache index.
In this class we will always make the following assumptions with regard to caches.
One conclusion is that the low-order (i.e., the rightmost) two bits of the 32-bit address specify the byte in the word and hence are not used by the cache (which always supplies the entire word).
We will use the following cache.
I use a three step procedure.
Memory Block Number.
For the cache just described
We will use the three step procedure mentioned in Extra.2.
The top picture shows the 32-bit address.
The rightmost 2 bits give the byte in word, which we don't use since we are interested only in the entire word, not a specific byte in the word. That is shown in the second picture. Note that there are 4 = 2^2 bytes in the word. The exponent 2 is why we need 2 address bits.
The next 3 bits from the right give the word-in-block. There are 8 words in the block (see Extra.2) and 8 = 2^3 so we need 3 bits. The remaining 27 bits are the MBN.
So NCS = 2^12, which answers question 3 of Extra.4.
The MBN is 27 bits and NCS is 2^12.
Dividing a 27-bit number by a 12-bit number gives a (27-12)-bit quotient and a 12-bit remainder.
(This last statement is analogous to the more familiar statement that dividing a 5-digit number by 100 = 10^2 gives a (5-2)-digit quotient and a 2-digit remainder. To divide a 5-digit number by 100, you don't use a calculator; you just chop off the rightmost 2 digits as the remainder, and the remaining (5-2) digits form the quotient. Example: 54321/100 equals 543 with a remainder of 21.)
The remainder is the cache set (the row in a diagram of the cache). It is shown in green. In blue we see the quotient, which is the tag.
So, to answer questions 1 and 2: the high-order 15 (blue) bits form the 15-bit tag, and the next 12 (green) bits give the cache set.
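The split can also be checked mechanically. Here is a small sketch of the three-step procedure for the cache above (4-byte words, 8-word blocks, 2^12 sets; the function name is mine):

```python
def split_address(addr):
    """Split a 32-bit byte address for the example cache:
    4-byte words, 8-word blocks, 2**12 sets."""
    byte_in_word = addr & 0b11            # low-order 2 bits (4 = 2**2 bytes/word)
    word_in_block = (addr >> 2) & 0b111   # next 3 bits (8 = 2**3 words/block)
    mbn = addr >> 5                       # remaining 27 bits: memory block number
    cache_set = mbn % 2**12               # remainder = cache set (the green bits)
    tag = mbn // 2**12                    # quotient = tag (the blue 15 bits)
    return tag, cache_set, word_in_block, byte_in_word

# An address built from tag=3, set=5, word=6, byte=1 splits back apart:
print(split_address(((3 * 2**12 + 5) * 8 + 6) * 4 + 1))  # (3, 5, 6, 1)
```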
In the cache each 8-word block comes with a 15-bit tag and a 1-bit valid flag. Each of these cells (I don't know if they have a name) thus contains 8 32-bit words + 16 bits. (I realize 16 bits is 2 bytes, but the number of bits is not always a multiple of 8.) So each cell is 8*32+16 bits.
There are 2 cells in each set and 2^12 sets in the cache, so the total size of the cache is
2^12 × 2 × (8×32 + 16) bits
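A quick arithmetic check of this total (plain Python, mirroring the formula):

```python
sets = 2**12                  # number of cache sets
cells_per_set = 2             # 2 cells (blocks) per set
bits_per_cell = 8 * 32 + 16   # 8 32-bit words + 15-bit tag + 1-bit valid

total_bits = sets * cells_per_set * bits_per_cell
print(total_bits)                 # 2228224 bits
print(total_bits // 8 // 1024)    # = 272 KB
```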
So this seems like it will make scope/namespaces a bit interesting... Any good references on why this is this way? I.e., why assignment passes across scopes instead of copy. Or is it just explicit versus implicit?

On Jan 21, 2008 9:32 PM, John Fouhy <john at fouhy.net> wrote:
> On 22/01/2008, John Morris <jrmorrisnc at gmail.com> wrote:
> > I thought each class got its own namespace and this sharing of mutable
> > objects is confusing me.
>
> Each class gets its own namespace, but names are different from
> objects. For example:
>
> >>> x = [1, 2, 3]
> >>> y = x
> >>> y.append(4)
> >>> x
> [1, 2, 3, 4]
>
> In this case, x and y are both different names for the same object.
> Classes increase the name space, but they don't change the fact that
> in python, assignment is just giving something a new name.
>
> --
> John.

--
John Morris
jrmorrisnc at gmail.com
"Do nothing which is of no use." -- Miyamoto Musashi
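To make the copy-versus-binding contrast in the thread concrete (a follow-up sketch; the class and names are mine, not from the thread):

```python
import copy

class Config:
    items = [1, 2, 3]        # one list object, reachable through the class namespace

alias = Config.items              # assignment: a second name for the SAME object
clone = copy.copy(Config.items)   # an explicit (shallow) copy: a NEW object

alias.append(4)                   # mutate through one name...
print(Config.items)               # [1, 2, 3, 4] -- visible through every name
print(clone)                      # [1, 2, 3]    -- the copy is unaffected
```

So nothing special "passes across scopes": every assignment, including across class namespaces, just binds another name; a copy only happens when you ask for one.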
Google Cloud Big Data and Machine Learning Blog
Innovation in data processing and machine learning technology
Classifying text content with the Natural Language API
If you work in the media industry, chances are you’ve spent more hours than you’d like manually tagging text content like blogposts, news articles or marketing copy. With the Natural Language API, you can now tag all of this content with a single API call.
Using the new classify_text endpoint, the Natural Language API will return a content category for your text. The content categories include a set of Tier 1 high-level categories (like “Arts & Entertainment”) along with a set of Tier 2 categories that provide more granularity (like “Visual Art & Design”), with around 700 categories in total.
To try it out, I wrote a Python script that uses data provided by the New York Times API to get the top stories for each section. Then, I combined the title and abstract for each article and sent it to the classify_text endpoint for categorization. For example, the following title and abstract from this article:
Rafael Montero Shines in Mets’ Victory Over the Reds. Montero, who was demoted at midseason, took a one-hitter into the ninth inning as the Mets continued to dominate Cincinnati with a win at Great American Ball Park.
Results in this JSON response from the NL API:
{ categories: [ { name: '/Sports/Team Sports/Baseball', confidence: 0.99 } ] }
Each response includes a Tier 1 and Tier 2 category, and we can look at the original article to confirm that these categories are correct.
Once I get the article title and abstract text from the NYT API, calling the Natural Language API is just a few lines of code. Here’s an example using Python:
from google.cloud import language_v1beta2
from google.cloud.language_v1beta2 import enums
from google.cloud.language_v1beta2 import types

language_client = language_v1beta2.LanguageServiceClient()
document = types.Document(
    content="Your text to classify here",
    type=enums.Document.Type.PLAIN_TEXT
)
result = language_client.classify_text(document)
for category in result.categories:
    print('category name: ', category.name)
    print('category confidence: ', category.confidence, '\n')

The API can also return multiple categories. Here’s an example of an article from the food section of The New York Times that fits more than one category:
A Smoky Lobster Salad With a Tapa Twist. This spin on the Spanish pulpo a la gallega skips the octopus, but keeps the sea salt, olive oil, pimentón and boiled potatoes.
And here’s the NL API’s response:
{ categories: [ { name: '/Food & Drink/Cooking & Recipes', confidence: 0.85 }, { name: '/Food & Drink/Food/Meat & Seafood', confidence: 0.63 } ] }
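When a response carries several categories like this, a common next step is to rank and threshold them by confidence. A small sketch over plain dicts standing in for the response above (the 0.7 cutoff is an arbitrary choice, not something from the API):

```python
response = {"categories": [
    {"name": "/Food & Drink/Cooking & Recipes", "confidence": 0.85},
    {"name": "/Food & Drink/Food/Meat & Seafood", "confidence": 0.63},
]}

def confident_categories(resp, threshold=0.7):
    # highest-confidence labels first, low-confidence ones dropped
    ranked = sorted(resp["categories"], key=lambda c: c["confidence"], reverse=True)
    return [c["name"] for c in ranked if c["confidence"] >= threshold]

print(confident_categories(response))
# ['/Food & Drink/Cooking & Recipes']
```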
Start classifying your own text by diving into the docs here. We’d love to hear what you build with the NL API. Let us know what you think in the comments or find me on Twitter @SRobTweets.
#include <SOCK_IO.h>
Inheritance diagram for ACE_SOCK_IO:
If <timeout> == 0, then the call behaves as a normal send/recv call, i.e., for blocking sockets, the call will block until action is possible; for non-blocking sockets, EWOULDBLOCK will be returned if no action is immediately possible. If <timeout> != 0, the call will wait until the relative time specified in *<timeout> elapses. Errors are reported by -1 and 0 return values. If the operation times out, -1 is returned with <errno == ETIME>. If it succeeds the number of bytes transferred is returned. Methods with the extra <flags> argument will always result in <send> getting called. Methods without the extra <flags> argument will result in <send> getting called on Win32 platforms, and <write> getting called on non-Win32 platforms.
Tracking an output file during execution
You can track the last few lines of a file during Firework execution. For example, you can monitor an output file to make sure the run is progressing as expected. Setting one or more such trackers is simple.
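Conceptually, a tracker just reports the tail of a growing file. The effect is the same as this plain-Python sketch (an illustration, not the FireWorks implementation):

```python
from collections import deque

def tail_lines(lines, nlines=25):
    """Keep only the final `nlines` entries, as a tracker's report does."""
    last = deque(maxlen=nlines)   # old entries fall off the front automatically
    for line in lines:
        last.append(line)
    return list(last)

print(tail_lines([f"step {i}" for i in range(100)], nlines=3))
# ['step 97', 'step 98', 'step 99']
```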
Adding a tracker (via files)

To add a tracker, set a key called _trackers in your fw_spec to be an array of objects with filename and nlines keys. Each tracker will track the desired number of final lines of a particular file. The example below has two trackers, one for inputs.txt and another for words.txt (see the _trackers section at the bottom):
name: Tracker FW
spec:
  _tasks:
  - _fw_name: TemplateWriterTask
    context:
      opt1: 5.0
      opt2: fast method
    output_file: inputs.txt
    template_file: simple_template.txt
  - _fw_name: ScriptTask
    script: wc -w < inputs.txt > words.txt
    use_shell: true
  _trackers:
  - filename: words.txt
    nlines: 25
  - filename: inputs.txt
    nlines: 25
You can see this example in <INSTALL_DIR>/fw_tutorials/tracker.
Adding a tracker (via code)
The following code example creates the Firework above with two trackers:
from fireworks import Firework, Tracker, ScriptTask, TemplateWriterTask

# create the two tasks shown in the YAML spec above
firetask1 = TemplateWriterTask({'context': {'opt1': 5.0, 'opt2': 'fast method'},
                                'template_file': 'simple_template.txt',
                                'output_file': 'inputs.txt'})
firetask2 = ScriptTask.from_str('wc -w < inputs.txt > words.txt')

# define the trackers
tracker1 = Tracker('words.txt', nlines=25)
tracker2 = Tracker('inputs.txt', nlines=25)

fw = Firework([firetask1, firetask2], spec={"_trackers": [tracker1, tracker2]})
fw.to_file('fw_tracker.yaml')
Viewing the tracked file
You can view the tracked files for all FireWorks (during or after execution) with the command:
lpad track_fws
which will print out data like:
# FW id: 1
## Launch id: 1
### Filename: words.txt
7

### Filename: inputs.txt
option1 = 5.0
option2 = fast method
Choosing the Firework(s) for which to view the tracked files
Besides the <FW_ID>, there are additional options for specifying the Firework(s) for which you want the tracked data. For example, you can search for the tracker data of a particular Firework id or of all FIZZLED FireWorks via:

lpad track_fws -i <FW_ID>
lpad track_fws -s FIZZLED
Type lpad track_fws -h to see all the options, including advanced queries.
Choosing the tracked files to display

The --include (or -c) and --exclude (or -x) options can be used to filter which files are displayed in the Tracker Report. The --include option means to only display those files, whereas the --exclude option means to filter out those files from the report:

lpad track_fws --include words.txt
lpad track_fws --exclude words.txt
You can separate multiple filenames by spaces.
Frequency of monitoring

The output file is monitored for changes at every update ping interval, as well as at the beginning and completion of execution. By default, the ping interval is set to be every hour; this is to avoid overloading the database with pings if tens of thousands of runs are happening simultaneously. You can change the ping interval (PING_TIME_SECS) in the FW config.
A note about nlines
The tracker is meant to give basic debug information about a job, not to permanently store output files. There is a limit of 1000 lines to keep the Mongo document size reasonable, and to keep FireWorks performing well. We suggest you leave nlines to be less than 100 lines and only use this feature for basic debugging.
Short version: the MVC T4 template (now named T4MVC) is now available on CodePlex, as one of the downloads in the ASP.NET MVC v1.0 Source page.
Go to T4MVC home page
Poll verdict: it’s ok for T4MVC to make small changes
Yesterday, I posted asking how people felt about having the template modify their code in small ways. Thanks to all those who commented! The fact that Scott Hanselman blogged it certainly helped get traffic there 🙂
The majority of people thought that it was fine as long as
- It’s just those small changes: make classes partial and action methods virtual. Don’t mess with ‘real’ code!
- It asks for permission, or at least tells you what it’s doing.
What’s new in this version?
The template on CodePlex (version 2.0.01 at the top of the file) supports what I described in my previous post, plus some new goodies.
Refactoring support for action methods:
- It extends the controller class
- It overrides the action method (hence the need for it to be virtual!)
- The override never calls the base (that would be very wrong), but instead returns a special ActionResult which captures the call data (controller name, action name, parameter values).
- The template emits a new RedirectToAction (or ActionLink, …) overload which understands this special ActionResult, and turns the call data into a ‘regular’ RedirectToAction call.
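Concretely, a redirect that is normally written with magic strings becomes an ordinary-looking method call. A sketch of the idea (the DinnersController and its actions here are hypothetical, not the template's generated code):

```csharp
public partial class DinnersController : Controller {
    public virtual ActionResult Details(int id) { /* ... */ return View(); }

    public virtual ActionResult Save(int id) {
        // Instead of: return RedirectToAction("Details", "Dinners", new { id = id });
        // the generated overload accepts the captured call directly:
        return RedirectToAction(MVC.Dinners.Details(id));
    }
}
```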
The T4 file automatically runs whenever you build:
- As part of its execution, the T4 file finds itself in the VS project system (it had to do that anyway)
- It then runs the magic instruction ‘projectItem.Document.Saved = false;’, which causes it to become dirty.
- It then proceeds to do its code generation, leaving its file in an unsaved state
- Next time you Build your project, VS first saves all the files
- This causes the ‘dirty’ T4 template to execute, mark itself as dirty again, and redo its code generation
- You get the idea! If you feel like the lab rats, this may help.
One caveat is that you have to initiate the cycle by opening and saving T4MVC.tt once. After you do that, you don’t need to worry about it.
Support for strongly typed links to static resources
Credit for this idea goes to Jaco Pretorius, who blogged something similar.
The template generates static helpers for your content files and script files. So instead of writing:
<img src="/Content/nerd.jpg" />
<img src="<%= Links.Content.nerd_jpg %>" />
<script src="/Scripts/Map.js" type="text/javascript"></script>
<script src="<%= Links.Scripts.Map_js %>" type="text/javascript"></script>
More consistent short form to refer to a View from a Controller class.
Many bug fixes
I also fixed a number of bugs that people reported and that I ran into myself, e.g.
- It supports controllers that are in sub-folders of the Controllers folder and not directly in there
- It works better with nested solution folders
I’m sure there are still quite a few little bugs, and we’ll work through them as we encounter them.
Thanks for this! I did uncover one bug (kind of). When creating copies of files in the project the template doesn’t deal with spaces in file names. Also, some of Rob Connery’s subsonic mvc templates use a ‘ui-lightness’ directory under the Scripts folder. Here is my change, it was just a quick hack to get it done 🙂 I’ve just posted it here for reference, do with it what you wish…
void ProcessStaticFilesRecursive(ProjectItem projectItem, string path) {
if (IsFolder(projectItem)) {
WriteLine("[CompilerGenerated]");
string _projectCleanName = projectItem.Name.Replace(" ", "");
_projectCleanName = _projectCleanName.Replace("-", "_dash_");
WriteLine(String.Format("public static class {0} {{", _projectCleanName));
PushIndent(" ");
// Recurse into all the items in the folder
foreach (ProjectItem item in projectItem.ProjectItems) {
ProcessStaticFilesRecursive(item, path + "/" + projectItem.Name);
}
PopIndent();
WriteLine("}");
WriteLine("");
}
else {
WriteLine(String.Format("public static string {0} {{ get {{ return VirtualPathUtility.ToAbsolute(\"{1}\"); }} }}",
GetConstantNameFromFileName(projectItem.Name),
path + "/" + projectItem.Name));
}
}
Ok, there are a few other issues I didn’t see before my last post. To replicate, add a file with a space in it (such as Copy of site.css) and a folder with a dash in it (such as Content\ui-lightness). Then re-run the T4 template and you’ll see the errors generated. Thanks!
Chad: I fixed that character issue and posted the update on CodePlex (now version 2.0.01)
Thanks Dave! Do you have any suggestions for what Scott H talked about in his blog, looking for the Views.Textbox(), Views.Label and Views.Validation stuff? I am using the latest version of Subsonic 3.0 and love the idea of automatically generating the validation, as well as the label and control options for it but don’t want to reinvent the wheel if someone has even a good starting off point.
Thanks again!
Chad, take a look at Eric Hexter’s solution:. It looks promising, and may be better than a T4 based solution for input builders.
Awesome – great work on this so far. I haven’t had a chance to play with this latest version, but one thing to watch out for on the content & script links is that the application may not be deployed at the root. For example, "/Scripts/Map.js" will break if you deploy your app to. You may already handle this, but I thought I’d mention…
Daniel, the template does correctly handles non-root sites. I added a paragraph in the post to mention this. Thanks!
Dave,
This is great stuff! May I suggest you also add the following method to your T4Extensions class in order to support object htmlAttributes for ActionLink the same way as the standard html helper does:
public static string ActionLink(this HtmlHelper htmlHelper, string linkText, ActionResult result, object htmlAttributes) {
return ActionLink(htmlHelper, linkText, result, new RouteValueDictionary(htmlAttributes));
}
One more thing: Strongly typed links usually work great, but in this instance they didn’t and I was unable to find out what’s going wrong. I tried to replace the following:
<link rel="STYLESHEET" type="text/css" href="~/Content/CustDatabase.css" />
with this:
<link rel="STYLESHEET" type="text/css" href="<%= Links.Content.CustDatabase_css %>" />
There’s no error when compiling the view, but the generated link is strangely messed up:
<link href="Views/Shared/%3C%25=%20Links.Content.CustDatabase_css%25%3E" type="text/css" rel="STYLESHEET"/>
The code generated for the stylesheet in T4MVC.cs is
public static string CustDatabase_css { get { return VirtualPathUtility.ToAbsolute("~/Content/CustDatabase.css"); } }
I am completely clueless why it comes up as Views/Shared/%3C%25=%20Links.Content.CustDatabase_css%25%3E in the generated html. Perhaps you have an idea?
One last question: Is it possible to strong typing in Html.BeginForm?
Adrian:
– ActionLink: I added the suggested ActionLink overload (now version 2.0.02 on CodePlex)
– CSS link: I think this happens because code is simply not allowed in this context, so you just can’t use a <%= %> here at all. I can’t think of a great way around this.
– BeginForm: in most cases, you don’t want this to look like a method call to the Action, because the param values come from the form. But note that you can get some strong typing for the action and controller names by using:
Html.BeginForm(MVC.Dinners.Actions.Delete, MVC.Dinners.Name)
I get the message "The Views folder has a sub-folder named ‘{0}’, but there is no matching controller". My web project contains the mvc t4 template, but my controllers are stored in a different project. Is that not supported?
@Adrian Grigore: I removed the runat="server" from the head and it then ran perfectly.
<head >
<link href="<%= Links.Shared.CSS.Website_Default_css %>" rel="stylesheet" type="text/css" />
</head>
This is an excellent template! One question, though – is there a reason the s_actions, s_views, and the public fields of the _Actions and _Views classes are not read-only?
For the CSS link, you’ll need to use something like Dave Reed’s CodeExpressionBuilder:
<link rel="stylesheet" type="text/css" href='<%$ Code: Links.Content.CustDatabase_css %>' />
New build 2.1.00 is now on CodePlex.
– New BeginForm helpers (thanks Michael Hart and Adrian)
– Various strings changed to readonly, as suggested by Richard
– Misc fixes (see history in .tt file for details)
Marco: indeed, that’s not currently supported. I think it could work by:
– Putting the .tt in your controllers project, not the web project
– Changing the .tt logic to find the views in the web project
If someone gets to try this and has success, please send me your updates. Thanks! 🙂
Hey Dave, great template!
How do you see this fitting in with MVC Futures? I sense planets will collide very soon… I personally enjoy the approach you have given us, perhaps over some of the "Future" constructs… As Hanselman said, would be nice for this to be put through QA and baked in =)
Graham: I think this and the Futures can live together, though they do intersect in some aspects. One thing the Futures can’t do is the View name and static file support, because that’s based on physical file existence and not code constructs. On the other hand, the Futures have some View render helpers (e.g. TextBoxFor) which I’m not sure we can easily match with the T4 approach.
Just thought that I would let you know I have started using this and with great success. Not sure if you mentioned it or not but another place you can reference the generated code is when registering routs, i.e.
this._Routes.MapRoute("Default", "{controller}/{action}", new { controller = MVC.Home.Name, action = MVC.Home.Actions.Index });
>>-Putting the .tt in your controllers project, not the web project
No, does not work…
>>- Changing the .tt logic to find the views in the web project
I think the controllers should be loaded from all the projects in the current solution.Anyone tried to change the logic?
Hi,
If your Content folder is empty, you get a build error.
ProcessStaticFiles writes out this line without wrapping in a class.
public static readonly string Content = Url("Content");
The current implementation doesn’t work well with Dependency Injection!
You automatically generate a default constructor and one with your dummy parameter.
StructureMap for example now calls the wrong constructor.
Any ideas on how to work around this without explicitly decorating the real constructor with an DI-specific attribute?
despite this issue, i really like this approach.
best regards, christian
I have controllers that inherit from a base controller, and the code generated in t4mvc.cs produces ‘warnings’ at build time. i.e.
"…Controllers.MyController.RedirectToAction() hides inherited member ‘…Controllers.MyBaseController.RedirectToAction(). Use the new keyword if hiding was intended."
I’m not really sure if this is a problem, other than making me wade through warnings to find any I’m really interested in, but I thought I’d pass it along. Mostly likely I’m just doing something wrong.
Thanks … jim
New version 2.2.00 is up on CodePlex
Richard Kimber: fixed the issue with empty Content folder. Good catch!
Christian: made a change which *should* fix your issue with Dependency Injection. If it doesn’t please email me and we’ll take it offline.
Jim: Fixed issue with Controller base class. For the fix to work, please make sure you make your base Controller abstract. Thanks!
Marco: I didn’t mean that just putting the .tt in teh Controllers project was enough. It’s only a piece of a solution which also involves changing the .tt logic. Hopefully, I can look at that at some point, though if someone else gets to it first, all the better! 🙂
Anthony: indeed, this is a great use of it in the routes, I hadn’t thought of it. I actually just added some better support for this in 2.2. Now you can write:
routes.MapRoute(
    "Default",
    "{controller}/{action}",
    MVC.Home.Index()
);
Am I right in thinking this could be used to generate static reflection helper classes with things like property names and attributes? I’m going to have a play with the template ASAP!
I’ve encountered several bugs in 2.2.00:
BUG 1:
If any controller is already declared partial then there will be an entry in the Controllers list for each constituent file. This triggers an exception on template execution on line 478, because the call to ‘SingleOrDefault’ will return multiple results.
FIX:
Change the type of the ‘Controllers’ list to ‘HashSet<ControllerInfo>’ for an implicit ‘distinct’, slap an IEquatable<ControllerInfo>’ interface in there and modify the ProcessControllerTypesInNamespace method slightly:
line 287:
static List<ControllerInfo> Controllers;
=>
static HashSet<ControllerInfo> Controllers;
line 295:
Controllers = new List<ControllerInfo>();
=>
Controllers = new HashSet<ControllerInfo>();
Expanded ControllerInfo definition:
class ControllerInfo : IEquatable<ControllerInfo> {
…
public bool Equals(ControllerInfo obj) {
return obj != null && FullClassName == obj.FullClassName;
}
public override int GetHashCode() {
return FullClassName.GetHashCode();
}
}
Modification to ‘ProcessControllerTypesInNamespace’ method:
—–
Controllers.Add(controllerInfo);
controllerInfo.HasExplicitConstructor = HasExplicitConstructor(type);
// Process all the action methods in the controller
ProcessControllerActionMethods(controllerInfo, type);
—–
=>
—–
// either process new controllerinfo or integrate results into existing object for partially defined controllers
var target = Controllers.Add(controllerInfo) ? controllerInfo : Controllers.First(c => c.Equals(controllerInfo));
target.HasExplicitConstructor |= HasExplicitConstructor(type);
// Process all the action methods in the controller
ProcessControllerActionMethods(target, type);
—–
BUG 2:
The static file access generation (and possibly other) code yields incorrect paths on (T4 or designer) generated files:
Imagine an ‘Output.tt’ file in Project/Content that in turn generates some files called ‘foo.css’ & ‘bar.css’.
The solution explorer will show these files as
Project
+- Content
   +- Output.tt
      +- foo.css
      +- bar.css
The physical path for ‘foo.css’ is ‘Project/Content/foo.css’, but the T4MVC template generates a path of ‘~/Project/Content/Output.tt/foo.css’ which is clearly wrong 🙁
FIX:
Don’t rely on the IsFolder method. In it’s current implementation the results have more of a ‘HasChildren’ meaning.
Instead use ProjectItem.Kind to check if the item with children is actually a physical folder and only then construct an inner class with modified path.
Since IsFolder is used in several places, I didn’t touch it but instead refactored the ProcessStaticFilesRecursive method:
—-
void ProcessStaticFilesRecursive(ProjectItem projectItem, string path) {
bool isPhysicalFolder = projectItem.Kind == "{6BB5F8EF-4483-11D3-8BCF-00C04F8EC28C}"; // see ProjectItem.Kind documentation on msdn
if (isPhysicalFolder) {
WriteLine("[CompilerGenerated]");
WriteLine(String.Format("public static class {0} {{", SanitizeFileName(projectItem.Name)));
PushIndent(" ");
WriteLine(String.Format("public static string Url() {{ return VirtualPathUtility.ToAbsolute(\"{0}\"); }}", path + "/" + projectItem.Name));
WriteLine(String.Format("public static string Url(string fileName) {{ return VirtualPathUtility.ToAbsolute(\"{0}/\" + fileName); }}", path + "/" + projectItem.Name));
// Recurse into all the items in the folder
foreach (ProjectItem item in projectItem.ProjectItems) {
ProcessStaticFilesRecursive(item, path + "/" + projectItem.Name);
}
PopIndent();
WriteLine("}");
WriteLine("");
} else {
WriteLine(String.Format("public static readonly string {0} = Url(\"{1}\");",
SanitizeFileName(projectItem.Name),
projectItem.Name));
// non folder items may also have children (virtual folders, Class.cs -> Class.Designer.cs, template output) – just register them on the same path as their parent item
foreach (ProjectItem item in projectItem.ProjectItems) {
ProcessStaticFilesRecursive(item, path );
}
}
}
—-
I’m pretty sure there is an equivalent bug for views and controllers (which could theoretically also be auto-generated and thus appear as child nodes to non folders), but I’ll leave that fix to somebody else 😉
BUG 3:
If a controller is derived from a base class that already implements some action methods, those methods are never discovered:
abstract class FooBaseController : Controller {
ActionResult SkippedMethod(…);
}
class FooController : FooBaseController {
ActionResult FoundMethod(…);
}
There is no code generated for ‘SkippedMethod’.
FIX:
Haven’t looked into this one yet
Nice! I updated and it broke everything.
Why the new requirement to only support ActionResult return types? My actions return more strongly typed results like ViewResult, etc.
Why can’t it keep track of what the action method return type is and carry that over into the helpers?
So I was able to improve it to add support for other Action return types. Not easy. Sometimes these "helpers" take on a life of their own. Basically in the get action methods routine I parsed out the return type name and stored it in the ActionMethodInfo class. Then I turned ControllerActionCallInfo into an interface and added new classes like ControllerActionResultCallInfo, ControllerViewResultCallInfo, etc that implemented that interface and fixed up various other parts to reference the interface and return the correct type instead of the hardcoded ActionResult and everything is working again!
Great stuff….
Alex M: thanks a lot for working through those issues! New build 2.2.01 on CodePlex has the fixes for issues 1 and 2. I mostly used your code, with minor changes. I actually fixed the issue you bring up for controllers as well.
BTW, they have constants for all the GUIDs, so you can write Constants.vsProjectItemKindPhysicalFolder. I know, not very discoverable, but once you know where they are, they’re all there!
Bug #3: definitely a bug, but I haven’t had a chance to look into it. Anyone? 🙂
Hien Khieu: strange, I don’t know what could cause that. Is this with VS2008 SP1? I’ll ask the Visual Studio team if they can make sense of the stack. It’s dying when the code tries to change the class to be partial.
Maybe you can work around by making it partial yourself.
Pat: I went ahead and fixed this in 2.2.01 (now on CodePlex). My fix is similar to what you described. One difference is that I made it generic, by auto-generating derived classes for all the Result types it encounters, instead of hard coding a few (or maybe that’s what you did too?).
Anyway, please make sure it works for you. BTW, feel free to email me your changes next time, to give me a starting point for the fix 🙂
Hien Khieu: it would appear that your issue is related to running VisualSVN. Please see this thread where a similar thing was reported:
David,
Thank you for looking into my issue. VisualSVN is what I am thinking when I read the stack trace. One think I don’t understand that I was able to use your old MVC T4 template (the one that I download sometimes last week) with no problem. Thank you anyway.
Hi,
Thanks for this great work. I want to bring one issue to your attention. I have a controller method
public FileContentResult GetSmallImage(long photoID)
{
return somefile;
}
It makes the method as virtual(no prblem).
It doesn't compile and says Error ‘System.Web.Mvc.FileContentResult’ does not contain a constructor that takes ‘0’ arguments
This is line where the error lies.
public T4MVC_FileContentResult(string controller, string action) {
this.InitMVCT4Result(controller, action);
}
It seems it didn't create the controller method name properly.
Well David, I took another look at my Bug #3 and came up with a partial solution – which of course in turn led to another problem.
Since I don’t know the EnvDTE classes well enough to tell if my new problem is simple or easy to workaround, I’ll just throw in what I got this far:
Modify ‘ProcessControllerActionMethods’ to:
void ProcessControllerActionMethods(ControllerInfo controllerInfo, CodeClass2 current) {
// walk up the controller inheritance chain until we arrive at the mvc default controller
for (CodeClass2 type = current; current != null && current.FullName != "System.Web.Mvc.Controller"; current = current.Bases.Item(1) as CodeClass2) {
foreach (CodeFunction2 method in GetMethods(type)) {
…
// keep existing unmodified code here (skipped for brevity)
…
// Make sure the method is virtual
if (!method.CanOverride) {
method.CanOverride = true; // *** THIS NEEDS SOME MORE CHECKS, SEE REMARKS
Warning(String.Format("{0} changed the action method {1}.{2} to be virtual", T4FileName, type.Name, method.Name));
}
…
// more code that needs no change
…
}
}
}
Explanation:
The modified code simply walks up the inheritance chain and looks at all the intermediate types’ methods until it stops at the stock mvc Controller base class. Choosing Bases.Item(1) shouldn’t cause any problem since no .NET language actually supports multiple inheritance.
Now there are some potential problems with the ‘method.CanOverride = true’ call:
A) The current controller code class ‘type’ is part of the same project as the template
=> everything should be okay, but might still fail for external reasons (source code control lock on the code file). The current implementation fails on those errors anyway – with a try/catch block you could just skip those elements, emit a warning and ‘continue’ with the next method.
B) The current controller code class ‘type’ is defined in an external source, either referenced project or assembly:
1) The code class ‘type’ is defined in another project which is part of the same solution as the template
=> switching the method to virtual should succeed (same caveats as in #A apply), but on my machine this always failed (the stack trace contained some code from JetBrains' ReSharper, so that component might be to blame for it).
2) The code class ‘type’ is defined in a referenced assembly.
=> making the method virtual will always fail
So as I see it, there needs to be a check (and try/catch block) around the ‘make virtual’ functionality for cases A & B.1 to work correctly.
For B.1 it might also be necessary to walk the solution and find the actual code class from the base class's definition to change properties, since code classes with an 'external' storage are generated from metadata, and that might be why the 'CanOverride=true' call failed – There's already very similar code in the template to find the actual ProjectItem for the template file.
I cannot see any way to fix B.2, the best way here would probably just be to skip those methods with a warning, or at least process them with reduced output (and functionality) that doesn’t need virtual methods (you could still expose the action names).
Just noticed that I made a mistake in that demo code (wrote it from memory, since I removed those template changes). That new outer for loop should be:
for (CodeClass2 type = current; type != null && type.FullName != "System.Web.Mvc.Controller"; type = type.Bases.Item(1) as CodeClass2) {
I have the same problem as Hien with VisualSVN. The thread has a reply where there seems to be a workaround. Can you implement that in your t4 template? Thanks
Follow up with VisualSVN…
Guidance from the VisualSVN team:
"It turns out that problem is caused by ActiveWriter using DTE from temporary AppDomain. To fix this all calls to DTE should be marshaled to default AppDomain. As a simple workaround you can do code generation on separate working thread."
parminder: just fixed this issue in build 2.2.02 on CodePlex
Alex M: thanks for looking into this. I haven’t had a chance to look into your code yet, but I plan to early next week!
Bob (and Hien): VisualSVN issue: I’m not very sure how I could do this from a T4 file. The T4 file is executed by VS in a different AppDomain, and I don’t think this can be changed. If someone understands the issue better and has a fix, please let me know.
Alex M: I have integrated your fix to deal with base class action methods. I also added exception handling on the code that tries to make methods virtual (and make controllers partial). When that happens, it skips the method and gives a warning suggesting that the user makes that change themselves if possible. Obviously, when dealing with a true binary you don’t control, it won’t be possible, but I think that’s an edge case. Thanks again!
Alex M: forgot to mention that the new build is 2.2.03 on CodePlex.
Bob (and Hien): VisualSVN issue: 2.2.03 deals with those errors more gracefully. The workaround for you is to make the methods partial yourself (see previous comment).
I am using T4MVC.tt in my project. I am facing the problem that I have sub folders in the controllers folder.
e.g. (Controllers\Member\Account\ActivationController.cs)
There is no compile time error. But when I run the project, it only creates objects for those controllers which are at the root of the "Controllers" folder, e.g. "Home" = T4MVC.T4MVC_HomeController,
but Activation (ActivationController) is null.
All the other controllers are null as well, because they are all in sub folders.
Please tell me the way out, as I have more than 200 controllers and I want to keep them in sub folders.
Thanks in advance
Ramandeep: T4MVC is supposed to support controllers that are in sub folders of the Controllers folder. Please see the code in ProcessControllersRecursive. Not sure why it wouldn’t work for you. Please look through the tt file and the generated file to try to figure out what’s going on. Make sure you use the latest version (2.2.03).
Or if you can put together a small repro, you can email it to me and I’ll take a look.
Nice changes there David – I've got some more bugs & fixes 😉
1)
The controller inheritance chain analysis works great now. But it still fails the ‘CanOverride=true’ call on base types from external project.
If you insert the following code in ‘ProcessControllerActionMethods’ before the ‘foreach (CodeFunction2 method …’ loop you’ll get a better refactoring experience:
—-
// if the type is defined in another project, try getting a direct reference which might give more access for modifications
if (type.InfoLocation != vsCMInfoLocation.vsCMInfoLocationProject) {
var dte = type.DTE;
foreach (Project prj in dte.Solution.Projects) {
if (prj != Project && prj.CodeModel != null) {
var prjCodeType = prj.CodeModel.CodeTypeFromFullName(type.FullName);
if (prjCodeType != null && prjCodeType.InfoLocation == vsCMInfoLocation.vsCMInfoLocationProject) {
type = (CodeClass2) prjCodeType;
break;
}
}
}
}
—-
With this modification I could successfully refactor any base class method from *non-generic* controllers. Generic base controllers still don’t work, but since the exception is as specific as ‘unexpected error (hresult 0x8004005)’ I don’t have much hope for those.
2)
I completed the ‘// TODO: Make the base type check more reliable’ task inside ‘ProcessControllerActionMethods’:
—-
// We only support action methods that return an ActionResult derived type
if (!method.Type.CodeType.get_IsDerivedFrom("System.Web.Mvc.ActionResult"))
continue;
—-
3)
Thanks to the finally working base class methods I found a bug in one of our usage scenarios. For the following definition:
public class FooBarController {
[ActionName("Bar")]
public ActionResult Foo() {
…
}
}
the template will output code like
public partial class FooBarController {
public class _Actions {
public readonly string Foo = "Foo";
…
which is wrong, since the action (and the url where it can be invoked) is actually ‘FooBar/Bar’, not ‘FooBar/Foo’.
You’ll probably have to check each controller action method for attributes deriving from System.Web.Mvc.ActionNameSelectorAttribute, but I’d argue it should be enough to only check for the derived ‘ActionName’ attribute.
Though anybody could easily define their own ActionNameSelectorAttribute derived implementation (imagine an attribute that accepts every action which contains the letter ‘a’ at least three times), there might be no clearly defined reverse lookup from the attribute instance to the action name, and even if there is one, the template could never know it.
‘ActionName’ does both ship with the MVC Framework and has a clearly defined value-to-action relationship, so this would always work.
If you intend to implement this functionality, be sure to also add the code from #1, since the ‘Attributes’ collection is always empty for code with an InfoLocation other than vsCMInfoLocationProject, so you need the direct type references to the defining project here.
Alex M: would you mind contacting me by email (david.ebbo [@ microsoft.com])? It'll be easier to continue discussing this. Thanks!
Alex M: please check out v2.3 on CodePlex.
Thanks! I tried the latest version in place of my custom fixed version and everything still works! I had hardcoded classes for the return types so your fix is definitely better. Thanks again! Nice to know I’m back in sync with the latest online version. | https://blogs.msdn.microsoft.com/davidebb/2009/06/26/the-mvc-t4-template-is-now-up-on-codeplex-and-it-does-change-your-code-a-bit/ | CC-MAIN-2018-05 | refinedweb | 4,619 | 56.55 |
Additional information: No connection could be made because the target machine actively refused it
- Hello,
I am currently getting this error when attempting to instantiate a remote object.
Currently, everything is located locally.
Here is a code snippet to further explain.
The highlighted code is where the error occurs.
Can anyone help?
Thanks
===========================================
namespace ResumeClient
{
    public class ResumeClient
    {
        public static void Main(string[] args)
        {
            ChannelServices.RegisterChannel(new TcpClientChannel());
            ResumeLoader loader = (ResumeLoader)Activator.GetObject(typeof(ResumeLoader), "tcp://localhost:9932/ResumeLoader");
            if (loader == null)
            {
                Console.WriteLine("Unable to get remote reference");
            }
            else
            {
                Resume resume = loader.GetResumeByUserID(1);
                Console.WriteLine("ResumeID:" + resume.ResumeID);
                Console.WriteLine("UserID:" + resume.UserID);
                Console.WriteLine("Title:" + resume.Title);
                Console.WriteLine("Body:" + resume.Body);
            }
            Console.ReadLine(); // Keep the window from closing before we can read the result.
        } // END OF MAIN METHOD
    } // END OF ResumeClient class
} // END OF ResumeClient namespace
Friday, April 29, 2005 8:21 PM
Answers
- Could you send the stack trace? (Wednesday, July 13, 2005 11:07 PM, Moderator)
All replies
- Hi stronghold,
Can u add the details of the Remoting config file as well ?
Regards,
Vikram (Saturday, April 30, 2005 4:13 AM, Moderator)
- Hello Stronghold,
Did you get rid of the problem? I have the same problem right now. Would you like to help me if you did?
Regards,
Laurence (Thursday, May 19, 2005 6:54 PM)
- The client side code looks fine. Can you post the snippet of the server side as well? Also check using the netstat command whether port 9932 is already in use by some other application.
Regards,
Vikram (Friday, May 20, 2005 10:27 PM, Moderator)
- Please let me know if the problem has been identified and fixed. I am getting the same error, and mine is a test application with the client and the server existing in the same machine.
Thanks,
Bhupathy (Wednesday, June 29, 2005 3:58 PM)
- Did you find the solution to the problem? Please let me know. (Monday, July 11, 2005 6:23 PM)
- Hi,
I am getting the same error, and mine is a test application with the client and the server existing in the same machine.
If anyone finds a solution for this, please let me know.
Thanks.
Km. Arumugam (Friday, November 04, 2005 12:16 PM)
I got the same error and I fixed it, but the circumstances might not be the same as yours. I host the remote object in a Windows service app. I noticed that it wasn't listening on the port I expected. First I defined and registered the channel in the Main() method; then I moved that code to OnStart() and it worked.
So, make sure that the server is really listening. I assume that you have configured and published the remote object correctly, and are declaring and invoking it in the same manner on the client side.
Regards (Monday, January 09, 2006 8:32 PM)
I was also facing this error, but my application uses multithreading. In my case a new thread began just before the server started listening, and there was an error in the method being called in ThreadStart. So execution never reached the TcpListener, hence the error.
Make sure your server side code is also listening on the same port, and that this port is not already being used by some other service. (Thursday, June 22, 2006 5:38 AM)
- In my case, I was getting this error message because I configured my remote object in a configuration file, but forgot the call to RemotingConfiguration.Configure(). (Wednesday, November 22, 2006 2:22 PM)
Making components dynamic with props
Our ToDoItem component is still not very useful because we can only really include this once on a page (IDs need to be unique), and we have no way to set the label text. Nothing about this is dynamic.
What we need is some component state. This can be achieved by adding props to our component. You can think of props as being similar to inputs in a function. The value of a prop gives components an initial state that affects their display.
Registering props
In Vue, there are two ways to register props:
- The first way is to just list props out as an array of strings. Each entry in the array corresponds to the name of a prop.
- The second way is to define props as an object, with each key corresponding to the prop name. Listing props as an object allows you to specify default values, mark props as required, perform basic object typing (specifically around JavaScript primitive types), and perform simple prop validation.
Note: Prop validation only happens in development mode, so you can't strictly rely on it in production. Additionally, prop validation functions are invoked before the component instance is created, so they do not have access to the component state (or other props).
For this component, we’ll use the object registration method.
- Go back to your ToDoItem.vue file.
- Add a props property inside the export default {} object, which contains an empty object.
- Inside this object, add two properties with the keys label and done.
- The label key's value should be an object with 2 properties (or props, as they are called in the context of being available to the components).
  - The first is a required property, which will have a value of true. This will tell Vue that we expect every instance of this component to have a label field. Vue will warn us if a ToDoItem component does not have a label field.
  - The second property we'll add is a type property. Set the value for this property as the JavaScript String type (note the capital "S"). This tells Vue that we expect the value of this property to be a string.
- Now on to the done prop.
  - First add a default field, with a value of false. This means that when no done prop is passed to a ToDoItem component, the done prop will have a value of false (bear in mind that this is not required — we only need default on non-required props).
  - Next add a type field with a value of Boolean. This tells Vue we expect the value of this prop to be a JavaScript boolean type.
Your component object should now look like this:
<script>
export default {
  props: {
    label: { required: true, type: String },
    done: { default: false, type: Boolean }
  }
};
</script>
Using registered props
With these props defined inside the component object, we can now use these variable values inside our template. Let's start by adding the label prop to the component template.
In your <template>, replace the contents of the <label> element with {{label}}.
{{}} is a special template syntax in Vue, which lets us print the result of JavaScript expressions defined in our class, inside our template, including values and methods. It's important to know that content inside {{}} is displayed as text and not HTML. In this case, we're printing the value of the label prop.
Your component’s template section should now look like this:
<template>
  <div>
    <input type="checkbox" id="todo-item" checked="false" />
    <label for="todo-item">{{label}}</label>
  </div>
</template>
Go back to your browser and you'll see the todo item rendered as before, but without a label (oh no!). Go to your browser's DevTools and you’ll see a warning along these lines in the console:
[Vue warn]: Missing required prop: "label" found in ---> <ToDoItem> at src/components/ToDoItem.vue <App> at src/App.vue <Root>
This is because we marked the label as a required prop, but we never gave the component that prop — we've defined where inside the template we want it used, but we haven't passed it into the component when calling it. Let's fix that.
Inside your App.vue file, add a label prop to the <to-do-item></to-do-item> component, just like a regular HTML attribute:
<to-do-item label="My ToDo Item"></to-do-item>
Now you'll see the label in your app, and the warning won't be spat out in the console again.
So that's props in a nutshell. Next we'll move on to how Vue persists data state.
Vue's data object
If you change the value of the label prop passed into the <to-do-item></to-do-item> call in your App component, you should see it update. This is great. We have a checkbox, with an updatable label. However, we're currently not doing anything with the "done" prop — we can check the checkboxes in the UI, but nowhere in the app are we recording whether a todo item is actually done.
To achieve this, we want to bind the component's done prop to the checked attribute on the <input> element, so that it can serve as a record of whether the checkbox is checked or not. However, it's important that props serve as one-way data binding — a component should never alter the value of its own props. There are a lot of reasons for this. In part, components editing props can make debugging a challenge. If a value is passed to multiple children, it could be hard to track where the changes to that value were coming from. In addition, changing props can cause components to re-render. So mutating props in a component would trigger the component to rerender, which may in turn trigger the mutation again.
To work around this, we can manage the done state using Vue's data property. The data property is where you can manage local state in a component. It lives inside the component object alongside the props property, and has the following structure:
data() {
  return {
    key: value
  }
}
You'll note that the data property is a function. This is to keep the data values unique for each instance of a component at runtime — the function is invoked separately for each component instance. If you declared data as just an object, all instances of that component would share the same values. This is a side-effect of the way Vue registers components and something you do not want.
You use this to access a component's props and other properties from inside data, as you may expect. We'll see an example of this shortly.
Note: Because of the way that this works in arrow functions (binding to the parent's context), you wouldn't be able to access any of the necessary attributes from inside data if you used an arrow function. So don't use an arrow function for the data property.
So let's add a data property to our ToDoItem component. This will return an object containing a single property that we'll call isDone, whose value is this.done.
Update the component object like so:
export default {
  props: {
    label: { required: true, type: String },
    done: { default: false, type: Boolean }
  },
  data() {
    return {
      isDone: this.done
    };
  }
};
Vue does a little magic here — it binds all of your props directly to the component instance, so we don't have to call this.props.done. It also binds other attributes (data, which you've already seen, and others like methods, computed, etc.) directly to the instance. This is, in part, to make them available to your template. The down-side to this is that you need to keep the keys unique across these attributes. This is why we called our data attribute isDone instead of done.
So now we need to attach the isDone property to our component. In a similar fashion to how Vue uses {{}} expressions to display JavaScript expressions inside templates, Vue has a special syntax to bind JavaScript expressions to HTML elements and components: v-bind. The v-bind expression looks like this:
v-bind:attribute="expression"
In other words, you prefix whatever attribute/prop you want to bind to with v-bind:. In most cases, you can use a shorthand for the v-bind property, which is to just prefix the attribute/prop with a colon. So :attribute="expression" works the same as v-bind:attribute="expression".
So in the case of the checkbox in our ToDoItem component, we can use v-bind to map the isDone property to the checked attribute on the <input> element. Both of the following are equivalent:
<input type="checkbox" id="todo-item" v-bind:checked="isDone" />
<input type="checkbox" id="todo-item" :checked="isDone" />
You're free to use whichever pattern you would like. It's best to keep it consistent though. Because the shorthand syntax is more commonly used, this tutorial will stick to that pattern.
So let's do this. Update your <input> element now to replace checked="false" with :checked="isDone".
Test out your component by passing :done="true" to the ToDoItem call in App.vue. Note that you need to use the v-bind syntax, because otherwise true is passed as a string. The displayed checkbox should be checked.
<template>
  <div id="app">
    <h1>My To-Do List</h1>
    <ul>
      <li>
        <to-do-item label="My ToDo Item" :done="true"></to-do-item>
      </li>
    </ul>
  </div>
</template>
Try changing true to false and back again, reloading your app in between to see how the state changes.
Giving Todos a unique id
Great! We now have a working checkbox where we can set the state programmatically. However, we can currently only add one ToDoItem component to the page because the id is hardcoded. This would result in errors with assistive technology, since the id is needed to correctly map labels to their checkboxes. To fix this, we can programmatically set the id in the component data.
We can use the lodash package's uniqueId() method to help keep the index unique. This package exports a function that takes in a string and appends a unique integer to the end of the prefix. This will be sufficient for keeping component ids unique.
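To get a feel for what uniqueId() does before installing the package, here is a simplified stand-in (an illustrative sketch only; the real lodash.uniqueid keeps its counter module-private and handles a few more cases):

```javascript
// Simplified stand-in for lodash.uniqueid (illustration only).
let counter = 0;
function uniqueId(prefix = '') {
  counter += 1;
  return `${prefix}${counter}`;
}

console.log(uniqueId('todo-')); // "todo-1"
console.log(uniqueId('todo-')); // "todo-2"
console.log(uniqueId());        // "3"
```

Each call returns the prefix with a monotonically increasing integer appended, which is exactly the behavior we rely on for unique checkbox ids.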
Let’s add the package to our project with npm; stop your server and enter the following command into your terminal:
npm install --save lodash.uniqueid
Note: If you prefer yarn, you could instead use yarn add lodash.uniqueid.
We can now import this package into our ToDoItem component. Add the following line at the top of ToDoItem.vue's <script> element:
import uniqueId from 'lodash.uniqueid';
Next, add an id field to our data property, so the component object ends up looking like so (uniqueId() returns the specified prefix — todo- — with a unique string appended to it):
import uniqueId from 'lodash.uniqueid';

export default {
  props: {
    label: { required: true, type: String },
    done: { default: false, type: Boolean }
  },
  data() {
    return {
      isDone: this.done,
      id: uniqueId('todo-')
    };
  }
};
Next, bind the id to both our checkbox's id attribute and the label's for attribute, updating the existing id and for attributes as shown:
<template>
  <div>
    <input type="checkbox" :id="id" :checked="isDone" />
    <label :for="id">{{label}}</label>
  </div>
</template>
Summary
And that will do for this article. At this point we have a nicely-working ToDoItem component that can be passed a label to display, will store its checked state, and will be rendered with a unique id each time it is called. You can check if the unique ids are working by temporarily adding more <to-do-item></to-do-item> calls into App.vue, and then checking their rendered output with your browser's DevTools.
See Also: "Tuning the Crawl Performance" and "Tuning Search Performance"
The Oracle Secure Enterprise Search tutorials at
The Oracle Secure Enterprise Search (Oracle SES) crawler is a Java process activated by a set schedule. When activated, the crawler spawns processor threads that fetch documents from sources. The crawler caches the documents, and when the cache reaches the maximum batch size of 250 MB, the crawler indexes the cached files. This index is used for searching.
The document cache, called Secure Cache, is stored in Oracle Database in a compressed SecureFile LOB. Oracle Database provides excellent security and compact storage.
In the Oracle SES Administration GUI, you can create schedules with one or more sources attached to them. Schedules define the frequency at which the Oracle SES index is kept up to date with existing information in the associated sources.
In the process of crawling, the crawler maintains a list of URLs of the discovered documents that are fetched and indexed in an internal URL queue. The queue is persistently stored, so that crawls can be resumed after the Oracle SES instance is restarted.
A display URL is a URL string used for search result display. This is the URL used when users click the search result link. An access URL is an optional URL string used by the crawler for crawling and indexing. If it does not exist, then the crawler uses the display URL for crawling and indexing. If it does exist, then it is used by the crawler instead of the display URL for crawling. For regular Web crawling, only display URLs are available. But in some situations, the crawler needs an access URL for crawling the internal site while keeping a display URL for the external use. For every internal URL, there is an external mirrored URL.
For example, for file sources with display URLs, end users can access the original document with the HTTP or HTTPS protocols. These provide the appropriate authentication and personalization and result in better user experience.
Display URLs can be provided using the URL Rewriter API. Or, they can be generated by specifying the mapping between the prefix of the original file URL and the prefix of the display URL. Oracle SES replaces the prefix of the file URL with the prefix of the display URL.
For example, if the file URL is
and the display URL is
then specify the file URL prefix as
and the display URL prefix as
You can alter the crawler's operating parameters at two levels:
At the global level for all sources
At the source level for a particular defined source
Global parameters include the default values for language, crawling depth, and other crawling parameters, and the settings that control the crawler log and cache.
To configure the crawler:
Click the Global Settings tab.
Under Sources, click Crawler Configuration.
Make the desired changes on the Crawler Configuration page. Click Help for more information about the configuration settings.
Click Apply.
To configure the crawling parameters for a specific source:
From the Home page, click the Sources secondary tab to see a list of sources you have created.
Click the edit icon for the source whose crawler you want to configure, to display the Edit Source page.
Click the Crawling Parameters subtab.
Make the desired changes. Click Help for more information about the crawling parameters.
Click Apply.
Note that the parameter values for a particular source can override the default values set at the global level. For example, for Web sources, Oracle SES sets a default crawling depth of 2, irrespective of the crawling depth you set at the global level.
Also note that some parameters are specific to a particular source type. For example, Web sources include parameters for HTTP cookies.
This section describes crawler settings and other mechanisms to control the scope of Web crawling:
See Also: "Tuning the Crawl Performance" for more detailed information on these settings and other issues affecting crawl performance
For initial planning purposes, you might want the crawler to collect URLs without indexing. After crawling is finished, examine the document URLs and status, remove unwanted documents, and start indexing. The crawling mode is set on the Home - Schedules - Edit Schedules page.
See Also: Appendix B, "URL Crawler Status Codes"
Note: If you are using a custom crawler created with the Crawler Plug-in API, then the crawling mode set here does not apply. The implemented plug-in controls the crawling mode.
These are the crawling mode options:
Automatically Accept All URLs for Indexing: This crawls and indexes all URLs in the source. For Web sources, it also extracts and indexes any links found in those URLs. If the URL has been crawled before, then it is reindexed only if it has changed.
Examine URLs Before Indexing: This crawls but does not index any URLs in the source. It also crawls any links found in those URLs.
Index Only: This crawls and indexes all URLs in the source. It does not extract any links from those URLs. In general, select this option for a source that has been crawled previously under "Examine URLs Before Indexing".
URL boundary rules limit the crawling space. When boundary rules are added, the crawler is restricted to URLs that match the indicated rules. The order in which rules are specified has no impact, but exclusion rules always override inclusion rules.
This is set on the Home - Sources - Boundary Rules page.
Specify an inclusion rule that a URL contain, start with, or end with a term. Use an asterisk (*) to represent a wildcard. For example, www.*.example.com. Simple inclusion rules are case-insensitive. For case-sensitivity, use regular expression rules.
An inclusion rule ending with example.com limits the search to URLs ending with the string example.com. Anything ending with example.com is crawled; any URL that does not end with example.com is not crawled.
If the URL Submission functionality is enabled on the Global Settings - Query Configuration page, then URLs that are submitted by end users are added to the inclusion rules list. You can delete URLs that you do not want to index.
Oracle SES supports the regular expression syntax used in Java JDK 1.4.2 Pattern class (
java.util.regex.Pattern). Regular expression rules use special characters. The following is a summary of some basic regular expression constructs.
A caret (^) denotes the beginning of a URL and a dollar sign ($) denotes the end of a URL.
A period (.) matches any one character.
A question mark (?) matches zero or one occurrence of the character that it follows.
An asterisk (*) matches zero or more occurrences of the pattern that it follows. You can use an asterisk in the starts with, ends with, and contains rules.
A backslash (\) escapes any special characters, such as periods (\.), question marks (\?), or asterisks (\*).
See Also: a complete description in the Sun Microsystems Java documentation
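The constructs summarized above behave the same way in most regex engines. As a quick, hypothetical illustration (shown in JavaScript for brevity; Oracle SES itself evaluates rules with Java's java.util.regex.Pattern, and the inclusion rule below is invented for the example):

```javascript
// Hypothetical boundary rule: only URLs under www.example.com, built from the
// constructs listed above (^ anchors the start, ? makes "s" optional, \. escapes dots).
const inclusionRule = /^https?:\/\/www\.example\.com\//;

console.log(inclusionRule.test('https://www.example.com/products/index.html')); // true
console.log(inclusionRule.test('https://uk.example.com/'));                     // false
console.log(inclusionRule.test('http://www.example.com/'));                     // true
```

The same pattern text, given to Oracle SES as a URL regular expression inclusion rule, would restrict the crawl to that host.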
You can specify an exclusion rule that a URL contains, starts with, or ends with a term.
An exclusion of
uk.example.com prevents the crawling of Example hosts in the United Kingdom.
Default Exclusion Rules
The crawler contains a default exclusion rule to exclude non-textual files. The following file extensions are included in the default exclusion rule.
Image: jpg, gif, tif, bmp, png
Audio: wav, mp3, wma
Video: avi, mpg, mpeg, wmv
Binary: bin, exe, so, dll, iso, jar, war, ear, tar, wmv, scm, cab, dmp
To crawl a file with these extensions, modify the following section in the ORACLE_HOME/search/data/config/crawler.dat file, removing any file type suffix from the exclusion list.
# default file name suffix exclusion list RX_BOUNDARY (?i:(?:\.jar)|(?:\.bmp)|(?:\.war)|(?:\.ear)|(?:\.mpg)|(?:\.wmv)|(?:\.mpeg)|(?:\.scm)|(?:\.iso)|(?:\.dmp)|(?:\.dll)|(?:\.cab)|(?:\.so)|(?:\.avi)|(?:\.wav)|(?:\.mp3)|(?:\.wma)|(?:\.bin)|(?:\.exe)|(?:\.iso)|(?:\.tar)|(?:\.png))$
Then add the MIMEINCLUDE parameter to the crawler.dat file to crawl any multimedia file type; the file name is indexed as the title.
For example, to crawl any audio files, remove .wav, .mp3, and .wma, and add the MIMEINCLUDE parameter:
RX_BOUNDARY (?i:(?:\.gif)|(?:\.jpg)|(?:\.jar)|(?:\.tif)|(?:\.bmp)|(?:\.war)|(?:\.ear)|(?:\.mpg)|(?:\.wmv)|(?:\.mpeg)|(?:\.scm)|(?:\.iso)| (?:\.dmp)|(?:\.dll)|(?:\.cab)|(?:\.so)|(?:\.avi)|(?:\.bin)|(?:\.exe)|(?:\.iso)|(?:\.tar)|(?:\.png))$ MIMEINCLUDE audio/x-wav audio/mpeg
Note: Only the file name is indexed when crawling multimedia files, unless the file is crawled using a crawler plug-in that provides a richer set of attributes, such as the Image Document Service plug-in.
The following example uses several regular expression constructs that are not described earlier, including range quantifiers, non-grouping parentheses, and mode switches. For a complete description, see the Sun Microsystems Java documentation.
To crawl only HTTPS URLs in the
example.com and
examplecorp.com domains, and to exclude files ending in .doc and .ppt:
Inclusion: URL regular expression
^https://.*\.example(?:corp){0,1}\.com
Exclusion: URL regular expression
(?i:\.doc|\.ppt)$
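The two rules compose as "must match the inclusion and must not match the exclusion"; sketched in Python with the exact patterns above:

```python
import re

# Inclusion and exclusion rules from the example above.
INCLUDE = re.compile(r"^https://.*\.example(?:corp){0,1}\.com")
EXCLUDE = re.compile(r"(?i:\.doc|\.ppt)$")

def should_crawl(url: str) -> bool:
    # A URL is crawled only if it matches the inclusion rule
    # and does not match the exclusion rule.
    return bool(INCLUDE.search(url)) and not EXCLUDE.search(url)
```

For example, https://www.example.com/index.html is crawled, while https://docs.examplecorp.com/slides.PPT is matched by the inclusion rule but rejected by the case-insensitive exclusion.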
You can customize which document types are processed for each source. By default, PDF, Microsoft Excel, Microsoft PowerPoint, Microsoft Word, HTML and plain text are always processed.
To add or remove document types:
On the Home page, click the Sources secondary tab.
Choose a source from the list and select Edit to display the Customize Source page.
Select the Document Types subtab.
The listed document types are supported for the source type.
Move the types to process to the Processed list and the others to the Not Processed list.
Click Apply.
Keep the following in mind about graphics file formats:
For graphics format files (JPEG, JPEG 2000, GIF, TIFF, DICOM), only the file name is searchable. The crawler does not extract any metadata from graphics files or make any attempt to convert graphical text into indexable text, unless you enable a document service plug-in. See "Configuring Support for Image Metadata".
Oracle SES allows up to 1000 files in zip files and LHA files. If there are more than 1000 files, then an error is raised and the file is ignored. See "Crawling Zip Files Containing Non-UTF8 File Names".
See Also: Oracle Text Reference, Appendix B, for supported document types
Crawling depth is the number of levels to crawl Web and file sources. A Web document can contain links to other Web documents, which can contain more links. Specify the maximum number of nested links for the crawler to follow. Crawling depth starts at 0; that is, if you specify 1, then the crawler gathers the starting (seed) URL plus any document that is linked directly from the starting URL. For file crawling, this is the number of directory levels from the starting URL.
Set the crawling depth on the Home - Sources - Crawling Parameters page.
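The depth rule described above can be modeled as a breadth-first traversal that stops expanding links past the configured depth. A simplified Python sketch (the links dictionary is a stand-in for fetching a page and extracting its links):

```python
from collections import deque

def crawl(seed, links, max_depth):
    """Collect URLs reachable from seed within max_depth link hops.

    Depth 0 is the seed itself; depth 1 adds pages linked directly
    from the seed, and so on.
    """
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        url, depth = queue.popleft()
        if depth == max_depth:
            continue  # do not follow links from pages at the depth limit
        for target in links.get(url, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, depth + 1))
    return seen
```

With depth 1, only the seed and its directly linked documents are gathered, matching the description above.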
You can control which parts of your sites can be visited by robots. If robots exclusion is enabled (default), then the Web crawler traverses the pages based on the access policy specified in the Web server robots.txt file. The crawler also respects the page-level robot exclusion specified in HTML metatags.
For example, when a robot visits a Web site, it checks for the robots.txt file. If the file exists, then the crawler checks whether it is allowed to retrieve the document. If you own the Web sites, then you can disable robots exclusion. However, when crawling other Web sites, always comply with robots.txt by enabling robots exclusion.
Set the robots parameter on the Home - Sources - Crawling Parameters page.
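The robots.txt check described above is the standard robots exclusion protocol. Python's urllib.robotparser shows the same decision the crawler makes (the rules are parsed from literal lines here, so no network access is involved; the host and paths are illustrative):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Equivalent to fetching http://www.example.com/robots.txt
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# The crawler may fetch public pages but must skip /private/.
print(rp.can_fetch("*", "http://www.example.com/index.html"))      # True
print(rp.can_fetch("*", "http://www.example.com/private/x.html"))  # False
```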
By default, Oracle SES processes dynamic pages. Dynamic pages are generally served from a database application and have a URL that contains a question mark (?). Oracle SES identifies URLs with question marks as dynamic pages.
Some dynamic pages appear as multiple search results for the same page, and you might not want them all indexed. Other dynamic pages are each different and must be indexed. You must distinguish between these two kinds of dynamic pages. In general, dynamic pages that differ only in menu expansion, without affecting their content, should not be indexed.
Consider the following three URLs:
The question marks (?) in two URLs indicate that the rest of the strings are input parameters. The three results are essentially the same page with different side menu expansion. Ideally, the search yields only one result:
Note: The crawler cannot crawl and index dynamic Web pages written in JavaScript.
Set the dynamic pages parameter on the Home - Sources - Crawling Parameters page.
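The rule Oracle SES applies, as described above, is simply whether the URL carries a question mark:

```python
def is_dynamic(url: str) -> bool:
    # Oracle SES identifies URLs containing a question mark
    # (a query string) as dynamic pages.
    return "?" in url

print(is_dynamic("http://host/app/menu?node=3"))  # True
print(is_dynamic("http://host/app/page.html"))    # False
```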
The URL Rewriter is a user-supplied Java module implementing the Oracle SES UrlRewriter interface. The crawler uses it to filter or rewrite extracted URL links before they are put into the URL queue. The API enables complete control over which links extracted from a Web page are allowed and which are discarded.
URL filtering removes unwanted links, and URL rewriting transforms the URL link. This transformation is necessary when access URLs are used and alternate display URLs must be presented to the user in the search results.
Set the URL rewriter on the Home - Sources - Crawling Parameters page.
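Although the real UrlRewriter is a Java interface, its two operations, discarding links and rewriting them, can be sketched as follows. The host names and the access-to-display mapping are hypothetical examples, not part of the product:

```python
def rewrite_url(url):
    """Filter or rewrite an extracted link before it is queued.

    Returns None to discard the link, or the (possibly rewritten)
    URL to keep it.
    """
    # Filtering: drop links the crawler should never follow.
    if url.startswith(("mailto:", "javascript:")):
        return None
    # Rewriting: replace an internal access URL with the display
    # URL shown to users in search results (hypothetical hosts).
    if url.startswith("http://internal.example.com/"):
        return url.replace("http://internal.example.com/",
                           "http://www.example.com/", 1)
    return url
```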
You can override a default document title with a meaningful title if the default title is irrelevant. For example, suppose that the result list shows numerous documents with the title "Daily Memo". The documents had been created with the same template file, but the document properties had not been changed. Overriding this title in Oracle SES can help users better understand their search results.
Title fallback can be used for any source type. Oracle SES uses different logic for each document type to determine which fallback title to use. For example, for HTML documents, Oracle SES looks for the first heading, such as <h1>. For Microsoft Word documents, Oracle SES looks for text with the largest font.
If the default title was collected in the initial crawl, then the fallback title is only used after the document is reindexed during a re-crawl. This means if there is no change to the document, then you must force the change by setting the re-crawl policy to Process All Documents on the Home - Schedules - Edit Schedule page.
This feature is not currently supported in the Oracle SES Administration GUI. Override a default document title with a meaningful title by adding the keyword BAD_TITLE to the ORACLE_HOME/search/data/config/crawler.dat file. For example:
BAD_TITLE Daily Memo
where Daily Memo is the title string to be overridden. The title string is case-insensitive and can use multibyte characters in the UTF8 character set.
You can specify multiple bad titles, each one on a separate line.
Special considerations with title fallback
With Microsoft Office documents:
Font sizes 14 and 16 in Microsoft Word correspond to normalized font sizes 4 and 5 (respectively) in converted HTML. The Oracle SES crawler only picks up strings with normalized font size greater than 4 as the fallback title.
Titles should contain more than five characters.
When a title is null, Oracle SES automatically indexes the fallback title for all binary documents (for example, .doc, .ppt, .pdf). For HTML and text documents, Oracle SES does not automatically index the fallback title. This means that the replaced title on HTML or text documents cannot be searched with the title attribute on the Advanced Search page. You can turn on indexing for HTML and text documents in the crawler.dat file. For example, set NULL_TITLE_FALLBACK_INDEX ALL.
The crawler.dat file is not included in the backup available on the Global Settings - Configuration Data Backup and Recovery page. Ensure that you manually back up the crawler.dat file.
See Also: "Crawler Configuration File"
This feature enables the crawler to automatically detect character set information for HTML, plain text, and XML files. Character set detection allows the crawler to properly cache files during crawls, index text, and display files for queries. This is important when crawling multibyte files (such as files in Japanese or Chinese).
This feature is not currently supported in the Oracle SES Administration GUI, and by default it is turned off. Enable automatic character set detection by adding the following line to the crawler configuration file ORACLE_HOME/search/data/config/crawler.dat:
AUTO_CHARSET_DETECTION
You can check whether this is turned on or off in the crawler log under the "Crawling Settings" section.
To crawl XML files for a source, be sure to add XML to the list of processed document types on the Home - Source - Document Types page. XML files are currently treated as HTML format, and detection for XML files may not be as accurate as for other file formats.
The crawler.dat file is not included in the backup available on the Global Settings - Configuration Data Backup and Recovery page. Ensure that you manually back up the crawler.dat file.
See Also:"Crawler Configuration File"
With multibyte files, besides turning on character set detection, be sure to set the Default Language parameter. For example, if the files are all in Japanese, select Japanese as the default language for that source. If automatic language detection is disabled, or if the crawler cannot determine the document language, then the crawler assumes that the document is written in the default language. This default language is used only if the crawler cannot determine the document language during crawling.
If your files are in multiple languages, then turn on the Enable Language Detection parameter. Not all documents retrieved by the crawler specify the language. For documents with no language specification, the crawler attempts to detect the language automatically, as follows:
If the language recognizer is not available or if it cannot determine a language code, then the default language code is used.
If the language recognizer is available, then the output from the recognizer is used.
Oracle Text MULTI_LEXER is the only lexer used for Oracle Secure Enterprise Search.
The Default Language and the Enable Language Detection parameters are on the Global Settings - Crawler Configuration page (globally) and also the Home - Sources - Crawling Parameters page (for each source).
Note: For file sources, the individual source setting for Enable Language Detection remains false regardless of the global setting. In most cases, the language of a file source is uniform and is taken from the Default Language setting.
For sources created before Oracle SES 11g, the document cache remains in the cache directory. Sources are not stored in Secure Cache in the database until they are migrated to use Secure Cache. You can manage the cache directory for these older sources the same as in earlier releases.
You can manage the Secure Cache either on the global level or at the data source level. The data source configuration supersedes the global configuration.
The cache is preserved by default and supports the Cached link feature in the search result page. If you do not use the Cached link, then you can delete the cache, either for specific sources or globally for all of them. Without a cache, the Cached link in a search result page returns a File not found error.
To delete the cache for all sources:
Select the Global Settings tab in the Oracle SES Administration GUI.
Choose Crawler Configuration.
Set Preserve Document Cache to No.
Click Delete Cache Now to remove the cache from all sources, except any that are currently active under an executing schedule. The cache is deleted in the background, and you do not have to wait for it to complete.
Click Apply.
To delete the cache for an individual source:
Select the Sources secondary tab on the Home page.
Click Edit for the source.
Click the Crawling Parameters subtab.
Set Preserve Document Cache to No.
Click Apply.
The Oracle SES crawler initially is set to search only text files. You can change this behavior by configuring an image document service connector to search the metadata associated with image files. Image files can contain rich metadata that provide additional information about the image itself.
The Image Document Service connector integrates Oracle Multimedia (formerly Oracle interMedia) images with Oracle SES. This connector is separate from any specific data source.
The following table identifies the metadata formats (EXIF, IPTC, XMP, DICOM) that can be extracted from each supported image format (JPEG, TIFF, GIF, JPEG 2000, DICOM).
See Also: Oracle Multimedia User's Guide and Oracle Multimedia Reference for more information about image metadata
Image files can contain metadata in multiple formats, but not all of it is useful when performing searches. A configuration file in Oracle SES enables you to control the metadata that is searched and published to an Oracle SES Web application.
The default configuration file is named attr-config.xml. Note that if you upgraded from a previous release, then the default configuration file remains ordesima-sample.xml.
You can either modify the default configuration file or create your own file. The configuration file must be located at ORACLE_HOME/search/lib/plugins/doc/ordim/config/. Oracle recommends that you create a copy of the default configuration file before editing it. Note that the configuration file must conform to the XML schema ORACLE_HOME/search/lib/plugins/doc/ordim/xsd/ordesima.xsd.
Oracle SES indexes and searches only those image metadata tags that are defined within the metadata element (between <metadata>...</metadata>) in the configuration file. By default, the configuration file contains a set of the most commonly searched metadata tags for each of the file formats. You can add other metatags to the file based on your specific requirements.
Image files can contain metadata in multiple formats. For example, an image can contain metadata in the EXIF, XMP, and IPTC formats. An exception to this are DICOM images, which contain only DICOM metadata. Note that for IPTC and EXIF formats, Oracle Multimedia defines its own image metadata schemas. The metadata defined in the configuration file must conform to the Oracle Multimedia defined schemas.
Because different metadata formats use different tags to refer to the same attribute, it is necessary to map metatags and the search attributes they define. Table 4-1 lists some of the commonly used metatags and how they are mapped in Oracle SES.
Oracle SES provides this mapping in the configuration file attr-config.xml. You can edit the file to add other metatags. Oracle recommends that you make a copy of the original configuration file before editing the settings. The configuration file defines the display name of a metatag and how it is mapped to the corresponding metadata in each of the supported formats.
This is done using the searchAttribute tag, as shown in the following example:
<searchAttribute>
  <displayName>Author</displayName>
  <metadata>
    <value format="iptc">byline/author</value>
    <value format="exif">TiffIfd/Artist</value>
    <value format="xmp">dc:creator</value>
    <value format="xmp">tiff:Artist</value>
  </metadata>
</searchAttribute>
For each search attribute, the value of displayName is an Oracle SES attribute name that is displayed in the Oracle SES Web application when an Advanced Search - Attribute Selection is performed. If any of the listed attributes are detected during a crawl, then Oracle SES automatically publishes the attributes to the SES Web application.
For the value element, format must take one of the supported format values: iptc, exif, xmp, or dicom.
The value defined within the element (for example, byline/author) is the XML path when the image format is IPTC, EXIF, or XMP. For DICOM, this value must be the standard tag number or value locator.
For IPTC and EXIF formats, the XML path must conform to the metadata schemas defined by Oracle Multimedia. These schemas are defined in the files ordexif.xsd and ordiptc.xsd, located at ORACLE_HOME/search/lib/plugins/doc/ordim/xsd/.
You do not need to specify the root elements defined in these schemas (iptcMetadata, exifMetadata) in the configuration file. For example, you can specify byline/author as the xmlPath value of the author attribute in IPTC format. Oracle Multimedia does not define XML schemas for XMP metadata, so refer to the Adobe XMP specification for the xmlPath value.
Within the <searchAttribute> tag, you can also specify an optional <dataType> tag if the attribute carries a date or numerical value. For example:
<searchAttribute>
  <displayName>AnDateAttribute</displayName>
  <dataType>date</dataType>
  <metadata>
    ...
  </metadata>
</searchAttribute>
The default data type is string, so you do not have to explicitly specify a string.
Oracle SES supports both standard and custom XMP metadata searches. Because all XMP properties share the same parent elements <rdf:rdf><rdf:description>, you must specify only the actual property schema and property name in the configuration file. For example, specify photoshop:category instead of rdf:rdf/rdf:description/photoshop:category. The same rule applies to XMP custom metadata. However, for XMP structure data, you must specify the structure element in the format parent/child 1/child 2/.../child N, where child N is a leaf node; for example, Iptc4xmpCore:CreatorContactInfo/Iptc4xmpCore:CiPerson. Note that the image plug-in does not validate the metadata value for XMP metadata.
XMP metatags consist of two components separated by a colon (:), for example, photoshop:Creator, which corresponds to the Author attribute (see Table 4-1). Here, photoshop refers to the XMP schema namespace. Other common namespaces include dc, tiff, and Iptc4xmpCore.
Before defining any XMP metadata in the configuration file, you must ensure that the namespace is defined. For example, before defining the metadata photoshop:Creator, you must include the namespace photoshop in the configuration file. This rule applies to both the standard and custom XMP metadata namespaces. As a best practice, Oracle recommends that you define all the namespaces at the beginning of the configuration file. If the namespace defined in the configuration file is different from the one in the image, then Oracle SES cannot find the attributes associated with that namespace. You can define namespaces as shown:
<xmpNamespaces>
  <namespace prefix="Iptc4xmpCore"></namespace>
  <namespace prefix="dc"></namespace>
  <namespace prefix="photoshop"></namespace>
  <namespace prefix="xmpRights"></namespace>
  <namespace prefix="tiff"></namespace>
</xmpNamespaces>
Note that the Adobe XMP Specification requires that XMP namespaces end with a slash (/) or hash (#) character.
See Also:Adobe Extensible Metadata Platform (XMP) Specification for the XMP metadata schema and a list of standard XMP namespace values:
Custom XMP metadata must be explicitly added to attr-config.xml. An example of custom metadata is:
<xmpNamespaces>
  <namespace prefix="hm"></namespace>
</xmpNamespaces>
<searchAttribute>
  <displayName>CardTitle</displayName>
  <metadata>
    <value format="xmp">hm:cardtitle</value>
  </metadata>
</searchAttribute>
Oracle SES 11g supports DICOM metatags, and these metatags are available in the default configuration file attr-config.xml. Note that the configuration file ordesima-sample.xml, which is the default configuration file if you upgraded from a previous release, does not contain DICOM metatags. Therefore, you must manually add DICOM metatags to the ordesima-sample.xml file. To do this, you can copy the DICOM metatags from attr-config.xml, which is available in the same directory. You can also reference the DICOM standard and add additional DICOM tags.
DICOM metatags are either DICOM standard tags or DICOM value locators.
DICOM standard tags are 8-digit hexadecimal numbers, represented in the format ggggeeee, where gggg specifies the group number and eeee specifies the element number. For example, the DICOM standard tag for the performing physician's name attribute is represented using the hexadecimal value 00081050.
Note that the group number gggg must be an even value, excluding 0000, 0002, 0004, and 0006, which are reserved group numbers.
The DICOM standard defines over 2000 standard tags.
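The constraints above, eight hexadecimal digits, an even group number, and none of the reserved groups, can be checked mechanically; a small validation sketch:

```python
RESERVED_GROUPS = {0x0000, 0x0002, 0x0004, 0x0006}

def is_valid_standard_tag(tag: str) -> bool:
    """Check a DICOM standard tag of the form ggggeeee."""
    if len(tag) != 8:
        return False
    try:
        value = int(tag, 16)  # must be 8 hexadecimal digits
    except ValueError:
        return False
    group = value >> 16  # high 16 bits are the group number gggg
    # The group number must be even and not a reserved group.
    return group % 2 == 0 and group not in RESERVED_GROUPS
```

For example, 00081050 (performing physician's name) passes, while 00020010 is rejected because group 0002 is reserved.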
The file attr-config.xml contains a list of predefined DICOM standard metatags. You can add new metatags to the file as shown in the following example:
<searchAttribute>
  <displayName>PerformingPhysicianName</displayName>
  <metadata>
    <value format="dicom">00081050</value>
  </metadata>
</searchAttribute>
Note: The image connector does not support SQ, UN, OW, OB, and OF data type tags. Therefore, do not define such tags in the configuration file.
See Also: The DICOM standard for more information about the standard tags defined in DICOM images, and the rules for defining metatags
Value locators identify an attribute in the DICOM content, either at the root level or from the root level down.
A value locator contains one or more sublocators and a tag field (optional). A typical value locator is of the format:
sublocator#tag_field
Or of the format:
sublocator
Each sublocator represents a level in the tree hierarchy. DICOM value locators can include multiple sublocators, depending on the level of the attribute in the DICOM hierarchy. Multiple sublocators are separated by the dot character (.). For example, value locators can be of the format:
sublocator1.sublocator2.sublocator3#tag_field
Or of the format:
sublocator1.sublocator2.sublocator3
A tag_field is an optional string that identifies a derived value within an attribute. A tag that contains this string must be the last tag of a DICOM value locator. The default is NONE.
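The locator grammar above translates directly into a small parser; a sketch that splits a value locator into its sublocators and optional tag_field (defaulting to NONE as described):

```python
def parse_value_locator(locator: str):
    """Split a DICOM value locator into (sublocators, tag_field)."""
    if "#" in locator:
        path, tag_field = locator.split("#", 1)
    else:
        path, tag_field = locator, "NONE"
    # Sublocators are separated by the dot character.
    return path.split("."), tag_field

print(parse_value_locator("00081084.00080100"))
# (['00081084', '00080100'], 'NONE')
```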
A sublocator consists of a tag element and can contain other optional elements, including definer and item_num. Thus, a sublocator can be of the format:
tag
Or it can be of the format:
tag(definer)[item_num]
The following example shows how to add a value locator to the attr-config.xml file:
<searchAttribute>
  <displayName>PatientFamilyName</displayName>
  <metadata>
    <value format="dicom">00100010#UnibyteFamily</value>
  </metadata>
</searchAttribute>
where UnibyteFamily is a tag_field of a person name.
The following example shows how to define a value locator from the root level.
<searchAttribute>
  <displayName>AdmittingDiagnosisCode</displayName>
  <metadata>
    <value format="dicom">00081084.00080100</value>
  </metadata>
</searchAttribute>
<searchAttribute>
  <displayName>AdmittingDiagnosis</displayName>
  <metadata>
    <value format="dicom">00081084.00080104</value>
  </metadata>
</searchAttribute>
In this example, the tag 00081084 represents the root tag Admitting Diagnoses Code Sequence. This tag includes four child tags: code value (0008,0100), coding scheme designator (0008,0102), coding scheme version (0008,0103), and code meaning (0008,0104). The example defines value locators for code value (00081084.00080100) and code meaning (00081084.00080104).
Note: The image connector does not support SQ, UN, OW, OB, and OF data type value locators. Therefore, ensure that the last sublocator of a value locator does not specify such data types.
See Also: Oracle Multimedia DICOM Developer's Guide for more information about DICOM value locators
To search for information about image caption writer:
Open Oracle SES Administration GUI and create the DescriptionWriter attribute:
Specify DescriptionWriter as an Oracle SES attribute name (shown on the Advanced Search - Attribute Selection page).
Examine the following sources for information relevant to modifying the default attr-config.xml file:
Oracle Multimedia IPTC schema at ORACLE_HOME/search/lib/plugins/doc/ordim/xsd/ordiptc.xsd. The IPTC metadata for image caption writer is shown as captionWriter.
Adobe XMP Specification for XMP Metadata. The XMP path for this property is defined as photoshop:CaptionWriter.
Oracle Multimedia EXIF schema. There is no caption writer metadata in EXIF.
Add the following section to attr-config.xml:
<searchAttribute>
  <displayName>DescriptionWriter</displayName>
  <metadata>
    <xmlPath format="iptc">captionWriter</xmlPath>
    <xmlPath format="xmp">photoshop:CaptionWriter</xmlPath>
  </metadata>
</searchAttribute>
If the photoshop XMP namespace is not registered in the configuration file, then add the namespace element to xmpNamespaces as shown here:
<xmpNamespaces>
  <namespace prefix="photoshop"></namespace>
  ... existing namespaces ...
</xmpNamespaces>
A default Image Document Service connector instance is created during the installation of Oracle SES. You can configure the default connector or create a new one.
To create an Image Document Service instance:
In the Oracle SES Administration GUI, click Global Settings.
Under Sources, click Document Services to display the Global Settings - Document Services page.
To configure the default image service instance:
Click Expand All.
Click Edit for the default image service instance.
or
To create a new image service instance:
Click Create to display the Create Document Service page.
For Select From Available Managers, choose ImageDocumentService. Provide a name for the instance.
Provide a value for the attributes configuration file parameter.
The default value of the attributes configuration file parameter is attr-config.xml. The file is located at ORACLE_HOME/search/lib/plugins/doc/ordim/config/, where ORACLE_HOME refers to ORACLE_BASE/seshome, the directory that stores the Oracle SES-specific components. If you create a new configuration file, then you must place it in the same default location.
Click Apply.
Click Document Services in the locator links to return to the Document Services page.
Add the Image Document Service plug-in to either the default pipeline or a new pipeline.
To add the default Image Document Service plug-in to the default pipeline:
Under Document Service Pipelines, click Edit for the default pipeline.
Move the Image Document Service instance from Available Services to Used in Pipeline.
Click Apply.
To create a new pipeline for the default Image Document Service plug-in:
Under Document Service Pipelines, click Create to display the Create Document Service Pipeline page.
Enter a name and description for the pipeline.
Move the Image Document Service instance from Available Services to Used in Pipeline.
Click Create.
You must either create a source to use the connector or enable the connector for an existing source.
To enable the connector for an existing source:
Click Sources on the Home page.
Click the Edit icon for the desired source.
Click Crawling Parameters.
Select the pipeline that uses the Image Document Service and enable the pipeline for this source.
Click Document Types. From the Not Processed column, select the image types to search and move them to the Processed column. The following image types are supported: JPEG, JPEG 2000, GIF, TIFF, DICOM.
You can search image metadata from either the Oracle SES Basic Search page or the Advanced Search - Attribute Selection page.
For Basic Search, Oracle SES searches all the metadata defined in the configuration file for each supported image document (JPEG, TIFF, GIF, JPEG 2000, and DICOM). It returns the image document if any matching metadata is found.
Advanced Search enables you to search one or more specified attributes. It also supports basic operations for date and number attributes. Oracle SES returns only those image documents that contain the specified metadata.
Note that Oracle SES does not display the Cache link for image search results.
If the Image Document Service Connector fails, then check the following:
Is the pipeline with an Image Document Service connector instance enabled for the source?
Are the image types added to the source?
For a web source, are the correct MIME types included in the HTTP server configuration file?
For example, if you use Oracle Application Server, then check the file ORACLE_HOME/Apache/Apache/conf/mime.types. If the following media types are missing, then add them:
If a connection is established but not all the image files are crawled, then check whether the recrawl policy is set to Process Documents That Have Changed. If so, change it to Process All Documents.
To do this, go to Home - Schedules, and under Crawler Schedules, click Edit for the specific source. This opens the Edit Schedule page. Under Update Crawler Recrawl Policy, select Process All Documents.
Note that you can change the recrawl policy back to Process Documents That Have Changed, after the crawler has finished crawling all the documents in the new source.
See Also:
"Customizing the Appearance of Search Results" for a list of Oracle internal attributes
"Searching on Date Attributes"
Document attribute information is obtained differently depending on the source type.
Urldepth is used internally for calculating relevance ranking, because a URL with a smaller URL depth is typically more important.
The following crawler statistics are shown on the Home - Schedules - Crawler Progress Summary page. You can also configure the crawler log file directory and the language the crawler uses to generate the log file.
Note: On UNIX-based systems, ensure that the directory permission is set to 700 if you change the log file directory. Only the user who installed the Oracle software should have access to this directory.
A new log file is created when you restart the crawler. The location of the crawler log file can be found on the Home - Schedules - Crawler Progress Summary page. The crawler maintains the past seven versions of its log file, but only the most recent log file is shown in the Oracle SES Administration GUI. You can view the other log files in the file system.
The naming convention of the log file name is ids.MMDDhhmm.log, where ids is a system-generated ID that uniquely identifies the source, MM is the month, DD is the date, hh is the launching hour in 24-hour format, and mm is the minutes.
For example, if a schedule for a source identified as i3ds23 starts at 10:00 PM on July 8, then the log file name is i3ds23.07082200.log. Each successive schedule has a unique log file name. After a source has seven log files, the oldest log file is overwritten.
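The naming convention maps directly onto a timestamp format string; reconstructing the example above (the year in the datetime is arbitrary, since the file name carries only month, day, hour, and minute):

```python
from datetime import datetime

def log_file_name(source_id: str, start: datetime) -> str:
    # ids.MMDDhhmm.log: month, day, launch hour (24-hour), minute.
    return f"{source_id}.{start.strftime('%m%d%H%M')}.log"

print(log_file_name("i3ds23", datetime(2012, 7, 8, 22, 0)))
# i3ds23.07082200.log
```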
Each logging message in the log file is one line, containing the following six tab delimited columns, in order:
Timestamp
Message level
Crawler thread name
Component name. This is typically the name of the executing Java class.
Module name. This can be an internal Java class method name.
Message text
The crawler configuration file is ORACLE_HOME/search/data/config/crawler.dat. Most crawler configuration tasks are controlled in the Oracle SES Administration GUI, but certain features (like title fallback, character set detection, and indexing the title of multimedia files) are controlled only by the crawler.dat file.
Note: The crawler.dat file is not backed up with Oracle SES backup and recovery. If you edit this file, be sure to back it up manually.
The Java library used to process zip files (java.util.zip) supports only UTF8 file names for zip entries. The content of non-UTF8 file names is not indexed.
To crawl zip files containing non-UTF8 file names, change the ZIPFILE_PACKAGE parameter in crawler.dat from JDK to APACHE. The Apache library org.apache.tools.zip does not read the zip content in the same order as the JDK library, so the content displayed in the user interface could look different. Zip file titles also may differ, because the first file is used as the fallback title. Also, with the Apache library, the source default character set value is used to read the zip entry file names.
Specify the crawler logging level with the Java system property -Doracle.search.logLevel. The defined levels are DEBUG (2), INFO (4), WARN (6), ERROR (8), and FATAL (10). The default value is 4, which means that messages of level 4 and higher are logged; DEBUG (level 2) messages are not logged by default.
For example, the following "info" message was logged at 23:10:39:330 from the thread crawler_2, with the message Processing. The component and module names are not specified.
23:10:39:330 INFO crawler_2 Processing
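Because the columns are tab-delimited, a log line splits cleanly; in the example above the component and module columns are empty, which shows up as consecutive tabs. A parsing sketch, assuming the six-column layout described above:

```python
def parse_log_line(line: str):
    """Split a crawler log line into its tab-delimited columns."""
    cols = line.rstrip("\n").split("\t")
    names = ["timestamp", "level", "thread", "component", "module", "message"]
    # Pad so that every expected column is present even if trailing
    # columns are missing.
    cols += [""] * (len(names) - len(cols))
    return dict(zip(names, cols))

rec = parse_log_line("23:10:39:330\tINFO\tcrawler_2\t\t\tProcessing")
print(rec["level"], rec["message"])  # INFO Processing
```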
The crawler uses a set of codes to indicate the crawling result of the crawled URL. Besides the standard HTTP status codes, it uses its own codes for non-HTTP related situations.
See Also: Appendix B, "URL Crawler Status Codes"
End User Query Partitioning
Storage areas are used to store the partitions when the partitioning option is enabled. See "Storage Areas" for more information. When the entire index needs to be searched without pruning the conditions, the end user request is broken into multiple parallel sub-queries so that the I/O and CPU resources can be utilized in parallel. After the result sets of the sub-queries are returned by the independent query processors, a merged result set is returned to the end user.
Figure 4-2 shows how the mechanism works during crawl time. The documents are partitioned and stored in different storage areas. Note that the storage areas are created on separate physical disks, so that I/O operations can be performed in parallel to improve the search turnaround time.
Figure 4-2 Document Partitioning at Crawl Time
At query time, the query partition engine generates sub-queries and submits them to the storage areas, as shown in Figure 4-3.
Figure 4-3 Generation of Sub Queries at Query Time
See "Parallel Querying and Index Partitioning" for more information.
Note: In previous releases, the base path of Oracle SES was referred to as ORACLE_HOME. In Oracle SES release 11g, the base path is referred to as ORACLE_BASE. This represents the Software Location that you specify when installing Oracle SES. ORACLE_HOME now refers to the path ORACLE_BASE/seshome.
For more information about ORACLE_BASE, see "Conventions".