std::strstream::freeze
From cppreference.com
If the stream is using a dynamically-allocated array for output, disables (flag == true) or enables (flag == false) automatic allocation/deallocation of the buffer. Effectively calls rdbuf()->freeze(flag).
Notes
After a call to str(), dynamic streams become frozen automatically. A call to freeze(false) is required before exiting the scope in which this strstream object was created; otherwise the destructor will leak memory. Also, additional output to a frozen stream may be truncated once it reaches the end of the allocated buffer.
Parameters

flag - desired status
Return value
(none)
Example
#include <strstream>
#include <iostream>

int main()
{
    std::strstream dyn; // dynamically-allocated output buffer
    dyn << "Test: " << 1.23;
    std::cout << "The output stream contains \"" << dyn.str() << "\"\n";
    dyn << "Test: " << 1.23;
    std::cout << "The output stream contains \"" << dyn.str() << "\"\n";
    // the stream is now frozen due to str()
    dyn << " More text";
    std::cout << "The output stream contains \"" << dyn.str() << "\"\n";
    dyn.freeze(false);
}
Possible output:
The output stream contains "Test: 1.23"
The output stream contains "Test: 1.23 More "
Top React Hooks — Mouse, Keyboard, and States
Hooks contain our logic code in our React app.
We can create our own hooks and use hooks provided by other people.
In this article, we’ll look at some useful React hooks.
react-hanger
The react-hanger library comes with various hooks we can use to do various things.
To install it, we run:
yarn add react-hanger
The usePrevious hook lets us get the previous value from the useState hook. For instance, we can write:
import React from "react";
import { usePrevious } from "react-hanger";

export default function App() {
const [count, setCount] = React.useState(0);
const prevCount = usePrevious(count);
return (
<>
<button onClick={() => setCount(count => count + 1)}>increment</button>
<p>
Now: {count}, before: {prevCount}
</p>
</>
);
}
Then we have the count state with the setCount function returned from useState. We pass the count state into the usePrevious hook to get the previous value of the count state.
React Mighty Mouse
React Mighty Mouse is a library with a hook that lets us track the mouse position.
To install it, we can run:
npm install react-hook-mighty-mouse
Then we can use it by writing:
import React from "react";
import useMightyMouse from "react-hook-mighty-mouse";

export default function App() {
const { position } = useMightyMouse();
return (
<div>
x:{position.client.x} y:{position.client.y}
</div>
);
}
We use the useMightyMouse hook to get an object with the position property. Then we can get the mouse position as we move it, using the client.x and client.y properties for the x and y coordinates respectively.
react-hook-mousetrap
The react-hook-mousetrap library lets us watch for presses of a keyboard key combination.
We install it by running:
npm i react-hook-mousetrap
Then we can use it by writing:
import React from "react";
import useMousetrap from "react-hook-mousetrap";

export default function App() {
useMousetrap(
"ctrl+enter", () => {
console.log("ctrl+enter pressed");
}
);

return <div />;
}
We use the useMousetrap hook with the first argument being the key combination we want to track. The second argument is the callback to run when the key combination is pressed.
React hookedUp
React hookedUp is a library with many hooks to make our lives easier.
To install it, we run:
npm install react-hookedup --save
or:
yarn add react-hookedup
Then we can start using various hooks that it comes with.
It comes with the useArray hook to let us manipulate arrays easily.
For instance, we can write:
import React from "react";
import { useArray } from "react-hookedup";

export default function App() {
const { add, clear, removeIndex, value: currentArray } = useArray([
"cat",
"dog",
"bird"
]);

return (
<>
<button onClick={() => add("moose")}>add</button>
<button onClick={() => clear()}>clear</button>
<button onClick={() => removeIndex(currentArray.length - 1)}>
remove index
</button>
<p>{currentArray.join(", ")}</p>
</>
);
}
We have the useArray hook with the initial array as the value. It returns the add, clear, and removeIndex methods to let us manipulate the array, and value holds the current array. add lets us append an entry, clear clears the array, and removeIndex removes an item by its index.
Conclusion
The react-hanger and React hookedUp libraries come with many hooks to help us manage state.
React Mighty Mouse and react-hook-mousetrap let us watch for mouse position changes and key combo presses respectively.
DMA_Init_TypeDef Struct Reference (EMLIB > DMA)
DMA initialization structure.
Definition at line 303 of file em_dma.h.

#include <em_dma.h>
Field Documentation
Pointer to the control block in memory holding descriptors (channel control data structures). This memory must be properly aligned on a 256-byte boundary, i.e., the 8 least significant bits of the address must be zero.
Refer to the reference manual, DMA chapter for more details.
It is possible to provide a smaller memory block, only covering those channels actually used, if not all available channels are used. For instance, if only using 4 channels (0-3), both primary and alternate structures, then only 16*2*4 = 128 bytes must be provided. However, this implementation has no check if later exceeding such a limit by configuring for instance channel 4, in which case memory overwrite of some other data will occur.
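The 16*2*4 = 128-byte figure above can be checked mechanically. The following is a host-side sketch only — ChannelDescriptor is a 16-byte stand-in for emlib's real descriptor type declared in em_dma.h, and all names here are made up for illustration:

```c
#include <stdint.h>

/* Stand-in for the 16-byte DMA channel descriptor; on the Cortex-M
   target each field is a 32-bit word. */
typedef struct {
    uint32_t srcEnd;  /* source end address      */
    uint32_t dstEnd;  /* destination end address */
    uint32_t ctrl;    /* channel control word    */
    uint32_t user;    /* unused/user data        */
} ChannelDescriptor;

#define NUM_CHANNELS_USED 4

/* Primary + alternate descriptors for channels 0-3:
   16 * 2 * 4 = 128 bytes, aligned on a 256-byte boundary. */
static _Alignas(256) ChannelDescriptor controlBlock[2][NUM_CHANNELS_USED];

/* Returns nonzero when the block meets the alignment/size rule. */
int control_block_is_valid(void)
{
    /* The 8 least significant address bits must be zero. */
    return ((uintptr_t)controlBlock & 0xFFu) == 0
        && sizeof(controlBlock) == 128;
}
```

The same check can be useful as a compile-time or startup assertion in firmware, since a misaligned control block fails silently at run time.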
Definition at line 329 of file em_dma.h.

Referenced by DMA_Init().
HPROT signal state when accessing the primary/alternate descriptors. Normally set to 0 if protection is not an issue. The following bits are available:
- bit 0 - HPROT[1] control for descriptor accesses (i.e., when the DMA controller accesses the channel control block itself), privileged/non-privileged access.
Definition at line 312 of file em_dma.h.

Referenced by DMA_Init().
The documentation for this struct was generated from the following file:
- C:/repos/super_h1/platform/emlib/inc/em_dma.h
- Author:
- dominno
- Posted:
- March 7, 2010
- Language:
- Python
- Version:
- 1.1
- django models fields json
- Score:
- 2 (after 2 ratings)
Model field that stores serialized value of model class instance and returns deserialized model instance. Example usage:
from django.db import models
import SerializedObjectField

class A(models.Model):
    object = SerializedObjectField(serialize_format='json')

class B(models.Model):
    field = models.CharField(max_length=10)

b = B(field='test')
b.save()

a = A()
a.object = b
a.save()

a = A.objects.get(pk=1)
a.object            # <B: B object>
a.object.__dict__   # {'field': 'test', 'id': 1}
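The snippet page shows only usage, not the field itself. Conceptually, such a field serializes the instance's attributes to a string and rebuilds the instance on load. The sketch below illustrates that idea in plain Python — it is not the snippet's actual implementation, and serialize_instance/deserialize_instance are made-up names:

```python
import json

def serialize_instance(obj):
    """Flatten a model-like instance to a JSON string (concept only)."""
    return json.dumps({"model": type(obj).__name__, "fields": vars(obj)})

def deserialize_instance(data, registry):
    """Rebuild an instance from the JSON produced above."""
    payload = json.loads(data)
    cls = registry[payload["model"]]
    obj = cls.__new__(cls)  # bypass __init__, as ORM loaders typically do
    obj.__dict__.update(payload["fields"])
    return obj

class B:
    def __init__(self, field):
        self.field = field
```

A real Django field would additionally hook this into get_prep_value()/to_python() so the database column stores the JSON text transparently.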
"...one of the most highly regarded and expertly designed C++ library projects in the world." — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
Phoenix now has a lazy list implementation which is very similar but not identical to the implementation provided by FC++. This provides a set of objects defined by list<type>; for example, the following defines an empty list of type int.
list<int> example;
A list can contain zero or more elements of the same type. It can also be declared using a function returning values of the correct type. Such lists are only evaluated on demand. A set of functions are defined which enable many ways of manipulating and using lists. Examples are provided for the features available.
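To illustrate what "only evaluated on demand" means — this is a free-standing sketch, not Phoenix's actual list API — a lazy list can be modeled as a head plus a deferred tail that is only computed when someone walks past the head:

```cpp
#include <functional>
#include <memory>
#include <vector>

// A lazily evaluated list of ints: the tail is a thunk that is
// only run when an element beyond the head is demanded.
struct LazyList {
    int head;
    std::function<std::shared_ptr<LazyList>()> tail; // deferred computation
};

using ListPtr = std::shared_ptr<LazyList>;

// Conceptually infinite ascending list starting at n --
// never fully evaluated.
ListPtr enum_from(int n) {
    return std::make_shared<LazyList>(
        LazyList{n, [n] { return enum_from(n + 1); }});
}

// Force only the first k elements.
std::vector<int> take(ListPtr l, int k) {
    std::vector<int> out;
    for (int i = 0; i < k && l; ++i) {
        out.push_back(l->head);
        l = l->tail(); // evaluation happens here, on demand
    }
    return out;
}
```

Phoenix's actual lazy list integrates this idea with its actor/function machinery rather than plain std::function, but the evaluation-on-demand behavior is the same in spirit.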
Exceptions are provided to deal with certain cases and these can be turned off if desired. There is a check on the maximum list length which has a default of 1000 which can be changed by the user.
This is an extension to Boost Phoenix which does not change the public interface except to define new features in the namespace boost::phoenix.

It has to be explicitly included using the header boost/phoenix/function/lazy_prelude.hpp.
Boost Phoenix provides many features of functional programming. One of the things which has been missing until now is a lazy list implementation. One is available in the library FC++, which, although not part of Boost, has many similarities. It has been possible to reimplement the strategy of the FC++ list implementation using the facilities in Phoenix. This provides something which has up until now not been available anywhere in Phoenix, and probably not anywhere else in Boost. This new implementation is very well integrated with other features in Phoenix, as it uses the same mechanism, which in turn is well integrated with Boost Function.
There is a great deal of material in FC++ and it is not proposed to replicate all of it. A great deal has changed since FC++ was written, and many things are already available in Phoenix or elsewhere. The emphasis here is to add to Phoenix in a way which will make it easier to implement functional programming.
Progress is being made in implementing both the basic list<T> and the functions needed to manipulate lists. | https://www.boost.org/doc/libs/1_68_0/libs/phoenix/doc/html/phoenix/lazy_list.html | CC-MAIN-2021-43 | refinedweb | 377 | 62.88 |
CREATE_MODULE(2) Linux Programmer's Manual CREATE_MODULE(2)
NAME
       create_module - create a loadable module entry
SYNOPSIS
       #include <linux/module.h>

       caddr_t create_module(const char *name, size_t size);

       Note: No declaration of this system call is provided in glibc headers; see NOTES.
DESCRIPTION
       Note: This system call is present only in kernels before Linux 2.6.

       create_module() attempts to create a loadable module entry and reserve the kernel memory that will be needed to hold the module. This system call requires privilege.
RETURN VALUE
       On success, returns the kernel address at which the module will reside. On error, -1 is returned and errno is set to indicate the error.
VERSIONS
       This system call is present on Linux only up until kernel 2.4; it was removed in Linux 2.6.
CONFORMING TO
       create_module() is Linux-specific.
SEE ALSO
       delete_module(2), init_module(2), query_module(2)
COLOPHON
       This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Linux                          2021-03-22                CREATE_MODULE(2)
Pages that refer to this page: delete_module(2), get_kernel_syms(2), init_module(2), query_module(2), syscalls(2), unimplemented(2), systemd.exec(5) | https://man7.org/linux/man-pages/man2/create_module.2.html | CC-MAIN-2021-43 | refinedweb | 194 | 50.84 |
Testing REST APIs With REST-assured
In this article, I will focus on REST-assured, a tool from Jayway for REST API testing. It provides a Java DSL for executing HTTP requests and making assertions on responses.
REST APIs are HTTP-based web services that adhere to REST architectural constraints. If you look at systems that talk to each other over the web nowadays, it is highly probable that you will find REST APIs being used. In this article, I will focus on REST-assured, a tool from Jayway for REST API testing. It provides a Java DSL for executing HTTP requests and making assertions on responses. If you are planning to automate your testing of REST APIs and your language of choice is Java, using REST-assured will make writing the tests easy, and the tests will be very readable and maintainable.
Why REST-assured?
Let’s look at what is expected from a tool that helps in writing automated tests for REST APIs and how REST-assured lives up to these expectations.
Easy HTTP Request Building and Execution
Requesting building comprises of defining many things like query params, cookies, headers, path params, and request body. Using its DSL, REST-assured hides away the complexity of building and executing an HTTP request behind a fluent interface. We will see this in action in the next section.
Response Assertions
REST-assured also handles, within its library, the parsing of responses. It provides many constructs for making assertions on cookies, response headers, and response body. For doing assertions on response body it provides JsonPath for assertions on JSON responses and XmlPath for assertions on XML responses. It also also uses Java Hamcrest Matchers to make readable assertions.
Ability to Write Clean Code
The real value of automated tests is realized when they are easy to understand and it takes minimal effort to maintain them. In the dictionary of REST-assured’s DSL, you can also find constructs for writing in Given-When-Then style. Using this style helps in specifying pre-conditions under the Given section, behavior under test under the When section, and verifications under the Then section. This helps in maintaining a clear separation of concerns within a test, thus leading to a very readable test.
REST API Testing in Action
Let’s assume that there is a REST API with the following documentation:
PATHS GET /people/{id} PARAMETERS Name Description Required Schema ---- ----------- -------- ------ id Id of the person Yes number RESPONSES Code Description Schema ---- ----------- ------ 200 Person {id: number, name: string}
An automated test for the REST API written using REST-assured will look like below. After reading the test I think that you will agree with me that the test is easily readable and self-explanatory. Also, note that the code is written in a declarative style, which means that I had to just specify the operations, and the library takes care of the mechanics of how the request is made, how the response is handled, and how assertions are made.
... // some imports hidden for brevity
import static com.jayway.restassured.RestAssured.given;
import static org.hamcrest.Matchers.is;

public class PersonApiTest {

    @Test
    public void shouldReturnPersonForTheId() {
        given().
            accept(ContentType.JSON).
            pathParam("id", 1).
        when().
            get("/people/{id}").
        then().
            statusCode(200).
            body(
                "id", is(1),
                "name", is("Praveer")
            );
    }
}
Though what I have showcased above is a very basic test for a REST API, the library with its DSL can very easily handle the rising complexity of the tests. The reason for this is the availability of various constructs in the library for almost all of the cases that you may come across while testing a REST API. Check out the REST-Assured Guide for a detailed description on all available constructs.
The code sample for the above example can be found here on Github.
Summary
If you are planning to automate your testing of REST API and your choice of language is Java you should definitely try out REST-assured.
Published at DZone with permission of Praveer Gupta . See the original article here.
Opinions expressed by DZone contributors are their own.
This is the mail archive of the binutils@sources.redhat.com mailing list for the binutils project.
rkufrin@ncsa.uiuc.edu (Rick Kufrin) writes:

> #if HAVE_BFD_H && defined(BFD_VERSION)
> # if defined(BFD_VERSION_STRING)
>     printf("%s\n", BFD_VERSION_STRING);
> # else
>     printf("%s\n", BFD_VERSION);
> # endif
> #else
>     printf("n/a\n");
> #endif

According to the changelogs, BFD_VERSION_STRING was introduced only a
couple of years ago--so I would not be surprised that it's not defined
on some systems.

BFD_VERSION, however, has been around since at least:

Thu Mar 17 18:26:46 1994  Ken Raeburn  (raeburn@cujo.cygnus.com)

        * bfd-in.h (BFD_VERSION): Use @VERSION@.

If you're finding versions of BFD that are over a decade old, you
probably shouldn't be using them!

Ben
My application handles graceful closing of the socket, but it does not recognize when the Ethernet cable has been disconnected.
In my packet-handling loop, I check the result of all netconn commands used, which include:
netconn_recv
looping over netbuf_data and netbuf_next to assemble a packet
netbuf_delete (no error check here, returns void)
netconn_write
When I remove the ethenet cable, the loop never exits like I would expect. Outside the loop is where I call netconn_close and netconn_delete. Because the loop doesn't exit, I cannot reconnect to my device after plugging the cable back in.
I can't get any MQX task data when I pause execution, but at one time I did see that the lwIP tcpip_task was blocked on a semaphore.
Can anyone suggest ways that I should be handling cable disconnections in lwIP?
Hi all,
I'm using SDK version 2.6 for a K66 processor on a proprietary hardware, and the latest release of the mcuxpresso. We use freeRTOS, lwip and web sockets. Communication works in principle, but with the mentioned hurdles regarding the connect and disconnect of the ethernet cable.
It seems there is no applicable solution for that item, at least for the moment. I have investigated the item in several steps, and have also changed the library code so that it gains multitasking capabilities.
It was mentioned by the NXP support earlier that in SDK Version 2.6 a cable connect / disconnect would be properly handled, but in fact that is not the case in 2.6 lwip. The only thing that behaves better than before is the initial connect status of the network, i.e. if no cable is connected initially and you plug in the ethernet, the system does this connect properly, and as long as there is no task with lower priority than the "communication task" that initiates the ethernet handling, that is OK. That means the system is blocked in parts in the "communication task" in several loops as long as we don't have a connection available. That is not favourable in a multitasking system.
After all the changes and additions I made I am able to connect and disconnect, but sometimes the cleanup of the lwip system fails in that sense, that the slowtimer hangs in an assert after disconnecting (the pcb that is is still in the list of pcbs for the timer has state "CLOSED" whereas a "TIME_WAIT" is required). The question is now, how (and also where in the lwip library) to handle the disconnect and with it the cleanup properly. Is it possible to modify the slowtimer such that it accepts a CLOSED-state pcb?
Thank you for your assistance.
Hi Ben. thanks for your quick answer.
I'm already in the state to "see" the connect and disconnect, comparably to your processing.
But for me, there is no way to do a system reset on the run, neither on disconnect nor on connect.
So, the only thing that helps is to gracefully reset and delete all data structures lwip built up during the recent connect.
And to re-construct them on next cable connect. I'm pretty far in that item, but just the question of the slowtimer is a remaining problem.
Greetings Harald.
When this function is called every 2 seconds:
PHY_DRV_Read(0, enetIfPtr->phyAddr, kEnetPhySR, &phyStatus);
if ( (phyStatus & 0x04) == 0){
printf("Ethernet cable removed.\r\n");
}
It works just fine.
When the cable gets plugged back into the connector is another issue. I didn’t keep track of all the various states it could be in, so I just do a reset.
This is an older version of lwIP though, I haven’t upgraded.
Hi Ben McCormick,
I think that NXP team should give some suggestion/or solution for this bug resolution.
In my case I can not do a reset of microcontroller.
Thanks
Hi dave408, did you succeed???
thinhnguyen It has been a long time since I worked on that project, and honestly, I cannot remember if I personally solved that problem. It's possible that one of my colleagues that took over did figure out how to handle disconnections. I'll ask him.
Hi Dave did you ask him?
I have tried to use PHY_DRV_GetLinkStatus but linkstatus always return false, so it did not work for me.
bool PHY_Get_Initialized_LinkStatus()
{
    // return true;
    if (!g_initialized)
        return false;
    bool linkstatus = false;
    int timeout = 10;
    uint32_t result;
    int count = 0;
    while ((count < timeout) && (!linkstatus))
    {
        result = PHY_DRV_GetLinkStatus(g_devNumber, g_enetIfPtr->phyAddr, &linkstatus);
        // if (result == kStatus_ENET_Success) {
        //     PRINTF("result == kStatus_ENET_Success, linkStatus = %d\r\n", linkstatus);
        //     return (linkstatus);
        // } else {
        //     PRINTF("result == kStatus_ENET_Failed\r\n");
        //     return false;
        // }
        count++;
    }
    if (count == timeout)
    {
        return false;
    }
    else
    {
        return true;
    }
}
Hi everyone,
I have the same problem but not the solution that work right.
Any suggestion from NXP team ?
Thanks so much
Hi Dave,
You are on the right track.
I did a test using the lwip_ping_bm_frdmk64f example in KSDK_v2 using KDS_3.2.
The packet sent in this example finally ends up in ethernetif.c low_level_output() function. I made the following edits using #if 1's:
/* Send a multicast frame when the PHY is link up. */
if (kStatus_Success == PHY_GetLinkStatus(ENET, phyAddr, &link))
{
if (link)
{
#if 1 //DES 1=test, 0=default code
netif_set_link_up(&fsl_netif0);
#endif
if (kStatus_Success == ENET_SendFrame(ENET, &g_handle, pucBuffer, packetBuffer->tot_len - ETH_PAD_SIZE))
{
return ERR_OK;
}
}
#if 1 //DES 1=test, 0=default code
netif_set_link_down(&fsl_netif0);
#endif
}
My callback blinks the Blue LED fast when connected, and slow when disconnected.
#if 1 //DES 1=test, 0=default code
void delay(uint32_t loop_cnt, uint32_t blinks)
{
uint32_t i,j;
for(j=0;j<blinks*2;j++) {
LED_BLUE_TOGGLE(); //DES blink
for(i=0;i<loop_cnt;i++) //DES delay
{
__asm("nop");
}
}
}
void my_link_callback(void)
{
if(netif_is_link_up(&fsl_netif0)) { //DES link up blink fast
delay(1000000U, 8U);
}
else { //DES link down blink slow
delay(4000000U, 8U);
}
}
#endif
My main() had following:
netif_set_default(&fsl_netif0);
netif_set_up(&fsl_netif0);
#if 1 //DES 1=test, 0=default code
netif_is_link_up(&fsl_netif0);
netif_set_link_callback(&fsl_netif0, my_link_callback); //DES called when link transitions
#endif
LWIP_PLATFORM_DIAG(("\r\n************************************************"));
And I added PCR initialization to pin_mux.c BOARD_InitPins():
CLOCK_EnableClock(kCLOCK_PortB);
/* Affects PORTB_PCR16 register */
PORT_SetPinMux(PORTB, 16u, kPORT_MuxAlt3);
/* Affects PORTB_PCR17 register */
PORT_SetPinMux(PORTB, 17u, kPORT_MuxAlt3);
#if 1 //DES 1=test, 0=default code
/* Led pin mux Configuration */
PORT_SetPinMux(PORTB, 21U, kPORT_MuxAsGpio); //DES Blue LED on PTB21
#endif
Regards,
David
Hi everyone,
I have the same problem but not the solution that work right.
Any suggestion from NXP team ?
Thanks so much
Actually, even if I am able to detect the problem, I'm not sure yet what to do with it. The main issue is really that the lwip tcpip_thread remains blocked if I pull the Ethernet cable. This prevents me from reconnecting to my device.
In tcpip_thread, this looks like the only place it could be held up:
Here's what I have found -- when the cable is disconnected, sys_arch_mbox_fetch returns SYS_ARCH_TIMEOUT, which calls a handler and then jumps to a label called "again". This is what is causing the tcpip_thread to look like it's blocked on a semaphore in the TAD view -- because the cable is disconnected, there isn't any data to get from the mbox, and the code just loops back and tries again.
I have a massive hack that seems to get things going in the right direction. I replaced the "goto again" in sys_timeouts_mbox_fetch to this:
if( attempts++ < 10)
goto again;
else
return;
What this does is allow the function to return after the timeout functions (tcpip, arp, dhcp, etc) timeout a certain number of times. It's clunky, but seems to work. The return statement will result in the caller detecting an invalid message, which is logged and ignored, which I think will work for me. One issue with this solution that bothers me is the 10 attempts. I selected that because there are cases where we do need retry pulling data from the mbox even when the cable is connected. So 10 is basically just the threshold where I seem to have reliable network communications, but can also recognize a disconnected cable and recover in a timely manner when the cable is eventually reconnected.
However, there must be a better way to deal with disconnections than this. I'll keep working on a better solution, but if you have any ideas, please let me know! Thanks, DavidS
Hi Dave,
In my application, if the cable is pulled, then the Ethernet apps have nothing to talk to, so no packets are sent or received. When the cable is re-inserted, I do a System reset.
Startup:
cableStatus=0;
enet_main();
PHY_DRV_Read(0, enetIfPtr->phyAddr, kEnetPhySR, &phyStatus);
if( (phyStatus & 0x04) == 0x04)cableStatus=1; //If link up, then cable is attached.
while(1){
PHY_DRV_Read(0, enetIfPtr->phyAddr, kEnetPhySR, &phyStatus);
if ( ((phyStatus & 0x04) == 0) && (cableStatus==1) ){
printf("Ethernet cable removed.\n");
cableStatus=0;
}
if ( ((phyStatus & 0x04) == 0x04) && (cableStatus==0) ){
//Here when Ethernet cable inserted
NVIC_SystemReset();
for(;;);
}
}
benmccormick I started to look into your solution that uses PHY_DRV_Read(). What I am seeing is that my PHY status is always 0x7849, whether my ethernet cable is connected or not. However, I think that got me to dig some more and I ended up in low_level_init() in ethernetif.c -- in there, the library uses PHY_DRV_GetLinkStatus to determine whether or not there is a link with the client, which I think I'll be able to use now. I'll keep updating this post with my progress.
Thanks, benmccormick! I'll give those functions a try to see how I can make it work in my application. Resetting the firmware isn't an option, but I should be able to figure something out. My original post that has the hack in it to prevent the tcpip_thread from tight looping is flawed, so I needed something else. I'll share my approach with everyone once I get it working correctly.
I think this might be a potential start of a solution:
In ethernetif.c, low_level_init():
result = PHY_DRV_GetLinkStatus(devNumber,enetIfPtr->phyAddr,&linkstatus);
if(result == kStatus_ENET_Success)
{
if(linkstatus == true)
But I wonder if it's safe for me to call the PHY_* functions from a MQX task?
Bummer... looks like this solution won't work with KSDK 1.2. I cannot move to KSDK 2.0 yet. If you have any suggestions that might work in a similar manner for KSDK 1.2, please let me know! I'll start digging around for clues.
Thank you for your help! I will give this a try today and will let everyone know how it goes.
Ok, so it looks like this is the missing link:
/**
* Called by a driver when its link goes down
*/
void netif_set_link_down(struct netif *netif )
{
if (netif->flags & NETIF_FLAG_LINK_UP) {
netif->flags &= ~NETIF_FLAG_LINK_UP;
NETIF_LINK_CALLBACK(netif);
}
}
It looks like I have to call netif_set_link_callback and pass it a callback function to call when the cable is disconnected. However, my next question is, if tcpip_task is blocked on a semaphore and I can't figure out what semaphore that is, how can I write a callback function that will allow my packet handling loop to exit gracefully and then accept a new connection?
EDIT -- I added the callback and passed it to netif_set_link_callback. I also enabled the callback via LWIP_NETIF_LINK_CALLBACK. Unfortunately, when I removed the cable, my callback function didn't get called. Am I missing something else here?
Hi dave408,
How did you resolve this issue? I have the same issue but not the solution.
Thanks for your help.
Regards | https://community.nxp.com/t5/Kinetis-Software-Development-Kit/How-do-you-detect-and-handle-a-lwIP-disconnection/m-p/509491/highlight/true | CC-MAIN-2021-17 | refinedweb | 1,908 | 61.67 |
By Doug Tidwell
Price: $39.95 USD
£28.50 GBP
Download the latest stable build of the code. (If you're feeling brave, feel free to download last night's build instead.)
To run Xalan, you'll need to add three .jar files to your CLASSPATH. The three files include the .jar file for the Xerces parser, the .jar file for the Xalan stylesheet engine itself, and the .jar file for the Bean Scripting Framework. As of this writing, the .jar files are named xerces.jar, xalan.jar, and bsf.jar.
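On a Unix-like system, that CLASSPATH setup might look like the following (the /opt/xalan directory is only an assumed install location — point the entries at wherever you unpacked the distribution):

```shell
# Example paths -- adjust XALAN_HOME to your Xalan installation directory
XALAN_HOME=/opt/xalan
export CLASSPATH=.:$XALAN_HOME/xerces.jar:$XALAN_HOME/xalan.jar:$XALAN_HOME/bsf.jar
echo "$CLASSPATH"
```

On Windows, use semicolons instead of colons as the path separator.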
java org.apache.xalan.xslt.Process

xslproc options:
    -IN inputXMLURL
    [-XSL XSLTransformationURL]
    [-OUT outputURL]
    [-LXCIN compiledStylesheetFileNameIn]
    [-LXCOUT compiledStylesheetFileNameOut]
java org.apache.xalan.xslt.Process -in greeting.xml -xsl greeting.xsl -out greeting.html
<html>
  <body>
    <h1>
      Hello, World!
    </h1>
  </body>
</html>
Our stylesheet contains an <xsl:output> element that specifies HTML as the output format and two <xsl:template> elements that specify how parts of our XML document should be transformed.
<xsl:template>elements that specify how parts of our XML document should be transformed.
<xsl:stylesheet>element is typically the root element of an XSLT stylesheet.
<xsl:stylesheet xmlns:
<xsl:stylesheet>element defines the version of XSLT we're using, along with a definition of the
xslnamespace. To be compliant with the XSLT specification, your stylesheet should always begin with this element, coded exactly as shown here. Some stylesheet processors, notably Xalan, issue a warning message if your
<xsl:stylesheet>element doesn't have these two attributes with these two values. For all examples in this book, we'll start the stylesheet with this exact element, defining other namespaces as needed.
The XSLT specification defines three output methods: xml, html, and text. We're creating an HTML document, so HTML is the output method we want to use. In addition to these three methods, an XSLT processor is free to define its own output methods, so check your XSLT processor's documentation to see if you have any other options.
If you're using <xsl:output method="xml">, you can use doctype-public and doctype-system to define the public and system identifiers to be used in the document type declaration. If you're using method="xml" or method="html", you can use the indent attribute to control whether or not the output document is indented. The discussion of the <xsl:output> element in Appendix A has all the details.
Our first template matches "/", the XPath expression for the document's root element.
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:output method="xml"/>
  <xsl:template match="/">
    <svg width="8cm" height="4cm">
      <g>
        <defs>
          <radialGradient id="MyGradient" cx="4cm" cy="2cm" r="3cm" fx="4cm" fy="2cm">
            <stop offset="0%" style="stop-color:red"/>
            <stop offset="50%" style="stop-color:blue"/>
            <stop offset="100%" style="stop-color:red"/>
          </radialGradient>
        </defs>
        <rect style="fill:url(#MyGradient); stroke:black" x="1cm" y="1cm" width="6cm" height="2cm"/>
        <text x="4cm" y="2.2cm" text-anchor="middle">
          <xsl:apply-templates select="greeting"/>
        </text>
      </g>
    </svg>
  </xsl:template>
  <xsl:template match="greeting">
    <xsl:value-of select="."/>
  </xsl:template>
</xsl:stylesheet>
The second template handles <greeting> elements similarly. We've gone over the basics of what stylesheets are and how they work.
For example, we can use XPath to address a given <para> element, the quantity attribute of the <part-number> element, all <first-name> elements that contain the text "Joe", and many other variations. An XSLT stylesheet uses XPath expressions in the match and select attributes of various elements to indicate how a document should be transformed. In this chapter, we'll discuss XPath in all its glory.
$x*6) and Unix-like path expressions (such as
/sonnet/author/last-name). In addition to the basic syntax, XPath provides a set of useful functions that allow you to find out various things about the document.
<?xml version="1.0"?> <?xml-stylesheet <!ELEMENT auth:author (last-name,first-name,nationality, year-of-birth?,year-of-death?)> <!ELEMENT last-name (#PCDATA)> <!ELEMENT first-name (#PCDATA)> <!ELEMENT nationality (#PCDATA)> <!ELEMENT year-of-birth (#PCDATA)> <!ELEMENT year-of-death (#PCDATA)> <!ELEMENT title (#PCDATA)> <!ELEMENT lines (line,line,line,line, line,line,line,line, line,line,line,line, line,line)> <!ELEMENT line (#PCDATA)> ]> <!-- Default sonnet type is Shakespearean, the other allowable --> <!-- type is "Petrarchan." --> <sonnet type="Shakespearean"> <auth:author xmlns: <last-name>Shakespeare</last-name> <first-name>William</first-name> <nationality>British</nationality> <year-of-birth>1564</year-of-birth> <year-of-death>1616</year-of-death> </auth:author> <!-- Is there an official title for this sonnet? They're sometimes named after the first line. --> <title>Sonnet 130</title> <lines> <line>My mistress' eyes are nothing like the sun,</line> <line>Coral is far more red than her lips red.</line> <line>If snow be white, why then her breasts are dun,</line> <line>If hairs be wires, black wires grow on her head.</line> <line>I have seen roses damasked, red and white,</line> <line>But no such roses see I in her cheeks.</line> <line>And in some perfumes is there more delight</line> <line>Than in the breath that from my mistress reeks.</line> <line>I love to hear her speak, yet well I know</line> <line>That music hath a far more pleasing sound.</line> <line>I grant I never saw a goddess go,</line> <line>My mistress when she walks, treads on the ground.</line> <line>And yet, by Heaven, I think my love as rare</line> <line>As any she belied with false compare.</line> </lines> </sonnet> <!-- The title of Sting's 1987 album "Nothing like the sun" is --> <!-- from line 1 of this sonnet. -->
matchand
selectattributes of various XSLT elements. Those location paths described the parts of the XML document we wanted to work with. Most of the XPath expressions you'll use are location paths, and most of them are pretty simple. Before we dive in to the wonders of XPath, we need to discuss the context.
sonnetis a directory at the root level of the filesystem. The
sonnetdirectory would, in turn, contain directories named
auth:author,
title, and
lines. In this example, the context would be the current directory. If I go to a command line and execute a particular command (such as
dir *.js), the results I get vary depending on the current directory. Similarly, the results of evaluating an XPath expression will probably vary based on the context.
<li>elements in a given document. The context size refers to the number of
<li>items selected by that expression, and the context position refers to the position of the
<table>element like this:
<table border="{@size}"/>
@sizeis evaluated, and its value, whatever that happens to be, is inserted into the output tree as the value of the
borderattribute. Attribute value templates can be used in any literal result elements in your stylesheet (for HTML elements and other things that aren't part of the XSLT namespace, for example). You can also use attribute value templates in the following XSLT attributes:
nameand
namespaceattributes of the
<xsl:attribute>element
nameand
namespaceattributes of the
<xsl:element>element
format,
lang,
letter-value,
grouping-separator, and
grouping-sizeattributes of the
<xsl:number>element
nameattribute of the
<xsl:processing-instruction>element
lang,
data-type,
order, and
case-orderattributes of the
<xsl:sort>element
node-set
boolean
trueor
false. Be aware that the
trueor
falsestrings have no special meaning or value in XPath; see Section 4.2.1.2 in Chapter 4 for a more detailed discussion of boolean values.
number
integer(or
int) datatype does not exist in XPath and XSLT. Specifically, all numbers are implemented as IEEE 754 floating-point numbers, the same standard used by the Java
floatand
doubleprimitive types. In addition to ordinary numbers, there are five special values for numbers: positive and negative infinity, positive and negative zero, and
NaN, the special symbol for anything that is not a number.
string
namespacenodes.
<sonnet>element. The
<sonnet>element, in turn, contains two attributes and an
<auth:author>element. The
<auth:author>element contains a namespace node and an element. Be aware that this stylesheet has its limitations; if you throw a very large XML document at it, it will generate an HTML file with many levels of nested tables—probably more levels than your browser can handle.
<xsl:template <html> <head> <title>XPath view of your document</title> <style type="text/css"> <xsl:comment> span.literal { font-family: Courier, monospace; } </xsl:comment> </style> </head> <body> <h1>XPath view of your document</h1> <p>The structure of your document (as defined by the XPath standard) is outlined below.</p> <table cellspacing="5" cellpadding="2" border="0"> <tr> <td colspan="7"> <b>Node types:</b> </td> </tr> <tr> <td bgcolor="#99CCCC"><b>root</b></td> <td bgcolor="#CCCC99"><b>element</b></td> <td bgcolor="#FFFF99"><b>attribute</b></td> <td bgcolor="#FFCC99"><b>text</b></td> <td bgcolor="#CCCCFF"><b>comment</b></td> <td bgcolor="#99FF99"><b>processing instruction</b></td> <td bgcolor="#CC99CC"><b>namespace</b></td> </tr> </table> <br />
testevaluates to
false, then the contents of the
<xsl:if>element are ignored. (If you want to implement an if-then-else statement, check out the
<xsl:choose>element described in the next section.)
>instead of
>in the attribute value. You're always safe using
>here, although some XSLT processors process the greater-than sign correctly if you use
>instead. If you need to use the less-than operator (
<), you'll have to use the
<entity. The same holds true for the less-than-or-equal operator (
<=) and the greater-than-or-equal (
>=) operators. See Section B.4.2 for more information on this topic.
<xsl:if>element is pretty simple, but it's the first time we've had to deal with boolean values. These values will come up later, so we might as well discuss them here. Attributes like the
testattribute of the
<xsl:if>element convert whatever their values happen to be into a boolean value. If that boolean value is
<xsl:apply-templates>element to invoke other templates. You can think of this as a limited form of polymorphism; a single instruction is invoked a number of times, and the XSLT processor uses each node in the node-set to determine which
<xsl:template>to invoke. Most of the time, this is what we want. However, sometimes we want to invoke a particular template. XSLT allows us to do this with the
<xsl:call-template>element.
name.
<xsl:call-template>element to invoke the named template.
<xsl:template <!-- interesting stuff that generates the masthead goes here --> </xsl:template> ... <xsl:template <html> <head> <title><xsl:value-of</title> </head> <body> <xsl:call-template ...
<xsl:call-template>to invoke those templates and create the look and feel you want.
<xsl:import>or
<xsl:include>), you can create a set of stylesheets that generate the look and feel of the web site you want. If you decide to redesign your web site, redesign the stylesheets that define the common graphical and layout elements. Change those stylesheets, regenerate your web site, and voila! You will see an instantly updated web site. (See Chapter 9 for an example.)
<xsl:param>and
<xsl:with-param>elements allow you to pass parameters to a template. You can pass templates with either the
<call-template>element or the
<apply-templates>element; we'll discuss the details in this section.
<xsl:param>element. Here's an example of a template that defines two parameters:
<xsl:template <xsl:param <xsl:param <xsl:value-of </xsl:template>
widthand
height, and outputs their product.
selectattribute on the
<xsl:param>element:
<template name="addTableCell"> <xsl:param <xsl:param <xsl:param <td width="{$width}" bgcolor="{$bgColor}"> <xsl:apply-templates </td> </template>
bgColorand
widthare
'blue'and
150, respectively. If we invoke this template without specifying values for these parameters, the default values are used. Also notice that we generated the values of the
widthand
bgcolorattributes of the HTML
<td>tag with attribute value templates, the values in curly braces. For more information, see Section 3.3 in Chapter 3.
blue, but we didn't do it around the value
150. Without the single quotes around
blue, the XSLT processor assumes we want to select all the
<blue>elements in the current context, which is probably not what we want. The XSLT processor is clever enough to realize that the value
<xsl:variable>element, which allows you to store a value and associate it with a name.
<xsl:variable>element can be used in three ways. The simplest form of the element creates a new variable whose value is an empty string (
""). Here's how it looks:
<xsl:variable
x, whose value is an empty string. (Please hold your applause until the end of the section.)
selectattribute to the
<xsl:variable>element:
<xsl:variable
blueis used as the value of the variable. If we had left out the single quotes, this would mean the value of the variable is that of all the
<blue>elements in the current context, which definitely isn't what we want here.
35, Xalan, XT, and Saxon all assume that I mean
35as a literal value, not as an element name. Although this works with many XSLT processors, you're safer to put the single quotes around the numeric values anyway. A further aside: the value here is the string "35", although it can be converted to a number easily.
<xsl:variable>element is to put content inside it. Here's a brief example:
<xsl:variable <xsl:choose> <xsl:when <xsl:text>13</xsl:text> </xsl:when> <xsl:otherwise> <xsl:text>15</xsl:text> </xsl:otherwise> </xsl:choose> </xsl:variable>
(^)in front of all ampersands
(&)
mkdir xslt & chdir xslt | http://www.oreilly.com/catalog/9780596000530/toc.html | crawl-001 | refinedweb | 2,229 | 57.47 |
5 Points Developers Need to Know about WinRT
Most of us have already started developing for WinRT, so it makes sense to have a better understanding of the runtime:
1) WinRT is a new collection of COM objects, native to Windows
Some quick points about WinRT:
- WinRT is native, and all WinRT objects are unmanaged and use COM as the base.
- WinRT objects implement IUnknown and ref counting.
- WinRT wraps a new XAML UI system along with Win32 APIs; it consumes tons of Win32 APIs (do a dumpbin on a WinRT library). The new set of XAML libraries is purely native; don't confuse them with Silverlight or WPF even though the terminology remains the same.
- As WinRT is COM, and hence closer to the operating system, it is easier to write language bindings (projections) for WinRT.
- A projection is a way of expressing WinRT in each language. More about this below.
- Just in case you haven’t perceived it yet, the WinRT world is totally different from the managed .NET world.
- WinRT libraries are built from the ground up using these new sets of WinRT objects, and are kept in the Windows.* namespaces, e.g. Windows.UI, Windows.Media, Windows.Networking, Windows.Security, Windows.Devices, etc.
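To picture the IUnknown ref-counting point above, here is a deliberately simplified stand-in in plain C++. This is hypothetical illustration only: the real IUnknown also declares QueryInterface, WinRT adds IInspectable on top, and the class name here is invented.

```cpp
#include <cstdint>

// Simplified stand-in for COM-style ref counting (not the real IUnknown).
// A WinRT object starts life with one reference held by its creator;
// AddRef/Release move the count, and the object deletes itself at zero.
class RefCounted {
public:
    uint32_t AddRef() { return ++refs_; }

    uint32_t Release() {
        uint32_t left = --refs_;
        if (left == 0) delete this;   // last reference gone: self-destruct
        return left;
    }

protected:
    virtual ~RefCounted() = default;  // destroyed only via Release()

private:
    uint32_t refs_ = 1;               // creator holds the first reference
};
```

In practice smart pointers (such as WRL's ComPtr, or C++/CX hat pointers) exist precisely so you never call AddRef/Release by hand.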
2) WinRT is not exactly your mama’s COM
You don’t need to work with the
crappy
old COM style, even while developing applications in C++. WinRT provides a higher level of abstraction, based on COM. Win RT implements multiple features on top of COM, including
- A Subscriber/Publisher model implemented using .NET inspired Delegates and events. In ‘old’ COM, this was done using events/sinks
- Parameterized interfaces or PInterfaces (somewhat equivalent to generics), which can be projected if the language supports them.
- WinRT components don’t implement IDispatch
3) WinRT can be accessed from multiple languages/platforms
WinRT itself is language-neutral. Also, WinRT has a language-neutral type system.
- In some languages, you may even consume some WinRT types ‘as is.’
- In some other languages (like C#), WinRT types may be mapped to equivalent language types. For example, in C#, WinRT’s IIterable<T> is mapped to IEnumerable<T> – where CLR will take care of the mapping.
- Full list of WinRT <-> CLR mapped types table is here.
In short, you can 'project' WinRT into multiple languages. The language runtime will take care of the garbage collection implementation. All WinRT components implement the IInspectable interface for projecting themselves to other language environments.
4) Though WinRT is unmanaged, it has metadata
You might be thinking: if WinRT is so unmanaged, how can we call into WinRT from other environments, especially from the managed world, without old techniques like P/Invoke? In fact, Windows Runtime libraries are exposed using API metadata stored in .winmd files. You can find the .winmd metadata files corresponding to the WinRT libraries.
- The format used for exposing metadata is the same as what is used by the .NET framework, the Ecma-335 spec. (Secret: WinMD files follow the same format as CLR assemblies, though they don't have any IL. Winmd files are just the definitions of the API; the implementation, as discussed, may be managed.)
- The underlying binary contract makes it easy for you to access the Windows Runtime APIs directly in the development language you choose.
As the metadata format is similar to the .NET format, you can open a .winmd file in ILDASM and explore it.
5) WinRT API has language projections
As mentioned, a projection is a way of expressing WinRT in a specific language. You may also create WinRT components in one language, and may consume the same from another (because the metadata is available).
Presently, these are the projections available:
C++ Projections
Using the C++/CX (Component Extensions), which does compile-time binding and compiles the code to a native image. As WinRT is fully native, applications developed using C++ don't need the CLR/.NET to compile or run WinRT applications. C++/CX is a set of extensions from Microsoft for developing for WinRT (much like C++/CLI was for developing CLR/.NET apps in C++).
C#/XAML
CLR is modified to support WinRT access from the managed world.
- Now the CLR can map WinRT types when you use C# as the language.
- When you create WinRT components in C# that can be used from other languages, you are further restricted to a minimal subset of C# (language features).
When you develop for WinRT in C#/XAML, you’ll notice multiple things.
- As WinRT applications are sandboxed, you don't have access to a lot of .NET libraries and types like File I/O. Only a minimal set of .NET APIs targeting the metro profile will be exposed.
- You don’t have access to the synchronous versions of a number of methods. You need to leverage the asynchronous versions in those cases.
- When CLR does the mapping of WinRT types to CLR types, the WinRT type definitions are made private by the CLR.
- You can access the WinRT XAML library, or use the WebView as the front end, when you use C# for your metro-style apps.
JavaScript Projections
JavaScript projections are probably the most abstract and highest-level projections for developing WinRT applications. However, you can't create WinRT components in JavaScript. Also, you can't use WinRT's XAML library in/from JavaScript as of now. However, the advantage is that if you are using JavaScript, you can also leverage HTML5 features for developing your applications. You can use the WinJS scripts and CSS files from Microsoft to provide the 'metro-style' look and feel.
RE: Exch 2K and Smart Host generates NDR 5.1.1..
From: Alan Malmberg [MSFT] (alanmalm_at_online.microsoft.com)
Date: 02/07/04
- Next message: Alan Malmberg [MSFT]: "RE: Exch 5.5 on Win2k won't send outbound messages."
- Previous message: Alan Malmberg [MSFT]: "RE: Encapsulation Address Issue"
- Messages sorted by: [ date ] [ thread ]
Date: Sat, 07 Feb 2004 02:05:00 GMT
What do you mean by non-local mail? Are you sharing a namespace with the
Sendmail server? Where do you have it configured for the smarthost? Do you
have contacts in AD for those users? If not, you could use the "Forward all
mail with unresolved recipients" option on the Messages tab of the Default
SMTP virtual server. Also look at the following Knowledge Base article:
XCON: Sharing SMTP Address Spaces in Exchange 2000 WGID:383
ID: 321721
--------------------
| Content-Class: urn:content-classes:message
| From: "Jens Jensen" <inf02jje@student.lu.se>
| Sender: "Jens Jensen" <inf02jje@student.lu.se>
| Subject: Exch 2K and Smart Host generates NDR 5.1.1..
| Date: Tue, 12 Aug 2003 08:45:31 -0700
| Lines: 13
| Message-ID: <0ff001c360e8$c2e061b0Ng6MLgzrI9AQIPSnaynGkiyx5bGg==
| Newsgroups: microsoft.public.exchange.connectivity
| Path: cpmsftngxa06.phx.gbl
| Xref: cpmsftngxa06.phx.gbl microsoft.public.exchange.connectivity:88089
| NNTP-Posting-Host: TK2MSFTNGXA13 10.40.1.165
| X-Tomcat-NG: microsoft.public.exchange.connectivity
|
|
| We have an Exchange 2000 (SP3) configured to send non
| local mail to a smart host (sendmail).
|
| Sendig mail through the smart host to the internet seems
| to work fine as well as sending local mail (i.e sender and
| recipient is present on the same server), BUT as soon as
| we try to send mail anybody else within the organisation a
| NDR 5.1.1 is generated!
|
| Anybody got a clue??
|
| Regards /Jens
|
==========================================================
Alan Malmberg
==========================================================
- Next message: Alan Malmberg [MSFT]: "RE: Exch 5.5 on Win2k won't send outbound messages."
- Previous message: Alan Malmberg [MSFT]: "RE: Encapsulation Address Issue"
- Messages sorted by: [ date ] [ thread ] | http://www.tech-archive.net/Archive/Exchange/microsoft.public.exchange.connectivity/2004-02/0169.html | crawl-002 | refinedweb | 322 | 61.02 |
I noticed that when I tried to use StopCoroutine on a coroutine that was waiting on a CustomYieldInstruction, that coroutine did not stop. The behavior I am seeing is inconsistent: most times the coroutine stops properly. However, if I am yielding on a series of nested IEnumerators, then the coroutine runs until the outer IEnumerator finishes.
Does anyone have insights about why this is happening? It looks to me like a Unity bug, but maybe I am misunderstanding something fundamental.
Here's a test MonoBehaviour. It "should" print 3 lines: "Starting Coroutine", "TestRoutine Started", "Stopping Coroutine". However, it prints 7 lines: "Starting Coroutine", "TestRoutine Started", "Stopping Coroutine", "Post CustomYield", "Post CustomYield2", "Post CustomYield3", "Post CustomYield4".
using System;
using System.Collections;
using UnityEngine;

public class CustomYieldTest : MonoBehaviour
{
    private bool _testStarted;
    private Coroutine _coroutine;
    public bool _stopMe;

    public void Update()
    {
        if (!_testStarted)
        {
            _testStarted = true;
            Debug.LogWarning($"Starting Coroutine {Time.frameCount}");
            _coroutine = StartCoroutine(TestRoutine());
        }
        else if (_stopMe)
        {
            Debug.LogWarning($"Stopping Coroutine {Time.frameCount}");
            StopCoroutine(_coroutine);
            _stopMe = false;
        }
    }

    private IEnumerator TestRoutine()
    {
        Debug.LogWarning($"TestRoutine Started {Time.frameCount}");
        // Change this to use TestRoutineInternal2 and none of the "Post CustomYield" logs occur
        yield return TestRoutineInternal(this);
        Debug.LogWarning($"Post CustomYield5 {Time.frameCount}");
        yield return null;
        Debug.LogWarning($"Post CustomYield6 {Time.frameCount}");
    }

    private IEnumerator TestRoutineInternal(CustomYieldTest parent)
    {
        yield return TestRoutineInternal2(parent);
        Debug.LogWarning($"Post CustomYield3 {Time.frameCount}");
        yield return null;
        Debug.LogWarning($"Post CustomYield4 {Time.frameCount}");
    }

    private IEnumerator TestRoutineInternal2(CustomYieldTest parent)
    {
        parent._stopMe = true;
        // Change this to 'yield return null;' and none of the "Post CustomYield" logs occur
        yield return new CustomYieldImplementation(1f);
        Debug.LogWarning($"Post CustomYield {Time.frameCount}");
        yield return null;
        Debug.LogWarning($"Post CustomYield2 {Time.frameCount}");
    }
}

public class CustomYieldImplementation : CustomYieldInstruction
{
    private DateTime _timeToStop;

    public CustomYieldImplementation(float seconds)
    {
        _timeToStop = DateTime.Now.AddSeconds(seconds);
    }

    public override bool keepWaiting => DateTime.Now < _timeToStop;
}
Answer by Bunny83 · Mar 04 at 02:04 AM
Ok, I have just rewritten my entire answer since there were some mistakes in it ^^. I'm not sure if it was always like this since Unity started supporting yielding on IEnumerators. However, currently it seems that yielding an IEnumerator will not start a new nested coroutine. Instead the coroutine scheduler seems to just chain the state machines in the same coroutine. So you still have only one coroutine. When you yield on a nested IEnumerator, the coroutine probably just stores the IEnumerator internally as the current active one (they might use a stack for that).
I just ran your test with both your custom yield instruction and plain yield return null. However, it doesn't change the behaviour at all. I don't get any of your "Post CustomYield" logs, since in the very first run you immediately set your "_stopMe" variable to true, so your coroutine will be stopped the next frame when Update runs. So your coroutine will never get past the first yield statement. Once the coroutine is stopped it will just vanish.
I can not reproduce your output given your code. Maybe you had your parent._stopMe = true; line originally after your first yield statement? However, in this case you could only see the first "Post CustomYield" inside your "TestRoutineInternal2". After that your coroutine would be stopped and nothing else could actually execute. Again, I don't see any change when I replace your yield return new CustomYieldImplementation(1f); with yield return null;. If the _stopMe line is after that statement, the only difference is that it takes 1 second before the coroutine is stopped when your custom yield instruction is used.
parent._stopMe = true;
yield return new CustomYieldImplementation(1f);
yield return null;
Note I carried out my tests in Unity 2019.0.3f6. Maybe you use a different Unity version?
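To make the chaining idea above more tangible, here is a tiny sketch of such a scheduler. This is purely hypothetical and not Unity's actual implementation, and it is written in Python only because the idea is language-independent and easy to run outside Unity: one coroutine is modelled as a stack of enumerators, so stopping it throws away the whole stack at once and no nested level can resume.

```python
# Hypothetical sketch only; not Unity's real scheduler. One coroutine is a
# stack of generators: yielding a generator pushes it, finishing one pops
# back to the caller, and stop() discards the whole stack at once.
class MiniCoroutine:
    def __init__(self, root):
        self._stack = [root]

    def stop(self):
        # Dropping the stack kills every nesting level in one go.
        self._stack.clear()

    def step(self):
        """Advance one 'frame'; return False once the coroutine is done."""
        while self._stack:
            top = self._stack[-1]
            try:
                yielded = next(top)
            except StopIteration:
                self._stack.pop()            # inner level finished: resume caller
                continue
            if hasattr(yielded, "__next__"):
                self._stack.append(yielded)  # nested enumerator: chain it
                continue
            return True                      # plain yield: wait until next frame
        return False


def inner(log):
    log.append("inner-1"); yield None
    log.append("inner-2"); yield None


def outer(log):
    log.append("outer-1"); yield inner(log)  # like 'yield return SomeEnumerator()'
    log.append("outer-2"); yield None
```

Stepping this a couple of times and then calling stop() leaves "outer-2" unreached, which mirrors how nothing in the chain keeps running after StopCoroutine.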
Wow. I did not know that. I'm still a bit confused though.
Why, if I change the line "yield return new CustomYieldImplementation(1f);" to "yield return null;", does it not print any of the "Post CustomYield" lines? Shouldn't it still print the 4 "Post CustomYield" lines?
Note that I've run some tests with your code and I have re-written my answer since the behaviour might have changed in the past.
My original test was on 2018.4.3f1 LTS. I confirmed once again that this code prints 4 "Post CustomYield" statements there.
The exact same code on 2019.3.3f1 prints 0 "Post CustomYield" statements.
It appears Unity does not (generally) start separate coroutines for yielding on nested IEnumerators, even in 2018.4. I had to work hard to get an example that prints the "Post CustomYield" statements: if you remove one layer of nesting, it does not work; if you yield return null, it does not work. To my eyes this looks like an inconsistency in 2018.4; I can not explain why it behaves like it's starting separate coroutines only in this very specific case.
Telegram is a popular IM platform that is famous for its openness. A lot of applications are being built with its public Bot API and User API. Exposed as an HTTP interface, the Bot API is the more popular of the two, but to interact with a bot as a user, we still need to use the User API, which uses Telegram's own protocol, named MTProto. Below is my simple code snippet that sends a message to a bot and marks its first reply as read, using Pyrogram, a Python wrapping of MTProto.
What you need
- Python 3.6 or higher
- Telegram account
Install Pyrogram
pip3 install 'pyrogram[fast]'
[fast] here means to use the C-based cryptography module for better performance.
Telegram API key
Get your own Telegram API key from, which will be used later.
The script
from pyrogram import Client, Filters, MessageHandler, Message
from threading import Event

# Put your Telegram API key here
api_id = 12345
api_hash = "12345678901234567890abcdefabcdef"

# User to send message to
user = "botfather"

# Message content
command = "/help"

feedback_evt = Event()


def mark_as_read(client: Client, message: Message):
    client.read_history(message.chat.id, message.message_id)
    feedback_evt.set()


with Client("login", api_id, api_hash) as app:
    app.send_message(user, command)
    app.add_handler(MessageHandler(mark_as_read, Filters.chat(user)))
    feedback_evt.wait()
Change the highlighted lines accordingly.
First time use
Run the script with Python, and you should be prompted to log in with your phone number and login code. This is only needed on the first run.
Note
Your login session data will be stored in the login.session file. Keep this file as secure as your password.
Now this script is ready to run. You can run it with anything you want: a bash script, a cronjob, or whatever else can call a command.
The post Simple automated interactions with Telegram Bots using MTProto (Pyrogram) appeared first on 1A23 Blog.
4 years ago.
ADC In to DAC Out Nucleo 302FR8 334FR8 L152RE How to configure the DAC Out?
Hi,
I'm Jesse, real new. Got a fair amount of hardware experience, but this programming and MCU stuff is taking a while to stick. Struggling a little to grasp the basics here.
I'd like to make my Nucleo 302FR8 read an ADC/AnalogIn (PA_0 or whichever one really, doesn't matter much) and output it via the DAC (PA_4).
Done a ton of googling and searching... I get the Nucleo_read_analog_value:
So AnalogIn will configure the pin for the Nucleo's ADC (is that right, or when it's being programmed like the Arduino, is it implemented differently?).
Most of the stuff I can find on the DAC is a Sinewave out and I can't extrapolate how to configure a pin for the DAC out. The Appnotes and Datasheets are a little over my head and don't necessarily apply directly to the MBED implementation.
How can I get from:
include the mbed library with this snippet
#include "mbed.h" AnalogIn analog_value(A0); DigitalOut led(LED1); int main() { float meas; printf("\nAnalogIn example\n"); while(1) { meas = analog_value.read(); // Converts and read the analog input value (value from 0.0 to 1.0) meas = meas * 3300; // Change the value to be in the 0 to 3300 range printf("measure = %.0f mV\n", meas); if (meas > 2000) { // If the value is greater than 2V then switch the LED on led = 1; } else { led = 0; } wait(0.2); // 200 ms } }
to having the read analog value output from DAC pin PA_4?
Thanks!
Background
Before we proceed with how dependency injection works in Spring, let's see what dependency injection actually is and why it is so important.
Consider a simple example. Let's say we are designing a Cricket game. What will its classes look like?
You will have a Cricket class that will probably have a Bat class and a Ball class.
public class Cricket {

    private Bat bat;
    private Ball ball;

    public Cricket() {
        bat = new Bat("MyBat1");
        ball = new Ball("MyBall1");
    }

    public void play() {
        bat.play();
        ball.play();
    }
}
How would you play cricket? Probably something like
public class Main {

    public static void main(String args[]) {
        Cricket cricket1 = new Cricket();
        cricket1.play();
    }
}
What do you think about the above code?
- It is tightly coupled. If you decide to change your Bat or Ball, you have to rewrite the Cricket class.
- It will also be very difficult to test. There can be any type of Bat or Ball, and for each test you need a new Cricket implementation.
This is where dependency injection comes into the picture. Now consider your Cricket class written as follows:
public class Cricket {

    private Bat bat;
    private Ball ball;

    public Cricket(Bat bat, Ball ball) {
        this.bat = bat;
        this.ball = ball;
    }

    public void play() {
        bat.play();
        ball.play();
    }
}
and you play like
public static void main(String args[]) {
    Bat bat1 = new Bat("MyBat");
    Ball ball1 = new Ball("MyBall");
    Cricket cricket1 = new Cricket(bat1, ball1);
    cricket1.play();

    Bat bat2 = new Bat("MyNewBat");
    Ball ball2 = new Ball("MyNewBall");
    Cricket cricket2 = new Cricket(bat2, ball2);
    cricket2.play();
}
If you notice, you can create and use any type of Bat and Ball and play cricket with them. All of this is done at runtime. So, essentially, you are injecting the Bat and Ball dependencies into your Cricket object.
Also notice that the problems we faced with the earlier code have vanished:
- Code is now decoupled. You can use any Bat and Ball to play cricket.
- Testing has also become very easy, as you can now mock your Bat and Ball objects and test your Cricket.
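To make the testability point concrete, here is a minimal hand-rolled sketch of such a test, with no container and no mocking library. The RecordingBat/RecordingBall classes are hypothetical stand-ins of my own, and Bat/Ball/Cricket are cut-down versions of the classes above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Cut-down versions of the article's classes, just enough to demonstrate
// why constructor injection makes Cricket testable in isolation.
class Bat {
    private final String name;
    Bat(String name) { this.name = name; }
    void play() { System.out.println("Playing with bat " + name); }
}

class Ball {
    private final String name;
    Ball(String name) { this.name = name; }
    void play() { System.out.println("Playing with ball " + name); }
}

class Cricket {
    private final Bat bat;
    private final Ball ball;
    Cricket(Bat bat, Ball ball) { this.bat = bat; this.ball = ball; }
    void play() { bat.play(); ball.play(); }
}

// Hand-rolled "mocks": they record calls instead of doing real work,
// so a test can verify Cricket's behaviour without real dependencies.
class RecordingBat extends Bat {
    private final List<String> log;
    RecordingBat(List<String> log) { super("fake"); this.log = log; }
    @Override void play() { log.add("bat.play"); }
}

class RecordingBall extends Ball {
    private final List<String> log;
    RecordingBall(List<String> log) { super("fake"); this.log = log; }
    @Override void play() { log.add("ball.play"); }
}
```

Because Cricket receives its dependencies through the constructor, the test simply hands it the recording fakes and checks what was called.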
There are two types of dependency injection:
- Constructor based dependency injection
- Setters based dependency injection
public class Cricket {

    private Bat bat;
    private Ball ball;

    public Bat getBat() {
        return bat;
    }

    public void setBat(Bat bat) {
        this.bat = bat;
    }

    public Ball getBall() {
        return ball;
    }

    public void setBall(Ball ball) {
        this.ball = ball;
    }

    public void play() {
        bat.play();
        ball.play();
    }
}
and you play like
public static void main(String args[]) {
    Bat bat1 = new Bat("MyBat");
    Ball ball1 = new Ball("MyBall");
    Cricket cricket1 = new Cricket();
    cricket1.setBat(bat1);
    cricket1.setBall(ball1);
    cricket1.play();

    Bat bat2 = new Bat("MyNewBat");
    Ball ball2 = new Ball("MyNewBall");
    Cricket cricket2 = new Cricket();
    cricket2.setBat(bat2);
    cricket2.setBall(ball2);
    cricket2.play();
}
Simply put:
"Dependency injection is basically providing the objects that an object needs (its dependencies) instead of having it construct them itself. It's a very useful technique for testing, since it allows dependencies to be mocked or stubbed out."
That's dependency injection in general. Now let's come to Spring dependency injection.
Spring Dependency Injection
In Spring dependency injection, the Spring container instantiates and injects dependencies into your instances (also called beans) based on the dependency type or name (more on this later), rather than you instantiating and injecting them yourself.
Let's see how Spring DI works in the case of our above code.
- Setter DI
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="myBat" class="Bat"/>
    <bean id="myBall" class="Ball"/>

    <bean id="cricket" class="Cricket">
        <property name="bat" ref="myBat"/>
        <property name="ball" ref="myBall"/>
    </bean>

</beans>
The above bean definition uses setter DI. The Spring container scans the beans and automatically injects the dependencies. You can also use:
- Constructor based DI
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="myBat" class="Bat"/>
    <bean id="myBall" class="Ball"/>

    <bean id="cricket" class="Cricket">
        <!-- constructor argument order should be the same -->
        <constructor-arg ref="myBat"/>
        <constructor-arg ref="myBall"/>
    </bean>

</beans>
Notes:
- For values as dependency (like a simple String) you can do <property name="brand" value="MRF"/>
- Instead of using the property tag you can also use the p namespace, e.g. <bean id="cricket" class="Cricket" p:bat-ref="myBat" p:ball-ref="myBall"/> (don't forget to add the namespace).
- Also note p namespace does not have any schema reference.
Autowiring
Instead of explicitly providing the dependencies to be injected, you can autowire them. One simple example is autowiring by type. In this case the Spring container will look for a bean of the dependency's type among all beans and inject it. Note, however, that if more than one bean of that type exists, this will fail.
- autowire is an attribute on the bean tag with the following possible values: no (the default), byName, byType, and constructor.
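For example, with byType autowiring the explicit property wiring can be dropped entirely (a sketch reusing the Bat, Ball, and Cricket beans from the earlier definitions):

```xml
<bean id="myBat" class="Bat"/>
<bean id="myBall" class="Ball"/>

<!-- bat and ball are injected by matching the setter parameter types -->
<bean id="cricket" class="Cricket" autowire="byType"/>
```

The container matches each setter's parameter type (Bat, Ball) against the registered beans; if two beans of type Bat existed, this wiring would fail as noted above.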
Another interesting aspect is Spring DI using annotations and component scanning. Instead of specifying beans in XML, you can use annotations directly and ask the Spring framework to scan for them and autowire. You can see the same in the next post.
NOTE: Dependency injection and IoC (Inversion of Control) are terms often used interchangeably. Both mean that dependencies are injected rather than created by the object that needs them.
I am working on web3 and running a private blockchain
The app.ts file:
import * as Web3 from 'web3';
var web3 = new Web3(new Web3.providers.WebsocketProvider('ws://localhost:8546'));
web3.eth.getAccounts().then(accounts => {
var sender = accounts[0];
web3.eth.personal.unlockAccount(sender, 'password');
});
And I am getting the following error:
Unhandled rejection Error: Returned error: The method personal_newAccount does not exist/is not available
Realm is great, but your app has to be a lot more than just a database. You need a UI! Thorben explains how he built Realm add-ons for Android including Recycler View, Map View, and Search View, to help you more easily make a beautiful and performant UI powered by your data. He covers the techniques he used to connect the views to the data, & how to use the add-ons in your apps.
See the discussion on Hacker News .
Transcription below provided by Realm: a replacement for SQLite that you can use in Java or Kotlin. Check out the docs!
About the Speaker: Thorben Primke
Thorben is a Software Engineer at Pinterest working towards making all product pins buyable. Prior to Pinterest he worked on Android at Jelly, Facebook, and Gowalla.
My name is Thorben. I work at Pinterest. Specifically, I work on the commerce team, and we’re trying to make pins buyable. I previously worked at a company called Jelly, and more specifically on a product Super there. I also worked on Facebook Messenger and search at Facebook, both on Android. And originally, I got started on Android at GoWalla, which was a location-based application or service similar to FourSquare.
How I Got Started Using Realm
At Jelly, we pivoted to another product called Super, so we were starting from scratch. We needed a feed of things that people were posting, and there was also a detail screen that you could go to for a post, as well as write comments. Between the different screens, we were always making network requests, and the objects were getting out of sync. For example, if you "like" something on the detail screen, how do we get that data back to the feed to keep the same story in sync?
One approach we could have taken was an event bus: that is, send something over and pick it up. Or we could take all the network requests, cache them in a DB as well as in memory, so that all the screens rely on the same in-memory object.
When I first discovered Realm, everybody talked about how fast it was, especially the fact that it didn't have to go to the background thread to do any of your queries – you could always do it on your UI thread. As soon as Realm supported primary keys, I took all the SQL code out, which was much easier compared to the work at Remind because we hadn't launched yet, and replaced it with Realm. In the end, that solved any synchronization problems between all of my screens.
The Intentions Behind Realm Add-Ons
Realm Add-ons came about as components that I built while building Super, but they’re reusable and the kind of thing that’s usually needed in most other applications. I also wanted them to be extensible so that I didn’t just build them for my own needs, but also for other people, so they’re on GitHub so people can customize them and edit them themselves.
These are the three I created:
- RealmRecyclerView – a Realm component that’s built on top of the new RecyclerView.
- RealmSearchView – a RecyclerView with a search bar at the top to type searches, that’s auto-filled with your Realm data as you type.
- RealmMapView – a one-off component I built to match the iOS component, that probably not many people will have a use for, but is very neat and also the easiest to integrate.
I always needed pull to refresh functionality to fetch new data from the server. In the case of Super, this was the feed as people posted more content. I also needed support for infinite scrolling and pagination.
The data might not necessarily be in your Realm, but you can kick off a network request, fetch more data, insert it into your Realm, and with the automatic updates, have your data displayed in your feed.
Something that is new with the RecyclerView is that you get animations for free. With listening to changes on the Realm, you kind of know that something’s changed, but you don’t have very fine-grained notifications yet about what changed.
For the pull to refresh, it's using the SwipeRefreshLayout underneath. It requires a listener and that's about it. So on the RealmRecyclerView, you just have your listener. If somebody initiates the pull to refresh, you get notified, and you can do whatever your logic requires.
realmRecyclerView.setOnRefreshListener(
    new RealmRecyclerView.OnRefreshListener() {
        @Override
        public void onRefresh() {
            asyncRefreshAllQuotes();
        }
    }
);

realmRecyclerView.setRefreshing(false);
This is very similar to the pull to refresh: once you get to the bottom of the list, you'll want to load more things, so you set up a LoadMoreListener. Again, you do your logic, either going to the network or doing something else that will get more stories, with the results inserted into your Realm.
realmRecyclerView.setOnLoadMoreListener(
    new RealmRecyclerView.OnLoadMoreListener() {
        @Override
        public void onLoadMore(Object lastItem) {
            asyncLoadMoreQuotes();
        }
    }
);

realmRecyclerView.disableShowLoadMore();
Animation now comes for free with the RecyclerView as long as you tell it what was inserted or what was removed, or what particular rows changed. But for this, we need to know exactly what changed.
I rely on the onChange event on a Realm results set. But this only lets me know that something's changed; I don't know exactly what. So I maintain IDs separately within the RealmRecyclerView adaptor. To mirror what's actually being displayed from the Realm, and to know when something's changed, I can look at all the IDs that are now available in the Realm and compare them to the old results.
We use another open source library for this, called difflib, to figure out what the difference is, which will tell me the delta.
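The idea can be sketched in plain Java (an illustration only, not the add-on's actual difflib-based code): compare the old and new ID lists to decide which rows the RecyclerView should animate in or out.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

public class IdDiff {

    // IDs present in newIds but not in oldIds: rows to insert/animate in.
    public static List<String> inserted(List<String> oldIds, List<String> newIds) {
        List<String> result = new ArrayList<>(newIds);
        result.removeAll(new HashSet<>(oldIds));
        return result;
    }

    // IDs present in oldIds but not in newIds: rows to remove/animate out.
    public static List<String> removed(List<String> oldIds, List<String> newIds) {
        return inserted(newIds, oldIds);
    }

    public static void main(String[] args) {
        List<String> before = List.of("a", "b", "c");
        List<String> after = List.of("a", "c", "d");
        System.out.println("inserted: " + inserted(before, after)); // [d]
        System.out.println("removed: " + removed(before, after));   // [b]
    }
}
```

A real adapter would additionally need positions (not just membership) to drive notifyItemRangeInserted/notifyItemRangeRemoved, which is what a proper diff library provides.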
This one is very simple. There’s a default empty state, but if you want to provide your own empty state, just create a view and provide that resource ID via custom attribute on the Realm RecyclerView in your XML layout.
For section headers, I integrated a library that specifically works on providing section headers of different kinds. This one’s called SuperSLiM. It also provides the animation for inserting and deletion, and in the case of deletion it keeps track of which particular rows those are and removes not only the last item in that section but also the header at the same time.
- Linear
- Grid
- Vertical LinearLayout with Section Headers
You can have a linear layout, either on vertical or horizontal. You can also have a grid, and that can have that vertical in your layout with the section headers.
- Swipe to Remove
Another feature is swipe to remove. I added that on the rows so you can actually swipe it, and it will trigger the removal of that item from the Realm. And for the layout with grid, you can specify how many grid columns and what width they’re supposed to be.
The RealmSearchView is something I've needed in the past more than once. Essentially, you have some content that you want filtered. This one extends the RealmRecyclerView for the lower content area where it displays the data, but it comes with a search field at the top. You take this view component, drop it into your fragment or activity, and it will look like what is rendered on this slide.
realmSearchView = (RealmSearchView) findViewById(R.id.search_view);
realm = Realm.getDefaultInstance();
adapter = new BlogRecyclerViewAdapter(this, realm, "title");
realmSearchView.setAdapter(adapter);

public class BlogRecyclerViewAdapter
        extends RealmSearchAdapter<Blog, BlogRecyclerViewAdapter.ViewHolder> {

    public BlogRecyclerViewAdapter(
            Context context,
            Realm realm,
            String filterColumnName) {
        super(context, realm, filterColumnName);
    }
}
If you have any input, it will display the clear button. On the bottom there's a specific adapter, and you pass in, in this case, Blog, which is the class of my model. It looks at this class and queries all the items available for it underneath, and that's where the content for the search view comes from. In this case, we provided title as the field that we want to filter on, so as you type something it's going to just look at the title field of this Realm object.
This is the most simplistic implementation, but another option is contains, so the match can occur anywhere within the field, not just at the beginning. You can specify case sensitivity, the order of the results, as well as the key, or the field, that you want to sort by. And if you want to always show or prefilter your results by a particular term (say you have some mixed data or content within that particular Realm class), you can provide a base predicate to prefilter the results, and then the user types the additional search term on top.
public void filter(String input) {
    RealmResults<T> businesses;
    RealmQuery<T> where = realm.where(clazz);
    if (input.isEmpty() && basePredicate != null) {
        if (useContains) {
            where = where.contains(filterKey, basePredicate, casing);
        } else {
            where = where.beginsWith(filterKey, basePredicate, casing);
        }
    } else if (!input.isEmpty()) {
        if (useContains) {
            where = where.contains(filterKey, input, casing);
        } else {
            where = where.beginsWith(filterKey, input, casing);
        }
    }
    if (sortKey == null) {
        businesses = where.findAll();
    } else {
        businesses = where.findAllSorted(sortKey, sortOrder);
    }
    updateRealmResults(businesses);
}
The bulk of the logic, what this component does, is all within this one filter method, where it applies those different options every time the text field input changes. This re-runs the query on every single keystroke, and I found that there was pretty much no lag, even when filtering the Realm live.
And the last, the third component is the RealmMapView. This is based on the SupportMapFragment. So it’s actually not really a view, but more a fragment that your fragment extends.
It extends the SupportMapFragment, and for the clustering it uses the MapUtils library to group the data together. There's logic that I built to provide the data in a form that the MapUtils clustering methods need, which means extracting the latitude and longitude up front and building up that list, so the clustering can be done ahead of time.
It’s super easy to integrate, and all you have to provide is the type of the model, and three things: the title name, or the name of the business in this case, and the latitude and longitude."; } }
All three of the Add-Ons are on GitHub, and they're easily integratable with Gradle via JitPack.
Rethinking equality
2011-05-07 09:56:52 GMT

Step 1:
======
Introduce the standard Equals type class:
abstract class Equals[T] {
  def eql(x: T, y: T): Boolean
}
And put the following method in Predef:
@inline def areEqual[T](x: T, y: T)(implicit eq: Equals[T]) = eq.eql(x, y)
Define equality as follows:
X == Y is equivalent to areEqual(X, Y), if that typechecks, and otherwise equivalent to what it was until now (i.e. universal equality)
[Aside: I note that the spec still defines Any.== to be
if (null eq this) null eq that else this equals that
We should update that to reflect the realities wrt boxed numbers (on the other hand, if we continue down the path I outline, those realities will also change, see below).]
Furthermore, issue a warning if there is an Equals[T] class for one of the types of X and Y, but not the other (in that case we fall back to the default behavior).
The effect of step 1 is just that any implicit Equals definitions override default behavior, so we are backwards compatible.
Step 2:
======
Define equality as follows:
X == Y is equivalent to areEqual(X, Y), if there is an implicit Equals[T] value for at least one of the types T of X or Y.
Otherwise it is equivalent to universal equality, but in that case a warning is issued.
Step 3:
======
Define equality as follows:
X == Y is equivalent to areEqual(X, Y).
Here's an example: Let's say we have a class Person with an implicit Equals[Person].
Let's assume:
p, q: Person
a, b: Any
Then we have
            Step 1       Step 2       Step 3
p == q      implicit     implicit     implicit
p == a      universal    error        error
a == p      + warning
a == b      universal    universal    error
                         + warning
Notes:
1. Of course I assume everywhere that X != Y is !(X == Y).
2. Once we have arrived at step 3, the universal equality logic for boxed numbers would go into the implicit equalities for these numbers.
3. All logic we have now in the compiler that warns of equalities that are always true or false is no longer needed. | http://permalink.gmane.org/gmane.comp.lang.scala.internals/4793 | CC-MAIN-2014-41 | refinedweb | 355 | 64.24 |
Remix Adapter for CloudFront Lambda@Edge
This package was originally forked from an upstream repository but deployed to npm as its own package after failing to get in contact with the original author.
This adapter transforms CloudFront Origin Request Events into Web Fetch API Request and Response objects using the adapter convention set out in the Remix Docs.
Usage
import { createRequestHandler } from "remix-lambda-at-edge"; export const handler = createRequestHandler({ getBuild: () => require("./build"), getLoadContext: event => { // access to raw CloudFront event to provide context to loaders }, // mode?: string; development or production, defaulted to NODE_ENV // originPaths?: (string | RegExp)[]; set of paths returned to cloudfront to lookup in S3 instead // onError?: (e: Error) => void; method called if remix fails to handle the request for any reason // debug?: boolean; add extra logging to cloudfront, defaults to false }); | https://www.npmjs.com/package/remix-lambda-at-edge | CC-MAIN-2022-40 | refinedweb | 134 | 52.9 |
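Under the hood, an adapter like this has to map the Lambda@Edge origin-request event shape (uri, querystring, method, and a header map of { key, value } arrays) onto a Fetch API Request. A rough, simplified sketch of that mapping follows; it is illustrative only, not this package's actual code, and assumes a runtime (for example Node 18+) where Request and Headers are global.

```typescript
type CloudFrontHeaders = Record<string, { key?: string; value: string }[]>;

interface CloudFrontRequestLike {
  uri: string;          // e.g. "/blog/post"
  querystring: string;  // e.g. "a=1&b=2" (no leading "?")
  method: string;       // e.g. "GET"
  headers: CloudFrontHeaders;
}

export function toFetchRequest(cf: CloudFrontRequestLike): Request {
  const host = cf.headers["host"]?.[0]?.value ?? "localhost";
  const url = `https://${host}${cf.uri}${cf.querystring ? `?${cf.querystring}` : ""}`;

  // CloudFront stores each header as an array of { key, value } pairs.
  const headers = new Headers();
  for (const [name, values] of Object.entries(cf.headers)) {
    for (const { value } of values) headers.append(name, value);
  }

  return new Request(url, { method: cf.method, headers });
}
```

The real adapter also has to map the Remix Response back onto the CloudFront result shape (status, headers as { key, value } arrays, body), which is the inverse of this transformation.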
Mar 04, 2018 03:36 AM | aakashbashyal
During the Edit I have to update some of the tables with the values from the form, and also have to add some new records to the tables. I am always getting the same error.
My view model is as below:
public class CorporateCustomerUpdateViewModel
{
    public Guid Id { get; set; }
    public string FullName { get; set; }
    public CorporateCustomerViewModel CorporateCustomer { get; set; }
}

public class CorporateCustomerViewModel
{
    public int BusinessTypeId { get; set; }
    public List<CorporateMemberViewModel> CorporateMembers { get; set; }
}

public class CorporateMemberViewModel
{
    public string IdNumber { get; set; }
    public string Name { get; set; }
    public string Document { get; set; }
}
Entities:

public class User
{
    public Guid Id { get; set; }
    public string FullName { get; set; }
    public virtual CorporateCustomer CorporateCustomer { get; set; }
}

public class CorporateCustomer
{
    public int BusinessTypeId { get; set; }
    public virtual ICollection<CorporateMember> CorporateMembers { get; set; }
}

public class CorporateMember
{
    public string IdNumber { get; set; }
    public string Name { get; set; }
    public string Document { get; set; }
}
I am using AutoMapper to map the fields from view models to entities.
Configuration looks like:
public class CorporateCustomerMapper : Profile
{
    public CorporateCustomerMapper()
    {
        CreateMap<CorporateCustomerUpdateViewModel, User>();
    }
}
I am trying to update the User table as below:

public void UpdateCorporateCustomerDetail(CorporateCustomerUpdateViewModel model)
{
    var userRepo = _genericUnitOfWork.GetRepository<User, Guid>();
    var user = userRepo.GetById(model.Id);
    var entity = _mapper.Map(model, user); // mapping CorporateCustomerUpdateViewModel onto User
    userRepo.Update(entity);
    _genericUnitOfWork.SaveChanges();
}
GetById:

public TEntity GetById(TKey id)
{
    return _dbSet.Find(id);
}
After the map, user.CorporateCustomer.CorporateMembers is null, because there is no value in model.CorporateCustomer.CorporateMembers. But it should not end up null, because there are already some CorporateMembers in the database, which I get during GetById. I think this is the problem: EF is going to apply the null entry in the update.
The links provided discuss the same scenario. I have gone through both of them, but they discuss a single child object and do not use AutoMapper. I cannot figure out where I actually did something wrong, and where I should change my code.
Can you please suggest some code changes, or some hints?
Mar 04, 2018 05:24 AM | DA924
Myself, I avoid the UoW and the generic repository patterns like the plague.
I would rather use the non-generic repository with Data Access Layer using the DAO pattern.
Or just Data Access Layer using the Data Access Object pattern.
Mar 04, 2018 07:45 AM | DA924
Is that approach going to help me for the current problems?
The approach I was taught many years ago is to keep it simple. There is nothing wrong with the repository pattern being used in a non-generic manner, or with not using the pattern at all, nor with the UoW pattern.
But you see, you should have had the database CRUD all figured out and tested before you even tried to hook up the UI/presentation layer to the backend, which would make things a whole lot easier too, knowing that the database code is solid.
You are aware of the information in the link, since Web applications are stateless, right?
Mar 04, 2018 09:18 AM | DA924
It's good that you are trying to use a repository, but maybe you should just consider using a Data Access Layer and perhaps the DAO pattern. I'll show you the concept.
But you should also keep in mind what is in the links and the MVC pattern.
You can view the thread below to see what I am talking about: view -- controller -- model -- DAL using DAO.
The DTO(s) are in the Entities classlib project, and the MVC and DAL projects have a project reference to Entities and know about the DTO(s).
Mar 05, 2018 10:03 AM | X.Daisy
Hi aakashbashyal,
Use a breakpoint to debug your project and check the model's value.
Set a breakpoint in UpdateCorporateCustomerDetail and check the model's value.
When execution hits the breakpoint, use F10 to step through the code line by line and check the result.
Best Regards,
Daisy
6 replies
Last post Mar 05, 2018 10:03 AM by X.Daisy | https://forums.asp.net/t/2137364.aspx?Entity+Framework+update+the+parents+and+add+the+child+objects+Getting+error | CC-MAIN-2018-17 | refinedweb | 682 | 53.1 |
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | ATTRIBUTES | SEE ALSO | NOTES
#include <signal.h>

int sigaction(int sig, const struct sigaction *act, struct sigaction *oact);
The sigaction() function allows the calling process to examine or specify the action to be taken on delivery of a specific signal. See signal(3HEAD) for an explanation of general signal concepts.
The sig argument specifies the signal and can be assigned any of the signals specified in signal(3HEAD) except SIGKILL and SIGSTOP. In a multithreaded process, sig cannot be SIGWAITING, SIGCANCEL, or SIGLWP.
If the argument act is not NULL, it points to a structure specifying the new action to be taken when delivering sig. If the argument oact is not NULL, it points to a structure where the action previously associated with sig is to be stored on return from sigaction().
The sigaction structure includes the following members:
void     (*sa_handler)();
void     (*sa_sigaction)(int, siginfo_t *, void *);
sigset_t sa_mask;
int      sa_flags;
The storage occupied by sa_handler and sa_sigaction may overlap, and a standard-conforming application (see standards(5)) must not use both simultaneously.
The sa_handler member identifies the action to be associated with the specified signal, if the SA_SIGINFO flag (see below) is cleared in the sa_flags field of the sigaction structure. It may take any of the values specified in signal(3HEAD) or that of a user specified signal handler. If the SA_SIGINFO flag is set in the sa_flags field, the sa_sigaction field specifies a signal-catching function.
The sa_mask member specifies a set of signals to be blocked while the signal handler is active. On entry to the signal handler, that set of signals is added to the set of signals already being blocked when the signal is delivered. In addition, the signal that caused the handler to be executed will also be blocked, unless the SA_NODEFER flag has been specified. SIGSTOP and SIGKILL cannot be blocked (the system silently enforces this restriction).
The sa_flags member specifies a set of flags used to modify the delivery of the signal. It is formed by a logical OR of zero or more of the following flags:

SA_ONSTACK
If set and the signal is caught, the signal is delivered to the calling process on the alternate signal stack specified with sigaltstack(2).

SA_RESETHAND
If set and the signal is caught, the disposition of the signal is reset to SIG_DFL before the signal-catching function is entered.

SA_NODEFER
If set and the signal is caught, the signal is not automatically blocked by the system while it is being caught.

SA_RESTART
If set and the signal is caught, functions that are interrupted by the execution of this signal's handler are transparently restarted by the system.

SA_SIGINFO
If cleared and the signal is caught, sig is passed as the only argument to the signal-catching function. If set and the signal is caught, two additional arguments are passed: a pointer to a siginfo_t structure (see siginfo(3HEAD)) and a pointer to a ucontext_t structure (see ucontext(3HEAD)).

SA_NOCLDSTOP
If set and sig equals SIGCHLD, SIGCHLD is not sent to the calling process when its child processes stop or continue.
SA_NOCLDWAIT
If set and sig equals SIGCHLD, the system will not create zombie processes when children of the calling process exit. If the calling process subsequently issues a wait(2), it blocks until all of the calling process's child processes terminate, and then returns a value of -1 with errno set to ECHILD.
SEE ALSO
kill(1), intro(2), send(3SOCKET), siginfo(3HEAD), signal(3C), signal(3HEAD), sigsetops(3C), thr_create(3THR), ucontext(3HEAD)

NOTES
The handler routine can be declared:

void handler(int sig, siginfo_t *sip, void *ucp);

The ucp argument is a pointer to a ucontext_t structure (defined in <ucontext.h>) which contains the context from before the signal. It is not recommended that ucp be used by the handler to restore the context from before the signal delivery.

SunOS 5.9    Last Revised 9 Jul 2002
Editors: Asir S. Vedamuthu, webMethods and Mary Holstege, Mark Logic Corporation
The document seems to use a declarative style to define its requirements, although there is an occurrence of an RFC 2119 keyword ("MAY") in section 4; the declarative style seems pretty appropriate to this abstract type of specification, but it makes it harder to identify edge cases and error processing, if applicable
- the EBNF in 4.1 seems to contradict the possibility of using the xmlns() scheme in the relative-schema-component-designator
- "schena" should read "schema" in 4.2.2
- the formatting of the TOC is done with ; please use the appropriate HTML markup instead (nested <ul> or <ol>)
- "Structurally, the first part looks like a URI, and the second part looks like an XPointer fragment identifier. An absolute schema component designator therefore is structurally a URI reference." -> why 'looks like' rather than 'is'? (at least, it should be either 'looks like' in all the cases, or 'is' in all the cases)
- splitting the bibliography into Normative/Informative references would be good
- I have some suspicion the document was produced with XMLSpec; could the XML version be provided as a non-normative alternative version to the doc?
Agreed.
the document doesn't distinguish normative from informative sections; it would be useful to do so to allow your readers to see where the requirements are set at first glance
Agreed. Done.
- the conformance section is marked as "To be done"; does the WG have even a remote idea of what conformance to this specification would look like? if so, documenting this would be tremendously useful; e.g., is there any expectation to define an XML Schema Designator processor? or is it out of scope for this specification? what about XML Schema Designator generators (which, confusingly enough, would likely be XML Schema processors)?
Agreed. See conformance section in new draft.
We discussed this case in passing at the F2F and I went away to determine what the facts of the case are. I can, sadly, report:
The path: /element(a)/element(b) could be either:
an abbreviation for /element(a)/type()/sequence()/element(b)
or:
element a's substitution group head.
Options:
(1) Accept it: c'est la vie. It isn't as if /element(a)/element(b) mightn't ambiguously refer to multiple components anyway. OTOH: the justification for elided steps is to be able to refer conveniently to content models, and pulling in substitution group heads seems like a very bad thing.

(2) Get rid of traversal over substitution group heads. It's just one arc; we're not losing that much functionality. OTOH: it's just one arc, and special-casing that one would be bad.

(3) Get rid of elisions. They introduce lots of potential ambiguity and don't give us much more functionality than what we get with //. What's more, they violate XPath expectations. OTOH: lots of folks like the simplicity of just navigating through the content model subgraph without having to type extra characters or think about the mechanics of the component model.

(3b) Get rid of unmarked elisions. Keep elision, but require some visible marker for it. This would do less violence to XPath intuitions and solve the ambiguity problem. OTOH: no clue what that syntactic marker should be.

(4) Recast the syntax to be arc-based. (For default axes we can keep the current spelling.) E.g. /element(a)/substitutionGroup(b). This solves the ambiguity problem, and lots of folks like the idea a priori anyway. It might be worth doing in any case, as certain aspects of the discussion of canonicals and // need to speak of arcs rather than just component types. OTOH: it will require a fair amount of rework of the draft.
For myself, I'm all in favour of (3). It has always struck me as deeply wrong to have paths that left no syntactic mark that they were doing something magic. I could live with (4) as well, although the busy lazy person in me resists. 3b would be good if we could invent some good syntactic marker, but I cannot think what it should be.
I include (1) and (2) for completion, but I don't think either is a good solution.
(1) Shall we resolve the ambiguity in SCD syntax by adopting the alternative lexemes for non-default axes as indicated in the alternative SCD draft?
MSM: bumpersticker is "adopt the axis model" RESOLVED: adopt.
(2) Shall we additionally shift from a functional notation to an axis-style notation for steps?
MH: Editors say "no" but MSM says yes. RESOLVED: adopt axis syntax.
- 4.2.3 has "The URI on the left hand side of the schema component designator should be a URI of an actual document, in some media type. That media type should be some XML derivative, so that the XPointer framework applies."; this seems very fuzzy: a URI is an identifier; in this case, what it identifies would be a schema designator rather than "a document"; I think it should say "The schema designator URI should be dereferenceable; if it is, the representation of this URI should be an XML document with a MIME type on which the XPointer framework applies"
Agreed. Section has been substantially reworked.
"In the simplest case, where there is one root schema document, the URI of that document suffices" ; maybe a "SHOULD" for XSCD generators would be in order, with a reference to the WebArch principle,
Agreed. Section has been substantially reworked.
compatibility with XPath in the XSCD steps would be a big plus
On June 25th 2004, MSM suggested that wd-7 means syntactic and semantic compatibility with XPath. That is, our step types became new axis names, except that alone would leave different semantics. Semantic compatibility means providing a coherent answer for what do various steps mean when given both an XML document and a Schema component graph. Would there be explicit crossover steps? That is, navigate to an EII and the step into the schema component graph at the type for that element?
David Ezell: I would like to have a last call draft with notes inserted, showing the alternative syntax at each juncture point, so people can see that alternative option. If it doesn't take too many hours, a Working Draft (internal or public)... in analogy to how QT has recently done it's publication. Do people agree that we are finished? Do we consider the SCDs issue closed, modulo syntax issues? Yes. Closed!
Consider the axis style syntax proposed by Michael (3 for, none opposed, 7 abstain). That makes it editor's choice. The draft still gives XQuery/XPath and XSLT 2.0 (QT) as possible users of the SCDs.
Given that the QT working groups have NO current plans to use Component Designators, shouldn't this use case be changed or dropped?
Agreed to change the use case to be more generic. RESOLVED: Instruct SCD editors: s/XQuery 1.0/type-aware languages that operate on schema-validated XML/ or similar phrasing.
When we are building a component for a type Te which extends a base type Tb, we need to make the content model of Te consist of a sequence with two children: the content model of Tb, and then the content model appearing locally in the source declaration for Te. If I know that Te's content model has an outermost sequence, I can write /type(Tb)/sequence() -- if I know it's a choice, or an all, I write /type(Tb)/choice(), or .../all().
It would be convenient to be able to write something like /type(Tb)/model() and be done with it. (One drawback: that would presumably not be the canonical SCD for the model group in question.)
Will change model group step syntax to model(choice), model(sequence), and model(all) with model(*) available as a wildcard option.
When we are handling redefinitions, it is convenient to be able to refer to the components which have been shadowed by the redefines.
Let's define a couple of terms. If schema document SD1 redefines schema document SD2, and SD1 contains a redefining source declaration for type Tr, then in schema(SD1) we'll have two components of particular interest, namely the 'new' Tr and the 'old' Tr. More precisely, the old Tr is what is denoted by /type(Tr) in schema(SD2), and the new Tr is denoted by /type(Tr) in schema(SD1). Let's define the terms 'predecessor' and 'successor': the old Tr is the predecessor of the new one, and the new one is the successor of the old one. (In Noah's account of redefines [1], each successor / predecessor pair forms a 'redeclaration prototype'.)
[1]
We have wondered in the past whether it's necessary to provide SCDs for the predecessors of redefined components. The more I think about it, the more I think it's probably worth doing just to ensure that we can talk about them when we need to. Note also that since the predecessor must have a successor of its own, we need to be able to handle arbitrarily long chains of predecessor links.
On the graph-model view of component structure, defining SCDs for predecessors is just a question of convenience. On the tuple view, it's essential, because without a SCD a successor can't point to its predecessor.
In section 4.3.1 of my paper, I use the notation munge(...) to denote predecessors: the predecessor of type /type(Tr) is munge(/type(Tr)), and its predecessor is munge(munge(/type(Tr))) and so on. This isn't a terribly serious proposal; if I had been thinking harder I would have done something more like the rest of our syntax, and if I had been making a serious proposal I would have found a word other than 'munge'.
Agreed. The canonical (and only) SCP for the pre-redefined type named foo will be /type(foo)/type(). Redefined attribute group definitions and model group definitions have no predecessor in the component graph, and therefore will have no SCP.
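As a concrete sketch of that scenario (the schema documents and the type name foo here are hypothetical, not from the issue list):

```xml
<!-- SD2.xsd: the original schema document -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="foo">
    <xs:sequence>
      <xs:element name="a" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>

<!-- SD1.xsd: redefines SD2. Relative to schema(SD1), /type(foo) names
     the new (redefining) type, and /type(foo)/type() names its
     predecessor, the pre-redefined type from SD2. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:redefine schemaLocation="SD2.xsd">
    <xs:complexType name="foo">
      <xs:complexContent>
        <xs:extension base="foo">
          <xs:sequence>
            <xs:element name="b" type="xs:string"/>
          </xs:sequence>
        </xs:extension>
      </xs:complexContent>
    </xs:complexType>
  </xs:redefine>
</xs:schema>
```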
Shall we specify a specific namespace prefix for the canonical path syntax? Say, 'ns'. Example: /ns:po/@orderDate
Agreed to specify a canonical prefix for namespaces in canonical SCDs. Prefixes for non-canonical SCDs and component paths are unfixed.
Per the current draft, * is an abbreviation for any step in a schema component path. Should we allow * as an abbreviation for any QName in type(*), element(*), etc.?
Agreed to add wildcard syntax inside the step 'functions' generically for all steps where it makes sense.
Given expanded traversals, adding // to a path pretty much pulls in everything in the schema. This isn't very useful. For example, consider: /type(a)//element(b)
This means any element b anywhere linked indirectly with a: element b in the content model of a, or in the content model of the base type of a, or in the content model of the base type of the base type of a, or in the content model of the base type of an element in the content model of the base type of a, or...
It seems that // is too broad: should we restrict it to only certain traversals; if so which ones? (Suggestion: the default traversals.)
Agreed. Will use default traversals.
The rules for elided steps, as written, allow the base type to be elided in a path. Do we really want this? /type(a)/element(b)
This refers to any element b in the content model of a, but if the base type step is elided, this can also mean any element b in the content model of the base type of a. Good? Bad?
Agreed that traversal shouldn't pass through the base type for elided steps. Belief was that this was already the case. Editor, on further review, wasn't sure.
We have short forms for element(qname) (qname), element(*) (*), attribute(qname) (@qname), and attribute(*) (@*). It would be good to have a short form for type(qname) and type(*) as well.
[OUTCOME] Adopt the tilde ("~") as the shortform prefix for a type (type::).
[OUTCOME] Adopt the notation "type::0" to indicate anonymous types in the SCDs recommendation.
Given expanded traversals, adding * to a path pulls in a lot of stuff, and it isn't clear it is useful. For example, consider: /type(a)/*
This refers to any component referenced by a. If steps are elided, it could mean any component referenced by any component in the model group of a. Or (see wd-15) any component referenced in the content model of a base type of a. In most cases the possibility of elided steps will expand the applicability of * in ways that probably aren't useful. Suggestion: restrict interaction of * with elided steps, or require explicit marker (//) for elided steps.
Question rendered irrelevant by decisions on wildcards (wd-12).
Last update: $Date: 2005/03/11 13:35. | http://www.w3.org/XML/2004/06/scds-pre-lc-issues/ack_sort.html | CC-MAIN-2013-48 | refinedweb | 2,155 | 62.27 |
Template implementation & Compiler (.h or .cpp?) - 2017
In this section, we'll discuss the issue of template implementation (definition) and declaration (interface), especially where we should put them: in the header file or in the cpp file?
Quite a few questions have been asked like these, addressing the same fact:
- Why can templates only be implemented in the header file?
- Why should the implementation and the declaration of a class template be in the same header file?
- Why do class template functions have to be declared in the same translation unit?
- Do class template member function implementations always have to go in the header file in C++?
For other template topics, please visit.
As we already know, template specialization (or instantiation) is another type of polymorphism, where the choice of function is determined by the compiler at compile time, whereas for a virtual function it is not determined until run time.
Template provides us two advantages:
First, it gives us type flexibility by using generic programming.
Second, its performance is near optimal.
Actually, a compiler does its best to achieve those two things.
However, this also has some implications.
Stroustrup explains the issue in his book "Programming: Principles and Practice Using C++":
As usual, the benefits have corresponding weaknesses. For templates, the main problem is that the flexibility and performance come at the cost of poor separation between the "inside" of a template (its definition) and its interface (its declaration).
When compiling a use of a template, the compiler "looks into" the template and also into the template argument types. It does so to get the information to generate optimal code. To have all the information available, current compilers tend to require that a template must be fully defined whenever it is used. That includes all of its member functions and all template functions called from those. Consequently, template writers tend to place template definitions in header files. That is not actually required by the standard, but until improved implementations are widely available, we recommend that you do so for your own templates: place the definition of any template that is to be used in more than one translation unit in a header file.
When we do:
template<typename T> class Queue { ... };

it's important for us to realize that the template is not a class definition yet. It's a set of instructions to the compiler about how to generate the class definition. A particular realization of a template is called an instantiation or a specialization.
Unless we have a compiler that has implemented the new export keyword, placing the template member functions in a separate implementation file won't work. Because the templates are not functions, they can't be compiled separately.
Templates should be used in conjunction with requests for particular instantiations of templates. So, the simplest way to make this work is to place all the template information in a header file and to include the header file in the file that the template will be used.
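One related technique the text does not mention: if you do want the member definitions in a .cpp file rather than a header, you can explicitly instantiate the specializations you need in that translation unit. A self-contained sketch (the Pair class here is illustrative, not from the tutorial):

```cpp
#include <cassert>
#include <string>

template<typename T>
class Pair {
public:
    Pair(T a, T b) : first(a), second(b) {}
    T min() const { return first < second ? first : second; }
private:
    T first, second;
};

// Explicit instantiation requests: these force the compiler to generate
// Pair<int> and Pair<std::string> right here. If the member definitions
// lived in a .cpp file, other translation units could still link against
// these two specializations (but only these two).
template class Pair<int>;
template class Pair<std::string>;
```

The cost of this approach is that every specialization used anywhere in the program must be listed explicitly, which is why the header-only convention remains the common default.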
The code has generic implementation of queue class (Queue) with simple operations such as push() and pop().
The foo() does int specialization, and bar() does string. The declaration and definition are all in one header file, template.h. Each of the foo.cpp and bar.cpp includes the same template.h so that they can see both of the declaration and definition:
template.h
#include <iostream>

// Template declaration
template<typename T>
class Queue
{
public:
    Queue();
    ~Queue();
    void push(T e);
    T pop();

private:
    struct node
    {
        T data;
        node* next;
    };
    typedef node NODE;
    NODE* mHead;
};

// Template definition
template<typename T>
Queue<T>::Queue()
{
    mHead = NULL;
}

template<typename T>
Queue<T>::~Queue()
{
    NODE *tmp;
    while(mHead) {
        tmp = mHead;
        mHead = mHead->next;
        delete tmp;
    }
}

template<typename T>
void Queue<T>::push(T e)
{
    NODE *ptr = new node;
    ptr->data = e;
    ptr->next = NULL;
    if(mHead == NULL) {
        mHead = ptr;
        return;
    }
    NODE *cur = mHead;
    while(cur) {
        if(cur->next == NULL) {
            cur->next = ptr;
            return;
        }
        cur = cur->next;
    }
}

template<typename T>
T Queue<T>::pop()
{
    // Return a default-constructed value on an empty queue.
    // (The original returned NULL here, which is broken for
    // non-pointer element types such as std::string.)
    if(mHead == NULL)
        return T();
    NODE *tmp = mHead;
    T d = mHead->data;
    mHead = mHead->next;
    delete tmp;
    return d;
}
foo.cpp
#include "template.h" void foo() { Queue<int> *i = new Queue<int>(); i->push(10); i->push(20); i->pop(); i->pop(); delete i; }
bar.cpp
#include "template.h" void foo() { Queue<std::string> *s = new Queue<std::string>(); s->push(10); s->push(20); s->pop(); s->pop(); delete s; }
We could break up the header into two parts: declaration (interface) and definition (implementation), so that we keep the usual separation of interface from implementation. We usually name the interface file .h and the implementation file .hpp. However, the end result is the same: we should include both in the .cpp file.
There is a delicate but significant distinction between class template and template class:
- Class template is a template used to generate template classes.
- Template class is an instance of a class template. | http://www.bogotobogo.com/cplusplus/template_declaration_definition_header_implementation_file.php | CC-MAIN-2017-34 | refinedweb | 829 | 54.12 |
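The distinction can be shown in a few lines (illustrative sketch):

```cpp
#include <cassert>

// Box is a class template: a recipe the compiler uses to generate classes.
template <typename T>
class Box {
public:
    explicit Box(T v) : value(v) {}
    T get() const { return value; }
private:
    T value;
};

// Box<int> and Box<double> are two distinct template classes,
// each an instance (instantiation) of the single class template Box.
```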
android_wifi_info
Dart plugin package for accessing Android's WifiInfo from Flutter. Android-only plugin.
This is a Dart plugin package for accessing wifi information from Flutter and Dart. This is an Android-only plugin.
The plugin wraps the Android `WifiInfo` class and provides access to all of its methods. It describes the state of any Wifi connection that is active or is in the process of being set up.
References
The plugin is published on Dart Pub
pub.dartlang.org/packages/android_wifi_info
You can read the API reference on Dart Pub.
The source code is available on GitHub
smaho-engineering/android_wifi_info.
This Flutter plugin is created by the SMAHO engineering team.
Usage
import 'package:android_wifi_info/android_wifi_info.dart'; getNetworkInfo() async { final bssid = await AndroidWifiInfo.bssid; final ssid = await AndroidWifiInfo.ssid; }
Example app
For a working example app, see the `example` directory.
Documentation
Combine with other plugins to access features cross-platform
This plugin is intentionally supporting only Android. However, this doesn't mean you cannot use this plugin to create your own cross-platform utility module for fetching the details you need for your app.
For example, if you want to fetch the current WiFi's BSSID, you can combine it with the `ios_network_info` package:
import 'dart:io'; import 'package:android_wifi_info/android_wifi_info.dart'; import 'package:ios_network_info/ios_network_info.dart'; get bssid { if (Platform.isAndroid) { return AndroidWifiInfo.bssid; } else if (Platform.isIOS) { return IosNetworkInfo.bssid; } throw Exception('WiFi BSSID is not supported on this platform'); }
To do
This plugin is very much in progress.
There are some important tasks I'm still planning to do for the 1.x version release.
- handle enums properly
- related code, eg converting RSSI to something useful (scale)
- don't forget to point users to how to convert int values to more traditional representations (ip)
- Android Q? tx, rx link speed
- could add links to the right sections in android docs
- consider adding code snippets to the documentation
- example app docs contains links and on tap could open the links in browser... but it's a bit of an overkill for an example app
- make mac address work with proper permissions or at least document it | https://pub.dev/documentation/android_wifi_info/latest/ | CC-MAIN-2019-30 | refinedweb | 363 | 50.63 |
Python Haystack Utility
Project description
What is this ?
Pyhaystack is a module that allows Python programs to connect to a haystack server using the semantic data model for buildings (project-haystack).
Browse a campus, building, floor… find VAV boxes, AHU units, etc. Then extract history data from them and get the results ready for analysis using pandas or your own database implementation.
Which clients are implemented ?
Currently, a connection can be established with:
- Niagara4 by Tridium
- NiagaraAX by Tridium
- Widesky by Widesky.cloud
- Skyspark by SkyFoundry (version 2 and 3+)
Connection to Niagara AX or Niagara 4 requires the nHaystack module by J2 Innovations to be installed and properly configured on your Jace. Refer to documentation of nHaystack for details.
How do I install pyhaystack ?
pip install pyhaystack
Or you can also git clone the develop branch and use
python setup.py install
Note
Some users reported problems when installing pyhaystack using the Python version provided by their OS (Mac OS users). We recommend trying the virtual environment approach when you are unsure about the Python version or our module's dependencies.
Using virtual env
You can find more information on how to use virtualenv but here is a short way of making it work.
sudo pip install virtualenv
mkdir project    # your project folder
cd project
virtualenv venv
source venv/bin/activate
Note
Once you are in your virtual env DO NOT use sudo to pip install. (in fact, this is the part that made me think of permission issue as I read somewhere that we should never sudo pip install anything)
So now you are in your virtual env (its name appears in parentheses in the console) and you

pip install requests
pip install hszinc
pip install pyhaystack

(note that this time you won't see any weird messages when trying to install pandas, and you need Xcode to perform the install....) You are now able to

import hszinc
hszinc.MODE_ZINC
from pyhaystack.client.skyspark import SkysparkHaystackSession
What is project-haystack ?
As stated in the web site
"Project Haystack is an open source initiative to streamline working with data from the Internet of Things. We standardize semantic data models and web services with the goal of making it easier to unlock value from the vast quantity of data being generated by the smart devices that permeate our homes, buildings, factories, and cities."
—Project-Haystack
Actual implementation
Pyhaystack is robust and will be ready for asynchronous development.
We have chosen a state machine approach with observer pattern. See the docs for more informations.
This implementation has been mostly supported by Widesky.cloud and Servisys. We are hoping that more people will join us in our effort to build a well working open-source software that will open the door of building data analysis to Python users.
Dependency
Pyhaystack highly depends on hszinc which is a special parser for zinc encoded data. Zinc was created for project-haystack as a CSV replacement.
For analysis, we also suggest using Pint to deal with units. It will bring a lot of possibilities to pyhaystack (ex. unit conversion)
06 November 2012 09:19 [Source: ICIS news]
May LLDPE futures, the most actively traded contract on the Dalian Commodity Exchange (DCE), closed at yuan (CNY) 10,015/tonne ($1,605/tonne), up by CNY65/tonne from the previous settlement price of CNY9,950/tonne on 5 November.
Around 1.94m tonnes of LLDPE, or 774,144 contracts, were traded for delivery in May, according to the DCE data.
Chinese petrochemical giants Sinopec and PetroChina raised their offers for LLDPE resins for some regions on 6 November, further strengthening the positive outlook for the futures and spot markets, according to market sources.
Spot LLDPE prices in the Chinese domestic market rose by CNY50-100/tonne to CNY10,750-11,100/tonne on 6 November, compared with the previous day’s levels at CNY10,700-11,000/tonne, a trader said.
Stream.Read Method
Assembly: mscorlib (in mscorlib.dll)
Parameters
- buffer
An array of bytes. When this method returns, the buffer contains the specified byte array with the values between offset and (offset + count - 1) replaced by the bytes read from the current source.
- offset
The zero-based byte offset in buffer at which to begin storing the data read from the current stream.
- count
The maximum number of bytes to be read from the current stream.
Return Value

The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
The following example shows how to use Read to read a block of data.
using System;
using System.IO;

public class Block
{
    public static void Main()
    {
        Stream s = new MemoryStream();
        for (int i = 0; i < 100; i++)
            s.WriteByte((byte)i);
        s.Position = 0;

        // Now read s into a byte buffer.
        byte[] bytes = new byte[s.Length];
        int numBytesToRead = (int)s.Length;
        int numBytesRead = 0;
        while (numBytesToRead > 0)
        {
            // Read may return anything from 0 to numBytesToRead.
            int n = s.Read(bytes, numBytesRead, numBytesToRead);

            // The end of the file is reached.
            if (n == 0)
                break;

            numBytesRead += n;
            numBytesToRead -= n;
        }
        s.Close();

        // numBytesToRead should be 0 now, and numBytesRead should equal 100.
        Console.WriteLine("number of bytes read: " + numBytesRead);
    }
}
I'm trying to compile Python 2.5 as a static library using VS2005 SP1. I realized you only have Debug, PG and Release configurations. Of course, since you provide the sources, I could modify it manually... but I think it would be much better to provide Debug_Static and Release_Static configurations for noob users like me :P

Also, you are using this code in pyconfig.h:

/* For an MSVC DLL, we can nominate the .lib files used by extensions */
#ifdef MS_COREDLL
#  ifndef Py_BUILD_CORE /* not building the core - must be an ext */
#    if defined(_MSC_VER)
       /* So MSVC users need not specify the .lib file in
          their Makefile (other compilers are generally
          taken care of by distutils.) */
#      ifdef _DEBUG
#        pragma comment(lib,"python25_d.lib")
#      else
#        pragma comment(lib,"python25.lib")
#      endif /* _DEBUG */
#    endif /* _MSC_VER */
#  endif /* Py_BUILD_CORE */
#endif /* MS_COREDLL */

This does not allow the user to rename the output library (for example, to pythoncore_static_debug.lib). It would be very desirable to allow the user to change the Python library output name... and use these names as defaults:

python25_md.lib -> Python 2.5, multithread debug C CRT
python25_mdd.lib -> Python 2.5, multithread debug DLL C CRT
python25_static_debug.lib -> Python 2.5, multithread debug static library C CRT
python25_static.lib -> Python 2.5, multithread static library C CRT

On the other hand, I see the Python 3.0rc1 solution has been saved using VS2008. I think that's bad, because VS2005 users won't be able to open the solution. Ideally, you should provide a Python solution for each Visual Studio:

PCBuild_VC6
PCBuild_VC2002
PCBuild_VC2003
PCBuild_VC2005
PCBuild_VC2008

or provide just the VC6 solution, which can be easily converted by all the modern Visual Studio versions.

thanks.
Scala is a general-purpose, high-level, multi-paradigm programming language. It is a pure object-oriented programming language which also provides support for the functional programming approach. There is no concept of primitive data, as everything is an object in Scala. It is designed to express general programming patterns in a refined, succinct, and type-safe way. Scala programs can be compiled to bytecode and run on the JVM (Java Virtual Machine). Scala stands for Scalable Language. It also provides JavaScript runtimes. Scala is highly influenced by Java and some other programming languages like Lisp, Haskell, Pizza etc.
Evolution of Scala:
Scala was designed by Martin Odersky, professor of programming methods at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and a German computer scientist. Martin Odersky is also the co-creator of javac (the Java compiler), Generic Java, and EPFL's Funnel programming language. He started designing Scala in 2001. Scala was first released publicly in 2004 on the Java platform as its first version. In June 2004, Scala was modified for the .NET Framework. It was soon followed by the second version (v2.0) in 2006. At the JavaOne conference in 2012, Scala was awarded as the winner of the ScriptBowl contest. Since June 2012, Scala doesn't provide any support for the .NET Framework. The latest version of Scala is 2.12.6, released on 27 April 2018.
- Contains best Features: Scala contains the features of different languages like C, C++, Java etc. which makes it more useful.
- Web & Desktop Application Development: For web applications it provides support by compiling to JavaScript. Similarly, for desktop applications it can be compiled to JVM bytecode.
- Used by Big Companies: Most of the popular companies like Apple, Twitter, Walmart, Google etc. have moved much of their code to Scala from other languages, the reason being that it is highly scalable and can be used in backend operations.
Note: People always thinks that Scala is a extension of Java. But it is not true. It is just completely interoperable with Java. Scala programs get converted into .class file which contains Java Byte Code after the successful compilation and then can run on JVM(Java Virtual Machine).
Beginning with Scala Programming
Finding a Compiler: There are various online IDEs such as GeeksforGeeks IDE, Scala Fiddle IDE etc. which can be used to run Scala programs without installing.
Programming in Scala: Since Scala is syntactically a lot like other widely used languages, it is easier to code and learn in Scala. Programs can be written in Scala in any of the widely used text editors like Notepad++, gedit etc. After writing the program, save the file with the extension .sc or .scala.
For Windows & Linux: Before installing the Scala on Windows or Linux, you must have Java Development Kit(JDK) 1.8 or greater installed on your system. Because Scala always runs on Java 1.8 or above.
In this article, we will discuss how to run Scala programs on online IDEs.
Example : A simple program to print Hello Geeks! using object-oriented approach.
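The listing itself (this code block was lost in extraction; it is reconstructed here to match the line-by-line explanation that follows):

```scala
object Geeks
{
    // Main Method
    def main(args: Array[String])
    {
        // prints Hello, Geeks!
        println("Hello, Geeks!")
    }
}
```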
Output:
Hello, Geeks!
Comments: Comments are used for explaining the code and are used in a similar manner as in Java or C or C++. Compilers ignore the comment entries and do not execute them. Comments can be of a single line or multiple lines.
- Single line Comments:
Syntax:
// Single line comment
- Multi line comments:
Syntax:
/* Multi-line comments syntax */
object Geeks: object is the keyword which is used to create the objects. Here “Geeks” is the name of the object.
def main(args: Array[String]): def is the keyword in Scala which is used to define the function and “main” is the name of Main Method. args: Array[String] are used for the command line arguments.
println(“Hello, Geeks!”): println is a method in Scala which is used to display the string on console.
Note: There is also a functional approach that can be used in Scala programs. Some online IDEs don't provide support for it. We will discuss it in upcoming articles.
Features of Scala
There are many features which makes it different from other languages.
- Concurrency & Synchronization: Scala makes it easier to apply parallelism (synchronization) and concurrency.
- Run on JVM & Can Execute Java Code: Java and Scala have a common runtime environment. So the user can easily move from Java to Scala. The Scala compiler compiles the program into .class file, containing the Bytecode that can be executed by JVM. All the classes of Java SDK can be used by Scala. With the help of Scala user can customize the Java classes.
Advantages:
- Scala's rich features provide better coding and efficiency in performance.
- Tuples, macros, and functions are the advancements in Scala.
- It incorporates the object-oriented and functional programming which in turn make it a powerful language.
- It is highly scalable and thus provides a better support for backend operations.
- It reduces the risk associated with the thread-safety which is higher in Java.
- Due to the functional approach, generally, a user ends up with fewer lines of codes and bugs which result in higher productivity and quality.
- Due to lazy computation, Scala computes the expressions only when they are required in the program.
- There are no static methods and variables in Scala. It uses the singleton object(class with one object in the source file).
- It also provides the Traits concept. Traits are the collection of abstract and non-abstract methods which can be compiled into Java interfaces.
Disadvantages:
- Sometimes, the two approaches (object-oriented and functional) make Scala hard to understand.
- There is a limited number of Scala developers available in comparison to Java developers.
- It has no true tail-recursive optimization as it runs on the JVM.
- It always revolves around the object-oriented concept because every function is value and every value is an object in Scala.
Applications:
- It is mostly used in data analysis with the spark.
- Used to develop the web-applications and API.
- It provide the facility to develop the frameworks and libraries.
- Preferred to use in backend operations to improve the productivity of developers.
- Parallel batch processing can be done using Scala.. | https://www.geeksforgeeks.org/introduction-to-scala/?ref=rp | CC-MAIN-2020-45 | refinedweb | 1,020 | 59.4 |
Hi there!
One question: how can I filter something coming from a database in PHP using ADDT, maybe using a conditional? For example, I have a table with posted jobs, and those jobs belong to departments, for example
You see, the departments are just two, Brokerage and Carrier, but in the database they are named like I described: Brokerage - Something and Carrier - Something.
I want to have 2 HTML tables, the one on the left only showing the Brokerage positions and the one on the right the Carrier positions. I used this on the left one: - hey, don't laugh! -
<?php
// Show IF Conditional region1
if (@$row_rsBJ['Job_Department'] != "Brokerage") {
?>
<?php do { ?>
<table width="100%" border="0" cellspacing="2" cellpadding="2">
<tr>
<td width="27%" class="ShowJobsPostings"><div align="left"><?php echo KT_formatDate($row_rsBJ['Job_Date']); ?></div></td>
<td width="73%" class="ShowJobsPostings"><div align="left"><?php echo $row_rsBJ['Job_Department']; ?><br />
<?php echo $row_rsBJ['CompanyInfo_COName']; ?> <?php echo $row_rsBJ['Job_City']; ?>, <?php echo $row_rsBJ['Job_State']; ?></div></td>
</tr>
</table>
<?php } while ($row_rsBJ = mysql_fetch_assoc($rsBJ)); ?>
<?php }
// endif Conditional region1
but didn't work, I need to filter with something like "Send me only departments that start with brokerage" or something like that, but my programming expertise is minimal, any idea? or suggestion? Of course, this using ADDTB
Thanks so much in advance!
Arturo
Just a politely Bump! | https://forums.adobe.com/message/4187954?tstart=0 | CC-MAIN-2017-22 | refinedweb | 225 | 57.98 |
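(Editorial note: outside of ADDT, the "starts with Brokerage" test Arturo asks for can be written in plain PHP. This sketch reuses the recordset names from the post and has not been tested against ADDT itself.)

```php
<?php
// Show only rows whose department starts with "Brokerage".
do {
  if (strpos($row_rsBJ['Job_Department'], 'Brokerage') === 0) {
    // ... output the table row for this job here ...
  }
} while ($row_rsBJ = mysql_fetch_assoc($rsBJ));
?>
```

The same filtering can also be pushed into the recordset's SQL, e.g. WHERE Job_Department LIKE 'Brokerage%', which avoids looping over unwanted rows in PHP.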
Even beginners can also understand this tutorial..
Even beginners can also understand this tutorial... Very nice...
Please move the psvm(String args[]){}
method as t
Please move the psvm(String args[]){}
method as the last method, so that the beginners
will first go through what are all the code that constitue the frames, and how at the end of the program, the entire code is shrunk in the name of the parent cl
Hi,
can u tell me the creation of Notepad in swi
Hi,
can u tell me the creation of Notepad in swing....... The above example is very useful for me.
Thanks.,
Gokul Kumar D.
hi
hi,
instead of using myaction class we can use THIS method. ex.. button.addActionListener(this). by using this statement we can avoid the public class Myaction()..
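A minimal sketch of what this commenter means (class and field names here are illustrative, not from the tutorial): the enclosing class implements ActionListener itself, so button.addActionListener(this) replaces the separate MyAction class.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// The form class is its own listener, so in its constructor one would
// write button.addActionListener(this) instead of creating a MyAction.
public class SelfListeningForm implements ActionListener {
    public String lastCommand = null;

    public void actionPerformed(ActionEvent e) {
        lastCommand = e.getActionCommand();
    }

    // Helper so the behaviour can be exercised without showing a GUI.
    public static String simulateClick(String command) {
        SelfListeningForm form = new SelfListeningForm();
        form.actionPerformed(new ActionEvent(form,
                ActionEvent.ACTION_PERFORMED, command));
        return form.lastCommand;
    }
}
```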
good 2 read
hi,
this is very nice to step in to swing using this example
thanks...
swings
the programe is good.but what i feel is instead of using two classes ShowDialogBox and MyAction
is it not possible to use one class and do the job
hello
sir, ilove the samples u posted..
thanks
hi
thanks for information
this is very good
thanks again
hi
hi how hide the minimize and maximize buttton in the title bar of the window using java swings plz reply as soon as possible
Thanks
Thanks by posting your code... I got a new idea in your code by using it in my web application as msgbox.. Thanks again...
Example
It's a best example and best way to learn about message box.
compiling problem
If i click "compile" button then error say "frame.add(button);". So what does this mean
One Problem
example is good but dialog box is appeared behind the parent window first time, if i am adding these lines in my java code without button.. why it is happening?? User has to press Alt+ Tab to show the dialog box. Can anyone explain this??
Thanks
Thanks. I wz looking for this
about this
sorry gay's i can't understand this leason's
Thanks For The information
The breif and Advanced information is most apt
expect the seme future
Mr
I think the above program will not execute because we are having two classes that are both public.
can u pls help me make a database program that can add or sort by name or by age?? i really am troubled in making these program pls reply as soon u r able to comply... thanx..
Dialog box that accepts user name and password
I'm interested in coding/programming special thingsrelated user interfaces!!!
why did we not use ContentPane
Here if we click on the frame and not the button even then the dialog box is being displayed.
Why use the button?
Insted could we not add content pane and then add the button so that the dialog box is displayed only when the button is pressed.
Nice
Wow, thats great, I love the simplicity of the code. Appreciate it, GOD Bless ya.
putting image in this code
can you give me an example and the code with the image by inserting the dialog box? please??
other example of dialogbox programming
please i would like thanks for your effort.
give me other.
thank you !.
hai
hai there can i get the ways hw to translet normal java coding into dialog box matted please can you guys help me>>>>>><<<<<
GOOD TUTORIAL
very nice
comment
I like this sample java program,,, this program help me to improve my knowledge.....Thank'x,,,,
good
hai thanks for ur help
04 October 2012 12:04 [Source: ICIS news]
SINGAPORE (ICIS)--Taiwan's Formosa Petrochemical Corp (FPCC) has cut the run rates at its crackers, which have a combined capacity of 2.93m tonnes/year, to 80% from 100% on 27 September - earlier than initially planned - following an outage at a downstream monoethylene glycol (MEG) plant at the site, a company source said.
The MEG plant is operated by FPCC’s sister company, Nan Ya Plastics.
Prior to the MEG plant outage, FPCC was planning to cut its cracker runs to below 90% in early October because of poor market conditions in the downstream polyethylene (PE) sector.
FPCC operates a 700,000 tonne/year No 1 cracker, a 1.03m tonne/year No 2 cracker and a 1.2m tonne/year No 3 cracker in Mailiao.
The No 2 cracker had a recent turnaround and was restarted on 14 September as | http://www.icis.com/Articles/2012/10/04/9600912/taiwans-fpcc-to-keep-80-run-rate-at-mailiao-crackers-in-october.html | CC-MAIN-2015-11 | refinedweb | 145 | 64.91 |
Source
JythonBook / TestingIntegration.rst
Chapter 18: Testing and Continuous Integration
Nowadays, automated testing is a fundamental activity in software development. In this chapter, you will see a survey of the tools available for Jython in this field. These tools range from common tools used in the Python world to aid with unit testing, to more complex tools available in the Java world that can be extended or driven using Jython.
Python Testing Tools
Let’s start the chapter with a discussion of the most common Python testing tools. We will start with UnitTest to get a feel for the process.
UnitTest
First we will take a look at the most classic test tool available in Python: UnitTest. It follows the conventions of most “xUnit” incarnations (such as JUnit); you subclass from class, write test methods (which must have a name starting with “test”), and optionally override the methods and which are executed around the test methods. And you can use the multiple methods provided by . The following is a very simple test case for functions of the built-in math module.
Listing 18-1.
import math import unittest))
There are many other assertion methods besides , of course. The following lists comparing floating point numbers.
- assertNotAlmostEqual(a, b): The opposite of assertAlmostEqual().
- assert_(x): Accepts a Boolean argument, expecting it to be . You can use it to write other checks, such as “greater than,” or to check Boolean functions/attributes. (The trailing underscore is needed because is a keyword.)
- assertFalse(x). The opposite of assert_().
- assertRaises(exception, callable). Used to assert that an exception passed as the first argument is thrown when invoking the callable specified as the second argument. The rest of the arguments passed to are passed on to the callable.
As an example, let’s extend our test of mathematical functions using some of these other assertion functions:
Listing 18-2. , then run:
$ jython test_math.py
And you will see this output:
.... ---------------------------------------------------------------------- Ran 4 tests in 0.005s OK
Each dot before the dash line represents a successfully run test. Let’s see what happens if we add a test that fails. Change the invocation method in to use instead. If you run the module again, you will see the following output:
Listing 18-3.
.. and are not equal. The last line also shows the grand total of 1 failure.
By the way, now you can imagine why using is better than : if the test fails, provides helpful information, which can’t possibly provide by itself. To see this in action, let’s change to use : that we have is the and the message. No extra information is provided to help us diagnose the failure, as was the case when we used . That’s why all the specialized methods are so helpful. Actually, with the exception of , all assertion methods accept an extra parameter meant to be the debugging message, which will be shown in case the test fails. That lets you write helper methods such as:
Listing 18 in one Python module, for maintainability reasons. Let’s create a new module named with the following test code:
Listing 18-5. of a method, which allows us to avoid repeating the same initialization code on each method. The method is executed once before every test. Similarly, the method is executed once after each test to perform cleanup activities.
And, restoring our math tests to a good state, the will contain the following:
Listing 18-6. None. A test suite is simply a collection of test cases (and/or other test suites) which, when run, will run all the test cases (and/or test suites) contained by it. Note that a new test case instance is built for each test method, so suites have already been built under the hood every time you have run a test module. Our work, then, is to “paste” the suites together.
Let’s build suites using the interactive interpreter. First, import the involved modules:
Listing 18-7.
>>> import unittest, test_math, test_lists
Then, obtain the test suites for each one of our test modules (which were implicitly created when running them using the shortcut), using the class:
Listing 18-8.
>>> loader = unittest.TestLoader() >>> math_suite = loader.loadTestsFromModule(test_math) >>> lists_suite = loader.loadTestsFromModule(test_lists)
Now we build a new suite, which combines want runners, you can easily write a script to run the tests of any project. Obviously, the details of the script will vary from project to project, depending on the way in which you decide to organize your tests. There are a number of other features that are included with the unittest framework. For more detailed information, please refer to the Python documentation.
On the other hand, you won’t typically write custom scripts to run all your tests. Using test tools that do automatic test discovery is a much more convenient approach. We will look at one of them shortly. But first, we must show you another testing tool that is very popular in the Python world: doctests.
Doctests
Doctests are an ingenious combination of, well, documentation and tests. A doctest is, in essence, no more than a snapshot of an interactive interpreter session, mixed with paragraphs of documentation, typically inside of a docstring. Here is a simple example:
Listing 18-9. will work as long as is an integer: >>> is_even(4.0) True """ remainder = number % 2 if 0 < remainder < 1: raise ValueError("%f isn't an integer" % number) return remainder == 0
Note that, if we weren’t talking about testing, we may have thought that the docstring of is just normal documentation, in which the convention of using the interpreter prompt to mark example expressions and their outputs was adopted. After all, in many cases we use examples as part of the documentation. Take a look at Java’s documentation, located in java.sun.com/javase/6/docs/api/java/text/SimpleDateFormat.html, is that it encourages the inclusion of these examples by doubling them as tests. Let’s save our example code as and add the following snippet at the end:
Listing 18-10.
if __name__ == "__main__": import doctest doctest.testmod()
Then, run it:
$ jython even.py
Doctests are a bit shy and don’t show any output on success. But to convince you that it is indeed testing our code, run it with, because the interactive examples can be directly copied and pasted from the interactive shell, transforming the manual testing in documentation examples and automated tests in one shot.
You don’t really None to include doctests as part of the documentation of the feature they test. Nothing stops you from writing the following code in, say, the module:
Listing 18-11.
"""()
Something to note about the last test in the previous example is that, in some cases, doctests are not the cleanest way to express a test. Also note that, if that test fails, you will None get useful information about the failure. It will tell you that the output was when was expected, without the extra details that would give you. The moral of the story is to realize that doctest is just another tool in the toolbox, which can fit very well in some cases, but not in others.
Note
Speaking of doctests gotchas: the use of dictionary outputs in doctests is a very common error that breaks the portability of your doctests across Python implementations (for example, illustrated by the examples in this section is the way to write expressions that are written on more than one line. As you may expect, you have to follow the same convention used by the interactive interpreter: start the continuation lines with an ellipsis (...). For example:
Listing 18-12.
""". Figure 18-1 shows one of the solutions of the puzzle.
**Figure 18-1. **Eight queens chess
We like to use doctests to check the contract of the program with the outside, and unittest for what we could see as the internal tests. We types of tests have strengths and weaknesses, and you may find some cases in which you will prefer the readability and simplicity of doctests and only use them on your project. Or you will favor the granularity and isolation of unittests and only use them on your project. As with many things in life, it’s a trade-off.
We’ll develop this program in a test-driven development fashion. The tests will be written first, as a sort of specification for our program, and code will be written later to fulfill the tests’ requirements.
Let’s start by specifying the public interface of our puzzle checker, which will live on the package. This is the start of the main module, :
Listing 18-13.
""" that can be used to verify our solution to the problem.
Now we will specify the “internal” interface that shows how we can solve the problem of writing the solution checker. It’s common practice to write the unittests on a separate module. So here is the code for :
Listing 18-14. unittests propose a way to solve the problem, decomposing it in two big tasks (input validation and the actual verification of solutions), and each task is decomposed on a smaller portion that is meant to be implemented by a function. In some way, they are an executable design of the solution.
So we have a mix of doctests and unittests. How do we run all of them in one shot? Previously we showed you how to manually compose a test suite for unit tests belonging to different modules, so that may be an answer. And indeed, there is a way to add doctests to test suites: . But, because ).
An easy way to install Nose is via setuptools. If you have not yet installed setuptools, please see Appendix A for details on doing so. Once you have setuptools installed, you can proceed to install Nose:
Listing 18-15.
$ easy_install nose
Once Nose is installed, an executable named will appear on the bin/ directory of your Jython installation. Let’s try it, locating ourselves on the parent directory of and running:
Listing 18-16.
$ nosetests --with-doctest
By default, Nose does None run doctests, so we have to explicitly enable the doctest plug-in that comes built in with Nose.
Back to our example, here is the shortened output after running Nose:
Listing 18-17.
FEEEEEE [Snipped output] ---------------------------------------------------------------------- Ran 8 tests in 1.133s FAILED (errors=7, failures=1)
Of course, all of our tests (7 unittests and 1 doctest) failed. It’s time to fix that. But first, let’s run Nose again, None the doctests, because we will follow the unittests to construct the solution. And we know that as long as our unittests fail, the doctest will also likely fail. Once all unittests pass, we can check our whole program against the high level doctest and see if we missed something or did it right. Here is the Nose output for the unittests:
Listing 18-18.
$ . That is, the , , and _ functions, in , , and , to pass ::
Listing 18-19.
$ nosetests ...E... ======================================================================' ---------------------------------------------------------------------- Ran 7 tests in 0.938s FAILED (errors=1)
Finally, we have to assemble the pieces together to pass the test for :. For more information on Nose, please see the documentation available at the project web site: code.google.com/p/python-nose/.:
Listing 18-20.
""" that you learned in this chapter to test code written in Java. We the 1960s; and is growing an important user base is Hudson. Among its prominent features are the ease of installation and configuration, and the ability to deploy it in a distributed, master/slaves environment for cross-platform testing.
But we think Hudson’s main strength is its highly modular, plug-in-based architecture, which has resulted in the creation of plug-ins to support most of the version control, build and reporting tools, and many languages. One of them is the Jython plug-in, which allows you to use the Python language to drive your builds.
You can find more details about the Hudson project at its homepage at hudson.dev.java.net.
Getting Hudson
Grab the latest version of Hudson from hudson-ci.org/latest/hudson.war. You can deploy it to any servlet container, such as Tomcat or Glassfish. But one of the cool features of Hudson is that you can test it by simply running:
Listing 18-21.
$ java -jar hudson.war
After a few seconds, you will see some logging output on the console, and Hudson will be up and running. If you visit the site localhost:8080 you will get a welcome page inviting you to start using Hudson creating new jobs.
Note
WARNING The default mode of operation of Hudson fully trusts its users, letting them execute any command they want on the server, with the privileges of the user running Hudson. You can set stricter access control policies on the Configure System section of the Manage Hudson page.
Installing the Jython Plug-in
Before creating jobs, we will install the Jython plug-in. Click on the Manage Hudson link on the left-hand menu. Then click Manage Plug-ins. Now go to the Available tab. You will see a very long list of plug-ins (we told you this was the greatest Hudson strength!) Find the Jython Plug-in, click on the checkbox at its left (as shown in Figure 18-2), then scroll to the end of the page and click the Install button.
**Figure 18-2. **Selecting the Jython Plug-in
You will see a bar showing the progress of the download and installation, and after little while you will be presented with a screen like that shown in Figure 18-3, notifying you that the process has finished. Press the Restart button, wait a little bit, and you will see the welcome screen again. Congratulations, you now have a Jython-powered Hudson!
Creating a Hudson Job for a Jython Project
Let’s now follow (the equivalent to the New Job entry on the left-hand menu), you will be asked for a name and type for the job. We will use the project built on the previous section, so name the project “eightqueens”, select the “Build a free-style software project” option, and press the OK button.
In the next screen, we need to set up an option on the Source Code Management section. You may want to experiment with your own repositories here (by default only CVS and Subversion are supported, but there are plug-ins for all the other VCSs used out there). For our example, we’ve hosted the code on a Subversion repository at: kenai.com/svn/jythonbook~eightqueens. So select Subversion and enter kenai.com/svn/jythonbook~eightqueens/trunk/eightqueens/ as the Repository URL.
Note
Using the public repository will be enough to get a feeling of Hudson and its support of Jython. However, we plug-in, which adds the Execute Jython Script build step.
Note
At the time of writing, the Jython plug-in (version 1.0) does not ship with the standard library. This is expected to change in the next version.
So, click on Add Build Step and then select Execute Jython Script. We will use our knowledge of test suites gained in the UnitTest section. The following script will be enough to run our tests:
Listing 18-22.)
Figure 18-4 shows how the page looks so far for the Source Code Management, Build Triggers, and Build sections.
**Figure 18-4. **Hudson Job Configuration
The next section explains how to specify an action to carry once the build has finished, ranging from collecting results from reports generated by static-analysis tools to testing runners to sending emails notifying someone of build breakage. We have left these options blank so far. Click the Save button at the bottom of the page.
At this point Hudson will show the job’s main page. But it won’t contain anything useful, because Hudson is waiting for the hourly trigger to poll the repository and kick the build. But we don’t need to wait if we don’t want to: just click the Build Now link on the left-hand menu. Shortly, a new entry will be shown in the Build History box (also on the left side, below the menu), as shown in Figure 18-5.
**Figure 18-5. **The first build of our first job
If you click on the link that just appeared, you will be directed to the page for the build we just made. If you click on the Console Output link on the left-hand menu, you will see what’s shown in Figure 18-6.
**Figure 18-6. **Console output for the build
As you would expect, it shows that our eight tests (remember that we had seven unittests and the module doctest) all passed.
Using Nose on Hudson
You may be wondering why we crafted a custom-built script instead of using Nose, because we stated that using Nose was much better than manually creating suites.
The problem is that the Jython runtime provided by the Jython Hudson plug-in-hand menu, go to the Build section of the configuration, and change the Jython script for our job to:
Listing 18-23.
# () and set an environment in which it will work. Then, we check for the availability of Nose, and if it's not present we install it using .
The interesting part is the last line:
Listing 18-24.
nose.run(argv=['nosetests', '-v', '--with-doctest', '--with-xunit'])
Here we are invoking Nose from python code, but using the command line syntax. Note the usage of the option. It generates JUnit-compatible XML reports for our tests, which can be read by Hudson to generate very useful test reports. By default, Nose will generate a file called on the current directory.
To let Hudson know where the report can be found, scroll to the Post Build Actions section in the configuration, check “Publish JUnit test result reports,” and enter “” in the Test Report XMLs input box. Press Save. If Hudson points you that “doesn’t match anything,” don’t worry: just press Save again. Of course it doesn’t match anything None, because we haven’t run the build again.
Trigger the build again, and after the build is finished, click on the link for it on the Build History box, or go to the job page and following the Last build [...] permalink). Figure 18-7 shows what you see if you look at the Console Output, and Figure 18-8 shows what you see on the Test Results page.
**Figure 18-7. **Nose's Output on Hudson
None**Hudson’s test reports
Navigation on your test results is a very powerful feature of Hudson. But it shines when you have failures or tons of tests, which is not the case on this example. But we wanted to show it in action, so we fabricated some failures on the code to show you some screenshots. Look at Figure 18-9 and Figure 18-10 to get an idea of what you get from Hudson.
**Figure 18-9. **Graph of test results over time
**Figure 18-10. **Test report showing failures.
Summary
Testing is fertile ground for Jython usage, because you can exploit the flexibility of Python to write concise tests for Java APIs, which also tend to be more readable than the ones written with JUnit. Doctests in particular don’t have a parallel in the Java world, and can be a powerful way to introduce the practice of automated testing for people who want it to be simple and easy.
Integration with continuous integration tools, and Hudson in particular, lets you get the maximum from your tests, avoid unnoticed test breakages and delivers a live history of your project health and evolution. | https://bitbucket.org/idalton/jythonbook/src/28b0486ae6c1/TestingIntegration.rst | CC-MAIN-2015-32 | refinedweb | 3,308 | 71.44 |
You're reading the documentation for a development version. For the latest released version, please have a look at Humble.
Migrating launch files from ROS 1 to ROS 2
Table of Contents
This guide describes how to write XML launch files for an easy migration from ROS 1.
Background
A description of the ROS 2 launch system and its Python API can be found in Launch System tutorial.
Replacing an include tag
In order to include a launch file under a namespace as in ROS 1 then the
include tags must be nested in a
group tag.
<group> <include file="another_launch_file"/> </group>
Then, instead of using the
ns attribute, add the
push_ros_namespace action tag to specify the namespace:
<group> <push_ros_namespace namespace="my_ns"/> <include file="another_launch_file"/> </group>
Nesting
include tags under a
group tag is only required when specifying a namespace
Substitutions
Documentation about ROS 1’s substitutions can be found in roslaunch XML wiki.
Substitutions syntax hasn’t changed, i.e. it still follows the
$(substitution-name arg1 arg2 ...) pattern.
There are, however, some changes w.r.t. ROS 1:
envand
optenvtags have been replaced by the
envtag.
$(env <NAME>)will fail if the environment variable doesn’t exist.
$(env <NAME> '')does the same as ROS 1’s
$(optenv <NAME>).
$(env <NAME> <DEFAULT>)does the same as ROS 1’s
$(env <NAME> <DEFAULT>)or
$(optenv <NAME> <DEFAULT>).
findhas been replaced with
find-pkg-share(substituting the share directory of an installed package). Alternatively
find-pkg-prefixwill return the root of an installed package.
There is a new
exec-in-pkgsubstitution. e.g.:
$(exec-in-pkg <package_name> <exec_name>).
There is a new
find-execsubstitution.
arghas been replaced with
var. It looks at configurations defined either with
argor
lettag.
evaland
dirnamesubstitutions haven’t changed.
anonsubstitution is not supported.
Type inference rules
The rules that were shown in
Type inference rules subsection of
param tag applies to any attribute.
For example:
<!--Setting a string value to an attribute expecting an int will raise an error.--> <tag1 attr- <!--Correct version.--> <tag1 attr- <!--Setting an integer in an attribute expecting a string will raise an error.--> <tag2 attr- <!--Correct version.--> <tag2 attr- <!--Setting a list of strings in an attribute expecting a string will raise an error.--> <tag3 attr- <!--Correct version.--> <tag3 attr-
Some attributes accept more than a single type, for example
value attribute of
param tag.
It’s usual that parameters that are of type
int (or
float) also accept an
str, that will be later substituted and tried to convert to an
int (or
float) by the action. | https://docs.ros.org/en/rolling/How-To-Guides/Launch-files-migration-guide.html | CC-MAIN-2022-40 | refinedweb | 426 | 57.06 |
In this shot, we will discuss how to convert a decimal number to a hexadecimal number in C++.
When we convert a decimal number to a hexadecimal number, we divide the number by . Then, we divide the quotient by again. We repeat the process until the quotient becomes .
We will take the hexadecimal in type
string. Hexadecimal number values range between to and then to .
Let’s look at the hexadecimal values and their equivalent decimal counterparts.
Take a look at the code snippet below to understand this better.
#include <iostream> #include <algorithm> using namespace std; int main() { int decimal, remainder, product = 1; string hex_dec = ""; cin >> decimal; while (decimal != 0) { remainder = decimal % 16; char ch; if (remainder >= 10) ch = remainder + 55; else ch = remainder + 48; hex_dec += ch; decimal = decimal / 16; product *= 10; } reverse(hex_dec.begin(), hex_dec.end()); cout << "The number in the hexadecimal form is: " <<hex_dec; }
Enter the input below
Enter a number in the input section above.
In line 6, we initialize the variables
decimal,
remainder, and
product.
In line 7, we initialize
the variable
hex_dec as a string. This string will store the result in reverse order.
In line 9, we take
decimal as input.
From lines 10 to 20, we initialize a while loop. In the loop, we calculate the remainders and quotients as discussed in the above illustration to convert the decimal number to its hexadecimal equivalent. In line 12, we initialize a variable
ch of
char type that stores each of the hexadecimal values for the decimal digits.
In line 22, we use the
reverse() function to print the output in hexadecimal form.
In line 23, we print the output, i.e., the hexadecimal equivalent of the decimal number.
This way, we can convert the value of a decimal number to a hexadecimal number.
RELATED TAGS
CONTRIBUTOR
View all Courses | https://www.educative.io/answers/how-to-convert-a-number-from-decimal-to-hexadecimal-in-cpp | CC-MAIN-2022-33 | refinedweb | 305 | 58.28 |
Hi all,
New to the forum here, and coding in general. I am posting because I am having a problem with a simple program intended to act as a word, character and line counter, with each counter being its own function.
I keep running into the same problem over and over. Everything works fine if I don't break them out into separate functions. But when I break them into individual functions, something breaks. If I call the linecount function first, then it will work fine, but the other two return a value of 0. If I call the other two first, then the linecount function will will return a value of 0.
I am sure something is happening as variables are passed from the main function to linecount, but I can't seem to understand why it's changing. My understanding is that passing a variable into a function as a parameter wouldn't alter the original value (it's only modified locally within the local function).
Anyway, here is what I have. I removed most of the comments so it was shorter and quicker for people to read. If that was a bad move (again, I am a complete beginner) then let me know and i will post the commented version.
I am not looking for someone to fix my code, as much as help me understand what I am missing.
def linecount(x): lines = 0 for line in x: lines += 1 return lines def wordcount(wc_target): split_words = wc_target.split(None) words = len(split_words) return words def charactercount(cc_target): characters = 0 for number in cc_target: characters += 1 return characters def main(): print """ This program will evaluate a text (.txt) file, count the number of lines, words, and total characters in the file you enter then display each value. ------------------------------------------------------------ """ filename = raw_input("Please enter the name of the file you would like to count: ") readfile = open(filename) listdata = readfile.read() print "For the file %s, here are your values:" % filename print "Lines :", linecount(readfile) print "Words :", wordcount(listdata) print "Characters:", charactercount(listdata) readfile.close() main()
thanks for any thoughts, guidance! | https://www.daniweb.com/programming/software-development/threads/190157/help-debugging-a-simple-program | CC-MAIN-2017-43 | refinedweb | 350 | 62.17 |
I have a thread that uses a handler to post a runnable instance. it works nicely but I'm curious as to how I would pass params in to be used in the Runnable instance? Maybe I'm just not understanding how this feature works.
To pre-empt a "why do you need this" question, I have a threaded animation that has to call back out to the UI thread to tell it what to actually draw.
Simply a class that implements
Runnable with constructor that accepts the parameter can do,
public class MyRunnable implements Runnable { private Data data; public MyRunnable(Data _data) { this.data = _data; } @override public void run() { ... } }
You can just create an instance of the Runnable class with parameterized constructor.
MyRunnable obj = new MyRunnable(data); handler.post(obj); | https://codedump.io/share/FVR4xW4uEKPj/1/is-there-a-way-to-pass-parameters-to-a-runnable | CC-MAIN-2017-34 | refinedweb | 131 | 64.3 |
US20100191783A1 - Method and system for interfacing to cloud storage
Info
- Publication number: US20100191783A1
- Application number: US12508614
- Authority: US
- Grant status: Application
- Patent type:
- Prior art keywords
- file system
- data
- volume
- local file
- metadata
- Caching or prefetching or hoarding
- Versioning file systems, temporal file systems, e.g. file system supporting different historic versions of data
- G06F2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/805: Real-time
Description
- This application is based on and claims priority to Ser. No. 61/146,978, filed Jan. 23, 2009.
- This application also is related to Ser. No. 12/483,030, filed Jun. 11, 2009.
- FIG. 1 is a block diagram illustrating how the subject matter of this disclosure interfaces a local file system to an object-based data store;
- FIG. 2 is a block diagram of a representative implementation of a portion of the interface shown in FIG. 1;
- FIG. 3 illustrates how the interface may be implemented in association with different types of local file systems;
- FIG. 4 illustrates the interface implemented as an appliance within a local processing environment;
- FIG. 5 illustrates a portion of a file system “tree” showing the basic component elements that are used to create a structured data representation of the “versioned” file system according to the teachings herein;
- FIG. 6 illustrates the portion of the tree (as shown in FIG. 5) after a change to the contents of the file has occurred in the local file system;
- FIG. 7 illustrates the portion of the tree (as shown in FIG. 5) after a change to the contents of the c-node has occurred;
- FIG. 8 illustrates the portion of the tree (as shown in FIG. 5) after a change to the contents of a directory has occurred;
- FIG. 9 illustrates how a number of file changes are aggregated during a snapshot period and then exported to the cloud as a new version;
- FIG. 10 illustrates how CCS maintains an event pipe; and
- FIG. 11 illustrates how the CCS Volume Manager allows one or many VFS roots to be mounted to an FSA instance associated with a physical server.
- FIG. 1 illustrates how the subject matter of this disclosure interfaces a local file system 100 to an object-based data store 102. Although not meant to be limiting, preferably the object-based data store 102 is a “write-once” store and may comprise a “cloud” of one or more storage service providers. The subject matter is an interface 104, which provides for a “versioned file system” that only requires write-once behavior from the object-based data store 102 to preserve substantially its “complete” state at any point-in-time.
As used herein, the phrase “point-in-time” should be broadly construed, and it typically refers to periodic “snapshots” of the local file system (e.g., once every “n” minutes). The value of “n” and the time unit may be varied as desired.
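The versioning scheme just described can be illustrated with a short sketch (all names and structures here are hypothetical illustrations; the patent does not publish an implementation): at each snapshot interval, the directory tree is rendered as a structured XML representation and written to a write-once store under a new version key, so earlier versions are never overwritten and every point-in-time state remains recoverable.

```python
import xml.etree.ElementTree as ET

class WriteOnceStore:
    """Toy write-once object store: each key may be written exactly once."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        if key in self._objects:
            raise ValueError(f"write-once violation: {key} already exists")
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

def snapshot_to_xml(tree, name="root"):
    """Render a nested dict {name: subtree-or-file-bytes} as an XML element."""
    elem = ET.Element("directory", name=name)
    for child, value in sorted(tree.items()):
        if isinstance(value, dict):                       # subdirectory
            elem.append(snapshot_to_xml(value, child))
        else:                                             # file: record metadata only
            ET.SubElement(elem, "file", name=child, size=str(len(value)))
    return elem

def export_version(store, version, tree):
    """Write one point-in-time snapshot as a new, never-overwritten object."""
    root = snapshot_to_xml(tree)
    root.set("version", str(version))
    store.put(f"snapshot/{version}", ET.tostring(root, encoding="unicode"))

store = WriteOnceStore()
fs = {"docs": {"a.txt": b"hello"}, "readme.md": b"hi"}
export_version(store, 1, fs)
fs["docs"]["b.txt"] = b"world"   # a local change during the next snapshot period
export_version(store, 2, fs)
print(store.get("snapshot/1"))
print(store.get("snapshot/2"))
```

Because each version key is written once and never reused, the complete history of the file system is just the ordered set of these metadata objects; restoring an earlier state only requires reading the corresponding snapshot.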
- As other computing device),.
- A single appliance may be associated with more than one local file system. In such case, the appliance will have multiple VFS instances associated therewith. A given VFS generated by the appliance as described herein may be conceptualized as a “file system in the cloud” or “cloud volume,” and each such cloud volume may connect to one or more storage service providers. As used herein, a “volume” is an abstraction that is not tied to any physical location or capacity (except in the general sense of being associated with one or more storage service providers). A volume (or “cloud volume”) is simply a “container” for the VFS generated by the appliance. As will be seen, a Volume Manager is provided to enable the user of the appliance to create, administer and manage volumes.
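The “cloud volume” abstraction can be sketched as follows (the class and method names are hypothetical, not the patent’s API): a volume is just a named container that records which storage service providers back a given VFS, with no physical location or capacity attached, and one or more VFS roots may be associated with it.

```python
class VolumeManager:
    """Toy volume manager: a volume is a named container mapped to one or
    more storage service providers (SSPs), not to any physical storage."""
    def __init__(self):
        self._volumes = {}

    def create_volume(self, name, providers):
        if name in self._volumes:
            raise ValueError(f"volume {name!r} already exists")
        self._volumes[name] = {"providers": list(providers), "vfs_roots": []}

    def attach_vfs_root(self, name, root_id):
        # one or many VFS roots may be associated with a single volume
        self._volumes[name]["vfs_roots"].append(root_id)

    def providers(self, name):
        return self._volumes[name]["providers"]

mgr = VolumeManager()
mgr.create_volume("engineering", ["ssp-a", "ssp-b"])  # no size or location given
mgr.attach_vfs_root("engineering", "vfs-0001")
print(mgr.providers("engineering"))
```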
- The interface 104 generates and exports to the write-once data store a series of structured data representations (e.g., XML documents) that together comprise the versioned file system. The data representations comprise “metadata” and are stored in the data store. As will be described below, the interface 104 may also perform other transformations, such as compression, encryption, de-duplication, and the like, before exporting the metadata (the VFS) and the data that it represents to the cloud.
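The additional transformations mentioned (compression, encryption, de-duplication) compose naturally as a pipeline applied to each piece of data before it leaves the appliance. A minimal sketch follows, using zlib for compression and a content hash as the de-duplication key; the XOR “cipher” is a stand-in only, since a real implementation would use a proper authenticated cipher:

```python
import hashlib
import zlib

def toy_encrypt(data, key=0x5A):
    # Placeholder transform only; a real appliance would use an
    # authenticated cipher here, not a repeating XOR.
    return bytes(b ^ key for b in data)

def prepare_for_export(data, dedup_index):
    """Compress and 'encrypt' one piece of data before export.
    Returns (object_key, payload); payload is None when an identical
    piece was already exported, so nothing new goes to the cloud."""
    digest = hashlib.sha256(data).hexdigest()  # content hash keys de-duplication
    if digest in dedup_index:
        return digest, None
    payload = toy_encrypt(zlib.compress(data))
    dedup_index[digest] = True
    return digest, payload

index = {}
key1, payload1 = prepare_for_export(b"the same bytes", index)
key2, payload2 = prepare_for_export(b"the same bytes", index)  # duplicate data
print(key1 == key2, payload2 is None)  # prints: True True
```

De-duplicating on the plaintext content hash means identical data written twice in the local file system is uploaded only once, which matters when storage providers charge per byte stored and transferred.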
FIG. 2 is a block diagram of a representative implementation of how the interface captures all (or given) read/write events from a local file system 200. In this example implementation, the interface comprises a file system agent 202 that is positioned within a data path between a local file system 200 and its local storage 206. The file system agent 202 has the capability of “seeing” all (or some configurable set of) read/write events output from the local file system. The interface also comprises a content control service (CCS) 204 as will be described in more detail below. The content control service is used to control the behavior of the file system agent. The object-based data store is represented by the arrows directed to “storage” which, as noted above, typically comprises any back-end data store including, without limitation, one or more storage service providers. The local file system stores local user files (the data) in their native form in cache 208. Reference numeral 210 represents that portion of the cache that stores pieces of metadata (the structured data representations, as will be described) that are exported to the back-end data store (e.g., the cloud).

FIG. 3 is a block diagram illustrating how the interface may be used with different types of local file system architectures. In particular, FIG. 3 shows the CCS (in this drawing a Web-based portal) controlling three (3) FSA instances. Once again, these examples are merely representative and they should not be taken to limit the invention. In this example, the file system agent 306 is used with three (3) different local file systems: NTFS 300 executing on a Windows operating system platform 308, MacFS 302 executing on an OS X operating system platform 310, and EXT3 or XFS 304 executing on a Linux operating system platform 312. These local file systems may be exported (e.g., via CIFS, AFP, NFS or the like) to create a NAS system based on VFS.
Typically, there is one file system agent per local file system. In an alternative implementation, a single file agent may execute multiple threads, with each thread being associated with a local file system. As noted above, conventional hardware, or a virtual machine approach, may be used in these implementations, although this is not a limitation. As indicated in FIG. 3, each platform may be controlled from a single CCS instance 314, and one or more external storage service providers may be used as an external object repository 316. As noted above, there is no requirement that multiple SSPs be used, or that the data store be provided using an SSP.

FIG. 4 illustrates the interface implemented as an appliance within a local processing environment. In this embodiment, the local file system traffic 400 is received (or “intercepted”) over Ethernet and represented by the arrow identified as “NAS traffic.” That traffic is provided to smbd layer 402, which is a SAMBA file server daemon that provides CIFS (Windows-based) file sharing services to clients. The layer 402 is managed by the operating system kernel 404 in the usual manner. In this embodiment, the local file system is represented (in this example) by the FUSE kernel module 406 (which is part of the Linux kernel distribution). Components 400, 402 and 404 are not required to be part of the appliance. The file transfer agent 408 of the interface is associated with the FUSE module 406 as shown to intercept the read/write events as described above. The CCS (as described above) is implemented by a pair of modules (which may be a single module), namely, a cache manager 410, and a volume manager 412. Although not shown in detail, as noted above preferably there is one file transfer agent instance 408 for each local file system. The cache manager 410 breaks up large files into smaller objects (the chunks) for transfer and storage efficiency, and also because some cloud providers have their own size limits for files.
The cache manager 410 is responsible for management of “chunks” with respect to a cache, which in this example is shown as local disk cache 414. The cache may also comprise portions of memory.
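The chunking behavior described above can be sketched as follows. The function name and the 4 MiB default chunk size are assumptions for illustration; the specification does not fix a chunk size.

```python
import os
import tempfile

def chunk_file(path, chunk_size=4 * 1024 * 1024):
    """Split a large file into fixed-size chunks for transfer and
    storage efficiency (some cloud providers cap object sizes).
    Yields (index, bytes) pairs; the final chunk may be short."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield index, data
            index += 1

# Demonstration with a small chunk size on a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 2500)
    name = f.name
chunks = list(chunk_file(name, chunk_size=1024))
print(len(chunks))  # -> 3 (1024 + 1024 + 452 bytes)
os.unlink(name)
```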
- The cache manager and the associated caching operations provide significant advantages. Preferably, the cache (disk and/or disk and memory) comprises at least some of the data and metadata already written to the cloud, as well as all of the data and metadata waiting to be written to the cloud. In one illustrative embodiment, the cache is managed by the cache manager 410 such that recently used data and metadata, as well as write data and metadata pending transfer to the cloud is kept local, but typically is only a relatively small percentage of the overall data and metadata stored in the cloud. The cache manager 410 provides intelligent cache management by establishing and maintaining a set of least recently used (LRU) queues or the like and implementing an LRU or other intelligent caching algorithm. This enables the interface to maintain a local cache of the data structures (the structured data representations) that comprise the versioned file system. In effect, data and metadata are staged to the cloud on-demand to provide a “thin provisioning” solution. Importantly, the cache and cache management policies facilitate recovery and “near-instant” restore operations. In particular, using the cache, the file system agent is capable of providing immediate or substantially immediate file system access. The file system agent also can completely recover from the cloud the state of the file system, although that operation of course takes longer than the recovery using locally-cached data and metadata.
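A minimal sketch of such an LRU cache follows. The class and method names are illustrative, not the appliance's actual cache manager API; the "pinned" flag stands in for the requirement that data and metadata pending transfer to the cloud must be kept local.

```python
from collections import OrderedDict

class ChunkCache:
    """LRU cache of chunks. Entries pending upload to the cloud are
    pinned so they are never evicted before transfer completes."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()   # key -> (data, pinned)

    def put(self, key, data, pinned=False):
        self._entries[key] = (data, pinned)
        self._entries.move_to_end(key)  # mark as most recently used
        self._evict()

    def get(self, key):
        if key not in self._entries:
            return None                 # cache miss: fetch from cloud
        self._entries.move_to_end(key)
        return self._entries[key][0]

    def unpin(self, key):
        """Called once the chunk has been written to the cloud."""
        data, _ = self._entries[key]
        self._entries[key] = (data, False)

    def _evict(self):
        # Evict least-recently-used, unpinned entries over capacity.
        for key in list(self._entries):
            if len(self._entries) <= self.capacity:
                break
            if not self._entries[key][1]:
                del self._entries[key]

cache = ChunkCache(capacity=2)
cache.put("a", b"A", pinned=True)   # pending upload: never evicted
cache.put("b", b"B")
cache.put("c", b"C")                # forces eviction of "b" (LRU, unpinned)
print(cache.get("b"))  # -> None
```

A miss (as for "b" above) is the point at which the file system agent would stage the chunk back in from the cloud on demand.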
- Referring back to FIG. 4, the volume manager 412 maps the root of the FSA data to the cloud (as will be described below), and it further understands the one or more policies of the cloud storage service providers. The volume manager also provides the application programming interface (API) to these one or more providers and communicates the structured data representations (that comprise the versioned file system) through a transport mechanism 416 such as cURL. Further details of the volume manager 412 are provided below. cURL is a command line tool for transferring files with URL syntax that supports various protocols such as FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS and FILE. cURL also supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication, file transfer resume, proxy tunneling, and the like. Typically, each storage service provider has its own unique API, and there is no requirement that a particular SSP implement any particular storage system (or even have knowledge of the local file system, directories, files, or the like). The appliance, however, is able to interoperate with any such SSP through the use of a plug-in architecture that also supports rapid support for new providers. As noted above, the VFS can reside in any basic data store that supports basic REST-like functions such as GET, PUT, DELETE and the like.
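The plug-in architecture can be sketched as an abstract provider interface exposing only the basic REST-like verbs. The class and method names below are illustrative; a real plug-in would speak the provider's own HTTP API (e.g., via cURL, as noted above).

```python
from abc import ABC, abstractmethod

class StorageProvider(ABC):
    """Plug-in interface for a storage service provider. Only basic
    REST-like functions are required of the back-end data store."""
    @abstractmethod
    def put(self, uri, data): ...
    @abstractmethod
    def get(self, uri): ...
    @abstractmethod
    def delete(self, uri): ...

class InMemoryProvider(StorageProvider):
    """Stand-in back end for testing the plug-in interface; it has no
    knowledge of the local file system, directories or files."""
    def __init__(self):
        self._objects = {}
    def put(self, uri, data):
        self._objects[uri] = data
    def get(self, uri):
        return self._objects[uri]
    def delete(self, uri):
        del self._objects[uri]

store = InMemoryProvider()
store.put("cloud://vol1/cnode/0001", b"<fs-version/>")
print(store.get("cloud://vol1/cnode/0001"))  # -> b'<fs-version/>'
```

Supporting a new provider then reduces to writing one small subclass, which is what makes rapid support for new providers possible.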
- The structured data representations preferably are encrypted and compressed prior to transport by the transformation module 418. The module 418 may provide one or more other data transformation services, such as duplicate elimination. The encryption, compression, duplicate elimination and the like, or any one of such functions, are optional. A messaging layer 420 (e.g., local socket-based IPC) may be used to pass messages between the file system agent instances, the cache manager and the volume manager. Any other type of message transport may be used as well.
- As noted above, the components of the interface shown in FIG. 4 may be distinct or integrated. Thus, the specific interface architecture shown in this drawing is merely illustrative and not meant to be taken by way of limitation.
- The interface shown in FIG. 4 may be implemented as a standalone system or in association with a service. The interface typically executes in an end user (local file system) environment. In a service solution, a managed service provider provides the interface (e.g., as a piece of downloadable software) and the versioned file system service, the latter preferably on a fee or subscription basis, and the data store (the cloud) typically is provided by one or more third party service providers. The managed service thus operates as a gateway to the one or more cloud service providers. The interface may have its own associated object-based data store, but this is not a requirement, as its main operation is to generate and manage the structured data representations that comprise the versioned file system. The cloud preferably is used just to store the structured data representations, preferably in a write-once manner, although the “versioned file system” as described herein may be used with any back-end data store. Each structured data representation exported to the cloud represents a version of the local file system. Generalizing, the versioned file system is a set of structured data (e.g., XML) objects.
- As described above, the file system agent is capable of completely recovering from the cloud (or other store) the state of the native file system and, by using the cache, providing immediate or substantially immediate file system access, as described in more detail below.
FIG. 5 is a representation of a portion of a tree showing the basic elements that are represented in a versioned file system according to the teachings herein. The reference numeral 500 is a c-node (or “cloud” node). A c-node preferably contains all of the information passed by a file system agent instance about an inode (or inode-equivalent) of the local file system. As will be seen in the examples below, the inode subset of the c-node includes data that would be returned by a typical “stat” function call, plus any additional extended attributes that are file system-dependent. One or more remaining parts of the c-node are used to provide a CCS super-user with additional access control and portability across specific file system instances. Stated another way, c-nodes preferably act as super-nodes for access control to files and metadata. While the inode sub-structure contains information from the original local file system, c-nodes allow administrators of the system to gain access to files in a portable, file system-independent manner. Preferably, each c-node is addressable by a URI. A c-node preferably also includes a pointer to the actual location of the data file. C-nodes indicate where the remote copies of the item may be found in the data store. The reference numeral 502 is a datafile. This object represents the file preferably as it was created in the local file system. One of the main benefits to isolating the metadata in the c-nodes is that a user's data files can be stored with no modifications. As in a traditional file system, preferably the name of the file is stored in the directory or directories that contain it and not as a part of the file itself. Preferably, URIs (for the actual data files in the cloud) remain opaque to the end-users, although this is not a requirement. An FSA instance controls access to the data file URIs through the respective c-nodes. The reference numeral 504 is a directory.
Directories are c-nodes that contain a simple list relating names to the corresponding URIs for other c-nodes that, in turn, point to other files or directories. Directories provide a convenient way to establish a namespace for any data set. There can be multiple directories that point to the same files or directories. As in traditional file systems, preferably symbolic links are simply multiple name entries that point to the same c-node. Directories are owned by their own c-node, which preferably holds its metadata and controls access to it.

FIG. 6 illustrates the portion of the tree (as shown in FIG. 5) after a change to the contents of the file 502 has occurred in the local file system. In this example, which is merely representative, a new version of the local file system is then created (preferably at a “snapshot” period, which is configurable). The new version comprises the file 602, the new c-node 600, and the new directory 604. As also seen in this drawing, the changes to the tree also propagate to the root. In particular, and according to the teachings herein, upon a given occurrence in the local file system (as will be described), a “new version” of the file system is created (for export to the cloud), and this new version is represented as a new structured data representation (e.g., a new XML document). As will be seen, the new structured data representation differs from the prior version in one or more parent elements with respect to the structured data element in which the change within the file system occurred. Thus, upon a change within the file system, the disclosed interface creates and exports to the data store a second structured data representation corresponding to a second version of the file system, and the second structured data representation differs from the first structured data representation up to and including the root element of the second structured data representation.
In this manner, the interface provides for a “versioned” file system that has complete data integrity to the data store without requiring global locks. As noted, this approach circumvents the problem of a lack of reliable atomic object replacement in cloud-based object repositories. FIG. 6 illustrates one type of change (a file update) that triggers the generation of a new version. FIG. 7 illustrates another type of change (an update to c-node 700) that also triggers the generation of a new version with changes propagated to root, and FIG. 8 illustrates yet another type of change (an update to each of the directories 804 and 808) that also implements a new version, once again with changes propagated to root. Generalizing, while the types of changes that trigger a new version may be quite varied, typically they include one of the following: file creation, file deletion, file modification, directory creation, directory deletion and directory modification. This list is not intended to be taken by way of limitation.
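The propagation of a change up to the root, with unchanged subtrees shared between versions, is the path-copying technique familiar from persistent data structures. The following sketch shows the idea with a deliberately simplified c-node (real c-nodes carry inode attributes, ACLs and URIs; the names here are illustrative).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CNode:
    """Simplified c-node: a name plus file data or child c-nodes."""
    name: str
    data: bytes = b""
    children: tuple = ()

def update_file(root, path, new_data):
    """Return a NEW root in which the file at `path` holds `new_data`.
    Only the changed node and its ancestors are re-created; untouched
    subtrees are shared, so the old tree remains a complete, readable
    prior version — no global lock or atomic replacement needed."""
    if not path:
        return CNode(root.name, new_data, root.children)
    head, rest = path[0], path[1:]
    new_children = tuple(
        update_file(c, rest, new_data) if c.name == head else c
        for c in root.children
    )
    return CNode(root.name, root.data, new_children)

file_a = CNode("a.txt", b"hello")
docs = CNode("docs", children=(file_a,))
pics = CNode("pics")
v1 = CNode("/", children=(docs, pics))
v2 = update_file(v1, ["docs", "a.txt"], b"world")
print(v1.children[0].children[0].data)  # -> b'hello' (old version intact)
print(v2.children[0].children[0].data)  # -> b'world'
```

Note that `v2` shares the unchanged `pics` subtree with `v1` by reference, which is what keeps a new version lightweight.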
FIG. 9 illustrates this approach. As seen in this drawing, an FSA instance preferably aggregates all of the changes to the local file system in two ways: delta frames 900, and reference frames 902. The delta frames 900 control the number (and size) of the objects that need to be stored in cloud storage. As noted above, preferably every local file system event is recorded by the FSA instance as a change event 904. As noted, new inodes, directories and files trigger corresponding new entities (created by FSA) in the cloud; however, preferably modifications to existing structures create change events that are aggregated by FSA into a single new entity, the delta frame 900. A delta frame 900 starts with a new root that represents the current state of the file system. Preferably, the FSA instance compiles the delta frame information such that each of the new entry points (i.e., any modifications to the previous version) to c-nodes, directories and files are represented as new versions of the data structures plus pointers to the old structures. To reconstruct the current state of a local file system, an FSA client only has to walk a tree for any version to see all the correct items in the tree. Reference frames 902 are also compiled by FSA and contain an aggregation of the previous reference frame plus all the intervening delta frames.
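The aggregation of a previous reference frame with intervening delta frames can be sketched as follows. Frames here are plain dictionaries mapping a c-node URI to its latest version number; real frames are structured (XML) documents, so this only illustrates the merge semantics.

```python
def make_reference_frame(prev_reference, delta_frames):
    """Merge the previous reference frame with the intervening delta
    frames, in order, so the newest change to each entry wins."""
    merged = dict(prev_reference)
    for delta in delta_frames:      # apply oldest first, newest last
        merged.update(delta)
    return merged

ref0 = {"/a": 1, "/b": 1}
deltas = [{"/a": 2}, {"/c": 1}, {"/a": 3}]
ref1 = make_reference_frame(ref0, deltas)
print(ref1)  # -> {'/a': 3, '/b': 1, '/c': 1}
```

An FSA recovering a file system from the cloud would then need only the latest reference frame plus any deltas recorded after it, rather than the full event history.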
- As illustrated in FIG. 10, the CCS maintains an event pipe. The event pipe (with its entry points into cloud storage) is then the primary means to access all files stored remotely. In particular, one of ordinary skill in the art will appreciate that this is a lightweight data structure that preferably contains only versions of root for the given volume. Although it is desired that CCS be highly available, preferably the “writes” occur periodically in a transaction safe way as controlled by FSAs. The “reads” are only necessary when an FSA copy has failed; therefore, CCS can be run using an ordinary (high-availability) database or file-based back-end. Preferably, the mix of delta and reference frames in the event pipe is chosen to balance storage and bandwidth utilization against a practical recovery time for FSA to create a new local file system instance. The composition of the event pipe can also be set according to a configurable policy. For instance, users may choose to keep only so many versions or versions dating back to a specific date. If desired, a rotation schedule can be specified at CCS such that, for instance, deltas are kept daily for a month and then rolled into a monthly reference frame.
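Such a rotation policy might be sketched as follows. Frame records are (date, kind, payload) tuples and the 30-day retention window is an illustrative parameter; the design leaves the policy configurable at CCS.

```python
from datetime import date, timedelta

def rotate_event_pipe(frames, today, keep_days=30):
    """Apply a retention policy to the event pipe: delta frames older
    than `keep_days` are folded into a single reference frame; newer
    frames are kept as-is. Payloads are dicts of URI -> version."""
    cutoff = today - timedelta(days=keep_days)
    rolled, kept = {}, []
    for when, kind, payload in frames:  # frames assumed oldest-first
        if kind == "delta" and when < cutoff:
            rolled.update(payload)      # fold old delta into the roll-up
        else:
            kept.append((when, kind, payload))
    if rolled:
        kept.insert(0, (cutoff, "reference", rolled))
    return kept

frames = [
    (date(2024, 5, 1), "delta", {"/a": 1}),
    (date(2024, 6, 25), "delta", {"/b": 2}),
]
pipe = rotate_event_pipe(frames, today=date(2024, 6, 30))
print([kind for _, kind, _ in pipe])  # -> ['reference', 'delta']
```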
- As noted above, the VFS comprises a series of structured data representations that are exported to the cloud. Typically, a simple directory tree is pushed to the cloud as a version. As one or more changes in the local file system occur, a new version of that tree is exported, with changes propagated to root as described. Preferably, data is not deleted, overwritten or updated, and any version can be retrieved from the cloud at any given time.
- The following provides additional details regarding the Content Control Service (CCS). As noted above, the CCS is responsible for the configuration and control of one or more FSA instances that belong to a VFS implementation. CCS also serves a layer of indirection between the FSA instances and their current representation in the cloud. Preferably, CCS is executed as a software service in the cloud via web-based portal access, although this is not a limitation. The CCS web portal provides administrators a set of familiar tools that act similarly to traditional volume management functions.
- A customer owns a set of volumes. A volume is the point of indirection that separates the logical from the physical implementations, the file systems from the actual storage cloud. The administrator of that set has access to manage the volumes. Through CCS, the administrator (the user) controls read/write access, number of copies to the cloud, and other high level permissions and attributes, preferably at the volume level. To create a new, empty volume, preferably the administrator uses the CCS web portal to create a new volume. There is no need to associate disks with the new volume; rather, all that is needed is the volume name.
- CCS preferably contains a registry for the FSA instances. The registry is used to authenticate each FSA opening access to its corresponding cloud store. In a typical use case, an entity (such as an organization, an individual, a computing system, or the like), registers with a service provider, which provider provides the VFS “gateway” service. An administrator (or other user) is permitted to access and to configure the organization's use of the system. In use, the administrator logs into the CCS, e.g., via a web-based portal, and sees the file system agents that are associated with (belong to) the organization. The administrator can navigate his or her collection of file system agents (corresponding to file system instances) and perform configuration and management functions such as: set and change a configuration for an FSA, upgrade FSA software, create and delete an FSA, activate and suspend an FSA, change ownership of an FSA, migrate back-end FSA remote storage repository, and one or more volume manager operations that are described in more detail below. Preferably, the CCS also allows the administrator the ability to set quotas in bulk for storage and bandwidth utilization. It also aggregates reporting information from one or more reports generated by the file system agents for the organization. Preferably, any errors reported to the CCS (e.g., a failure to find an object in the cloud) are reported in CCS via the portal access.
- CCS also performs management of the encryption keys used to encrypt data sent to the cloud. The CCS manages those keys and enables customers to generate new keys or use existing keys to encrypt their data.
- As illustrated in FIG. 11, a Volume Manager 1102 preferably runs in the CCS 1100 and maps the root of the VFS data structure 1104, or volume, to physical servers 1106. To execute any write operations in the data model, preferably at least one server running an FSA instance must be mapped to it. Assuming proper access credentials, an FSA can read from a volume that has no FSA association. Preferably, volumes persist in the cloud even when there are no FSAs associated with them. As noted above, a volume is an abstraction that represents a container for a given VFS. FSA instances can exist either in the remote local file systems being managed, or they can be instantiated in a compute layer that is logically close to the cloud storage. The Volume Manager 1102 allows one or many VFS roots to be mounted in a single FSA instance. Preferably, each root defines its own namespace that is identified by the name of the volume. In this way, the volumes behave as in a traditional Unix-based file system. A given volume may store copies of metadata and data at multiple clouds for replication. Or, a pair of volumes may be mounted to a single FSA where each volume stores in a different cloud. There may be multiple volumes associated with an FSA instance or, conversely, there may be multiple FSA instances associated with a single volume. A VFS root may exist without having an FSA associated with it. In this circumstance, the data set is physically present; however, users cannot perform operations on the set. The Volume Manager 1102 also contains one or more control routines to facilitate data replication to multiple remote object repositories. The Volume Manager 1102 may also migrate volumes from one remote storage repository to another.
- The following section provides additional description regarding the behavior of the data model in typical Information Technology (IT) use cases. These scenarios are composed from a set of primitive operators that can be combined to create complex data management behaviors. Unless otherwise indicated, this functionality is implemented by the CCS and supported by the appliance.
- Preferably, the operators are executed by FSA instances. A given operation may be executed by a different FSA than the one that stored the data originally. The basic operators are based on the commands of the Unix file system, with several important differences. The operators work across volumes even if those volumes are mounted on different FSA instances. Moreover, preferably operators work on the directory structures as they exist in time. Preferably, the operators obey access control privileges defined in the c-nodes. The operations at this level typically are for administrators of the system working through the CCS at the volume level. Local file system operations typically are performed at the local level using already available file system tools.
- This operation creates a new volume, identifies the cloud SSP that will be used to store the volume, identifies the number of copies of the volume that should be created, as well as specifies other volume level operations such as encryption level, encryption keys, and the like. Thus, one or more additional parameters, such as the cloud repository to use, replication to multiple clouds, and so forth, are set with this command. Preferably, no other operations can be performed until the volume is associated with an FSA.
- This operation associates or de-associates an existing volume with an FSA.
- This operation moves a c-node from a source to a destination. If no additional parameters are specified, the movement occurs from the present version of the source. In addition, a time variable may be used to specify a particular directory version to be removed from the source. Preferably, moving a complete directory structure involves only versioning the parent c-node to terminate at the source, which indicates that the child is no longer attached to it. A target c-node is then created or versioned from an existing one to point to the directory store. This is a lightweight operation, as none of the children are affected by this operation, and none of the data is actually moved. Preferably, a move is executed at root and at a certain point in time, although this is not a limitation. Thus, for example, a move can be applied at the sub-directory level.
- This operation connects a new c-node on the source to a destination. There is no change to the destination c-node. The links allow for file traversal to jump across volumes. Preferably, the new c-nodes created as a result of this operation have their own access control; however, in the event that the volumes are mounted in a different FSA, care must be taken to enforce the write permissions of the owning volume at the FSA. Link basically works like Move, only the ownership of the target remains with the original c-node. The same time parameter preferably applies for the destination.
- This operation copies the c-nodes and the data files for the source to the destination. At the volume level, this operation preferably is performed from a compute layer that has ample bandwidth to the storage. Preferably, a Copy command leaves no links to the source. New instances of c-nodes and data files are then created from the source. Copy (like Move) specifies the same time parameter behavior for its source.
- This operation changes a version and view of the volume but typically does not remove any data from the cloud. The operation creates a new version in the VFS data structure that, in effect, terminates the old i-nodes leaving intact all of the previous versions. The operation may specify a moment in time, and it merely changes the version/view to exclude data. The data remains in the cloud for recovery purposes.
- This operation enables the user to clean up/prune space used in the cloud. This operation removes history, and it deletes the c-node and all of its children. This operation may be executed to reduce storage.
- Change Mode (chmod)
- This operation accesses the c-node security layers and allows the administrator to change the attributes of a volume or part of the directory structure.
- This operation behaves similarly to its Unix equivalent by providing statistics about the file system.
- The VFS offers data management capabilities by combining the basic operators. Preferably, the FSA client performs some of the data management operations and administrator clients (preferably executing near the cloud storage) may perform some of the other operations. The list below is merely representative.
- A user that wants to create a new instance from scratch would choose to instantiate a new volume. The user would then link his or her local FSA instance to this volume. The user would be able to choose a cloud to associate with the Volume as well. Preferably, the level of caching done by the FSA client is defined locally at the client by providing available cache space.
- Using the Volume Manager, the administrator unmounts an old FSA client and then mounts a new FSA to the same volume. Metadata flows back first to enable the file system to come quickly back online.
- The administrator moves the volumes responsible for the servers being merged into either a new single volume or into one of the existing volumes. Ownership of those data structures is transferred to the new volume by the Move command.
- The administrator moves part of a volume (picking the appropriate directory) to a new volume with its new matching FSA. This split can happen anywhere in a current directory or at a particular point in time (version set).
- Recovery of a full volume is the same as a migration. Performing a file or directory level restore to a certain point in time preferably involves either a linking from a present created directory (called, say, /Restored Files) to a desired point in time. The entry point for this recovery mode can be a point in time or an object (file or directory). The administrator can then choose to move individual files or directories into the present or roll-back the whole system to a certain point in time.
- A script may be executed to select a specific moment in time and move the files to an Archive Volume. At the same time, if continued access is desired from the old Volumes, the script may link the old file names to the Archive Volume.
- A script may be executed to select a certain point in time and remove all entries. This is the purge operation. Preferably, this operation is run asynchronously in bandwidth proximity to the remote object repository.
- Other scenarios may include indexing the data (Discovery) by moving it to an external service, de-duplication (assuming object transparency), integrity checks, content distribution using third party providers, and so forth.
- The described subject matter provides numerous advantages. The interface provides a primary, local, but non-resident file system to facilitate storage of user data and file system metadata in one or more storage service providers. The FSA provides a caching system comprised of local cache storage and algorithms for provisioning file system data and metadata to the local client. A versioning system implemented in the file system and part of the file system metadata structure provides backup and disaster recovery functions. The described framework leverages security, ACL and other standard attributes of a local file system. The subject matter herein provides support for multiple storage service providers. It also enables protection of data (e.g., via mirroring, RAID, or the like) across and within storage service providers.
- The FSA enables full functionality (reads, writes, deletes) during periods of outage by the storage service providers (with the exception of read/access of uncached data). Preferably, the FSA provides a cache collision avoidance mechanism to avoid data loss. The FSA also preferably provides an audit log at the file system object (directory, file) level that includes the history of the objects (create, update, rename/move, etc.). It also performs internal integrity checks by comparing system metadata and data against the data stored in the storage service providers. The FSA preferably maintains cryptographic hashes of file data (in its entirety or in portions) within file system metadata for the purposes of data integrity checking.
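The per-portion integrity hashing described above can be sketched as follows. SHA-256 and the 1 MiB chunk size are assumptions for illustration; the specification does not fix a hash algorithm or portion size.

```python
import hashlib

def chunk_digests(data, chunk_size=1 << 20):
    """Compute a cryptographic hash per chunk of file data, to be
    stored in the file system metadata for later integrity checks
    against the copy held by the storage service provider."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def verify(data, stored_digests, chunk_size=1 << 20):
    """Re-hash the data and compare against the stored digests."""
    return chunk_digests(data, chunk_size) == stored_digests

payload = b"x" * (2 * (1 << 20) + 5)   # spans three chunks
digests = chunk_digests(payload)
print(len(digests), verify(payload, digests))  # -> 3 True
```

Hashing per portion, rather than per file, means a corrupted object in the cloud can be localized to a single chunk instead of forcing a whole-file comparison.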
- Using the CCS, the system administrator can provision volumes that provide multiple top level directories for management purposes. Volume level operations allow the movement of portions of the namespace between system instances. Volume level properties control attributes for the system instances and for the use of the storage service providers. Volume level metrics provide information about the use and capacity of the volumes. Preferably, customers of the service can create multiple instances of the system, and portions of the namespace can be shared between copies of the system.
- The disclosed technique enables quick restore of file system metadata from the storage service providers. The system also enables full system access with only resident metadata thus providing near-instant recovery of failed systems for disaster recovery scenarios. As noted above, preferably data and metadata saved in the cloud (the one or more storage service providers) is encrypted but shareable by control of the system administrator. The system preferably also limits bandwidth consumption by sending to the cloud only incremental changes. The system preferably limits space consumption by saving to the cloud only incremental changes. More generally, the service enables customers to create instances of the system within a cloud computing layer (e.g., Amazon EC2) to allow system administrators to execute global operations. This enables entities to provide value-added services.
- The system has the additional attributes of being able to be CIFS- and NFS-exportable to provide NAS functionality. Further, the system may be instantiated in a virtual machine.
- The system preferably reduces duplicate data by using metadata to reference the same data multiple times, and by using metadata to reference sub-file fragments of the same data multiple times.
- The Content Control Service (CCS) provides additional advantages. It provides a web-based portal to manage the service. Using CCS, system administrators create and manage volumes, provide access to volumes to others outside their organization, monitor system metrics, manage FSA instances, and subscribe and manage one or more other service features.
- The appliance provides a secure and reliable link between network attached storage (NAS) and a cloud store. It caches and provides thin provisioning of the cloud to deliver virtually unlimited storage capacity among one or more storage locations, and it facilitates data snapshots and secure sharing among those locations. Preferably, the subject matter described herein is packaged as a virtual NAS appliance, although this is not a limitation, as has been described. The appliance preferably implements a simple web-based interface that is easy-to-use, and that allows access to multiple storage clouds from a single control panel. The appliance provides full support for known technologies such as Windows Shares, CIFS and Active Directory. In use, the user creates volumes out in the storage clouds and publishes them, preferably as Windows Shares (although this is not a limitation). The interface also facilitates advanced features such as snapshots and rollbacks.
- The disclosed subject matter integrates traditional file systems with cloud storage, simplifying file management. It simplifies storage by providing one platform that addresses all of the key areas of storage management, namely, protection, provisioning and file portability. By combining these attributes into an integrated platform, the disclosed subject matter significantly reduces storage management expense and complexity. In a preferred embodiment, enhanced protection is provided in several ways: security, backup and disaster recovery. With respect to security, preferably all data is sent to the cloud encrypted. Preferably, data is encrypted at a user premises using known technologies (e.g., OpenPGP with AES-256) and remains encrypted in the cloud. This guarantees end-to-end protection of customer data, which is never visible to the service provider or to the cloud vendors. Backup and restore also are built into the VFS, as all changes to the local file systems are versioned, and the VFS stores them in the cloud and keeps track of all versions, past and present. A user can roll back to any version without having to do a traditional restore. Disaster recovery also is intrinsic to the VFS because all data exists in the cloud, and the cloud architecture inherently protects data with copies in multiple locations. A single cloud typically is robust enough for most users, although extra protection can be provided by associating the data to multiple clouds.
- The platform also provides enhanced provisioning in the form of unlimited capacity and multi-cloud support. The VFS allows thin-provisioning, and it turns a local file system into a cache for the cloud. As a result, the VFS grows continuously in the cloud and delivers unlimited storage to customers. Preferably, the platform optimizes data in the cache, working within the constraints of the local storage capacity while maximizing performance and reducing unnecessary network traffic to the cloud. Moreover, the VFS can be provisioned by more than one cloud, allowing customers to select vendors according to price, quality of service, availability, or some combination thereof. Thus, for example, a customer may send a first set of user files to a less-expensive cloud while sending a more sensitive second set of files to a compliance-grade cloud. Preferably, the data is de-duplicated and compressed before being sent to the cloud to reduce network traffic and storage costs.
- As noted above, the platform also provides enhanced portability. A VFS file retains forever its history, but it is not dependent on any particular instance of the system. Files are stored in the cloud in their native forms while the VFS accumulates metadata including the locations, history, associations, and the like of the individual files. This allows customers to easily migrate file servers, to combine them, and to share data with other organizations. It also enables partners to introduce value-added services such as compliance, search, archiving and the like.
- The subject matter disclosed herein thus provides a virtual appliance that acts as a gateway that enables the cloud storage of files. A service provider provides the appliances to its users (customers), and it may offer an SSP gateway (or “access”) service to those users in the form of the CCS and other ancillary services, such as billing. In an illustrative use case, a customer registers with the service, downloads and installs the virtual appliance in its data center, and then configures one or more volumes (through CCS) to gain access to one or more (preferably third party) storage clouds. The service provider acts as a go-between that continuously monitors cloud performance and availability, and makes that information available to its customers. It provides customers a choice among cloud vendors to facilitate the full potential of multi-vendor cloud storage. Preferably, the service provider itself does not host or otherwise store the customer's data and the metadata (the VFS) generated by the appliance, although this is not a requirement.
- specially constructed for the required purposes, or it may comprise a.
Claims (15)
- 1. at least first and second portions of the metadata and the local file system data represented by the metadata in association with the local file system; exporting the metadata and local file system data to one or more storage service providers; wherein the first portion cached represents metadata and local file system data that is to be written to the one or more storage service providers, and the second portion cached represents recently used local file system data.
- 2. The computer-readable medium as described in claim 1, wherein the method further includes: applying one or more data transformations to the metadata and the local file system data prior to exporting.
- 3. The computer-readable medium as described in claim 2, wherein the one or more data transformations are one of: compression, encryption, de-duplication, and combinations thereof.
- 4. The computer-readable medium as described in claim 1, wherein at least one of the storage service providers has associated therewith a write-once data store.
- 5. The computer-readable medium as described in claim 1, wherein a structured data representation is an XML representation.
- 6. The computer-readable medium as described in claim 4, wherein the method further includes configuring a volume in at least one of the storage service providers to store metadata and the local file system data that the metadata represents.
- 7. The computer-readable medium as described in claim 6, wherein the method further includes executing a management function with respect to the volume, wherein the management function is selected from one of: moving the volume, copying the volume, linking the volume, recovering the volume, removing the volume, changing an attribute associated with the volume, and reporting on data associated with the volume.
- 8. The computer-readable medium as described in claim 1, wherein a structured data representation is generated upon a change within the file system.
- 9. The computer-readable medium as described in claim 8, wherein the change within the file system is one of: a file creation, file deletion, file modification, directory creation, directory deletion and directory modification.
- 10. The computer-readable medium as described in claim 1, wherein the method further includes using the second portion cached to restore the local file system on an as-needed basis.
- 11. The computer-readable medium as described in claim 1, wherein the method further includes restoring the local file system to a point-in-time by retrieving metadata and local file system data from the one or more storage service providers and performing a restore operation at the local file system.
- 12. the metadata and the local file system data represented by the metadata in association with the local file system; applying one or more data transformations to the metadata and the local file system data; and exporting the metadata and local file system data, as transformed by the one or more data transformations, to one or more of a configurable set of storage service providers.
- 13. An apparatus for configuring one or more user local file systems to interface to cloud storage, comprising: a processor; a computer-readable medium having stored thereon instructions that, when executed by the processor, performs a configuration method, comprising: creating a volume in cloud storage for use in storing a series of structured data representations that represent versions of a user's local file system; associating to the volume a file system agent that executes in the user local file system, wherein the file system agent intercepts local file system data and generates the series of structured data representations; and identifying one or more storage service providers to host the volume.
- 14. The apparatus as described in claim 13, wherein the configuration method further includes executing a management function with respect to the volume, wherein the management function is selected from one of: associating or de-associating the volume with the file system agent, moving the volume, copying the volume, linking the volume, recovering the volume, removing the volume, changing an attribute associated with the volume, and reporting on data associated with the volume.
- 15. The apparatus as described in claim 13, wherein the configuration method further includes providing an encryption key to the user local file system for use in encrypting the structured data representations.
I import tqdm like this:
import tqdm
Traceback (most recent call last):
File "process.py", line 15, in <module>
for dir in tqdm(os.listdir(path), desc = 'dirs'):
TypeError: 'module' object is not callable
path = '../dialogs'
dirs = os.listdir(path)
for dir in tqdm(dirs, desc = 'dirs'):
print(dir)
The error is telling you that you are trying to call the module itself. You can't do that.
To call the function you just have to do
tqdm.tqdm(dirs, desc = 'dirs')
to solve your problem. Or simply change your import to
from tqdm import tqdm
But, the important thing here is to review the documentation for what you are using and ensure you are using it properly. | https://codedump.io/share/l4W44661p7XD/1/tqdm-39module39-object-is-not-callable | CC-MAIN-2017-22 | refinedweb | 115 | 76.62 |
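To see why Python raises this error, you can reproduce it with `datetime` from the standard library, which, like tqdm, is a module containing a class of the same name. It is used here only as a stand-in, since tqdm is a third-party package:

```python
import datetime                       # binds the *module* to the name "datetime"
from datetime import datetime as dt   # binds the *class* inside it to "dt"

try:
    datetime(2020, 1, 1)              # calling the module raises TypeError
except TypeError as e:
    print(e)                          # 'module' object is not callable

print(dt(2020, 1, 1).year)            # calling the class works: 2020

# With tqdm it is exactly the same choice:
#   import tqdm           ->  tqdm.tqdm(dirs, desc='dirs')
#   from tqdm import tqdm ->  tqdm(dirs, desc='dirs')
```

Either spelling is fine; just be consistent about whether the name refers to the module or the callable inside it.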
What are programming languages, in general? They are nothing more than sets of semantic and syntactic rules, much like natural languages. The only difference is that they describe how to build proper instructions that will produce a working application. They are directions for both the user and the computer: the language specifies how the computer should understand certain instructions.
As there are many languages, there are many techniques for programming. How can you group them into categories? You have certainly come across the term "programming paradigm". What does it mean? Put simply, it is a style of programming. More precisely, it is a set of techniques used to create a program and a definition of how a computer executes it. Paradigms are a way to categorize programming languages based on what types of techniques they support. Different languages can be based on a single paradigm or support many. Let's see what the 5 most popular paradigms are.
Imperative Programming Paradigm
It is the paradigm corresponding to the oldest programming techniques. It's also the most basic one. According to the philosophy of this paradigm, code is just a series of instructions that change the state of the machine. For example, defining a variable changes the state of registers as well as memory, and so the state of the whole computer. The machine just executes commands, nothing more. Essentially, you describe to the computer what to do, not what you want to achieve. This paradigm is strictly connected to the von Neumann computer architecture. Most of the common programming languages are imperative.
You can use the languages within this paradigm for creating the logic of the app. Validation of data or execution of mathematical algorithms are only some of the tasks that are suitable for this paradigm.
Imperative Programming Language Example: JavaScript
One of the many languages that implement this paradigm is JavaScript, as you can express a JS program as a series of instructions for the machine. The syntax is much like that of the C language, but the first big difference is automatic semicolon insertion. This feature allows omitting semicolons at the end of lines. Here is a sample of JS code:
function fun() {
    var thisIsVariable = 8
    var alsoAVariable = "Hi!"
    thisIsVariable = true
    console.log(alsoAVariable)
    if (thisIsVariable) {
        var now = new Date()
        var today = now.getDate() + "/" + (now.getMonth() + 1) + "/" + now.getFullYear()
        console.log(today);
    }
}
JS alongside HTML and CSS is one of the core technologies in the field of creating web apps. Mainly, because you can easily execute it in a web browser. It can also be used in some server-side tasks thanks to Node.js. In the case of web apps, it is usually responsible for page behavior, including animations and responding to user interactions. It also has several APIs that allow working with data structures, regular expression, and text. On the other hand, JS does not provide any input or output API by itself.
Besides web applications, you can also implement software for many other platforms like desktop, mobile or embedded with JavaScript. Frameworks such as Felgo SDK use JavaScript for scripting logic of your application.
In the case of basic concepts, there are several things to know. First of all, JS is weakly typed. It is also dynamically typed, which allows you to use techniques like duck typing. Although it implements the imperative paradigm, it also supports the object-oriented programming paradigm. You will find out more about this paradigm later in this post.
Declarative Programming Paradigm
This paradigm is the opposite of imperative programming. In this case, you describe what you want to achieve and not how. The programmer does not control the flow; you express the logic by declaring the outcome. One of the strengths of declarative programming is that it streamlines the creation of parallel apps.
There are several use cases for declarative programming. For instance, database query languages implement this paradigm. The frontend is another field where declarative programming shines.
Declarative Programming Language Example: QML
The best example of a language that follows this paradigm is QML. In this language, the code is based on objects containing several parameters describing them – it looks similar to JSON format. Below, you can see an example:
import Felgo 3.0
import QtQuick 2.5

App {
    id: app

    NavigationStack {
        Page {
            title: "Hi!"

            AppText {
                anchors.centerIn: parent
                text: "Hello Felgo"
            }
        } // Page
    } // NavigationStack
} // App
You can organize the items in QML in a tree structure. This parenting system allows items to communicate in both ways. As you saw in the example above, you use this mechanism in the “anchors.centerIn: parent” line, which allows you to position text in the center of the page. One of the very handy features of this language is that it allows property bindings. The properties of your objects can automatically update according to changes occurring in other parts of the application. This helps to keep your state across different components consistent. You also do not have to worry about updating different parts of the UI if some value changes.
QML is also a great choice when it comes to fluid animations. The state's system greatly simplifies the way of handling the UI changes. It also allows you to automatically execute animations when you change the properties.
QML is internally translated to C++. Thanks to this, it’s very efficient. It also allows your code to run on various platforms, like desktop, mobile or embedded.
Any language cannot do without some way of declaring functions. That is why QML allows declaring JavaScript functions within its code to handle imperative aspects and implement some more advanced behavior. You can also write JS code in-line.
Thanks to JS, you have access to many functionalities of this mature language while using QML. All of it with the knowledge that you already have! This greatly simplifies the process of creating a responsive user interface and providing an excellent user experience.
Learn how to code with QML, Qt and Felgo. Consult with out experts today!
Object-oriented Programming Paradigm
Object-oriented programming is probably the most popular programming paradigm. It is based on organizing application elements around data by defining the application as a set of objects. Each object contains part of the necessary data as member variables, along with the methods that operate on that data. This approach has several advantages: it helps to provide code reusability, data security, and scalability.
There are four main principles of object-oriented programming:
- Encapsulation – Access to an object's variables and methods should be restricted in a way that only necessary data is available for other objects to use. This ensures that data is secure, and the chance of data corruption is reduced.
- Abstraction – Objects hide any unnecessary implementation code. An example of following this principle is creating getters and setters.
- Inheritance – Sub-classes containing the logic of a parent class can be created. This greatly improves code reusability.
- Polymorphism – Objects can be interpreted differently according to the context.
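To make the four principles concrete, here is a minimal sketch in Python (the Shape, Circle and Square classes are hypothetical examples, not from the article):

```python
import math

class Shape:
    def __init__(self, name):
        self._name = name       # encapsulation: "private" by naming convention

    @property
    def name(self):             # abstraction: expose only what callers need
        return self._name

    def area(self):
        raise NotImplementedError

class Circle(Shape):            # inheritance: Circle reuses Shape's logic
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius

    def area(self):             # polymorphism: same call, different behaviour
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self.side = side

    def area(self):
        return self.side ** 2

# The loop doesn't care which concrete class it gets
for shape in (Circle(1), Square(2)):
    print(shape.name, shape.area())
```

The caller only ever talks to the `Shape` interface, so new shapes can be added without touching the loop.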
Object-oriented programming is commonly used when creating complex applications.
Object-Oriented Programming Language Example: C++
The use cases are very diverse: server-side applications, games, and even operating systems. The best-known language implementing this paradigm is C++. Below is some sample code to refresh your knowledge:
class Board {
    string name;
    int size;
    char color;

public:
    Board(string name);
    ~Board();
    Dice* basicDice;  // assumes a Dice class with a roll() method is declared elsewhere

    void info() {
        cout << name << endl;
        cout << size << endl;
        cout << color << endl;
        cout << basicDice->roll() << endl;
    };
};
This language is famous for its high performance and its ability to access low-level device memory. Thanks to code portability, you can also easily execute it on embedded devices. Over the years, C++ has been significantly improved by the addition of many new features, such as smart pointers.
You can integrate the C++ components into QML code. In this way, the application can execute any heavy computation tasks using the performance of C++.
Functional Programming Paradigm
Functional programming is a variant of declarative programming. In this case, you interpret the application as a complex mathematical function. It takes specified input parameters and returns a result after processing. In some languages that implement this paradigm, there is no such thing as a variable, as the state of the machine is not taken into consideration. However, that approach is not very common; usually, languages implement other paradigms alongside functional programming. You can use the functional programming paradigm in machine learning, modeling of speech, computer vision, and so on.
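As a minimal illustration of this idea in Python (the functions below are hypothetical examples): the "program" is just a composition of pure functions mapping an input to an output, without touching any external state.

```python
from functools import reduce

def double(x):
    return 2 * x          # pure: output depends only on input

def is_even(x):
    return x % 2 == 0     # pure: no side effects

numbers = range(1, 11)

# The whole "program" is one expression: filter -> map -> reduce
result = reduce(lambda acc, x: acc + x,
                map(double, filter(is_even, numbers)),
                0)

print(result)  # 2*(2+4+6+8+10) = 60
```

Because every function is pure, the pipeline can be reasoned about (and parallelized) piece by piece.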
Functional Programming Language Example: Python
Python, one of the most popular programming languages, implements a functional paradigm. The creators of Python focused on making the code as clear as possible. The first thing that catches an eye while looking at Python code is the lack of brackets. You can see it by yourself:
def qsort(L):
    if L == []:
        return []
    return (
        qsort([x for x in L[1:] if x < L[0]])
        + L[0:1]
        + qsort([x for x in L[1:] if x >= L[0]])
    )
As in the case of JS, Python is dynamically typed. It also provides a garbage collector, which is a common feature of high-level programming languages. Python is also prized for its extensive standard library.
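Dynamic typing is what makes duck typing, mentioned earlier for JavaScript, possible in Python as well. A small hypothetical sketch:

```python
class Duck:
    def speak(self):
        return "quack"

class Person:
    def speak(self):
        return "I'm quacking!"

def make_it_speak(thing):
    # No type check: anything with a .speak() method is accepted
    return thing.speak()

print(make_it_speak(Duck()))    # quack
print(make_it_speak(Person()))  # I'm quacking!
```

The function never asks what `thing` is; it only cares that the object "quacks like a duck".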
Structured Programming Paradigm
The structured programming paradigm - also called modular programming - is an approach that has its roots in imperative programming. The main idea of this approach is to create a program as a set of separate modules. The paradigm focuses more on the process itself than on the required data. You should create every module, which is usually a self-contained function, in a way that allows you to reuse it while keeping it clean and readable. Just as with imperative programming, you execute the instructions one after another. There is no support for instructions like the infamous GOTO, which causes the creation of spaghetti code. Structured programming also encourages you to create code in a top-down approach: you start with the most complex block and add details later. Structured programming is quite popular in microservices and selected embedded systems.
Structural Programming Language Example: C
As already mentioned, many languages use multiple paradigms. One such example is the C language, which had its first release in 1972. It combines aspects of the imperative and procedural approaches. Mechanisms first implemented in C can also be found in modern languages including Java, JavaScript, C++, and many others.
Today's languages are much more convenient to use, but efficiency remains the key advantage of the C language. Thanks to that, you can use C on low-performance devices such as microcontrollers and other embedded systems.
What about some more code examples? Here you can find some code written in top-down implementation:
struct Rectangle {
    int a;
    int b;
    int surface;
};

/* you pass a parameter either by value or by pointer */
int calculateSurface(struct Rectangle *rec);
void printRectangleDetails(struct Rectangle rec);

int main() {
    printf("Hello, World!\n");

    struct Rectangle rec;
    rec.a = 4;
    rec.b = 3;

    calculateSurface(&rec);
    printRectangleDetails(rec);

    return 0;
}

int calculateSurface(struct Rectangle *rec) {
    const int surface = rec->a * rec->b;
    rec->surface = surface;
    return surface;
}

void printRectangleDetails(struct Rectangle rec) {
    printf("Rectangle's surface side: %d\n", rec.surface);
}
Conclusion
The paradigms presented in this post are not the only ones you can find. There are as many flavors of programming as there are languages. By learning about the most popular paradigms, you now know in which cases you can use them and which languages implement them.
You also learned that a single language can mix several paradigms. You can achieve the same by using two different languages in one program – like QML and JavaScript. Using them together is a great way to provide every functionality that you need in an app, while making good use of the knowledge you already have. You can try it with Felgo, as apps using this framework are based on QML code, so JS functions are also supported.
If you want to build your first mobile and cross-platform app, Felgo is a way to go. It is not only a framework for adding some objects but is also a whole set of tools. You can reload code instantly with QML Hot Reload or test your app wirelessly with Live Server. Just download the SDK with the button below!
If you want to learn more from the experts, simply sign up for our training session or watch one of our webinars. Felgo offers tailored Qt training and workshops, so you get the most value and a customized experience. You can also hire one of our experts if you are looking for an experienced team that uses the latest technologies to develop beautiful and performant apps.
Related Articles:
QML Tutorial for Beginners
3 Practical App Development Video Tutorials
Best Practices of Cross-Platform App Development on Mobile
Custom properties on nodes - Tim Conrad, Nov 3, 2010 2:54 PM
Hi.
I am trying to get the "Custom properties on nt:file and nt:folder nodes" () running on a JBOSS AS 5.1 / Modeshape 2.3 software stack.
So I created an "acme.cnd" file and now need to tell ModeShape to find it and add it to the namespace.
My first idea was to put an additional line into the "<jcr:nodeTypes>" section in the "modeshape-services.jar/modeshape-config.xml" file.
But whatever I have tried so far, ModeShape doesn't see the file or the information it contains.
Can someone please point out where to put the file or what I am doing wrong here?
Cheers
Tim
1. Re: Custom properties on nodes - Brian Carothers, Nov 3, 2010 4:42 PM (in response to Tim Conrad)
You should be able to add this to your configuration file inside the mode:repository tag for your JCR repository, assuming that acme.cnd is located in some location where it is accessible with a resource path of "/acme.cnd". If this doesn't work for you, please post your config file so that we can try it out.
<!--
Import the custom node types defined in the named resource (a file at a classpath-relative
path). If there was more than one file with custom node types, we could either add successive
<jcr:nodeTypes ... /> elements or just add all of the files as a comma-delimited string in the
mode:resource property.
-->
<jcr:nodeTypes mode:
2. Re: Custom properties on nodes - Tim Conrad, Nov 3, 2010 5:06 PM (in response to Brian Carothers)
Hi.
That was pretty much my (newbie) quuestion: in which folder would Modeshape (JBOSS-AS) look into if I specify "/acme.cnd"?
Cheers
Tim
3. Re: Custom properties on nodes - Brian Carothers, Nov 3, 2010 7:42 PM (in response to Tim Conrad)
Actually, now that I look at it, 'Shape will first treat the path as a resource path. If it can't find anything at that resource path, it will try to load a file from that file path. If that doesn't work either, it will throw an exception. Sorry for the half-answer before.
4. Re: Custom properties on nodes - Tim Conrad, Nov 4, 2010 8:15 AM (in response to Brian Carothers)
OK - and what (file-)path would it look for?
Say my JBOSS-AS is located at "/servers/jbossas/server/default/" and ModeShape's "modeshape-config.xml" resides inside "/server/default/deploy/modeshape-services.jar/". If I now have the "<jcr:nodeTypes mode:" in this config file - in which directory should I place the "acme.cnd" file?
5. Re: Custom properties on nodes - Brian Carothers, Nov 4, 2010 8:37 AM (in response to Tim Conrad)
Since you're deploying ModeShape as a JAR, you have to have acme.cnd in a directory that's on the classpath, either implicitly (like /server/default/conf) or explicitly (like some directory you put into your JBOSS_CLASSPATH environment variable before starting up the server).
Alternatively, you could treat "/acme.cnd" as a file path and put it in the root directory of your filesystem, but there are some obvious flaws with that deployment mechanism.
If it were me and I wanted to externalize 'Shape from any particular webapp or service running on the app server, I would probably make a modeshape directory under /server/default/conf and stick my CND files in there. That way I could patch versions of modeshape-services.jar without overwriting my custom node types.
People who spend more time administering JBoss may be able to find flaws with that approach though.
6. Re: Custom properties on nodes - Tim Conrad, Nov 4, 2010 2:35 PM (in response to Brian Carothers)
Alright - that was the missing piece... I thought the path would be relative to something.
Thanks Brian for making it clear =)
Cheers
Tim | https://developer.jboss.org/message/569566 | CC-MAIN-2016-30 | refinedweb | 648 | 64.61 |
NEW
The ultimate guide to coding with Python
Pi 2 projects inside
Learn to use Python • Program games • Get creative with Pi
Welcome to The Python Book. Get ready to become a true Python expert with the wealth of information contained within these pages.
The Python Book
Imagine Publishing Ltd
Richmond House
33 Richmond Hill
Bournemouth
Dorset BH2 6EZ
+44 (0) 1202 586200
Website:
Twitter: @Books_Imagine
Publishing Director
Aaron Asadi
Head of Design
Ross Andrews
Production Editor
Alex Hoskins
Senior Art Editor
Greg Whitaker
Designer
Perry Wardell-Wicks
Printed by
William Gibbons, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT
Distributed in the UK, Eire & the Rest of the World by
Marketforce, Blue Fin Building, 110 Southwark Street, London, SE1 0SU.
The Python Book © 2015 Imagine Publishing Ltd
ISBN 9781785460609
Part of the Python bookazine series
Contents

8   Get started with Python - Master the basics the right way
16  Essential commands - The commands you need to know

Python essentials
26  Code rock, paper, scissors - Put basic coding into action
32  Program a hangman game - Use Python to make the classic game
50 Python tips

Work with Python
74  Create dynamic templates - Use Jinja, Flask and more
78  Make extensions for XBMC - Enhance XBMC with this tutorial
84  Scientific computing - Get to grips with NumPy
88  Instant messaging - Get chatting using Python
94  Replace your shell - Use Python for your primary shell
98  Python for system admins - How Python helps system administration
102 Scrape Wikipedia - Use Beautiful Soup to read offline

Create with Python
108 Build tic-tac-toe with Kivy - Program noughts and crosses
112 Create two-step authentication - Use Twilio for safe authentication
116 Twitter's OAuth process - Build signing requests
120 Program a Space Invaders clone - Make the basic Pivaders game
124 Add animation and sound - Enhance your Pivaders game
128 Make a visual novel - Program a book-style game

Web development
134 Develop with Python - Why Python is perfect for the web
140 Build your own blog - Begin developing your blog
144 Deliver content to your blog - Add content to your site
148 Enhance your blog - Complete your blog with add-ons

Use Python with Pi
154 Programming in Python on Raspberry Pi - Learn how to optimise for Pi
158 Program Minecraft-Pi - Play a Minecraft game on Pi
162 Build an LED Matrix - Use Pi to control light sequences
166 Raspberry Pi car computer - Get where you're going with Raspberry Pi

"Python is expansive, but you'll be an expert before you know it"
Get started with Python

Always wanted to have a go at programming? No more excuses, because Python is the perfect way to get started!
Python is a great programming language for both beginners and experts. It is designed with code readability in mind, making it an excellent choice for beginners who are still getting used to various programming concepts.

The language is popular and has plenty of libraries available, allowing programmers to get a lot done with relatively little code.

You can make all kinds of applications in Python: you could use the Pygame framework to write simple 2D games, you could use the GTK libraries to create a windowed application, or you could try something a little more ambitious, such as an app that uses Python's Bluetooth and Input libraries to capture the input from a USB keyboard and relay the input events to an Android phone.
For this tutorial we’re going to be using Python 2.x
since that is the version that is most likely to be installed
on your Linux distribution.
In the following tutorials, you’ll learn how to create
popular games using Python programming. We’ll also
show you how to add sound and AI to these games.
Hello World
Let’s get stuck in, and what better way than with the
programmer’s best friend, the ‘Hello World’ application! Start
by opening a terminal. Its current working directory will be your
home directory. It’s probably a good idea to make a directory for
the files we’ll be creating in this tutorial, rather than having them
loose in your home directory. You can create a directory called
Python using the command mkdir Python. You’ll then want to
change into that directory using the command cd Python.
The next step is to create an empty file using the command
‘touch’ followed by the filename. Our expert used the command
touch hello_world.py. The final and most important part of
setting up the file is making it executable. This allows us to run
code inside the hello_world.py file. We do this with the command
chmod +x hello_world.py. Now that we have our file set up, we
can go ahead and open it up in nano, or any text editor of your
choice. Gedit is a great editor with syntax highlighting support
that should be available on any distribution. You’ll be able to
install it using your package manager if you don’t have it already.
A variable is a name in source code that is associated with an
area in memory that you can use to store data, which is then
called upon throughout the code. The data can be one of many
types, including:
[liam@liam-laptop ~]$ mkdir Python
[liam@liam-laptop ~]$ cd Python/
[liam@liam-laptop Python]$ touch hello_world.py
[liam@liam-laptop Python]$ chmod +x hello_world.py
[liam@liam-laptop Python]$ nano hello_world.py
Our Hello World program is very simple, it only needs two lines.
The first line begins with a ‘shebang’ (the symbol #! – also known
as a hashbang) followed by the path to the Python interpreter.
The program loader uses this line to work out what the rest of the
lines need to be interpreted with. If you’re running this in an IDE
like IDLE, you don’t necessarily need to do this.
The code that is actually read by the Python interpreter is only
a single line. We’re passing the value Hello World to the print
function by placing it in brackets immediately after we’ve called
the print function. Hello World is enclosed in quotation marks to
indicate that it is a literal value and should not be interpreted as
source code. As expected, the print function in Python prints any
value that gets passed to it from the console.
You can save the changes you’ve just made to the file in nano
using the key combination Ctrl+O, followed by Enter. Use Ctrl+X
to exit nano.
#!/usr/bin/env python2
print("Hello World")
You can run the Hello World program by prefixing its filename with ./ – in this case you'd type: ./hello_world.py.
[liam@liam-laptop Python]$ ./hello_world.py
Hello World
TIP
If you were using a graphical
editor such as gedit, then
you would only have to do
the last step of making the
file executable. You should
only have to mark the file as
executable once. You can
freely edit the file once it
is executable.
Integer – Stores whole numbers
Float – Stores decimal numbers
Boolean – Can have a value of True or False
String – Stores a collection of characters. “Hello World” is a string
As well as these main data types, there are sequence types
(technically, a string is a sequence type but is so commonly used
we’ve classed it as a main data type):
List – Contains a collection of data in a specific order
Tuple – Contains a collection of immutable data in a specific order
A tuple would be used for something like a co-ordinate,
containing an x and y value stored as a single variable, whereas
a list is typically used to store larger collections. The data
stored in a tuple is immutable because you aren’t able to
change values of individual elements in a tuple. However, you
can do so in a list.
It will also be useful to know about Python’s dictionary
type. A dictionary is a mapped data type. It stores data in
key-value pairs. This means that you access values stored in
the dictionary using that value’s corresponding key, which is
different to how you would do it with a list. In a list, you would
access an element of the list using that element’s index (a
number representing the element’s position in the list).
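To make the difference concrete, here's a minimal sketch (the variable names are our own illustration, not from the program below):

```python
# A list is accessed by a zero-based index; a dictionary by a key
colours = ["red", "green", "blue"]
ages = {"liam": 21, "rosie": 19}

print(colours[1])    # element at index 1
print(ages["liam"])  # value stored under the key "liam"
```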
Let’s work on a program we can use to demonstrate how to
use variables and different data types. It’s worth noting at
this point that you don’t always have to specify data types
in Python. Feel free to create this file in any editor you like.
Everything will work just fine as long as you remember to make
the file executable. We’re going to call ours variables.py.
Interpreted vs compiled languages
An interpreted language such as Python is one where the source
code is converted to machine code and then executed each time the
program runs. This is different from a compiled language such as C,
where the source code is only converted to machine code once – the
resulting machine code is then executed each time the program runs.
#!/usr/bin/env python2

# We create a variable by writing the name of the variable we want followed
# by an equals sign, which is followed by the value we want to store in the
# variable. For example, the following line creates a variable called
# hello_str, containing the string Hello World.
hello_str = "Hello World"

# The following line creates an integer variable called hello_int with the
# value of 21. Notice how it doesn't need to go in quotation marks.
hello_int = 21

# The same principle is true of Boolean values
hello_bool = True

# We create a tuple in the following way
hello_tuple = (21, 32)

# And a list in this way
hello_list = ["Hello,", "this", "is", "a", "list"]
# This list now contains 5 strings. Notice that there are no spaces
# between these strings, so if you were to join them up to make a sentence
# you'd have to add a space between each element.
# You could also create the same list in the following way
hello_list = list()
hello_list.append("Hello,")
hello_list.append("this")
hello_list.append("is")
hello_list.append("a")
hello_list.append("list")
# The first line creates an empty list and the following lines use the append
# function of the list type to add elements to the list. This way of using a
# list isn’t really very useful when working with strings you know of in
# advance, but it can be useful when working with dynamic data such as user
# input. This list will overwrite the first list without any warning as we
# are using the same variable name as the previous list.
# We might as well create a dictionary while we're at it. Notice how we've
# aligned the colons below to make the code tidy.
hello_dict = { "first_name" : "Liam",
               "last_name"  : "Fraser",
               "eye_colour" : "Blue" }

# Let's access some elements inside our collections
# We'll start by changing the value of the last string in our hello_list and
# add an exclamation mark to the end. The "list" string is the 5th element
# in the list. However, indexes in Python are zero-based, which means the
# first element has an index of 0.
print(hello_list[4])
hello_list[4] += "!"
# The above line is the same as
hello_list[4] = hello_list[4] + "!"
# Notice that there will now be two exclamation marks when we print the
# element, because both of the lines above have appended one.
print(hello_list[4])
TIP
At this point, it's worth explaining that any text in a Python file that
follows a # character will be ignored by the interpreter. This is so you
can write comments in your code.
# Remember that tuples are immutable, although we can access the elements
# of them like so
print(str(hello_tuple[0]))
# We can't change the value of those elements like we just did with the list
# Notice the use of the str function above to explicitly convert the integer
# value inside the tuple to a string before printing it.

# Let's create a sentence using the data in our hello_dict
print(hello_dict["first_name"] + " " + hello_dict["last_name"] + " has " +
      hello_dict["eye_colour"] + " eyes.")

# A tidier way of doing this would be to use Python's string formatter
print("{0} {1} has {2} eyes.".format(hello_dict["first_name"],
                                     hello_dict["last_name"],
                                     hello_dict["eye_colour"]))
Control structures
In programming, a control structure is any kind of statement that
can change the path that the code execution takes. For example, a
control structure that decided to end the program if a number was
less than 5 would look something like this:
#!/usr/bin/env python2

import sys # Used for the sys.exit function

int_condition = 5

if int_condition < 6:
    sys.exit("int_condition must be >= 6")
else:
    print("int_condition was >= 6 - continuing")
More about a
Python list
A Python list is similar to an
array in other languages. A
list (or tuple) in Python can
contain data of multiple
types, which is not usually
the case with arrays in other
languages. For this reason,
we recommend that you
only store data of the same
type in a list. This should
almost always be the case
anyway due to the nature of
the way data in a list would
be processed.
The path that the code takes will depend on the value of
the integer int_condition. The code in the ‘if’ block will only be
executed if the condition is true. The import statement is used
to load the Python system library; the latter provides the exit
function, allowing you to exit the program, printing an error
message. Notice that indentation (in this case four spaces per
indent) is used to indicate which statement a block of code
belongs to.
‘If’ statements are probably the most commonly used control
structures. Other control structures include:
• For statements, which allow you to iterate over items in
collections, or to repeat a piece of code a certain number
of times;
• While statements, a loop that continues while the condition
is true.
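Both loop types can be sketched in a few lines (a toy illustration of our own, separate from the construct.py program below):

```python
# A for loop iterates over a sequence of values, one at a time
for value in [1, 2, 3]:
    print(value)

# A while loop repeats for as long as its condition evaluates to True
count = 0
while count < 3:
    print(count)
    count += 1
```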
We’re going to write a program that accepts input from the
user to demonstrate how control structures work. We’re calling it
construct.py.
The ‘for’ loop is using a local copy of the current value, which
means any changes inside the loop won’t affect the list. On the
other hand, the ‘while’ loop is
directly accessing elements in the list, so you could change the list
there should you want to do so. We will talk about variable scope in
some more detail later on. The output from the above program is
as follows:
Indentation in detail
As previously mentioned, the level of indentation
dictates which statement a block of code belongs
to. Indentation is mandatory in Python, whereas in
other languages, sets of braces are used to organise
code blocks. For this reason, it is essential that you
use a consistent indentation style. Four spaces
are typically used to represent a single level of
indentation in Python. You can use tabs, but tabs are
not well defined, especially if you happen to open a
file in more than one editor.
[liam@liam-laptop Python]$ ./construct.py
How many integers? acd
You must enter an integer
[liam@liam-laptop Python]$ ./construct.py
How many integers? 3
Please enter integer 1: t
You must enter an integer
Please enter integer 1: 5
Please enter integer 2: 2
Please enter integer 3: 6
Using a for loop
5
2
6
Using a while loop
5
2
6
#!/usr/bin/env python2

# We're going to write a program that will ask the user to input an arbitrary
# number of integers, store them in a collection, and then demonstrate how the
# collection would be used with various control structures.

import sys # Used for the sys.exit function

# The number of integers we want in the list
target_int = raw_input("How many integers? ")

# By now, the variable target_int contains a string representation of
# whatever the user typed. We need to try and convert that to an integer but
# be ready to deal with the error if it's not. Otherwise the program will
# crash.
try:
    target_int = int(target_int)
except ValueError:
    sys.exit("You must enter an integer")

# A list to store the integers
ints = list()
# These are used to keep track of how many integers we currently have
count = 0

# Keep asking for an integer until we have the required number
while count < target_int:
    new_int = raw_input("Please enter integer {0}: ".format(count + 1))
    isint = False
    try:
        new_int = int(new_int)
        # If the above succeeds then isint will be set to True
        isint = True
    except:
        print("You must enter an integer")
    # Only carry on if we have an integer. If not, we'll loop again
    # Notice below I use ==, which is different from =. The single equals is an
    # assignment operator whereas the double equals is a comparison operator.
    if isint == True:
        # Add the integer to the collection
        ints.append(new_int)
        # Increment the count by 1
        count += 1
# By now, the user has given up or we have a list filled with integers. We
# can loop through these in a couple of ways. The first is with a for loop:
print("Using a for loop")
for value in ints:
    print(str(value))
TIP
You can define defaults
for variables if you want
to be able to call the
function without passing
any variables through at
all. You do this by putting
an equals sign after
the variable name. For
example, you can do:
def modify_string(original="Default String")
# Or with a while loop:
print("Using a while loop")
# We already have the total above, but knowing the len function is very
# useful.
total = len(ints)
count = 0
while count < total:
    print(str(ints[count]))
    count += 1
Functions and variable scope
Functions are used in programming to break processes down into smaller
chunks. This often makes code much easier to read. Functions can also be
reusable if designed in a certain way. Functions can have variables passed
to them. Variables in Python are always passed by value, which means that
a copy of the variable is passed to the function that is only valid in the scope
of the function. Any changes made to the original variable inside the function
will be discarded. However, functions can also return values, so this isn’t
an issue. Functions are defined with the keyword def, followed by the
name of the function. Any variables that can be passed through are put in
brackets following the function’s name. Multiple variables are separated by
commas. The names given to the variables in these brackets are the ones
that they will have in the scope of the function, regardless of what
the variable that’s passed to the function is called. Let’s see this
in action.
The output from the program opposite is as follows:
#!/usr/bin/env python2

# Below is a function called modify_string, which accepts a variable
# that will be called original in the scope of the function. Anything
# indented with 4 spaces under the function definition is in the
# scope of the function.
def modify_string(original):
    original += " that has been modified."
    # At the moment, only the local copy of this string has been modified

def modify_string_return(original):
    original += " that has been modified."
    # However, we can return our local copy to the caller. The function
    # ends as soon as the return statement is used, regardless of where it
    # is in the function.
    return original

# We are now outside of the scope of the modify_string function, as we
# have reduced the level of indentation.

# The test string won't be changed in this code
test_string = "This is a test string"
modify_string(test_string)
print(test_string)

# However, we can call the function like this
test_string = modify_string_return(test_string)
print(test_string)
# The function's return value is stored in the variable test_string,
# overwriting the original and therefore changing the value that is
# printed.
[liam@liam-laptop Python]$ ./functions_and_scope.py
This is a test string
This is a test string that has been modified.
Scope is an important thing to get the hang of, otherwise it can get you
into some bad habits. Let’s write a quick program to demonstrate this. It’s
going to have a Boolean variable called cont, which will decide if a number
will be assigned to a variable in an if statement. However, the variable
hasn’t been defined anywhere apart from in the scope of the if statement.
We’ll finish off by trying to print the variable.
#!/usr/bin/env python2
cont = False
if cont:
var = 1234
print(var)
In the section of code above, Python will convert the integer to a string
before printing it. However, it’s always a good idea to explicitly convert
things to strings – especially when it comes to concatenating strings
together. If you try to use the + operator on a string and an integer, there
will be an error because it’s not explicitly clear what needs to happen.
The + operator would usually add two integers together. Having said that,
Python’s string formatter that we demonstrated earlier is a cleaner way of
doing this.

Can you see the problem? Var has only been defined in the scope
of the if statement. This means that we get a very nasty error when we try to
access var.
[liam@liam-laptop Python]$ ./scope.py
Traceback (most recent call last):
  File "./scope.py", line 8, in <module>
    print var
NameError: name 'var' is not defined
If cont is set to True, then the variable will be created and we can access
it just fine. However, this is a bad way to do things. The correct way is to
initialise the variable outside of the scope of the if statement.
#!/usr/bin/env python2
cont = False
var = 0
if cont:
var = 1234
if var != 0:
print(var)
The variable var is defined in a wider scope than the if statement, and
can still be accessed by the if statement. Any changes made to var inside
the if statement are changing the variable defined in the larger scope.
This example doesn’t really do anything useful apart from illustrate the
potential problem, but the worst-case scenario has gone from the program
crashing to printing a zero. Even that doesn’t happen because we’ve added
an extra construct to test the value of var before printing it.
Coding style
It’s worth taking a little time to talk about coding style. It’s simple to write
tidy code. The key is consistency. For example, you should always name
your variables in the same manner. It doesn’t matter if you want to use
camelCase or use underscores as we have. One crucial thing is to use
self-documenting identifiers for variables. You shouldn’t have to guess
what a variable does. The other thing that goes with this is to always
comment your code. This will help anyone else who reads your code,
and yourself in the future. It’s also useful to put a brief summary at
the top of a code file describing what the application does, or a part of
the application if it’s made up of multiple files.

Comparison operators

The common comparison operators available in Python include:

<   strictly less than
<=  less than or equal
>   strictly greater than
>=  greater than or equal
==  equal
!=  not equal
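As a quick illustration, each of the comparison operators listed above evaluates to a Boolean:

```python
print(3 < 5)       # True
print(5 <= 5)      # True
print(7 > 10)      # False
print(4 == 4.0)    # True - equality compares values, even across int and float
print("a" != "b")  # True
```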
Summary
This article should have introduced you to the basics of programming
in Python. Hopefully you are getting used to the syntax, indentation
and general look and feel of a Python program. The next step is
to learn how to come up with a problem that you want to solve, and
break it down into small enough steps that you can implement in a
programming language.
Google, or any other search engine, is very helpful. If you are stuck
with anything, or have an error message you can’t work out how to
fix, stick it into Google and you should be a lot closer to solving your
problem. For example, if we Google ‘play mp3 file with python’, the
first link takes us to a Stack Overflow thread with a bunch of useful
replies. Don’t be afraid to get stuck in – the real fun of programming is
solving problems one manageable chunk at a time.
Happy programming!
ESSENTIAL
PYTHON
COMMANDS
Python is known as a very
dense language, with lots of
modules capable of doing
almost anything. Here,
we will look at the core
essentials that everyone
needs to know
Python has a massive environment of extra modules
that can provide functionality in hundreds of
different disciplines. However, every programming
language has a core set of functionality that everyone
should know in order to get useful work done. Python
is no different in this regard. Here, we will look at
50 commands that we consider to be essential to
programming in Python. Others may pick a slightly
different set, but this list contains the best of the best.
We will cover all of the basic commands, from
importing extra modules at the beginning of a program
to returning values to the calling environment at the
end. We will also be looking at some commands that
are useful in learning about the current session within
Python, like the current list of variables that have been
defined and how memory is being used.
Because the Python environment involves using a lot
of extra modules, we will also look at a few commands
that are strictly outside of Python. We will see how to
install external modules and how to manage multiple
environments for different development projects.
Since this is going to be a list of commands, there is the
assumption that you already know the basics of how
to use loops and conditional structures. This piece is
designed to help you remember commands that you
know you’ve seen before, and hopefully introduce you
to a few that you may not have seen yet.
Although we’ve done our best to pack everything
you could ever need into 50 tips, Python is such an
expansive language that some commands will have
been left out. Make some time to learn about the ones
that we didn’t cover here, once you’ve mastered these.
50 Python commands
02
Reloading modules
When a module is first imported, any initialisation functions are run at that time. This may involve
creating data objects, or initiating connections. But, this is only done the first time within a given session.
Importing the same module again won’t re-execute any of the initialisation code. If you want to have this
code re-run, you need to use the reload command. The format is ‘reload(modulename)’. Something to keep
in mind is that the dictionary from the previous import isn’t dumped, but only written over. This means that
any definitions that have changed between the import and the reload are updated correctly. But if you
delete a definition, the old one will stick around and still be accessible. There may be other side effects, so
always use with caution.
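The examples in this book target Python 2, where reload is a built-in; in Python 3 the same command lives in the importlib module. A minimal sketch:

```python
# In Python 2, reload() is a built-in; in Python 3 it lives in importlib.
# Reloading re-runs the module's top-level (initialisation) code.
import importlib
import json

json = importlib.reload(json)
print(json.dumps({"reloaded": True}))
```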
01
Importing modules
The strength of Python is its ability to be
extended through modules. The first step in many
programs is to import those modules that you need.
The simplest import statement is to just call ‘import
modulename’. In this case, those functions and
objects provided are not in the general namespace.
You need to call them using the complete name
(modulename.methodname). You can shorten the
‘modulename’ part with the command ‘import
modulename as mn’. You can skip this issue
completely with the command ‘from modulename
import *’ to import everything from the given module.
Then you can call those provided capabilities directly.
If you only need a few of the provided items, you can
import them selectively by replacing the ‘*’ with the
method or object names.
03
Installing new modules
While most of the commands we are looking at are Python commands
that are to be executed within a Python session, there are a few essential
commands that need to be executed outside of Python. The first of these is pip.
Installing a module involves downloading the source code, and compiling any included
external code. Luckily, there is a repository of hundreds of Python modules available
at the Python Package Index (PyPI). Instead of doing everything manually, you can install a
new module by using the command ‘pip install modulename’. This command will
also do a dependency check and install any missing modules before installing the
one you requested. You may need administrator rights if you want this new module
installed in the global library for your computer. On a Linux machine, you would
simply run the pip command with sudo. Otherwise, you can install it to your
personal library directory by adding the command line option ‘—user’.
04
Executing a script
Importing a module does run the code
within the module file, but does it through the
module maintenance code within the Python
engine. This maintenance code also deals with
running initialising code. If you only wish to
take a Python script and execute the raw code
within the current session, you can use the
‘execfile(“filename.py”)’ command, where the
main option is a string containing the Python file
to load and execute. By default, any definitions
are loaded into the locals and globals of the
current session. You can optionally include
two extra parameters to the execfile command.
These two options are both dictionaries, one
for a different set of locals and a different set of
globals. If you only hand in one dictionary, it is
assumed to be a globals dictionary. The return
value of this command is None.
05
An enhanced shell
The default interactive shell is provided
through the command ‘python’, but is
rather limited. An enhanced shell is provided by
the command ‘ipython’. It provides a lot of extra
functionality to the code developer. A thorough
history system is available, giving you access to
not only commands from the current session,
but also from previous sessions. There are also
magic commands that provide enhanced ways of
interacting with the current Python session. For
more complex interactions, you can create and use
macros. You can also easily peek into the memory
of the Python session and decompile Python code.
You can even create profiles that allow you to handle
initialisation steps that you may need to do every time
you use iPython.
06
Evaluating code
Sometimes, you may have chunks of
code that are put together programmatically. If
these pieces of code are put together as a string,
you can execute the result with the command
‘eval(“code_string”)’. Any syntax errors within
the code string are reported as exceptions. By
default, this code is executed within the current
session, using the current globals and locals
dictionaries. The ‘eval’ command can also take
two other optional parameters, where you can
provide a different set of dictionaries for the
globals and locals. If there is only one additional
parameter, then it is assumed to be a globals
dictionary. You can optionally hand in a code
object that is created with the compile command
instead of the code string. The return value of this
command is the value of the evaluated expression.
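A minimal sketch of eval in action (the names are our own illustration):

```python
# eval runs a Python expression held in a string and returns its value
code_string = "2 + 3 * 4"
result = eval(code_string)
print(result)  # 14

# The optional dictionaries control which names the expression can see
print(eval("x * 2", {"x": 10}))  # 20
```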
07
Asserting values
At some point, we all need to debug
some piece of code we are trying to write. One
of the tools useful in this is the concept of an
assertion. The assert command takes a Python
expression and checks to see if it is true. If so,
then execution continues as normal. If it is not
true, then an AssertionError is raised. This way,
you can check to make sure that invariants
within your code stay invariant. By doing so,
you can check assumptions made within your
code. You can optionally include a second
parameter to the assert command. This second
parameter is a Python expression that is evaluated
only if the assertion fails; its value becomes the
message attached to the AssertionError that is
raised. Usually, this is some type of detailed error
message that gets printed out.
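For example (our own illustration of an invariant check):

```python
def average(values):
    # The second operand becomes the AssertionError message on failure
    assert len(values) > 0, "average() needs at least one value"
    return sum(values) / float(len(values))

print(average([2, 4, 6]))  # 4.0
```

Calling average([]) raises an AssertionError carrying the message above.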
08
Mapping functions
A common task that is done in modern
programs is to map a given computation
to an entire list of elements. Python provides the
command ‘map()’ to do just this. Map returns a list of
the results of the function applied to each element of
an iterable object. Map can also take more than
one iterable object. If there is more than
one iterable handed in, then map assumes that the
function takes more than one input parameter, so
it will take one argument from each of the given
iterables. This has the implicit assumption that the
iterables are all of the same size, and that they are
all necessary as parameters for the given function.
(In Python 2, passing None as the function makes
map pair the iterables up into tuples, much like zip.)
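A short sketch (wrapped in list() for Python 3, where map returns an iterator; in Python 2 it returns a list directly):

```python
# One iterable: the function is applied to each element in turn
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))
print(squares)  # [1, 4, 9, 16]

# Two iterables: the function receives one argument from each
sums = list(map(lambda a, b: a + b, [1, 2, 3], [10, 20, 30]))
print(sums)  # [11, 22, 33]
```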
10
Filtering
Where the command map returns a result for every element in an iterable, filter only returns a
result if the function returns a True value. This means that you can create a new list of elements where
only the elements that satisfy some condition are used. As an example, if your function checked that
the values were numbers between 0 and 10, then it would create a new list with no negative numbers
and no numbers above 10. This could be accomplished with a for loop, but this method is much
cleaner. If the function provided to filter is ‘None’, then it is assumed to be the identity function. This
means that only those elements that evaluate to True are returned as part of the new list. There are
iterable versions of filter available in the itertools module.
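The 0-to-10 example from the text looks like this (wrapped in list() for Python 3, where filter returns an iterator):

```python
values = [-3, 4, 12, 7, -1, 10]
in_range = list(filter(lambda x: 0 <= x <= 10, values))
print(in_range)  # [4, 7, 10]

# With None as the function, only the truthy elements survive
print(list(filter(None, [0, 1, "", "hi", None])))  # [1, 'hi']
```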
12
Virtualenvs
Because of the potential complexity of
the Python environment, it is sometimes best to
set up a clean environment within which to install
only the modules you need for a given project. In
this case, you can use the virtualenv command
to initialise such an environment. If you create
a directory named ‘ENV’, you can create a new
environment with the command ‘virtualenv
ENV’. This will create the subdirectories bin, lib
and include, and populate them with an initial
environment. You can then start using this new
environment by sourcing the script ‘ENV/bin/
activate’, which will change several environment
variables, such as the PATH. When you are done,
you can source the script ‘ENV/bin/deactivate’
to reset your shell’s environment back to its
previous condition. In this way, you can have
environments that only have the modules you
need for a given set of tasks.
11
Loops
While not strictly commands, everyone needs
to know how to deal with loops. The two main
types of loops are a fixed number of iterations loop (for) and
a conditional loop (while). In a for loop, you iterate over some
sequence of values, pulling them off the list one at a time
and putting them in a temporary variable. You continue until
either you have processed every element or you have hit a
break command. In a while loop, you continue going through
the loop as long as some test expression evaluates to True.
While loops can also be exited early by using the break
command, you can also skip pieces of code within either
loop by using a continue command to selectively stop this
current iteration and move on to the next one.
09
Reductions
16
What is this?
Everything in Python is an object. You can
check to see what class this object is an instance
of with the command ‘isinstance(object, class)’.
This command returns a Boolean value.
17
Is it a subclass?
The command ‘issubclass(class1, class2)’
checks to see if class1 is a subclass of class2. If
class1 and class2 are the same, this is returned
as True.
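A quick sketch of both commands:

```python
print(isinstance(3, int))     # True
print(isinstance("3", int))   # False - a string, despite its contents
print(issubclass(bool, int))  # True - bool is a subclass of int
print(issubclass(int, int))   # True - a class counts as its own subclass
```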
18
Global objects
You can get a dictionary of the global
symbol table for the current module with the
command ‘globals()’.
19
Local objects
You can access an updated dictionary
of the current local symbol table by using the
command ‘locals()’.
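A minimal sketch of the two symbol tables (the variable names are our own):

```python
x = 5  # defined at module level, so it lands in the global symbol table

def demo():
    y = 1
    # locals() sees the function's own names; globals() sees module-level ones
    return "y" in locals(), "x" in globals()

print(demo())  # (True, True)
```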
13
How true is a list?
In some cases, you may have collected a number of elements within a list that can be evaluated
to True or False. For example, maybe you ran a number of possibilities through your computation and
have created a list of which ones passed. You can use the command ‘any(list)’ to check to see whether
any of the elements within your list are true. If you need to check whether all of the elements are True,
you can use the command ‘all(list)’. Both of these commands return a True if the relevant condition is
satisfied, and a False if not. They do behave differently if the iterable object is empty, however. The
command ‘all’ returns a True if the iterable is empty, whereas the command ‘any’ returns a False when
given any empty iterable.
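The behaviour described above, including the empty-iterable edge case, looks like this in practice:

```python
results = [True, False, True]
print(any(results))  # True: at least one element is true
print(all(results))  # False: not every element is true

# the edge case for empty iterables
print(all([]))  # True
print(any([]))  # False
```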
15 Enumerating
Sometimes, we need to label the elements
that reside within an iterable object with their
indices so that they can be processed at some later
point. You could do this by explicitly looping through
each of the elements and building an enumerated
list. The enumerate command does this in one line.
It takes an iterable object and creates a list of tuples
as the result. Each tuple has the 0-based index of
the element, along with the element itself. You can
optionally start the indexing from some other value
by including an optional second parameter. As an
example, you could enumerate a list of names with
the command ‘list(enumerate(names, start=1))’. In
this example, we decided to start the indexing at 1
instead of 0.
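Here is that example worked through with a small hypothetical list of names:

```python
names = ["Alice", "Bob", "Carol"]

# default 0-based indexing
print(list(enumerate(names)))

# start the indexing at 1 instead
print(list(enumerate(names, start=1)))
```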
Casting
Variables in Python don’t have any type
information, and so can be used to store
any type of object. The actual data, however, is of
one type or another. Many operators, like addition,
assume that the input values are of the same type.
Very often, the operator you are using is smart
enough to make the type of conversion that is
needed. If you have the need to explicitly convert
your data from one type to another, there are a class
of functions that can be used to do this conversion
process. The ones you are most likely to use is ‘abs’,
‘bin’, ‘bool’, ‘chr’, ‘complex’, ‘float’, ‘hex’, ‘int’, ‘long’,
‘oct’, and ‘str’. For the number-based conversion
functions, there is an order of precedence where
some types are a subset of others. For example,
integers are “lower” than floats. When converting
up, no changes in the ultimate value should happen.
When converting down, usually some amount of
information is lost. For example, when converting
from float to integer, Python truncates the number
towards zero.
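A few of these conversions in action, including the truncation-towards-zero behaviour when converting down:

```python
print(float(3))    # 3.0 — converting "up" keeps the value
print(int(3.7))    # 3 — converting down truncates towards zero
print(int(-3.7))   # -3 — truncation, not rounding down
print(str(42))     # '42' — number to string
print(bool(0))     # False — zero is falsy
```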
20 Variables
The command ‘vars(dict)’ returns writeable
elements for an object. If you use ‘vars()’, it
behaves like ‘locals()’.
21 Making a global
A list of names can be interpreted as
globals for the entire code block with the
command ‘global names’.
22 Nonlocals
In Python 3.X, you can access names from
the nearest enclosing scope with the command
‘nonlocal names’ and bind it to the local scope.
23
Raising an exception
When you identify an error condition,
you can use the ‘raise’ command to throw up an
exception. You can include an exception type and
a value.
24 Dealing with an exception
Exceptions can be caught in a try-except
construction. If the code in the try block raises an
exception, the code in the except block gets run.
25 Static methods
You can create a static method, similar
to that in Java or C++, with the command
‘staticmethod(function_name)’.
The Python Book 19
26 Ranges
You may need a list of numbers, maybe in
a ‘for’ loop. The command ‘range()’ can create an
iterable list of integers. With one parameter, it
goes from 0 to the given number. You can provide
an optional start number, as well as a step size.
Negative numbers count down.

27 Xranges
One problem with ranges is that all of the
elements need to be calculated up front and
stored in memory. The command ‘xrange()’ takes
the same parameters and provides the same
result, but only calculates the next element as it
is needed.

28 Iterators
Iteration is a very Pythonic way of doing
things. For objects which are not intrinsically
iterable, you can use the command ‘iter(object_
name)’ to essentially wrap your object and provide
an iterable interface for use with other functions
and operators.

31 With modules
The ‘with’ command provides the ability to
wrap a code block with methods defined
by a context manager. This can help clean up code
and make it easier to read what a given piece of
code is supposed to be doing months later. A classic
example of using ‘with’ is when dealing with files.
You could use something like ‘with open(“myfile.
txt”, “r”) as f:’. This will open the file and prepare it for
reading. You can then read the file in the code block
with ‘data=f.read()’. The best part of doing this is that
the file will automatically be closed when the code
block is exited, regardless of the reason. So, even if
the code block throws an exception, you don’t need to
worry about closing the file as part of your exception
handler. If you have a more complicated ‘with’
example, you can create a context manager class to
help out.

32 Printing
The most direct way of getting output
to the user is with the print command.
This will send text out to the console window. If you
are using version 2.X of Python, there are a couple
of ways you can use the print command. The most
common way had been to simply call it as ‘print
“Some text”’. You can also use print with the same
syntax that you would use for any other function.
So, the above example would look like ‘print(“Some
text”)’. This is the only form available in version 3.X.
If you use the function syntax, you can add extra
parameters that give you finer control over the
output. For example, you can give the ‘file=’
parameter an open file object and have the output
from the print command dumped into the given text
file. It also will accept any object that has some string
representation available.
29 Sorted lists
You can use the command ‘sorted(list1)’
to sort the elements of a list. You can give it
a custom comparison function, and for more
complex elements you can include a key function
that pulls out a ranking property from each
element for comparison.
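For example, sorting the same list alphabetically and then by a key function that pulls out each element's length:

```python
words = ["ubuntu", "gnu", "kernel"]

# default: alphabetical order
print(sorted(words))

# key pulls out a ranking property — here, the length of each word
print(sorted(words, key=len))

# the sort is stable: 'ubuntu' keeps its place ahead of 'kernel'
# because both have length 6
```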
30 Memoryview
Sometimes, you need to access the raw data of some object, usually as a buffer of bytes. You
can copy this data and put it into a bytearray, for example. But this means that you will be using extra
memory, and this might not be an option for large objects. The command ‘memoryview(object_name)’
wraps the object handed in to the command and provides an interface to the raw bytes. It gives access
to these bytes an element at a time. In many cases, elements are the size of one byte. But, depending
on the object details, you could end up with elements that are larger than that. You can find out the size
of an element in bytes with the property ‘itemsize’. Once you have your memory view created, you can
access the individual elements as you would get elements from a list (mem_view[1], for example).

33 Summing items
Above, we saw the general reduction
function reduce. A specific type of reduction
operation, summation, is common enough to
warrant the inclusion of a special case, the
command ‘sum(iterable_object)’. You can include
a second parameter here that will provide a
starting value.
34
Files
When dealing with files, you need to create a file object to interact with it. The file command takes
a string with the file name and location and creates a file object instance. You can then call the file object
methods like ‘open’, ‘read’ and ‘close’, to get data out of the file. If you are doing file processing, you can
also use the ‘readline’ method. When opening a file, there is an explicit ‘open()’ command to simplify the
process. It takes a string with the file name, and an optional parameter that is a string which defines the
mode. The default is to open the file as read-only (‘r’). You can also open it for writing (‘w’) and appending
(‘a’). After opening the file, a file object is returned so that you can further interact with it. You can then read
it, write to it, and finally close it.
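The open-write-read-close cycle described above can be sketched as follows (the file name is hypothetical; a temporary directory is used so the example is self-contained):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "myfile.txt")

# open for writing ('w'), write some lines, and close
f = open(path, "w")
f.write("line one\nline two\n")
f.close()

# open read-only (the default mode) and read it back line by line
with open(path, "r") as f:
    first = f.readline()

# or pull in every line at once with readlines()
with open(path) as f:
    lines = f.readlines()

print(first, lines)
```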
35
Yielding
In many cases, a function may need to
yield the context of execution to some other
function. This is the case with generators. The preferred
method for a generator is that it will only calculate the
next value when it is requested through the method
‘next()’. The command ‘yield’ saves the current state of
the generator function, and return execution control
to the calling function. In this way, the saved state of
the generator is reloaded and the generator picks up
where it left off in order to calculate the next requested
value. In this way, you only need to have enough memory
available to store the bare minimum to calculate the
next needed value, rather than having to store all of the
possible values in memory all at once.
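A minimal generator shows the idea — each yield saves the function's state, and each request for the next value resumes from where it left off:

```python
def countdown(n):
    # state (the current n) is saved at each yield and
    # restored when the next value is requested
    while n > 0:
        yield n
        n -= 1

gen = countdown(3)
print(next(gen))   # 3
print(next(gen))   # 2
print(list(gen))   # [1] — whatever values remain
```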
36 Weak references
You sometimes need to have a reference
to an object, but still be able to destroy it if
needed. A weak reference is one which can
be ignored by the garbage collector. If the only
references left to an object are weak references,
then the garbage collector is allowed to destroy
that object and reclaim the space for other
uses. This is useful in cases where you have
caches or mappings of large datasets that
don’t necessarily have to stay in memory. If an
object that is weakly referenced ends up being
destroyed and you try to access it, it will appear
as a None. You can test for this condition and
then reload the data if you decide that this is a
necessary step.
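Here is a small sketch using the standard weakref module (the Cache class is made up; note that the immediate collection on del relies on CPython's reference counting):

```python
import weakref

class Cache:
    pass

obj = Cache()
ref = weakref.ref(obj)   # a weak reference does not keep obj alive
print(ref() is obj)      # True while a strong reference still exists

del obj                  # drop the only strong reference
print(ref())             # None once the object has been collected
```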
37 Pickling data
There are a few different ways of
serialising memory when you need to checkpoint
results to disk. One of these is called pickling.
Pickle is actually a complete module, not just a
single command. To store data on to the hard
drive, you can use the dump method to write
the data out. When you want to reload the same
data at some other point in the future, you can
use the load method to read the data in and
unpickle it. One issue with pickle is its speed, or
lack of it. There is a second module, cPickle, that
provides the same basic functionality. But, since
it is written in C, it can be as much as 1000 times
faster. One thing to be aware of is that pickle does
not store any class information for an object,
but only its instance information. This means
that when you unpickle the object, it may have
different methods and attributes if the class
definition has changed in the interim.
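A round trip through pickle looks like this — here using the in-memory dumps/loads variants rather than dump/load on a file (note that in Python 3 the fast C implementation is used automatically, so there is no separate cPickle module to import):

```python
import pickle

data = {"scores": [10, 20, 30], "player": "gnu"}

blob = pickle.dumps(data)      # serialise to bytes (dump() writes to a file)
restored = pickle.loads(blob)  # load()/loads() unpickles it again

print(restored == data)        # equal values...
print(restored is data)        # ...but a separate object
```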
38 Shelving data
While pickling allows you save data and
reload it, sometimes you need more structured
object permanence in your Python session. With the
shelve module, you can create an object store where
essentially anything that can be pickled can be stored
there. The backend of the storage on the drive can be
handled by one of several systems, such as dbm or
gdbm. Once you have opened a shelf, you can read and
write to it using key value pairs. When you are done, you
need to be sure to explicitly close the shelf so that it is
synchronised with the file storage. Because of the way
the data may be stored in the backing database, it is
best to not open the relevant files outside of the shelve
module in Python. You can also open the shelf with
writeback set to True. If so, you can explicitly call the
sync method to write out cached changes.
39 Threads
You can do multiple threads of execution
within Python. The ‘thread()’ command can create a
new thread of execution for you. It follows the same
techniques as those for POSIX threads. When you first
create a thread, you need to hand in a function name,
along with whatever parameters said function needs.
One thing to keep in mind is that these threads behave
just like POSIX threads. This means that almost
everything is the responsibility of the programmer. You
need to handle mutex locks (with the methods ‘acquire’
and ‘release’), as well as create the original mutexes
with the method ‘allocate_lock’. When you are done,
you need to ‘exit’ the thread to ensure that it is properly
cleaned up and no resources get left behind. You also
have fine-grained control over the threads, being able
to set things like the stack size for new threads.
40
Inputting data
Sometimes, you need to collect input
from an end user. The command ‘input()’ can
take a prompt string to display to the user, and
then wait for the user to type a response. Once
the user is done typing and hits the enter key, the
text is returned to your program. If the readline
module was loaded before calling input, then
you will have enhanced line editing and history
functionality. This command passes the text
through eval first, and so may cause uncaught
errors. If you have any doubts, you can use the
command ‘raw_input()’ to skip this problem. This
command simply returns the unchanged string
inputted by the user. Again, you can use the
readline module to get enhanced line editing.
41
Internal variables
For people coming from other programming languages, there is a concept of having certain variables
or methods be only available internally within an object. In Python, there is no such concept. All elements of an
object are accessible. There is a style rule, however, that can mimic this type of behaviour. Any names that start
with an underscore are expected to be treated as if they were internal names and to be kept as private to the
object. They are not hidden, however, and there is no explicit protection for these variables or methods. It is up to
the programmer to honour the intention of the author of the class and not alter any of these internal names. You
are free to make these types of changes if it becomes necessary, though.
42 Comparing objects
There are several ways to compare objects within Python, with several caveats. The first is that
you can test two things between objects: equality and identity. If you are testing identity, you are testing
to see if two names actually refer to the same instance object. This test is handled by the ‘is’ keyword.
For example, ‘obj1 is obj2’. If you are testing for equality, you are testing to see whether the values in
the objects referred to by the two names are equal. This test is handled by the operator ‘==’, as in
‘obj1 == obj2’. Python 2 also offers the command ‘cmp(obj1, obj2)’, which compares the two values and
returns a negative, zero or positive result. Testing for equality can become complex for more
complicated objects.
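The equality-versus-identity distinction in three lines:

```python
a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)  # True: the two lists hold equal values
print(a is b)  # False: they are two separate instance objects
print(a is c)  # True: both names refer to the same instance
```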
43 Slices
While not truly a command, slices are
too important a concept not to mention in this
list of essential commands. Indexing elements
in data structures, like lists, is one of the most
common things done in Python. You can select a
single element by giving a single index value. More
interestingly, you can select a range of elements by
giving a start index and an end index, separated by
a colon. This gets returned as a new list that you can
save in a new variable name. You can even change
the step size, allowing you to skip some number of
elements. So, you could grab every odd element from
the list ‘a’ with the slice ‘a[1::2]’. This starts at index 1,
continues until the end, and steps through the index
values 2 at a time. Slices can be given negative index
values. If you do, then they start from the end of the
list and count backwards.
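The slice variations described above, on a small list:

```python
a = [0, 10, 20, 30, 40, 50]

print(a[1:4])    # start and end index: [10, 20, 30]
print(a[1::2])   # start at index 1, step by 2: every odd element
print(a[-2:])    # negative indices count back from the end
print(a[::-1])   # a negative step gives a reversed copy
```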
46
__init__ method
When you create a new class, you can
include a special initialisation method that
gets called when a new instance of the class is
created. This method is useful when the new
object instance needs some data loaded in the
new object.
47 __del__ method
When an instance object is about to be
destroyed, the __del__ method is called. This
gives you the chance to do any kind of cleanup
that may be required. This might be closing files,
or disconnecting network connections. After this
code is completed, the object is finally destroyed
and resources are freed.
44
Lambda expressions
Since objects, and the names that point to them, are truly different things, you can have objects
that have no references to them. One example of this is the lambda expression. With this, you can create
an anonymous function. This allows you use functional programming techniques within Python. The
format is the keyword ‘lambda’, followed by a parameter list, then a colon and the function code. For
example, you could build your own function to square a number with ‘lambda x: x*x’. You can then have a
function that can programmatically create new functions and return them to the calling code. With this
capability, you can create function generators to have self-modifying programs. The only limitation is
that they are limited to a single expression, so you can’t generate very complex functions.
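The squaring example, plus the function-generator pattern the tip describes (the make_multiplier name is our own invention for illustration):

```python
# an anonymous function bound to a name
square = lambda x: x * x

def make_multiplier(n):
    # programmatically create and return a new function
    return lambda x: x * n

double = make_multiplier(2)
print(square(5))   # 25
print(double(21))  # 42
```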
45 Compiling code objects
Python is an interpreted
language, which means that the source
code that you write needs to be compiled
into a byte code format. This byte code
then gets fed into the actual Python engine
to step through the instructions. Within your program, you may
have the need to take control over the process of converting
code to byte code and running the results. Maybe you wish to
build your own REPL. The command ‘compile()’ takes a string
object that contains a collection of Python code, and returns
an object that represents a byte code translation of this code. This
new object can then be handed in to either ‘eval()’ or ‘exec()’ to be actually
run. You can use the parameter ‘mode=’ to tell compile what kind of code is being
compiled. The ‘single’ mode is a single statement, ‘eval’ is a single expression and
‘exec’ is a whole code block.
48
Exiting your program
There are two pseudo-commands
available to exit from the Python interpreter:
‘exit()’ and ‘quit()’. They both take an optional
parameter which sets the exit code for the
process. If you want to exit from a script, you are
better off using the exit function from the sys
module (‘sys.exit(exit_code)’).
49 Return values
Functions may need to return some value
to the calling function. Because names carry
no type information, this applies to functions
too, so functions can use the ‘return’ command
to return any object to the caller.
50
String concatenation
We will finish with what most lists start
with – string concatenation. The easiest way to
build up strings is to use the ‘+’ operator. If you
want to include other items, like numbers, you
can use the ‘str()’ casting function to convert it to
a string object.
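Putting the ‘+’ operator and the ‘str()’ cast together:

```python
name = "Tux"
score = 42

# numbers must be cast to strings before concatenation
message = "Player " + name + " scored " + str(score) + " points"
print(message)
```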
Python
Essentials
26 Code a game of rock, paper, scissors
Put basic coding into action
32 Program a game of Hangman
Use Python to make the classic game
“Get to grips with Python and start building on the basics”
Python essentials
Allow the Python script
to run in a terminal,
and outside the IDE
Human input in the form
of integers is used for
comparing moves and,
ultimately, playing the game
Use deduction to
determine one of
three outcomes
Loop the code over
again and start
from the beginning
Append to integer
variables to keep track
of scores and more
Code a game of
rock, paper, scissors
Learn how to do some basic Python coding by following
our breakdown of a simple rock, paper, scissors game
Resources
Python 2:
IDLE:
This tutorial will guide you through making
a rock, paper, scissors game in Python. The
code applies the lessons from the masterclass –
and expands on what was included there – and
doesn’t require any extra Python modules to run,
like Pygame.
Rock, paper, scissors is the perfect game to
show off a little more about what exactly Python
can do. Human input, comparisons, random
selections and a whole host of loops are used in
making a working version of the game. It’s also
easy enough to adapt and expand as you see
fit, adding rules and results, and even making a
rudimentary AI if you wish.
For this particular tutorial, we also
recommend using IDLE. IDLE is a great Python
IDE that is easily obtainable in most Linux
distributions and is available by default on
Raspbian for Raspberry Pi. It helps you by
highlighting any problems there might be with
your code and allows you to easily run it to make
sure it’s working properly.
01
This section imports the extra Python
functions we’ll need for the code – they’re
still parts of the standard Python libraries, just
not part of the default environment
02
The initial rules of the game are created
here. The three variables we’re using and
their relationship is defined. We also provide a
variable so we can keep score of the games
03
We begin the game code by defining the
start of each round. The end of each play
session comes back through here, whether we
want to play again or not
04
The game is actually contained all in
here, asking for the player input, getting
the computer input and passing these on to get
the results. At the end of that, it then asks if you’d
like to play again
05
Player input is done here. We give the
player information on how to play this
particular version of the game and then allow
their choice to be used in the next step. We also
have something in place in case they enter an
invalid option
06
There are a few things going on when we
show the results. First, we’re putting in a
delay to add some tension, appending a variable
to some printed text, and then comparing what
the player and computer did. Through an if
statement, we choose what outcome to print,
and how to update the scores
07
We now ask for text input on whether
or not someone wants to play again.
Depending on their response, we go back to the
start, or end the game and display the results
The breakdown
01
We need to start with the path to the
Python interpreter here. This allows
us to run the program inside a terminal or
otherwise outside of a Python-specific IDE
like IDLE. Note that we’re also using Python 2
rather than Python 3 for this particular script,
which needs to be specified in the code to
make sure it calls upon the correct version
from the system.
02
We’re importing two extra modules on
top of the standard Python code so
we can use some extra functions throughout
the code. We’ll use the random module to
determine what move the computer will throw,
and the time module to pause the running of
the code at key points. The time module can
also be used to utilise dates and times, either
to display them or otherwise.
03
We’re setting each move to a specific
number so that once a selection is
made by the player during the game, it will be
equated to that specific variable. This makes
the code slightly easier later on, as we won’t
need to parse any text for this particular
function. If you so wish, you can add additional
moves, and this will start here.
04
Here we specify the rules for the game,
and the text representations of each
move for the rest of the code. When called upon,
our script will print the names of any of the three
moves, mainly to tell the player how the computer
moved. These names are only equated to these
variables when they are needed – this way, the
number assigned to each of them is maintained
while it’s needed.
Python modules
There are other modules you can import with
basic Python. Some of the major ones are
shown to the right. There are also many more
that are included as standard with Python.
Similar to the way the text names of
the variables are defined and used only
when needed, the rules are done in such a way
that when comparing the results, our variables
are momentarily modified. Further down in the
code we’ll explain properly what’s happening,
but basically after determining whether or
not there’s a tie, we’ll see if the computer’s
move would have lost to the player move. If the
computer move equals the losing throw to the
player’s move, you win.,
although we have no scoring for tied games in
this particular version.
string
Perform common string operations
datetime and calendar
Other modules related to time
math
Advanced mathematical functions
json
JSON encoder and decoder
pydoc
Documentation generator and online help system
of other tasks if so wished. If we do stop playing the game, the score function is then
called upon – we'll go over what that does when we get to it.
08
We’ve kept the game function fairly simple so we can break down
each step a bit more easily in the code. This is called upon from the
start function, and first of all determines the player move by calling upon
the move function below. Once that’s sorted, it sets the computer move. It
uses the random module’s randint function to get an integer between one
and three (1, 3). It then passes the player and computer move, stored as
integers, onto the result function which we use to find the outcome.
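The core of that flow can be sketched as below. This is a minimal, hypothetical reconstruction rather than the book's full listing (which also handles input, scoring and replay); the names, rules and result variables follow the step descriptions, and Python 3 print syntax is used:

```python
import random

# moves are numbered, as in step 03; names gives their text labels
names = {1: "Rock", 2: "Paper", 3: "Scissors"}
# rules[move] is the move that loses to `move`, as in step 06
rules = {1: 3, 2: 1, 3: 2}

def result(player, computer):
    # process of elimination: tie first, then win, otherwise loss
    if player == computer:
        return "tie"
    elif rules[player] == computer:
        return "player"
    else:
        return "computer"

# the computer's throw is an integer between one and three
computer = random.randint(1, 3)
print(names[computer], result(1, computer))
```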
09
We start the move function off by putting it into
a while loop. The whole point of move is to obtain
an integer between one and three from the player, so the
while loop allows us to account for the player making an
unsupported entry. Next, we are setting the player variable
to be created from the player’s input with raw_input. We’ve
also printed instruction text to go along with it. The ‘\n’ we’ve
used in the text adds a line break; this way, the instructions
appear as a list.
10
The try statement is used to clean up code and
handle errors or other exceptions. We parse what the
player entered by turning it into an integer using int(). We use
the if statement to check if it is either 1, 2, or 3 – if it is, move
returns this value back up to the game function. If it throws
up a ValueError, we use except to do nothing. It prints an error
message and the while loop starts again. This will happen
until an acceptable move is made.
11
The result function only takes the variables
player and computer for this task, which is
why we set that in result(player, computer). We’re
starting off by having a countdown to the result.
The printed numbers are self-explanatory, but
we’ve also thrown in sleep from the time module
we imported. Sleep pauses the execution of the
code by the number of seconds in the brackets.
We’ve put a one-second pause between counts,
then half a second after that to show the results.
12
To print out what the computer threw,
we’re using string.format(). The {0} in the
printed text is where we’re inserting the move,
which we have previously defined as numbers.
Using names[computer], we’re telling the code to
look up what the text version of the move is called
from the names we set earlier on, and then to
insert that where {0} is.
13
Here we’re simply calling the scores we
set earlier. Using the global function
allows for the variable to be changed and used
outside of the variable, especially after we’ve
appended a number to one of their scores.
14
The way we’re checking the result is
basically through a process of elimination.
Our first check is to see if the move the player
and computer used were the same, which is the
simplest part. We put it in an if statement so that
if it’s true, this particular section of the code ends
here. It then prints our tie message and goes back
to the game function for the next step.
15
If it’s not a tie, we need to keep checking,
as it could still be a win or a loss. Within
the else, we start another if statement. Here,
we use the rules list from earlier to see if the
losing move to the player’s move is the same
as the computer’s. If that’s the case, we print
the message saying so, and add one to the
player_score variable from before.
16
If we get to this point, the player has lost.
We print the losing message, give the
computer a point and it immediately ends the
result function, returning to the game function.
17
The next section of game calls upon
a play_again function. Like the move
function, we have human input, asking the player
if they would like to play again via a text message
with raw_input, with the simple ‘y/n’ suggestion in
an attempt to elicit an expected response.
18
We check the player's answer so that it accepts both y and
Y. If this is the case, it returns a positive response
to game, which will start it again.
19
If we don’t get an expected response, we
will assume the player does not want to
play again. We’ll print a goodbye message, and
that will end this function. This will also cause
the game function to move onto the next section
and not restart.
17
18
19
20
21
ELIF
IF also has the ELIF (else if) operator, which can
be used in place of the second IF statement
we employed. It’s usually used to keep code
clean, but performs the same function.
20
Going back to the start function, after
game finishes we move onto the results.
This section calls the scores, which are integers,
and then prints them individually after the names
of the players. This is the end of the script, as far
as the player is concerned. Currently, the code
won’t permanently save the scores, but you can
have Python write it to a file to keep if you wish.
21
The final part won’t execute
the code when being imported.
This section imports the extra Python
functions we’ll need for the code –
they’re still parts of the standard
Python libraries, just not part of the
default environment
We’re again providing variables so we
can keep score of the games played,
and they’re updated each round
Our very basic graphics involve ASCII
art of the game’s stages, printed out
after every turn
Program a game of Hangman
#!/usr/bin/env python2
Code listing
from random import *

player_score = 0
computer_score = 0

def hangedman(hangman):
    graphic = [
        """
        +-------+
        |
        |
        |
        |
        |
        ==============
        """,
        """
        +-------+
        |
        |
        |
        O
        |
        |
        |
        ===============
        """,
        """
        """,
        """
        +-------+
        |
        |
        |
        O
        |
        -||
        / \
        |
        ===============
        """]
    print graphic[hangman]
    return
Learn how to do some more Python
coding by following our breakdown of a
simple Hangman game
Resources
Python 2:
IDLE:
One of the best ways to get to know Python is
by building lots of simple projects so you can
understand a bit more about the programming
language. This time round, we’re looking at
Hangman, a multi-round game relying on if
and while loops and dealing with strings of text
in multiple ways. We’ll be using some of the
techniques we implemented last time as well, so
we can build upon them.
Hangman still doesn’t require the Pygame
set of modules, but it’s a little more advanced
than rock-paper-scissors. We’re playing
around with a lot more variables this time.
However, we’re still looking at comparisons,
random selections and human input, along
with splitting up a string, editing a list and even
displaying rudimentary graphics.
You should continue to use IDLE for these
tutorials. As we’ve mentioned before, its builtin debugging tools are simple yet effective and
it can be used on any Linux system, as well as
the Raspberry Pi.
The actual game starts here, with a while loop to
let you continually play the game until you decide
otherwise, then ending the program
The game rules are decided here, as well as the
setup for the word and keeping track of tries and
incorrect answers
Each round of the game is played here, asking for
an input, then telling you if you were correct or not.
It prints out the graphic and changes any variables
that need to be updated, especially incorrect and
correct guesses
After each round, the code checks if you’ve won or
lost yet – the win condition being that you guessed
the word, or losing if you’ve made six guesses
def start():
    print "Let's play a game of Linux Hangman."
    while game():
        pass
    scores()
Code listing continued
def game():
    dictionary = ["gnu", "kernel", "linux", "mageia", "penguin", "ubuntu"]
    word = choice(dictionary)
    word_length = len(word)
    clue = word_length * ["_"]
    tries = 6
    letters_tried = ""
    guesses = 0
    letters_right = 0
    letters_wrong = 0
    global computer_score, player_score
    while (letters_wrong != tries) and ("".join(clue) != word):
        letter = guess_letter()
        if len(letter) == 1 and letter.isalpha():
            if letters_tried.find(letter) != -1:
                print "You've already picked", letter
            else:
                letters_tried = letters_tried + letter
                first_index = word.find(letter)
                if first_index == -1:
                    letters_wrong += 1
                    print "Sorry,", letter, "isn't what we're looking for."
                else:
                    print "Congratulations,", letter, "is correct."
                    for i in range(word_length):
                        if letter == word[i]:
                            clue[i] = letter
        else:
            print "Choose another."
        hangedman(letters_wrong)
        print " ".join(clue)
        print "Guesses: ", letters_tried
The human input for the game takes the letter
and turns it into something the code can use. It’s
verified in the previous block of code and then
referred back to if you’ve entered an unsupported
or already used character
The same class as last time, which allows you to
select whether or not you wish to play again
Upon quitting the game, scores are given for the
duration of the play session. We also end the script
with the if __name__ code like before
if letters_wrong == tries:
print “Game Over.”
print “The word was”,word
computer_score += 1
break
if “”.join(clue) == word:
print “You Win!”
print “The word was”,word
player_score += 1
break
return play_again()
def guess_letter():
print
letter = raw_input(“Take a guess at our mystery word:”)
letter = letter.strip()
letter = letter.lower()
print
return letter
def play_again():
answer = raw_input(“Would you like to play again? y/n: “)
if answer in (“y”, “Y”, “yes”, “Yes”, “Of course!”):
return answer
else:
print “Thank you very much for playing our game. See you next time!”
Code highlighting
IDLE automatically highlights the code to make
reading your work that bit easier. It also allows
you to change these colours and highlighting in
IDLE’s Preferences, in case you’re colour blind
or are just used to a different colour scheme
in general.
def scores():
global player_score, computer_score
print “HIGH SCORES”
print “Player: “, player_score
print “Computer: “, computer_score
if __name__ == ‘__main__’:
start()
The Python Book 33
I see ASCII
Here’s a close-up of the seven
stages we’ve used for Hangman’s
graphics. You can change them
yourself, but you need to make
sure the quote marks are all in
the correct place so that the art
is considered a text string to be
printed out.
“””
+-------+
|
|
|
|
|
==============
“””,
“””
+-------+
|
|
|
O
|
|
|
===============
“””,
“””
+-------+
|
|
|
O
|
|
|
|
===============
“””,
“””
+-------+
|
O
|
-|
|
|
|
===============
“””,
“””
+-------+
|
|
|
O
|
-||
|
===============
“””,
“””
+-------+
|
|
|
O
|
-||
/
|
===============
“””,
“””
+-------+
|
|
|
O
|
-||
/ \
|
===============
“””]
34 The Python Book
01 #!/usr/bin/env python2
The rules
Although we’ve moved some of
the rules to the ‘game’ function
this month, you can always put
them back here and call upon
them using the global variable, as
we would do with the scores. For
the words, you could also create a
separate file and import them like
the random module.
02 from random import *
03 player_score = 0
computer_score = 0
04 def hangedman(hangman):
graphic = [
“””
+-------+
|
|
|
|
|
==============
“””,
“””
05 def start():
print “Let’s play a game of Linux Hangman.”
while game():
pass
scores()
01
We begin by using this line to enter the path
to the Python interpreter. This allows us to
run the program inside a terminal or otherwise outside
of a Python-specific IDE like IDLE. Note that we’re
also using Python 2 for this particular script, as it is
installed by default on most Linux systems and will
therefore ensure compatibility.
02
We’re importing the ‘random’ module slightly
differently this time, importing the actual
names of the functions from random rather than just
the module itself. This allows us to use the functions
without having syntax like random.function. The
asterisk imports all the functions from random,
although you can switch that for specific names of
any of random’s functions. We’ll be using the random
function to select a word for the player to guess.
03
These empty score variables for the player and
computer are set up so the results of each game
can be tracked across the whole play session.
04
Our simple graphics consist of a series of
ASCII hanging man stages. We’re storing
these in a function as a list of separate string objects
so we can call upon them by passing on the number of
incorrect guesses to it. There are seven graphics in all,
like in the pen-and-paper version. We also include the
print command with the function, so when it’s called it
will completely handle the selection and display of the
hanging man, with the first one being printed after the
first letter is guessed.
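The list-of-drawings trick from this step can be sketched in miniature. The short labels below are stand-ins for the tutorial's seven ASCII pictures (placeholders of ours, not the real art), but the indexing idea is identical and runs under both Python 2 and 3:

```python
# Sketch of the hangedman() idea: one drawing per wrong-guess count.
# The label strings here are placeholders for the seven ASCII pictures.
STAGES = ["gallows", "head", "body", "one arm",
          "two arms", "one leg", "two legs"]

def stage_for(wrong_guesses):
    """Return the drawing that matches the current number of wrong guesses."""
    return STAGES[wrong_guesses]
```

Calling stage_for(letters_wrong) after each round picks out the right picture, much as hangedman(letters_wrong) does in the listing.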
06 def game():
dictionary = [“gnu”,”kernel”,”linux”,”mageia”,”penguin”,”ubuntu”] 07
word = choice(dictionary)
word_length = len(word)
clue = word_length * [“_”]
08 tries = 6
letters_tried = “”
guesses = 0
letters_right = 0
letters_wrong = 0
global computer_score, player_score
09 while (letters_wrong != tries) and (“”.join(clue) != word):
letter=guess_letter() 10
if len(letter)==1 and letter.isalpha():
if letters_tried.find(letter) != -1:
print “You’ve already picked”, letter
05
We’re starting the interactive part of the code
with the ‘start’ function. It prints a greeting, then
begins a while loop that lets us replay the game,
with the pass statement letting us do a number
of other tasks if so wished. If we do stop playing the
game, the score function is then called upon – we’ll go
over what that does when we get to it.
06
We have put a majority of the game code
in the ‘game’ function this time around, as
there’s not as much that needs to be split up. You can
split it up further if you wish, using the style of code
from last issue, if it would make the code cleaner
for you or help you understand the building blocks a
bit more.
07
The first four lines quickly set up the word
for the player to guess. We’ve got a small
selection of words in a list here. However, these can be
imported via HTML or expanded upon. Choice is used
to select a random element from the list, which comes
from the random module we imported. Finally, we
ascertain how long the string is of the word to guess,
and then create the clue variable with a number of
underscores of that length. This is used to display the
word as you build it up from guesses.
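The four setup lines described here can be pulled out into a self-contained sketch (version-neutral Python, with a cut-down word list of our own):

```python
from random import choice

def new_round(dictionary):
    """Pick a secret word and build the matching row of blanks."""
    word = choice(dictionary)          # random element from the list
    clue = len(word) * ["_"]           # one underscore per letter
    return word, clue

word, clue = new_round(["gnu", "kernel", "linux"])
```

Whatever word comes back, the clue list starts as the right number of underscores, ready to be filled in as the player guesses.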
08
We start to set up the rules and the individual
variables to keep track of during the game.
There can only be six incorrect guesses before the
hanging man is fully drawn, or in our case displayed,
so we set the tries variable to six. We’ll keep track of
the letters through letters_tried to make sure that not
only will the player know, but also the code for when
it’s checking against letters already played. Finally,
we create empty variables for the number of guesses
made, letters correct and letters incorrect, to make
the code slightly easier. We also import the global
scores here.
09
We’re starting a while loop to perform the
player selection and check the status of the
game. This loop continues until the player wins or loses.
It starts by checking if all the tries have been used up
by seeing if letters_wrong is not equal to tries. As each
try will only add one point to wrong, it will never go
above six. It then concatenates ‘clue’ and sees if it’s the
same as the word the computer selected. If both these
statements are true, it goes on to the next turn.
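The "".join(clue) test is worth seeing on its own: join glues the list of characters back into a single string, so the comparison with the secret word only succeeds once every blank has been filled. A runnable sketch:

```python
word = "gnu"

clue = ["g", "_", "_"]
assert "".join(clue) != word      # blanks remain, keep playing

clue = ["g", "n", "u"]
solved = "".join(clue) == word    # True: the round is over
```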
Indentations
While IDLE will keep track of the
indents in the code, if you’re using
a text editor to write some Python,
you’ll have to make sure you’re
using them correctly. Python is
very sensitive to whether or not
indents are used correctly, and it
does aid in readability as well.
10
We call upon the function we’re using to
input a letter and give it the variable ‘letter’.
We check what it returns by first of all making sure
it’s only a single letter, with len(letter), then by
using isalpha to see if it’s one of the 26 letters of the
alphabet. If these conditions are satisfied, we start
a new if statement to make sure it’s a new guess,
and tell the player if it’s already been chosen so they
can start again. If all this is acceptable, we move on
to the next section of the code to see if it’s a correct
guess or not.
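The three checks in this step – single character, alphabetic, and not already played – can be collected into one helper to show the logic in isolation (the function wrapper is ours, for illustration only):

```python
def valid_new_guess(letter, letters_tried):
    """Accept only a single alphabetic character that hasn't been played."""
    if len(letter) != 1 or not letter.isalpha():
        return False
    return letters_tried.find(letter) == -1   # -1 means not seen before
```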
The Python Book 35
Continuation
This code is still part of the
game function we started on the
previous page, so make sure your
indentations are in alignment if
you’re not using an IDE. If you plan
to split this code up, we’d suggest
starting with the word selection
and results.
11 else:
letters_tried = letters_tried + letter
first_index=word.find(letter)
if first_index == -1:
letters_wrong +=1
print “Sorry,”,letter,”isn’t what we’re looking for.”
else:
print “Congratulations,”,letter,”is correct.” 12
for i in range(word_length):
13
if letter == word[i]:
clue[i] = letter
else:
print “Choose another.” 14
hangedman(letters_wrong)
print “ “.join(clue)
print “Guesses: “, letters_tried
15 if letters_wrong == tries:
print “Game Over.”
print “The word was”,word
computer_score += 1
break
if “”.join(clue) == word:
print “You Win!”
print “The word was”,word
player_score += 1
break
return play_again() 16
elimination, we first print out a message to let
the player know that they’ve been successful and
then make a record of it.
11
If it’s a new letter that we find acceptable,
the first thing we do is add it to the list
of letters tried. This is done simply by adding
the strings together. We then use the find
command to search the word string for the letter
entered, which will then return a number of the
placement of the letter in the string. If it doesn’t
find the letter, it returns a -1 value, which we use
in the next if statement to see if the first_index
variable is -1. If so, it adds one to the number of
letters_wrong and then prints a message to let
the player know that it was an incorrect guess.
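The behaviour of find is the hinge of this step, and easy to demonstrate: it returns the index of the first match, or -1 when the letter is absent:

```python
word = "penguin"
assert word.find("u") == 4      # first match is at index 4
assert word.find("z") == -1     # -1 signals "not found"

# The tutorial's wrong-guess bookkeeping in miniature:
letters_wrong = 0
if word.find("z") == -1:
    letters_wrong += 1
```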
12
If we’ve got this far and the letter is not
incorrect, than we can only assume
it is correct. Through this simple process of
36 The Python Book
13
We’re going to start a small loop here so
we can update the clue with the correct
letter we’ve added. We use the range function to
tell the code how many times we wish to iterate
over the clue by using the word_length variable.
We then check to see which letter in the word
has been guessed correctly and change that
specific part of the clue to be that letter so it can
be printed out for the player to see, and for us to
check whether or not the game is over.
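The clue-updating loop can be tried out on its own; this helper mirrors the for loop above (the function wrapper is ours, for illustration):

```python
def reveal(word, clue, letter):
    """Copy every position where letter occurs in word into the clue list."""
    for i in range(len(word)):
        if letter == word[i]:
            clue[i] = letter
    return clue

clue = reveal("linux", 5 * ["_"], "u")
```

Guessing "u" against "linux" fills in just the fourth slot, leaving the other blanks untouched.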
14
We end the original if statement by telling
the player to choose again if they did not
enter a supported input. Before we go on to the
next round of choices, we print out the hanging
man graphic as it stands, by calling the graphic
in the list that corresponds to the number of
incorrect guesses that have been made. We then
print how the clue currently looks, with a space
in between each character, and then print the
number of guesses that have been made.
15
Here we check to see if the game is
over again, first of all comparing the
letters_wrong to the number of tries. If that’s
true, we print a message that the game has
ended and reveal the mystery of the hidden word.
We increase the computer’s score and break the
loop. The next loop checks to see if the full clue
concatenated is the same as the original word – if
that’s the case, we print the win message, the full
word and add one point to the player score before
breaking the loop again. This can also be done
with ifs and elifs to avoid using breaks.
17
def guess_letter():
print
letter = raw_input(“Take a guess at our mystery word:”)
letter = letter.strip()
letter = letter.lower()
print
return letter
18
def play_again():
answer = raw_input(“Would you like to play again? y/n: “)
19 if answer in (“y”, “Y”, “yes”, “Yes”, “Of course!”):
return answer
20 else:
print “Thank you very much for playing our game. See you next time!”
21
def scores():
global player_score, computer_score
print “HIGH SCORES”
print “Player: “, player_score
print “Computer: “, computer_score
22 if __name__ == ‘__main__’:
start()
16
We end the entire game function loop by
calling upon return again, which we will
then pass all the way up to the start function once
it’s finished.
17
The human input function first of
all prints out a raw_input message.
Once the player enters the letter, the function
parses it to be used with the rest of the code.
Firstly, strip is used to remove any white space
from the input given, as we’ve not given it any
extra parameters. We then convert it into
lower-case letters, as Python will not be able
to correctly compare an upper-case character
with a lower-case alternative. We then print the
selection for the record and return it up to the
game function.
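One detail worth stressing: Python strings are immutable, so strip() and lower() return new strings rather than editing letter in place – their results must be assigned back (or chained), otherwise the calls are silently discarded:

```python
def normalise(raw):
    """Trim surrounding whitespace and fold to lower case.

    strip() and lower() each return a *new* string, so the calls
    are chained and the result returned rather than discarded.
    """
    return raw.strip().lower()
```

normalise("  G ") gives "g", ready to compare against the lower-case secret word.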
18
The last part of the game function is to
ask the player if they wish to try again.
The play_again function takes a human input
with a simple message and then analyses the
input so it knows what to send back.
19
We check the answer against the tuple of
accepted responses – it accepts both
y and Y. If this is the case, it returns a positive
response to game, which will start it again.
20
If we don’t get an expected response,
we will assume the player does
not want to play again. We’ll print a goodbye
message and that will end this function. This
will also cause the start function to move onto
the next section and not restart.
21
Going all the way back to the start
function, after game finishes we move
onto the results. This section is quite simple – it
calls the scores, which are integers, and then
prints them individually after the names of the
players. This is the end of the script, as far as
the player is concerned. Currently, the code will
not permanently save the scores, but you can
have Python write it to a file to keep if you wish.
Homework
Now that you’ve finished with the code, why
not make your own changes? Increase the
word count; create different, selectable word
categories; or even let people guess the full
word. You have all the tools to do this in the
current code and last month’s tutorial.
22
The final part of the code makes sure the script
is not executed automatically when it is being
imported as a module.
The Python Book 37
Play poker dice using Python
Put on your poker face and get ready to gamble as you hone
your programming skill with a bit of poker dice
Resources
Python 2:
IDLE:
So you’ve learnt how to program tic-tac-toe
and guessed your way to victory at hangman.
Now it’s time to head to Las Vegas and play our
cards right. Or in this case, virtual dice, and more
like Reno as we continue with our Python game
tutorials and introduce you to some poker dice.
We’re again using some of the lessons we’ve
already learnt, including random number
generation, list creation and modification,
human input, rule setting, scoring and more.
But we’ll also be adding some new skills in this
tutorial. Namely, we’ll be creating and appending
lists with random numbers, and using functions
multiple times in one block of code to cut down
on bloat.
Again, we recommend using IDLE, and we’re
using Python 2 to ensure compatibility with a
wider variety of distros, including the Raspberry
Pi. So, we hope luck is a lady for you and that the
odds are ever in your favour – just keep those
fingers crossed that you don’t roll a snake eyes
(we are coding in Python, after all)!
The Start
Here we’re doing some minor setups so we can
get our code to run with some extra modules not
included with the basics
#!/usr/bin/env python2
The Rules
nine = 1
ten = 2
jack = 3
queen = 4
king = 5
ace = 6
We’re setting names for each dice roll so they can
be properly identified to the player – much more
interesting than numbers
The Score
Again we’ve got some basic variables set up so we
can keep score of the games if we want to
Code listing
import random
from itertools import groupby
names = { nine: “9”, ten: “10”, jack: “J”, queen: “Q”, king: “K”, ace: “A” }
player_score = 0
computer_score = 0
The Script
The game is handled here, passing the player onto
the next function to actually play, and handling the
end of the session as well
The Game
def start():
print “Let’s play a game of Linux Poker Dice.”
while game():
pass
scores()
We access the full game loop via here, and the
function that allows us to play again if we’re
so inclined
def game():
print “The computer will help you throw your 5 dice”
throws()
return play_again()
The Throw
def throws():
roll_number = 5
dice = roll(roll_number)
dice.sort()
for i in range(len(dice)):
print “Dice”,i + 1,”:”,names[dice[i]]
The initial hand is dealt, so to speak, at the start of
the throws function. This function handles all the
decision making in the game, while passing off the
dice rolls to another function
The Hand
We’ve also got a special function so we can inform
the player exactly what style of hand they have
The Decision
There are two rounds in this version of poker
dice, and you can select how many dice you wish
to re-roll in this small while loop that makes sure
you’re also using a correct number
38 The Python Book
result = hand(dice)
print “You currently have”, result
while True:
rerolls = input(“How many dice do you want to throw again? “)
try:
if rerolls in (0,1,2,3,4,5):
break
except ValueError:
pass
print “Oops! I didn’t understand that. Please enter 0, 1, 2, 3, 4 or 5.”
The Re-roll
We’re doing the second set of rolls and starting
the end of the game here by calling on the same
function as before, but we’re also aware that
choosing no re-rolls means the end of the game
The Dice
Here we’re finding out which dice the player wants
to re-roll, and also making sure that they enter
a valid number. Just so they know they’re doing
something, we print something after every turn
Second Hand
We change and display the new dice hand to end
the game. Again, we make sure to tell the player
what the actual hand they have is
Code listing continued
if rerolls == 0:
print “You finish with”, result
else:
roll_number = rerolls
dice_rerolls = roll(roll_number)
dice_changes = range(rerolls)
print “Enter the number of a dice to reroll: “
iterations = 0
while iterations < rerolls:
iterations = iterations + 1
while True:
selection = input(“”)
try:
if selection in (1,2,3,4,5):
break
except ValueError:
pass
print “Oops! I didn’t understand that. Please enter 1, 2, 3, 4 or 5.”
dice_changes[iterations-1] = selection-1
print “You have changed dice”, selection
iterations = 0
while iterations < rerolls:
iterations = iterations + 1
replacement = dice_rerolls[iterations-1]
dice[dice_changes[iterations-1]] = replacement
dice.sort()
for i in range(len(dice)):
print “Dice”,i + 1,”:”,names[dice[i]]
The Rolls
The function we reuse to roll our virtual six dice
using a simple while loop. This allows us to keep
the codebase smaller
The Analysis
There are eight possible types of hands in poker
dice, and we can use a bit of logic to work out all
but one of them without checking against all 7,776
outcomes – in fact, we only specifically have to
check for two
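The groupby trick at the heart of hand() can be isolated. groupby only merges *adjacent* equal items, which is why the dice must be sorted first; the run lengths, ordered biggest-first, are enough to tell most hands apart (a version-neutral sketch of ours):

```python
from itertools import groupby

def run_lengths(dice):
    """Count runs of equal faces in a sorted roll, biggest run first."""
    counts = [len(list(group)) for _key, group in groupby(sorted(dice))]
    counts.sort(reverse=True)
    return counts
```

A full house such as [3, 5, 3, 5, 3] comes back as [3, 2], while any straight gives [1, 1, 1, 1, 1].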
The Question
Our simple ‘play again’ function that parses player
input so we can restart or end the script
The End
Scores are displayed at the end of the script, and
the very final part allows us to import this into
other Python scripts as a module
EXTRA FUNCTIONS
Splitting up actions into functions
makes it easier to not only perform
them multiple times, but reduce
the amount of code. On larger
projects, this can aid with speed.
result = hand(dice)
print “You finish with”, result
def roll(roll_number):
numbers = range(1,7)
dice = range(roll_number)
iterations = 0
while iterations < roll_number:
iterations = iterations + 1
dice[iterations-1] = random.choice(numbers)
return dice
def hand(dice):
dice_hand = [len(list(group)) for key, group in groupby(dice)]
dice_hand.sort(reverse=True)
straight1 = [1,2,3,4,5]
straight2 = [2,3,4,5,6]
if dice == straight1 or dice == straight2:
return “a straight!”
elif dice_hand[0] == 5:
return “five of a kind!”
elif dice_hand[0] == 4:
return “four of a kind!”
elif dice_hand[0] == 3:
if dice_hand[1] == 2:
return “a full house!”
else:
return “three of a kind.”
elif dice_hand[0] == 2:
if dice_hand[1] == 2:
return “two pair.”
else:
return “one pair.”
else:
return “a high card.”
def play_again():
answer = raw_input(“Would you like to play again? y/n: “)
if answer in (“y”, “Y”, “yes”, “Yes”, “Of course!”):
return answer
else:
print “Thank you very much for playing our game. See you next time!”
def scores():
global player_score, computer_score
print “HIGH SCORES”
print “Player: “, player_score
print “Computer: “, computer_score
if __name__ == ‘__main__’:
start()
The Python Book 39
01
#!/usr/bin/env python2
02
import random
from itertools import groupby
03
nine = 1
ten = 2
jack = 3
queen = 4
king = 5
ace = 6
RECYCLING
There are a few variables that
have duplicates throughout the
code – while we’ve been careful
to make sure they work where
we want them to, it’s not the best
code conduct. The names of the
variables don’t specifically matter
– it’s just best to label them in a
way you understand for bug fixing
and others to read.
names = { nine: “9”, ten: “10”, jack: “J”, queen: “Q”, king: “K”, ace: “A” }
04
player_score = 0
computer_score = 0
05
def start():
print “Let’s play a game of Linux Poker Dice.”
while game():
pass
scores()
06
def game():
print “The computer will help you throw your 5 dice”
throws()
return play_again()
01
Begin
As before, we use this line to enter the
path to the Python interpreter. This allows us to
run the program inside a terminal or otherwise
outside of a Python-specific IDE like IDLE. Note
that we’re also using Python 2 for this script.
02
Importing
As well as importing the random module
for our dice throws, we need to get the groupby
function so we can order the dice in a way that is
more readable and also easier for analysis when
telling the player what hand they have.
03
Cards
While we’re using random numbers for
the dice rolls, unless we assign the correct cards
to each number, the player won’t know what
they’ve rolled and what constitutes a better
hand. We set each card to a number and then
equate what these should be printed out as.
04
Scores
As usual, we have the empty scores
for the player and computer so we can update
these as we go. While it’s not specifically used
in this version of the code, it’s easy enough
to expand on it and add your own simple
computer roll, or limited AI for both rolls.
05
Start
We’re starting the interactive part of the
code with the ‘start’ function. It prints a greeting
to the player, then starts a while loop that’ll allow
us to replay the game as many times as we wish.
The pass statement allows the while loop to stop
once we’ve finished. If we do stop playing the
game, the score function is then called upon.
06
Game
Like our Rock, Paper, Scissors code,
def game pawns the rest of the game onto other
functions, with its main function allowing us to
keep repeating the game by passing the player
through to the play_again function.
40 The Python Book
07
Throws
For our first throw, we want to have five
random dice. We’ve set a variable here to pass
on to our throwing function, allowing us to reuse
it later with a different number that the player
chooses. We get five random numbers in a list
returned from the function, and we order it using
sort to make it a bit more readable for the player
and also later on for the hand function.
08
Dice display
We print out each dice, numbering them
so the player knows which dice is which, and
also giving it the name we set at the start of the
script. We’re doing this with a loop that repeats
itself the number of times as the dice list is
long using the range(len(dice)) argument. The
i is increased each turn, and it prints out that
specific number of the dice list.
09
Current hand
We want to find the type of hand the
player has multiple times during the game, so set
a specific function to find out. We pass the series
of dice we have on to this function, and print.
10
Throw again
Before we can throw the dice for the
second round, we need to know which dice the
07
def throws():
roll_number = 5
dice = roll(roll_number)
dice.sort()
for i in range(len(dice)):
08
print “Dice”,i + 1,”:”,names[dice[i]]
INDENTATIONS
Watch the indentations again as
we split the else function. The
following page’s code is on the
same level as roll_number,
dice_rerolls and dice_changes in
the code.
09
result = hand(dice)
print “You currently have”, result
10
while True:
rerolls = input(“How many dice do you want to throw again? “)
try:
if rerolls in (0,1,2,3,4,5):
break
except ValueError:
pass
print “Oops! I didn’t understand that. Please enter 0, 1, 2, 3, 4 or 5.”
if rerolls == 0:
print “You finish with”, result
else:
roll_number = rerolls
dice_rerolls = roll(roll_number)
WHITE SPACE
The big if function at the end of
throws doesn’t have many line
breaks between sections – you
can add these as much as you want
to break up the code into smaller
chunks visually, aiding debugging.
dice_changes = range(rerolls)
print “Enter the number of a dice to reroll: “
iterations = 0
while iterations < rerolls:
iterations = iterations + 1
while True:
selection = input(“”)
try:
if selection in (1,2,3,4,5):
break
except ValueError:
pass
print “Oops! I didn’t understand that. Please enter 1, 2, 3, 4 or 5.”
13
dice_changes[iterations-1] = selection-1
print “You have changed dice”, selection
player wants to roll again. We start this by asking
them how many re-rolls they want to do, which
allows us to create a custom while loop to ask
the user which dice to change that iterates the
correct number of times.
We also have to make sure it’s a number
within the scope of the game, which is why
we check using the try function, and print out
a message which tells the user if and how they
are wrong.
11
Stick
One of the things we’ve been trying to do
in these tutorials is point out how logic can cut
down on a lot of coding by simply doing process
of eliminations or following flow charts. If the
user wants to re-roll zero times, then that means
they’re happy with their hand, and it must be the
end of the game. We print a message to indicate
this and display their hand again.
12
The re-rolls
Here’s where we start the second roll
and the end of the game, using a long else to the
if statement we just started. We first of all make
sure to set our variables – updating roll_number
to pass onto the roll function with the re-roll
number the user set, and creating the list that’s
the exact length of the new set of rolls we wish to
use thanks to range(rerolls).
13
Parse
We ask the player to enter the numbers
of the dice they wish to re-roll. By setting an
iterations variable, we can have the while loop
last the same number of times as we want rerolls by comparing it to the reroll variable itself.
We check each input to make sure it’s a number
that can be used, and add the valid choices to the
dice_changes list. We use iterations-1 here as
Python lists begin at 0 rather than 1. We also print
out a short message so the player knows the
selection was successful.
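The repeated - 1 is pure index arithmetic: players count dice from 1, while Python lists count positions from 0. In miniature:

```python
dice = ["9", "J", "Q", "K", "A"]   # what the player sees as dice 1 to 5
selection = 3                      # the player asks for "dice 3"
chosen = dice[selection - 1]       # list position 2 holds that dice
```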
The Python Book 41
14
15
iterations = 0
while iterations < rerolls:
iterations = iterations + 1
replacement = dice_rerolls[iterations-1]
dice[dice_changes[iterations-1]] = replacement
dice.sort()
for i in range(len(dice)):
print “Dice”,i + 1,”:”,names[dice[i]]
HIGHER OR LOWER
Which hand is best? What are the
odds of getting certain hands in
the game? Some of the answers
are surprising, as the poker
hands they’re based on trump the
differing odds the dice produce.
We’ve ranked hands from highest
to lowest.
Five of a Kind ................. 6/7776
Four of a Kind ............150/7776
Full House .................300/7776
Straight ......................240/7776
Three of a Kind ........1200/7776
Two Pairs .................1800/7776
One Pair ...................3600/7776
High Card ...................480/7776
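These figures can be checked by brute force: there are only 6^5 = 7,776 ordered rolls, so we can enumerate every one with itertools.product and classify it with the same logic as hand(). This verification script is ours, not part of the tutorial:

```python
from itertools import groupby, product

def classify(dice):
    """Name a five-dice hand, mirroring the tutorial's hand() logic."""
    ordered = sorted(dice)
    counts = sorted((len(list(g)) for _, g in groupby(ordered)), reverse=True)
    if ordered == [1, 2, 3, 4, 5] or ordered == [2, 3, 4, 5, 6]:
        return "straight"
    if counts[0] == 5:
        return "five of a kind"
    if counts[0] == 4:
        return "four of a kind"
    if counts[0] == 3:
        return "full house" if counts[1] == 2 else "three of a kind"
    if counts[0] == 2:
        return "two pair" if counts[1] == 2 else "one pair"
    return "high card"

# Tally every possible ordered roll of five six-sided dice.
tally = {}
for roll in product(range(1, 7), repeat=5):
    name = classify(roll)
    tally[name] = tally.get(name, 0) + 1
```

The tally reproduces the sidebar exactly: 6 five-of-a-kinds, 150 four-of-a-kinds, 300 full houses, 240 straights, 1,200 threes of a kind, 1,800 two pairs, 3,600 one pairs and 480 high cards.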
result = hand(dice)
print “You finish with”, result
16
14
def roll(roll_number):
numbers = range(1,7)
dice
= range(roll_number)
17
iterations = 0
while iterations < roll_number:
18
iterations = iterations + 1
dice[iterations-1] = random.choice(numbers)
return dice
New dice
We’re resetting and reusing the iterations
variable to perform a similar while loop to update
the rolls we’ve done to the original dice variable.
The main part of this while loop is using the
iterations-1 variable to find the number from
dice_changes l | https://pl.b-ok.org/book/2564640/2e1807 | CC-MAIN-2019-26 | refinedweb | 16,823 | 68.7 |
CodePlexProject Hosting for Open Source Software
I'm trying to create a recipe for setting up a site which is using the Advanced Menu module. The module does not yet have the import/export overrides in the drivers, so I added them to my local copy. This is something I've done before with no problem - they're
pretty straight forward.
However when I try to run the recipe on setting up a new site, it bombs with an exception from Nhibernate that the MenuName column can't be null (from the AdvancedMenuItemPartDriver). However I'm definitely mapping the value over to the part, and the part
has the value in the XML. Here's what the data element looks like from my recipe XML:
<SimpleMenuItem Id="/Identifier=05cfd800dd5742dea80cc0adb8e20d0d" Status="Published">
<IdentityPart Identifier="05cfd800dd5742dea80cc0adb8e20d0d" />
<CommonPart Owner="/User.UserName=admin" CreatedUtc="2011-09-01T18:59:52Z" PublishedUtc="2011-09-01T18:59:52Z" ModifiedUtc="2011-09-01T18:59:52Z" />
<AdvancedMenuItemPart Text="About Us" Position="1" Url="~/about" MenuName="TopMenu" DisplayText="true" DisplayHref="true" />
</SimpleMenuItem>
You can see the MenuName attribute is on the AdvancedMenuItemPart node. I tried adding an ILogger to the driver and logging some messages in the importing method to see what's going on, but they don't log anything (even using Error or Fatal). So then I tried
running it in debug and setting a breakpoint in the importing method, but it never hits the breakpoint.
Anyone have ideas on what could be wrong, or why I can't log or debug the importing method?
Can you show how you wired up the import?
Sure, here it is:
protected override void Importing(AdvancedMenuItemPart part, ImportContentContext context) {
    string partName = part.PartDefinition.Name;
    part.Text = GetAttribute<string>(context, partName, "Text");
    part.Position = GetAttribute<string>(context, partName, "Position");
    part.Url = GetAttribute<string>(context, partName, "Url");
    part.MenuName = GetAttribute<string>(context, partName, "MenuName");
    part.SubTitle = GetAttribute<string>(context, partName, "SubTitle");
    part.Classes = GetAttribute<string>(context, partName, "Classes");
    part.DisplayText = GetAttribute<bool>(context, partName, "DisplayText");
    part.DisplayHref = GetAttribute<bool>(context, partName, "DisplayHref");
}

//Using TV for generic parameter here simply to avoid confusion with T Localizer property
private TV GetAttribute<TV>(ImportContentContext context, string partName, string elementName) {
    string value = context.Attribute(partName, elementName);
    if (value != null) {
        return (TV)Convert.ChangeType(value, typeof(TV));
    }
    return default(TV);
}
That GetAttribute helper method is the same one I included in my pull request for Vandelay.Industries. On a side note, I'm interested in what people think about that method and whether they'd like to see something like that (maybe not that code exactly -
if anyone has suggestions for improvements let me know) in the core framework to make writing the import code easier.
Anyway, I tried removing that helper from the equation by just inlining the context.Attribute call but I got the same result, so it's not that (plus I've successfully used it elsewhere).
Did you add that to the existing driver or did you create a new one?
I added it to the existing driver.
Weird. You might want to debug above that, right in the import logic, to see why your override is not being called.
From: bill_kempf (williamkempf_at_[hidden])
Date: 2002-01-30 10:08:07
--- In boost_at_y..., "davlet_panech" <davlet_panech_at_y...> wrote:
> --- In boost_at_y..., Beman Dawes <bdawes_at_a...> wrote:
> > At 12:24 PM 1/29/2002, davlet_panech wrote:
> >
> > > ... I recently ported (portions of)
> > >Boost.Threads to a platform (pSOS) ...
> >
> > Davlet,
> >
> > Please tell us a bit more of your experiences with Boost.Threads
> and pSOS.
> >
> > What problems? What successes?
>
>
> Beman,
>
> There isn't much to tell yet, as it is still work in progress; most
> of the problems we are having are due to the non-conformant
compiler
> we are forced to use (DIAB 4.3):
As your "work in progress" matures I'd love to hear about your
experiences.
> - The biggest problem with DIAB 4.3 is it's lack of support for
> namespaces, which makes many of the Boost libraries unusable. In
> contrast, we did "port" most of the pieces of (ANSI C++) standard
> library successfully as the current standard library doesn't use
> namespaces as extensively as Boost does. I would really prefer
Boost
> libraries not to use nested namespaces, but the only reason I am
> saying this is because we are stuck with an old compiler.
Ick.
> - pSOS supports semaphores, but neither mutexes nor condition
> variables (both of these can be implemented in terms of semaphores,
> so that's OK)
>
> - pSOS doesn't have a concept of threads per se, it has "tasks":
all
> tasks running on the same processor share all resources (these are
> analogous to threads), "remote" tasks (executing on a different
> processor) are also possible, these are analogous to processes. I
> guess Boost.Threads package would have to be limited to reprsent
> local tasks only.
>
> - Each pSOS thread ("task") has a 4-character name associated with
> it; it *must* be specified at creation time (these names are useful
> for accessing remote tasks). To support this Boost.Threads would
have
> to either generate those names somehow, or allow the user to
specify
> them.
I would think that generating the name would be simple and a "clean"
solution... but not if there's a need to use the name after the
thread is created. Hopefully in the future we'll be adding an
interface to allow for creation of threads using platform specific
parameters which will allow you to create these threads with this
name using a "standard" interface. I'm just not sure how I'll
implement this yet.
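Generating such a name could be as simple as formatting a wrapping counter into four characters; a sketch only, not anything from Boost.Threads itself:

```cpp
#include <cstdio>
#include <string>

// Illustrative only: produce pSOS-style 4-character task names
// "t000", "t001", ... "t999", wrapping around after 1000 names.
std::string nextTaskName() {
    static int counter = 0;
    char name[5];
    std::snprintf(name, sizeof(name), "t%03d", counter++ % 1000);
    return name;
}
```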
> > How extensively is your port being used?
>
> Boost.Threads is the only library we are using at this time (our
> version has a slightly different interface -- for the reasons
> mentioned above). We are very pleased with it's design; I guess the
> only issue is the question on usage of `volatile' modifiers (see my
> previous post).
I'd like to know about any specific changes you have to make so that
I can consider the changes and how it might be possible to do it
portably.
Now that you've brought the volatile stuff to my attention I'll look
into how to fix things here. Thanks for the report.
> > What are your feelings about Boost.Threads in relation to C++
> > standardization?
> >
> I am all for it! I have used other C++ thread packages in the past,
> and I really like the direction this one has taken, especially with
> thread cancellation, which is (hopefully) coming up soon.
Hopefully.
> > One of the key questions the committee will ask is "how well does
> the
> > design work on various operating systems?"
> >
> > So even when a port of Boost.Threads is proprietary, and can't
> become part
> > of the Boost distribution, it is still helpful to hear about the
> > experience.
>
> I'm not convinced our port will be very useful (if and when it is
> completed), mainly because we have to change the interface to work
> around compiler defficiences; besides we are not porting the whole
> library, only portions of it that are most useful to us.
Your experiences are still valuable, both for the people on the
committee in evaluating things, and for me in designing the most
flexible interface I can.
Thanks,
Bill Kempf
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2002/01/24114.php | CC-MAIN-2021-17 | refinedweb | 701 | 72.87 |
[quote]
2 days ago, Dr Herbie wrote
I have decided that the Tuple<> class is ugly (having to remember which type is 'Item1' and which is 'Item2') and that it should only be used within a class and never used as a result passed out of a public class method or property;
[/quote]
how else do you return multiple values from a method? out arguments are a bit clunky in that they require the "out" modifier and all the out arguments have to be declared and specified on the call. Declaring a class to be returned is probably ideal, but takes time to code and is another class you have to add to the project. | http://channel9.msdn.com/Forums/Coffeehouse/How-are-you-using-Tuple-in-code/0dfec65bcae74a7d9dbaa0ed00f5ebd4 | CC-MAIN-2015-11 | refinedweb | 115 | 57.98 |
OpenERP get next Sequence number [Closed]
OpenERP gives a sequence number when we saved a record. i have done that module.as per that module when i create a worker then load employee number as EMP001,EMP002..
My requirment is this. When I'm going to create a new employee then need to show next sequence number as a read only field. for ex : when i'm going to create a 9th employee then need to show EMP009 in my emp no field.
my current codes uploaded to below location
Dear Yannik, here is the code. Is it correct?

def _get_next_no(self, cr, uid, context=None):
    cr.execute("SELECT last_value + increment_by FROM ir_sequence_%03d;" % seq['id'])
    seq['number_next'] = cr.fetchone()
    return seq
There are no method to get this, you need to make your own by:
Getting your sequence id.
seq_id = self.pool.get('ir.sequence').search(cr, uid, [<search_cond>])
Then getting the next value based on
last_value column from the postgresql sequence
"SELECT last_value + increment_by FROM ir_sequence_%03d;" %seq_id
Then from this number you need to reconstruct the sequence according to your format, with prefix and suffix.
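That reconstruction step boils down to padding the number and gluing the prefix and suffix around it. A sketch (the function name here is mine, not an OpenERP API):

```python
def format_sequence(prefix, number_next, padding=3, suffix=''):
    # e.g. prefix 'EMP', number 9, padding 3 -> 'EMP009'
    return '%s%s%s' % (prefix, str(number_next).rjust(padding, '0'), suffix)

print(format_sequence('EMP', 9))    # EMP009
print(format_sequence('EMP', 123))  # EMP123
```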
Read the server/openerp/addons/base/ir/ir_sequence.py file; you will see it in the _next method.
Remind that if two users are creating a record at the same time, they will see the same value but for one of them it will change on create action.
Thanks. Please give me that above SQL in a function; my awareness of handling "sql with cr.execute command" is very low :-)
You might also try to directly use the sequence.number_next attribute. In both cases, you need to reconstruct your sequence number afterward.
For sql it is somthing like that:
cr.execute("SELECT last_value + increment_by FROM ir_sequence_%03d;" % seq['id'])
seq['number_next'] = cr.fetchone()
However, using the number_next might be simpler and better.
Dear Yannik, Please refer my edited post above & advice me
For everybody who is still searching. There is the next_by_code(sequence_code) method. I think that would do it for the new API. You can now do something like: next_seq = self.env['ir.sequence'].next_by_code('res.partner') Worked for me.
Your code is incomplete... You need to reconstruct the sequence number. And you also need to define the id of the sequence you are looking for. I only gave you the keys to open few doors. It's up to you to open those doors now.
not clear it friend.. plz explain & show me watz the errors in my above function.?
This forum is not intended for python learning, you may need to open your python book this time :) Just a hint: what is seq['id']?
thanks Yannik | https://www.odoo.com/forum/help-1/question/openerp-get-next-sequence-number-7948 | CC-MAIN-2017-39 | refinedweb | 506 | 67.76 |
:
<Customers>
<Customer>
<FirstName>Bevis</FirstName>
<LastName>Dalton</LastName>
<Address>5394 Dis Ave</Address>
<City>Tamuning</City>
<State>Wisconsin</State>
<Telephone>(772) 462-2385</Telephone>
</Customer>
</Customers>

With this you can take XML data and put it into SQL for data population, but you also have a set of data that you can then run tests against and check that you got correct results.
In the next installment, we will examine the full completed LINQ middle tier of our application.
Have fun.
I am currently writing a service using WCF. One of the things that I need to expose is a container that wraps my results to provide counts. I came up with the below implementations. One of the things that becomes a problem is the resulting serialization of the container name. By default, the serialized object would be ContainerOf[T]. By defining a name with a String.Format type of syntax, [DataContract(Name="{0}sContainer")]. Would create the following for the Apple object: ApplesContainer.
Enjoy
/// <summary>
/// Generic Container for holding results
/// </summary>
/// <typeparam name="T"></typeparam>
[DataContract(Name="{0}sContainer")]
public class Container<T>
{
    /// <summary>
    /// TotalResults that can be returned.
    /// </summary>
    [DataMember]
    public int TotalResults { get; set; }

    /// <summary>
    /// Container results.
    /// </summary>
    [DataMember]
    public IEnumerable<T> Results { get; set; }
}
I came across a situation where I needed to use an embedded resource file in a class library. After spending much time scratching my head trying to figure out how to get to the embedded resource, this is what I found.
First, you will need to use System.Reflection.Assembly. There are two important methods that are useful for what we need. GetManifestResourceNames and GetManifestResourceStream. The first method allow you to see what the names of the resources are that are part of your class.
Example: To get a list of the embedded resource names in the current class library.
string[] resources = Assembly.GetCallingAssembly().GetManifestResourceNames();
After you know the name of the embedded resource, you can get a Stream for reading the resource using the GetManifestResourceStream method.
Second, to read resource files, you can use the System.Resources.ResourceReader class. The embedded resource file is a .resx resource file. When the class is compiled, the end result is a reference named <Class Name>.<Name of Resource File>.resources. Compilation automatically turns the resource into a compiled version with a .resources extension. This can also be done manually using the ResGen tool.
Sample Code:
[Sample code listing garbled in extraction: it read the resource entries into a list of KeyValueStruct values, the structure holding the keys and values.]
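A sketch of what that sample looked like, reconstructed from the description above (the type and method names are mine): it walks the entries of the embedded .resources stream and collects them into KeyValueStruct values.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Resources;

public struct KeyValueStruct
{
    public string Key;
    public string Value;
}

public static class ResourceHelper
{
    public static List<KeyValueStruct> ReadEmbeddedResource(string resourceName)
    {
        var results = new List<KeyValueStruct>();
        using (Stream stream = Assembly.GetCallingAssembly()
                                       .GetManifestResourceStream(resourceName))
        using (var reader = new ResourceReader(stream))
        {
            foreach (DictionaryEntry entry in reader)
            {
                results.Add(new KeyValueStruct
                {
                    Key = (string)entry.Key,
                    Value = entry.Value as string
                });
            }
        }
        return results;
    }
}
```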
Have fun, let me know if this was helpful.
I saw this cartoon, and I said "That is my dog!!".
Link to Get Fuzzy
The ASP.NET AJAX Beta is now live. This release includes a number of significant changes:
1) An update to and to support the release
2) A comprehensive whitepaper detailing changes between the CTP’s and Beta
3) A simple to follow migration guide to help developers
4) Updated documentation
5) A revised ASP.NET AJAX Control Toolkit
6) And much, much more
Link to ASP.NET AJAX : The Official Microsoft ASP.NET AJAX Site
Wow, you have got to look at this!! The WPF team is making this training available free until the launch of .NET Framework 3.0:
Well, I would like to share some useful functions with you. This is part of an application that I will eventually share when I get it completed. In my last post, I showed how to create a FlowDocument from HTML; well, I put that code into a function. The other function takes a URL and returns the page's HTML as a string.
public static string ConvertUrlToHtmlText(string url) {
    try {
        // (body lost in extraction: it downloaded the page at url
        // and returned the HTML as a string)
        ...
    }
    ...
}

public static FlowDocument ConvertTextToFlowDocument(string text) {
    try {
        string xaml = HTMLConverter.HtmlToXamlConverter.ConvertHtmlToXaml(text, true);
        return XamlReader.Load(new XmlTextReader(new StringReader(xaml))) as FlowDocument;
    } catch (Exception) {
        return null;
    }
}
The full code can be found at
Well, everyone, it has been a while since I have blogged. I have not forgotten how; I just have been extremely busy. Have you gone out to the site? That was a project that I was working on. That is part of the reason for not blogging, because I could not tell you about it until it was public. Take a look; I know that you will like it.
Current Stuff: So I am done with that. Have you seen. That is another team within my group's work. So you are probably wondering what I am doing now. Well, I am working on creating WinFX clients and components. I am going to start sharing some of my experiences with WinFX (Avalon). I know that one of my stumbling blocks with such a new technology is that some of the examples are limited. I have put together a medium-size application utilizing many new features and concepts. I will share some of what I have done in small pieces.
Here are some great resources for you to get started with Avalon; get the bits from here:
This tiny project shows how to interface LED to the microcontroller ATtiny13 and write a simple program delay function.
Parts Required
- ATtiny13 – i.e. MBAVR-1 development board
- Resistor – 220Ω, see LED Resistor Calculator
- LED
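The resistor value comes from Ohm's law applied to the voltage left over after the LED's forward drop, R = (Vcc - Vf) / If. For example (the voltages and current below are assumed typical values, not taken from this board):

```python
vcc = 5.0    # supply voltage (assumption)
vf = 1.8     # LED forward voltage, typical red LED (assumption)
i_f = 0.015  # target LED current, 15 mA (assumption)

r = (vcc - vf) / i_f
print(round(r, 1))  # 213.3 -> round up to the next standard value, 220 ohms
```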
Circuit Diagram
Software
This code is written in C and can be compiled using the avr-gcc. More details on how compile this project is here.
#include <avr/io.h>
#include <util/delay.h>

#define LED_PIN PB0 // PB0 as a LED pin

int main(void)
{
    /* setup */
    DDRB = 0b00000001;  // set LED pin as OUTPUT
    PORTB = 0b00000000; // set all pins to LOW

    /* loop */
    while (1) {
        PORTB ^= _BV(LED_PIN); // toggle LED pin
        _delay_ms(500);
    }
}
This code snippet uses the System.Drawing namespace to convert image formats. This simple but effective VB.NET and C# application accepts an image file as input and converts it to a variety of file formats.
This application converts a supplied image file into .GIF format. With a simple modification, this application can be extended to a full fledged "image converter."
File ConvertImage.vb
Check out the relevant C# code to convert the image formats.
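The core of such a converter is just a pair of System.Drawing calls; a minimal C# sketch (the file names are placeholders):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

// Load the input image and re-save it in the target format.
using (Image img = Image.FromFile("input.jpg"))   // placeholder input file
{
    img.Save("output.gif", ImageFormat.Gif);      // or Png, Jpeg, Bmp, ...
}
```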
Happy coding!
import "go.chromium.org/luci/appengine/mapper/splitter"
Package splitter implements SplitIntoRanges function useful when splitting large datastore queries into a bunch of smaller queries with approximately evenly-sized result sets.
It is based on __scatter__ magical property. For more info see:
type Params struct {
    // Shards is maximum number of key ranges to return.
    //
    // Should be >=1. The function may return fewer key ranges if the query has
    // very few results. In the most extreme case it can return one shard that
    // covers the entirety of the key space.
    Shards int

    // Samples tells how many random entities to sample when deciding where to
    // split the query.
    //
    // Higher number of samples means better accuracy of the split in exchange for
    // slower execution of SplitIntoRanges. For large number of shards (hundreds),
    // number of samples can be set to number of shards. For small number of
    // shards (tens), it makes sense to sample 16x or even 32x more entities.
    //
    // If Samples is 0, default of 512 will be used. If Shards >= Samples, Shards
    // will be used instead.
    Samples int
}
Params are passed to SplitIntoRanges.
See the doc for SplitIntoRanges for more info.
type Range struct { Start *datastore.Key // if nil, then the range represents (0x000..., End] End *datastore.Key // if nil, then the range represents (Start, 0xfff...) }
Range represents a range of datastore keys (Start, End].
SplitIntoRanges returns a list of key ranges (up to 'Shards') that together cover the results of the provided query.
When all query results are fetched and split between returned ranges, sizes of resulting buckets are approximately even.
Internally uses magical entity property __scatter__. It is set on ~0.8% of datastore entities. Querying a bunch of entities ordered by __scatter__ returns a pseudorandom sample of entities that match the query. To improve chances of a more even split, we query 'Samples' entities, and then pick the split points evenly among them.
If the given query has filters, SplitIntoRanges may need a corresponding composite index that includes __scatter__ field.
May return fewer ranges than requested if it detects there are too few entities. In extreme case may return a single range (000..., fff...) represented by Range struct with 'Start' and 'End' both set to nil.
Apply adds >Start and <=End filters to the query and returns the resulting query.
IsEmpty is true if the range represents an empty set.
Package splitter imports 4 packages (graph) and is imported by 2 packages. Updated 2020-01-18.
Hello,
I'm new to programming for starters, my project is I want to make a python script to auto install all my favorite apps for linux. I'm trying to start the learning process with something simple, I know how to write a shell script using bash to do this because that is pretty straight forward but I want to start learning python so I want to write those same scripts in python code.
I can't find any information as to what code to use to call the correct information such as...
sudo apt-get install geany
How do you write this in a python script to get it to call to the shell do execute it.
Thanks For Your Help.
Python and Shell Commands
How do I code a shell script using python
2 Replies - 1876 Views - Last Post: 17 November 2008 - 09:38 AM

#1 Posted 16 November 2008 - 05:15 PM

Replies To: Python and Shell Commands
#2
Re: Python and Shell Commands
Posted 17 November 2008 - 09:20 AM
import os os.system('sudo apt-get install geany')
Hope that helps,
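os.system works, but it discards the command's output. The subprocess module (in the standard library since Python 2.4) gives you more control; for example:

```python
import subprocess

# returns the command's exit code (0 = success)
status = subprocess.call(["echo", "hello"])

# captures the command's stdout as bytes
out = subprocess.check_output(["echo", "hello"])
print(status, out)
```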
#3
Re: Python and Shell Commands
Posted 17 November 2008 - 09:38 AM
Swift Arrays and Closures for Mapping and Filtering
I like Erlang, though I can’t claim to be proficient in it. I haven’t even written a “real” program in Erlang yet, but I still enjoying working with it. The other night I was going back through the excellent online book Learn You Some Erlang for Great Good and started working with list comprehensions. In short, a list comprehension in Erlang is about “building sets from sets.” Observe:
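The comprehension in question looks something like this (reconstructed from the description that follows, since the original listing did not survive):

```erlang
[2*N || N <- [1,2,3,4]].
```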
This whacky looking code takes the list [1,2,3,4] and creates a new list by binding each element to the variable N, and then applying the function 2*N. The result is a new list: [2,4,6,8]. You can even do something like convert an array of temperatures given in Celsius to Fahrenheit:
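Something like (a reconstruction, with example values assumed):

```erlang
[(C * 9 / 5) + 32 || C <- [0, 20, 37, 100]].
```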
Slick. Of course, this is a Swift tutorial, so obviously you can do the same thing in Swift, and let’s face it, in a much more readable syntax:
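The Swift version reads roughly like this (a reconstruction of the missing listing, using the Swift 1.x println of the era):

```swift
let numbers = [1.0, 2.0, 3.0, 4.0]
let doubled = numbers.map({
    (number: Double) -> Double in
    return 2 * number
})
println(doubled)  // [2.0, 4.0, 6.0, 8.0]
```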
Granted, this is a bit more verbose than the Erlang example, but if you’ve read any of our other posts you will know we’re a big fan of verbose (also known as readable) code. For those that appreciate a bit more compact form though, you can do:
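Something like (reconstructed):

```swift
let doubled = numbers.map({ 2 * $0 })
```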
The second form is a "single statement closure" that implicitly returns the value of its only statement, and we can omit naming the parameter type and return type. The parameter type is implied as Double since we are mapping over an Array<Double>. Our return type will be promoted to a Double as well.
Of course we aren’t limited to numbers here, we can supply an array of tuples of
Points and output the distance of each point from the origin (0.0, 0.0).
import Foundation

typealias Point = (x: Double, y: Double)

var points: Array<Point> = [(1, 1), (1, 2), (-1, -1), (-1, -2)]
println(points)

var distances = points.map({ (x, y) in
    sqrt(pow(x, 2) + pow(y, 2))
})
println(distances)
Note the elegance in (x, y) in. Each tuple element yielded to our closure is bound to (x, y), thus allowing us to calculate the distance of the point from the origin using the standard formula.
Of course, we aren’t limited to using
Array.map with just numbers, you can operate on any data type you like.
map isn’t the only nifty
Array function. Take a look a this Erlang code (we’re borrowing the example from Learn You Some Erlang):
Once again, Erlang takes the top prize for odd-looking syntax. The [X || X <- [1,2,3,4,5,6,7,8,9,10] looks familiar, but it's followed by , X rem 2 =:= 0]. This is secret code for "supply only elements that are divisible by 2", thus the right half after the comma is a filter. Well, Swift can filter too!
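In Swift, the equivalent is roughly (a reconstruction):

```swift
let evens = Array(1...10).filter({ $0 % 2 == 0 })
println(evens)  // [2, 4, 6, 8, 10]
```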
Very handy. We can also chain map and filter, like this:
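Roughly (reconstructed):

```swift
let evenSquares = Array(1...10).filter({ $0 % 2 == 0 }).map({ $0 * $0 })
println(evenSquares)  // [4, 16, 36, 64, 100]
```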
This code filters out the odd numbers from the array and then squares the remaining items. Or put another way it squares even numbers between 1 and 10.
The Swift Array generic (yes, it's a generic) has a number of other hidden gems. Be sure and check out the Swift Standard Library Reference for more information.
Now, if only Swift would incorporate Erlang's bit syntax. That would be cool.
Editors Note: You may notice we don't show examples in the Xcode playground but rather use the Swift REPL interpreter from the command-line. For details on how to do this see our previous post. | https://dev.iachieved.it/iachievedit/swift-arrays-and-closures-for-mapping-and-filtering/ | CC-MAIN-2019-18 | refinedweb | 601 | 69.82 |
#include <jevois/Debug/Profiler.H>
Simple profiler class.
This class reports the time spent between start() and each of the checkpoint() calls, separately computed for each checkpoint string, at specified intervals. Because JeVois modules typically work at video rates, this class only reports the average time after some number of iterations through the start(), checkpoint(), and stop(). Thus, even if the time between two checkpoints is only a few microseconds, by reporting it only every 100 frames one will not slow down the overall framerate too much. See Timer for a lighter class with only start() and stop().
Definition at line 34 of file Profiler.H.
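A typical per-frame usage pattern looks roughly like this (a sketch; the constructor arguments and stage functions are placeholders, see the constructor documentation for the exact signature):

```cpp
// prof is a jevois::Profiler member of the module
prof.start();                // begin this frame's measurement period
detectFeatures();            // ... first processing stage (placeholder) ...
prof.checkpoint("detect");   // time since start(), averaged over frames
trackFeatures();             // ... second stage (placeholder) ...
prof.checkpoint("track");    // time since the previous checkpoint
prof.stop();                 // end of this measurement period
```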
Constructor.
Definition at line 34 of file Profiler.C.
Note the time for a particular event.
The delta time between this event and the previous one (or start() for the first checkpoint) will be reported. Note that we create a new unique entry in our tables for each description value, so you should keep the number of unique descriptions passed small (do not include a frame number or some parameter value). The description is passed as a raw C string to encourage you to just use a string literal for it.
Definition at line 49 of file Profiler.C.
Start a time measurement period.
Definition at line 43 of file Profiler.C. | http://jevois.org/doc/classjevois_1_1Profiler.html | CC-MAIN-2017-13 | refinedweb | 218 | 64.61 |
Hello.
Why is this code from the Games of Chance ‘Heads or Tails’:
def flipping_coin():
    bet = input('How much are you betting? ')
    outcome = input('Heads of tails? ')
    outcome.lower()
    bet = int(bet)
    coin = round(random.randint(1, 2))
    if coin == 1 and outcome == 'heads':
        print('You won! You won this much ' + str(bet) + ' congratulations!')
        money += bet
        return money
    elif coin == 2 and outcome == 'tails':
        print('You won! You won this much %d congratulations!') % (bet)
        money += bet
        return money
    else:
        print('You lost! You lost this much ' + str(bet) + ' bad luck!')
        money -= bet
        return money

flipping_coin()
throwing this error:
Traceback (most recent call last):
  File "script.py", line 24, in <module>
    flipping_coin()
  File "script.py", line 7, in flipping_coin
    bet = input('How much are you betting? ')
EOFError: EOF when reading a line
I have tried putting the input() for both bet and outcome outside the function. It threw the same error.
Thank you sincerely! | https://discuss.codecademy.com/t/eof-when-reading-line/461831 | CC-MAIN-2022-33 | refinedweb | 153 | 70.6 |
Gecko OS FAQ
Is there a way to run a script from the web setup?
See the setup command. See also: Configuration and Setup, Configuration Scripts.
Where does the module get the UTC time and which port needs to be open on our firewall for this to function?
See the ntp.server variable. The port is UDP port 123.
If I change the baud rate, does it go into effect after a save command, or immediately?
uart.baud changes after save and reboot. You can also apply changes without a save and reboot, using uart_update.
When should I use UART flow control?
You should always use UART flow control to ensure reliable data exchange with no dropped characters. See the uart.flow variable.
Is there anything to be careful about when enabling hardware flow control?
See the uart.flow variable. There are no special considerations about enabling flow control.
How do I turn off all log messages?
Use the command set system.print_level 0. See system.print_level.
How do I run the system in stream mode (UART0) and get the logs and all debug and informational responses from UART1?
Use the following command sequence:
set bus.data_bus uart0
set bus.log_bus uart1
set bus.mode stream
save
How do I verify my Azure certificates?
Tags: azure, tls client, cert, certificate, socket
You can use the following simple Python script to verify your Azure certificates. Change the cert file names and URL to fit your case.
from socket import *
from ssl import *

client_socket = socket(AF_INET, SOCK_STREAM)
tls_client = wrap_socket(client_socket,
                         ssl_version=PROTOCOL_TLSv1,
                         cert_reqs=CERT_REQUIRED,
                         ca_certs="BaltimoreCyberTrustRoot.crt",
                         keyfile="client-key.pem",
                         certfile="client-cert.pem")
tls_client.connect(('HubName.azure-devices.net', 8883))
print('Connected...')

# Close the socket
client_socket.shutdown(SHUT_RDWR)
client_socket.close()
A better question would be "what is retained mode?" Wikipedia states that retained mode is a style of API design in which the graphics library retains the complete object model of the rendering primitives to be rendered. That means that the widget, e.g. an instance of a JButton, is an object that the library keeps around for as long as it is displayed.
Immediate Mode
So let's get back to Immediate mode. When using an immediate mode GUI library, the event processing is directly controlled by the application. There is no button object, there is just a Button(bounds Rectangle, text string) function which immediately draws the button with the given text at the given position and size (argument bounds). The function returns true if the user just clicked the button. (There is a list of immediate mode GUI tutorials on StackOverflow if you want to dig deeper.)
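The idea can be shown without any real UI library; a toy sketch:

```go
package main

import "fmt"

type Rect struct{ X, Y, W, H int }

type Mouse struct {
	X, Y    int
	Clicked bool
}

// Button is a toy immediate mode widget: each frame it "draws" itself
// and immediately reports whether the mouse clicked inside its bounds.
func Button(bounds Rect, text string, m Mouse) bool {
	fmt.Printf("draw button %q at %+v\n", text, bounds)
	inside := m.X >= bounds.X && m.X < bounds.X+bounds.W &&
		m.Y >= bounds.Y && m.Y < bounds.Y+bounds.H
	return m.Clicked && inside
}

func main() {
	m := Mouse{X: 10, Y: 10, Clicked: true}
	if Button(Rect{0, 0, 100, 30}, "Log in", m) {
		fmt.Println("login clicked")
	}
}
```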
The (immediate mode) UI framework I am going to use is raylib, a simple and easy to use library to enjoy video games programming. See its cheat sheet for an overview of its functions. It has a simple API which hides everything regarding the windowing system and environment. I am writing code in Go, so I am using raylib-go. I talked Christian Haas into running this experiment with me. We spent one full day working on the Login Form exercise.
The first test: There is a button
import (
    "testing"
    ...
    rl "github.com/gen2brain/raylib-go/raylib"
)

func TestForm_LoginButton(t *testing.T) {
    var form login.Form
    ui := newTestingUI()
    form.Render(ui)
    if !ui.buttonCalled {
        t.Errorf("Button() was not called")
    }
}
Form is the struct containing the form's data, which is empty for now. Render is a receiver function on the form, which creates a button with no bounds and an empty text.
type Form struct{}

func (form Form) Render(ui FormUI) {
    ui.Button(rl.Rectangle{}, "")
}

To check that certain calls into raylib have been made, there is an interface FormUI between the application code and raylib. In the tests this interface is mocked to verify certain calls have been made. (In Go an interface type is defined as a set of method signatures. This is the way to achieve polymorphism.)
type testingUI struct {
    buttonCalled bool
}

func (ui *testingUI) Button(bounds rl.Rectangle, text string) bool {
    ui.buttonCalled = true
    return false
}

This follows an approach I have found as a possible TDD approach:
- Design and write your methods separated from the actual UI.
- TDD the elements and behaviour.
- Mock single UI elements to verify necessary calls but do not show them.
More Code.
type testingUI struct {
    // verify if Button method has been called (mock)
    buttonCalled map[string]bool
    // record button's text and bounds for later inspection (spy)
    buttonText   map[string]string
    buttonBounds map[string]rl.Rectangle
    // return value of the Button method = user interaction (stub)
    buttonResults map[string]bool
    ...
}

func newTestingUI() *testingUI {
    ui := &testingUI{
        buttonCalled:  make(map[string]bool),
        buttonText:    make(map[string]string),
        buttonBounds:  make(map[string]rl.Rectangle),
        buttonResults: make(map[string]bool),
        ...
    }
    return ui
}

func (ui *testingUI) Button(id string, bounds rl.Rectangle, text string) bool {
    ui.buttonCalled[id] = true
    ui.buttonText[id] = text
    ui.buttonBounds[id] = bounds
    result := ui.buttonResults[id]
    ui.buttonResults[id] = false // reset button click after first call
    return result
}

func TestForm_LoginButton(t *testing.T) {
    var form login.Form
    ui := newTestingUI()
    form.Render(ui)
    if !ui.buttonCalled["login"] {
        t.Errorf("not found")
    }
}

func TestForm_LoginButtonText(t *testing.T) {
    var form login.Form
    ui := newTestingUI()
    form.Render(ui)
    if "Log in" != ui.buttonText["login"] {
        t.Errorf("is not \"Log in\"")
    }
}

func TestForm_LoginButtonBounds(t *testing.T) {
    var form login.Form
    ui := newTestingUI()
    form.Render(ui)
    expectedBounds := rl.Rectangle{300, 165, 110, 30}
    if ui.buttonBounds["login"] != expectedBounds {
        t.Errorf("expected %v, but was %v", expectedBounds, ui.buttonBounds)
    }
}
type FormUI interface {
    Button(id string, bounds rl.Rectangle, text string) bool
    ...
}

func (form *Form) Render(ui FormUI) bool {
    buttonBounds := rl.Rectangle{X: 300, Y: 165, Width: 110, Height: 30}
    if ui.Button("login", buttonBounds, "Log in") {
        // TODO authenticate
    }
    return false
}

The third test, TestForm_LoginButtonBounds, checks the position and size of the button. These properties are considered "layout". I do not like to test layout. I had to open GIMP to decide on the proper rectangle in expectedBounds, which I really dislike. I also expect these values to change a lot during initial development. Additionally, Rectangle is a raylib type and so we depend on raylib in our code. Other options would have been:
- Ignore layout completely. But then we would need to revisit all calls and add the Rectangles later.
- Use abstract coordinates, i.e. map my coordinates into raylib Rectangles. That seemed like an extra overhead.
- Move the responsibility of layout into the wrapper. There would be a button method for each button in the application and there would be more code outside my tests.
- Move out the bounds and store them in the wrapper with a simple lookup on the id. Moving out stuff is against the nature of Immediate mode because the whole UI is expected to be in the code.
type RaylibFormUI struct{}

func (ui *RaylibFormUI) Button(id string, bounds rl.Rectangle, text string) bool {
    return raygui.Button(bounds, text)
}

This should give you an idea how things worked out. If you want to follow our TDD steps, here are the individual commits.
Is this MVP?
The Form structure could be seen as a UI model. Later it will hold the user name and password data. The form receiver function func (form *Form) Render(ui FormUI) bool contains the presentation logic, and the FormUI interface plays the role of the view. FormUI will delegate many functions to raylib, so it could be generated from its original source code. This shows the tight coupling of FormUI and the underlying UI library.
As in my initial experiment, we used TDD but there was little pressure on the design of the code. There was some pressure on the design of the API of Form, but there was no pressure on its internal workings nor on FormUI at all.
Try it yourself
I will continue my experiments and I would like to hear your ideas on the topic. The Go starting code with raylib-Go, its required dependencies and linter setup is available in the Login Form Kata. Try it yourself! | http://blog.code-cop.org/2020/03/ | CC-MAIN-2020-50 | refinedweb | 996 | 60.82 |
16 U.S. Code § 4910 - Prohibited acts
(a) Prohibitions
(1) In general.—Subject to paragraph (2), it is unlawful for any person to—
(A)
(B)
import an exotic bird of a species that pursuant to section 4905(a)(2)(B) of this title is included in a list under section 4905 of this title, if the bird was not captive bred at a qualifying facility; or
(b) Burden of proof for exemptions
Any person claiming the benefit of any exemption or permit under this chapter shall have the burden of proving that the exemption or permit is applicable or has been granted, and was valid and in force at the time of the alleged violation.
(Pub. L. 102–440, title I, § 111, Oct. 23, 1992, 106 Stat. 2230.)
I have a sample code using java API to connect to MDMCE.
import com.ibm.pim.context.Context;
import com.ibm.pim.context.PIMContextFactory;

public static void main(String[] args) {
    Context ctx = null;
    try {
        ctx = PIMContextFactory("admin", "password", "company");
        ...
    }
    ...
}
If I want to run this as a remote application that establish the connection to MDMCE and list the catalogs, where do I specify the server IP, port, or JNDI port to connect and execute my java application?
Answer by YiWa (107) | Oct 28, 2016 at 03:43 AM
You need to copy the files of $TOP from the server to your client machine and make sure the configuration files are correct. When you call an MDM PIM Java program from the command shell, you have to provide the following two parameters:
-DTOP=$TOP -DCCD_ETC_DIR=$CCD_ETC_DIR
so that the client application can read the configuration files and connect to the correct server.
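For example (a sketch only — the install path, classpath layout, and client class name `MyPimClient` are placeholders, not the actual product layout):

```shell
# Hypothetical client invocation; adjust paths to your own install.
TOP=/opt/mdmce
CCD_ETC_DIR=$TOP/etc
CMD="java -DTOP=$TOP -DCCD_ETC_DIR=$CCD_ETC_DIR -cp $TOP/jars/*:. MyPimClient"
echo "$CMD"
```

With both system properties set, the client reads the copied configuration files and connects to the server they name.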
java.lang.Object
    org.netlib.lapack.Dtrsen
public class Dtrsen
Following is the description from the original Fortran source. For each array argument, the Java version will include an integer offset parameter, so the arguments may not match the description exactly. Contact seymour@cs.utk.edu with any questions.
*  DTRSEN reorders the real Schur factorization of a real matrix
*  A = Q*T*Q**T, so that a selected cluster of eigenvalues appears
*  in the leading diagonal blocks of the upper quasi-triangular
*  matrix T, and the leading columns of Q form an orthonormal basis
*  of the corresponding right invariant subspace.
*
*  T must be in Schur canonical form: block upper triangular with
*  1-by-1 and 2-by-2 diagonal blocks; each 2-by-2 block has its
*  diagonal elements equal and its off-diagonal elements of
*  opposite sign.
*
*  Arguments
*  =========
*
*  JOB     (input) CHARACTER*1
*          = 'N': no condition numbers are required;
*          = 'E': condition number for the cluster of eigenvalues (S);
*          = 'V': condition number for the invariant subspace (SEP);
*          = 'B': both S and SEP.
*
*  COMPQ   (input) CHARACTER*1
*          = 'V': update the matrix Q of Schur vectors;
*          = 'N': do not update Q.
*
*  SELECT  (input) LOGICAL array, dimension (N)
*          Specifies the eigenvalues in the selected cluster.
*
*  N       (input) INTEGER
*          The order of the matrix T.  N >= 0.
*
*  T       (input/output) DOUBLE PRECISION array, dimension (LDT,N)
*          On entry, the upper quasi-triangular matrix T in Schur
*          canonical form; on exit, the reordered matrix T with the
*          selected eigenvalues in the leading diagonal blocks.
*
*  LDT     (input) INTEGER
*          The leading dimension of the array T.  LDT >= max(1,N).
*
*  Q       (input/output) DOUBLE PRECISION array, dimension (LDQ,N)
*          If COMPQ = 'V', on entry the matrix Q of Schur vectors and
*          on exit Q postmultiplied by the reordering transformation.
*
*  LDQ     (input) INTEGER
*          The leading dimension of the array Q.
*          LDQ >= 1; and if COMPQ = 'V', LDQ >= N.
*
*  WR      (output) DOUBLE PRECISION array, dimension (N)
*  WI      (output) DOUBLE PRECISION array, dimension (N)
*          The real and imaginary parts of the reordered eigenvalues.
*
*  M       (output) INTEGER
*          The dimension of the specified invariant subspace.
*          0 <= M <= N.
*
*  S       (output) DOUBLE PRECISION
*          If JOB = 'E' or 'B', a lower bound on the reciprocal
*          condition number of the selected cluster of eigenvalues;
*          if JOB = 'N' or 'V', S is not referenced.
*
*  SEP     (output) DOUBLE PRECISION
*          If JOB = 'V' or 'B', SEP is the estimated reciprocal
*          condition number of the specified invariant subspace. If
*          M = 0 or N, SEP = norm(T).
*          If JOB = 'N' or 'E', SEP is not referenced.
*
*  WORK    (workspace/output) DOUBLE PRECISION array, dimension (LWORK)
*          On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
*
*  LWORK   (input) INTEGER
*          The dimension of the array WORK.
*          If JOB = 'N', LWORK >= max(1,N);
*          if JOB = 'E', LWORK >= M*(N-M);
*          if JOB = 'V' or 'B', LWORK >= 2*M*(N-M).
*
*  IWORK   (workspace) INTEGER array, dimension (LIWORK)
*          If JOB = 'N' or 'E', IWORK is not referenced.
*
*  LIWORK  (input) INTEGER
*          The dimension of the array IWORK.
*          If JOB = 'N' or 'E', LIWORK >= 1;
*          if JOB = 'V' or 'B', LIWORK >= M*(N-M).
*
*  INFO    (output) INTEGER
*          = 0: successful exit;
*          < 0: if INFO = -i, the i-th argument had an illegal value;
*          = 1: the reordering of T failed.
*
*  =====================================================================
public Dtrsen()
public static void dtrsen(java.lang.String job, java.lang.String compq, boolean[] select, int _select_offset, int n, double[] t, int _t_offset, int ldt, double[] q, int _q_offset, int ldq, double[] wr, int _wr_offset, double[] wi, int _wi_offset, intW m, doubleW s, doubleW sep, double[] work, int _work_offset, int lwork, int[] iwork, int _iwork_offset, int liwork, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/Dtrsen.html | CC-MAIN-2017-17 | refinedweb | 344 | 57.87 |
Passing multiple arguments to map() function in Python
In this article, we are going to discuss various ways to use the map function of Python. We will also go through some examples to understand even better.
First of all, what does map() do? It takes a function and one or more iterables:

map(function, *iterables)

It returns a map object (a lazy iterator). What it does is take the items from the iterables, pass them to the function as arguments, and yield the function's return values.
Example 1:
Let's say I have a million-element iterable (e.g., a list) and I want to apply my custom function to every object in the list.
def custom_function(x):
    if x % 2 == 0:
        return "even"
    return "odd"

myIterable = list(range(0, 1000000, 5))
print(map(custom_function, myIterable))
print(list(map(custom_function, myIterable)))

The first print gives output like <map object at 0x6623a4>, which means the returned map object is just stored at that address; nothing has been computed yet. In the second print we cast the map object to a list, which consumes it and prints all the results.
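Since the returned map object is a lazy, one-shot iterator, a quick sketch (the printed address will differ on your machine):

```python
result = map(str.upper, ["a", "b", "c"])
print(result)        # something like <map object at 0x7f...>
print(list(result))  # ['A', 'B', 'C']
print(list(result))  # [] -- the iterator is exhausted after one pass
```

So if you need the results more than once, cast to a list first and keep that.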
Example 2:
This is the most used line during my journey of competitive programming and love to share it. In general, you are asked to take an array of space-separated integers as input. I use the map here.
print(list(map(int,input().split())))
Above, what happens is: first we take the string as input, which contains space-separated integers. Then we split it on whitespace, so that we have a list of substrings. Each substring is then passed to the int function for casting.
Example 3:
Here we discuss the meaning of *iterables in the function definition, which means that we can pass any number of iterables. Let's see a simple example of how it works.

The problem: we have two lists, and we need to add them index-wise.
def myadd(a, b):
    return a + b

print(list(map(myadd, [1, 2, 3], [10, 10, 10])))

This gives the output [11, 12, 13], which is the index-wise addition of the elements. What happens is that the first element of the first list is mapped to a, and b takes the first value in the second list, so a=1, b=10. The myadd function then uses these arguments and returns the value. It continues likewise until the end.
Final Example:
Let's see one final, more sophisticated example. Our function is given three numbers, the side lengths of a triangle, and it has to tell whether they form a valid triangle. I want to do it in one line, so I'm using a lambda function.
print(list(map(lambda a,b,c: a+b>c and b+c>a and c+a>b,[3,8,1],[4,6,2],[5,10,3])))
The lambda function takes three integers and returns True if they can form a triangle, so the output is [True, True, False].

How we get this:
First step: a=3, b=4, c=5. These satisfy the three triangle conditions, so it returns True.

Second step: a=8, b=6, c=10. These also satisfy the three conditions, so it returns True.

Third step: a=1, b=2, c=3. Since a+b is not greater than c (1+2 = 3), it returns False.
Please feel free to comment below with your doubts and thoughts.
The C++ function std::array::crend() returns a constant reverse iterator which points to the past-end element of array. An iterator returned by this method can be used to iterate array contents but cannot be used to modify array contents, even if array object itself is not constant.
Following is the declaration for the std::array::crend() function from the <array> header.
const_reverse_iterator crend() const noexcept;
None
Returns a reverse constant iterator pointing to the past-end element of the array. This is a place-holder location and doesn't store any actual data. So dereferencing this will cause undefined behavior.
This member function never throws an exception.
Constant i.e. O(1)
Let us see how to use a reverse iterator to print array contents in reverse order.
#include <iostream>
#include <array>
using namespace std;

int main(void) {
   array<int, 5> arr = {10, 20, 30, 40, 50};

   auto s = arr.crbegin();
   auto e = arr.crend();

   while (s < e) {
      cout << *s << " ";
      ++s;
   }
   cout << endl;

   return 0;
}
Let us compile and run the above program, this will produce the following result −
50 40 30 20 10 | https://www.tutorialspoint.com/cpp_standard_library/cpp_array_crend.htm | CC-MAIN-2020-05 | refinedweb | 182 | 56.05 |
1.5 Basic Data Structures
Now that we got into Hamlet, let's analyze the text a bit further. For example, for each Dramatis Persona, we'd like to collect some information, such as how many words they say in total, and how rich their vocabulary is. To do that, we need to be able to associate several data items with one persona. To group such information in one place, we can define a data structure as follows:
struct PersonaData {
   uint totalWordsSpoken;
   uint[string] wordCount;
}
In D you get structs and then you get classes. They share many amenities but have different charters: structs are value types, whereas classes are meant for dynamic polymorphism and are accessed solely by reference. That way confusions, slicing-related bugs, and comments à la // No! Do NOT inherit! do not exist. When you design a type, you decide upfront whether it'll be a monomorphic value or a polymorphic reference. C++ famously allows defining ambiguous-gender types, but their use is rare, error-prone, and objectionable enough to warrant simply avoiding them by design.
In our case, we just need to collect some data and we have no polymorphic ambitions, so using struct is a good choice. Let's now define an associative array mapping persona names to PersonaData values:
PersonaData[string] info;
and all we have to do is fill info appropriately from
hamlet.txt. This needs some work because a character's paragraph may extend on several lines, so we need to do some simple processing to coalesce physical lines into paragraphs. To figure out how to do that, let's take a look at a short fragment from
hamlet.txt, dumped verbatim below (with leading spaces made visible for clarity):
␣␣Pol. Marry, I will teach you! Think yourself a baby ␣␣␣␣That you have ta'en these tenders for true pay, ␣␣␣␣Which are not sterling. Tender yourself more dearly, ␣␣␣␣Or (not to crack the wind of the poor phrase, ␣␣␣␣Running it thus) you'll tender me a fool. ␣␣Oph. My lord, he hath importun'd me with love ␣␣␣␣In honourable fashion. ␣␣Pol. Ay, fashion you may call it. Go to, go to!
Whether or not Polonius' enthusiasm about goto was a factor in his demise is, even to this day, a matter of speculation. Regardless of that, let's note how each character's line is preceded by exactly two spaces, followed by the (possibly contracted) character's name, followed by a period and a space, finally followed by the actual content of the line. If a logical line extends to multiple physical lines, the continuations are always preceded by exactly four spaces. We could do such simple pattern matching by using a regular expression engine (found in the std.regex module), but we want to learn arrays so let's match things "by hand." We only enlist the help of the Boolean function a.startsWith(b), defined by std.algorithm, which tells whether a starts with b.
The main driver reads input lines, concatenates them in logical paragraphs (ignoring everything that doesn't fit our pattern), passes complete paragraphs to an accumulator function, and at the end prints the desired information.
import std.algorithm, std.ctype, std.regex,
   std.range, std.stdio, std.string;

struct PersonaData {
   uint totalWordsSpoken;
   uint[string] wordCount;
}

void main() {
   // Accumulates information about dramatic personae
   PersonaData[string] info;
   // Fill info
   string currentParagraph;
   foreach (line; stdin.byLine()) {
      if (line.startsWith("    ")
            && line.length > 4
            && isalpha(line[4])) {
         // Persona is continuing a line
         currentParagraph ~= line[3 .. $];
      } else if (line.startsWith("  ")
            && line.length > 2
            && isalpha(line[2])) {
         // Persona just started speaking
         addParagraph(currentParagraph, info);
         currentParagraph = line[2 .. $].idup;
      }
   }
   // Done, now print collected information
   printResults(info);
}
After we've equipped ourselves with information on how arrays work, the code should be self-explanatory, save for the presence of .idup. Why is it needed, and what if we forgot about it?
The foreach loop that reads from stdin deposits successive lines of text in the variable line. Because it would be wasteful to allocate a brand new string for each line read, the contents of line is reused every pass through the loop. As such, if you want to squirrel away the contents of a line, you better make a copy of it. Obviously currentParagraph is meant to indeed save text, so duplication is needed, hence the presence of .idup. Now, if we forgot .idup and subsequently the code would still compile and run, the results would be nonsensical and the bug rather hard to find. Having a part of a program modify data held in a different part of the program is very unpleasant to track down because it's a non-local effect (just how many .idups could one forget in a large program?). Fortunately, that's not the case because the types of line and currentParagraph reflect their respective capabilities: line has type char[], i.e., an array of characters that could be overwritten at any time; whereas currentParagraph has type string, which is an array of characters that cannot be individually modified. The two cannot refer to the same memory content because line would break the promise of currentParagraph. So the compiler refuses to compile the erroneous code and demands a copy, which you provide in the form of .idup, and everybody's happy. (The "i" in "idup" stands for "immutable.") On the other hand, when you copy string values around, there's no more need to duplicate the underlying data—they can all point to the same memory because it's known neither will overwrite it, which makes string copying at the same time safe and efficient. Better yet, strings can be shared across threads without problems because, again, there's never contention. Immutability is really cool indeed. If, on the other hand, you need to modify individual characters intensively, you may want to operate on char[], at least temporarily.
PersonaData as defined above is very simple, but in general structs can define not only data, but also other entities such as private sections, member functions, unittests, operators, constructors, and destructors. By default, each data member of a structure is initialized with its default initializer (zero for integral numbers, NaN for floating-point numbers, and null for arrays and other indirect-access types). Let's now implement addParagraph that slices and dices a line of text and puts it into the associative array.
The line as served by main has the form "Ham. To be, or not to be-that is the question:" We need to find the first ". " to distinguish the persona's name from the actual line. To do so, we use the find function. a.find(b) returns the right-hand portion of a starting with the first occur-rence of b. (If no occurrence is found, find returns an empty string.) While we're at it, we should also do the right thing when collecting the vocabulary. First, we must convert the sentence to lowercase such that capitalized and non-capitalized words count as the same vocabulary element. That's easily taken care of with a call to tolower. Second, we must eliminate a strong source of noise: punctuation that makes for example "him." and "him" count as distinct words. To clean up the vocabulary, all we need to do is pass an additional parameter to split mentioning a regular expression that eliminates all chaff: regex("[ \t,.;:?]+"). With that argument, the split function will consider any sequence of the characters mentioned in between '[' and ']' as part of word separators. That being said, we're ready to do a lot of good stuff in just little code:
void addParagraph(string line, ref PersonaData[string] info) {
   // Figure out persona and sentence
   line = strip(line);
   auto sentence = line.find(". ");
   if (sentence.empty) {
      return;
   }
   auto persona = line[0 .. $ - sentence.length];
   sentence = tolower(strip(sentence[2 .. $]));
   // Get the words spoken
   auto words = split(sentence, regex("[ \t,.;:?]+"));
   // Insert or update information
   auto data = persona in info;
   if (data) {
      // heard this persona before
      data.totalWordsSpoken += words.length;
      foreach (word; words) ++data.wordCount[word];
   } else {
      // first time this persona speaketh
      PersonaData newData;
      newData.totalWordsSpoken = words.length;
      foreach (word; words) newData.wordCount[word] = 1;
      info[persona] = newData;
   }
}
The expression persona in info not only tells whether a given string is present as a key in an associative array, but also provides a handy pointer to the corresponding value in the array. In D there is no need (as it is in C and C++) to use '->' to access data referenced by a pointer—the regular field access operator '.' works unambiguously. If the key was not found, our code creates and inserts a brand new PersonaData, which concludes the addParagraph function.
Finally, let's implement printResults to print a quick summary for each persona:
void printResults(PersonaData[string] info) {
   foreach (persona, data; info) {
      writefln("%20s %6u %6u", persona,
         data.totalWordsSpoken, data.wordCount.length);
   }
}
Ready for a test drive? Save and run!
               Queen   1104    500
                 Ros    738    338
                 For     55     45
                Fort     74     61
           Gentlemen      4      3
               Other    105     75
                Guil    349    176
                 Mar    423    231
                Capt     92     66
                Lord     70     49
                Both     44     24
                 Oph    998    401
               Ghost    683    350
                 All     20     17
              Player     16     14
                Laer   1507    606
                 Pol   2626    870
              Priest     92     66
                 Hor   2129    763
                King   4153   1251
          Cor., Volt     11     11
           Both [Mar      8      8
                 Osr    379    179
                Mess    110     79
              Sailor     42     36
             Servant     11     10
          Ambassador     41     34
                Fran     64     47
               Clown    665    298
                Gent    101     77
                 Ham  11901   2822
                 Ber    220    135
                Volt    150    112
                 Rey     80     37
Now that's some fun stuff. Unsurprisingly, our friend "Ham" gets the lion's share by a large margin. Voltemand's (Volt) role is rather interesting: he doesn't have many words to say, but in these few words he does his best in displaying a solid vocabulary, not to mention the Sailor, who almost doesn't repeat himself. Also compare the well-rounded Queen with Ophelia: the Queen has about 10% more words to say than Ophelia, but her vocabulary is about 25% larger.
As you can see, the output has some noise in it (such as
"Both [Mar"), easy to fix by a diligent programmer and hardly affecting the important statistics. Nevertheless, fixing the last little glitches would be an instructive (and recommended) exercise. | http://www.informit.com/articles/article.aspx?p=1381876&seqNum=5 | CC-MAIN-2014-42 | refinedweb | 1,718 | 62.17 |
Good grief. Does the world really need another library for printing
trace messages? Probably not, but this one addresses some of the problems that tend
to afflict the squillions of other libraries out there:
LOG_MSG( "The answer is " << iFoo ) ;
Good logging is an essential part of any developer's toolkit. You don't always have
the luxury of being able to run your app from an IDE (e.g., NT services or CGI processes)
and if you don't know exactly what your program is doing, well, you're in trouble!
Furthermore, while this library gives you the option of disabling compilation of
all logging code, I'm a big fan of leaving it in for release builds. Then, when (not if!)
your customers start to have problems, you can just turn logging on via some hidden
switches and then have at least a clue as to what's going on.
In summary, these are the features offered by this library:
First, a quick primer for those of you who are new to streams. An ostream
(or output stream) is simply somewhere where you can send data. That's it! Well, not quite,
but I'm not going to go into the differences between a stream and a streambuf here.
Look it up. The point is that the code sending the data doesn't have to know the mechanics
of how the data gets to where it's going, or even where it's going to. It just gives
the ostream a pile of data and says "deal with it!".
So, if you wrote a function like this:
void foo( ostream& os )
{
os << "Hello world!" << endl ;
}
this accepts an ostream and sends the message "Hello world!" to it.
cout is a special ostream that sends its output to stdout, so writing:
foo( cout ) ;
would print "Hello world!" to the console.
Similarly, ofstream is an ostream-derived class that sends its output to a file, so this would send the message to the specified file:
ofstream outputFile( "greeting.txt" ) ;
foo( outputFile ) ;
So, how does all of this relate to this article? CMessageLog is my class
that manages log messages by forwarding them on to an ostream object that you specify.
By passing cout to the CMessageLog constructor, you can print your trace messages
to the console, but by installing an ofstream object, you can send your trace messages to a file.
But wait, there's more! I have in the past written an ostream-derived class that sends its data over a socket,
so with a single line of code, you could plug one of those babies into this library and have
instant remote logging. Cool! Or you could install a stringstream to keep your log messages
in memory. Or one that records log messages as rows in a database. One of the guys
I work with wants to write a ostream wrapper for OutputDebugString() so that we can send
log messages to the debugger (hi Pete - is it ready yet?). I've even written a library to generate PDF's
that had a stream-based interface and tried plugging that into this library. Sending trace messages
to a PDF: totally useless but a neat validation of the power of streams
Time for some examples.
This is how to use the library in its simplest form:
#define _LOG // need this defined somewhere to enable logging to be compiled
#include "log/log.hpp"
// create and configure a message log
CMessageLog myLog( cout ) ;
myLog.enableTimeStamps( true ) ;
myLog.enableDateStamps( true ) ;
// log a message
myLog << "Hello world!" << endl ;
This produces the following output:
01jan02 12:48:19 | Hello world!
Most applications will typically only need the one log and so a global instance is provided
for you as a convenience. This object can be accessed via the global function theMessageLog().
Some macros have been defined as well to send messages to this
global object:
// let's send our output to a file this time
ofstream logFile( "log.txt" ) ;
theMessageLog().setOutputStream( logFile ) ;
// log the message (to the file)
LOG_MSG( "Hello world!" ) ;
Now we'll create some message groups, that is, groups of messages that can be
enabled or disabled individually at runtime.
// create our message groups
CMessageGroup gMsgGroup1 ;
CMessageGroup gMsgGroup2 ;
CMessageGroup gMsgGroup3 ;
// enable/disable our message groups
gMsgGroup1.enableMsgGroup( true ) ;
gMsgGroup2.enableMsgGroup( true ) ;
gMsgGroup3.enableMsgGroup( false ) ;
// output some messages
LOG_GMSG( gMsgGroup1 , "This is a message from group 1." ) ;
LOG_GMSG( gMsgGroup2 , "This is a message from group 2." ) ;
LOG_GMSG( gMsgGroup3 , "This is a message from group 3." ) ;
In this example, only the first two messages would appear. The third would not because its
group has been disabled. Note that I used the LOG_GMSG() macro instead of LOG_MSG().
The former will check to see if the message group is enabled before outputting
the message, the latter doesn't check anything and unconditionally logs the message.
Now for a real-life example. Let's say I'm writing a server application that
accepts requests on a socket, does some processing and sends back a response.
I might want to set up three message groups, one to log incoming requests, one
for the processing, and one to log the responses being sent back. Using the helper
macros, I might define them like this:
DEFINE_MSG_GROUP( gReqMsgGroup , "req" , "Log incoming requests." )
DEFINE_MSG_GROUP( gProcMsgGroup , "proc" , "Log request processing." )
DEFINE_MSG_GROUP( gRespMsgGroup , "resp" , "Log outgoing responses." )
Note that I gave each group a name which can be used identify each group in addition
to their automatically-assigned numeric ID's. Each one also has a brief description
which will be printed out if you call CMessageGroup::dumpMsgGroups().
Take a look at the demo to see how this works.
I also usually define some helper macros of my own to log messages:
#define LOG_REQ_MSG( msg ) LOG_GMSG( gReqMsgGroup , msg )
#define LOG_PROC_MSG( msg ) LOG_GMSG( gProcMsgGroup , msg )
#define LOG_RESP_MSG( msg ) LOG_GMSG( gRespMsgGroup , msg )
Now, I could write my server to be something like this:
string processRequest( const string& req ) ;

int main( int argc , char* argv[] )
{
// enable any message groups specified in the command line
CMessageGroup::disableAllMsgGroups( true ) ;
if ( argc > 1 )
CMessageGroup::enableMsgGroups( argv[1] , true ) ;
// main loop
for ( ; ; )
{
// wait for the next request (let's assume it's just a string)
string req = acceptRequest() ;
LOG_REQ_MSG( "Received a request: " << req ) ;
// process the request
string resp = processRequest( req ) ;
// return the response
LOG_RESP_MSG( "Sending response: " << resp ) ;
}
}
string processRequest( const string& req )
{
// process the request
LOG_PROC_MSG( "Processing request: " << req ) ;
// return the response (just the same string as the request)
return req ;
}
Now, when I start my server app, I can specifiy which message groups I want
enabled:
server.exe req,resp <== log requests & responses only, no processing
I would also add command line switches to turn on date/time stamping, etc.
You could also, of course, add a UI to dynamically enable or disable message
groups by calling enableMsgGroup() for the appropriate CMessageGroup
objects. Or perhaps periodically reload the settings from an INI file.
I've been lurking around CodeProject for a long time and figured it was about time
I got off my butt and put something back in. This is one of the hardest-working
libraries in my toolkit and while the implementation is a bit clunky - it was written
way back in '97, pretty early on in my C++ days - I hope you guys find it useful.
Red Hat Bugzilla – Bug 71124
"initlog -c any_cmd" exits with -1 if executed from parent ignoring SIGCHLD
Last modified: 2014-03-16 22:29:56 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT)
Description of problem:
If a process sets the SIGCHLD handler to SIG_IGN and then does a fork/exec of
initlog with the -c option, initlog will always exit with -1. It is suppose to
exit with the exit code of the command specified with the -c option. I looked
at the source for initlog and in process.c (inside the monitor() function), the
waitpid() is failing with -1 and errno set to ECHILD. This is the expected
behavior for waitpid() if SIGCHLD is ignored (see waitpid man page). Programs
using waitpid() should always explicitly set the SIGCHLD handler to something
other than SIG_IGN.
I modified the version of initlog I was using (initscripts-5.00) by adding
a "signal(SIGCHLD,SIG_DFL);" inside the function forkCommand()
in process.c and this fixed the problem. I'm using Redhat 6.2. The bug also
exists in Redhat 7.3.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1.Compile and run the following program that ignores SIGCHLD and then does a
fork and exec of "initlog -c /bin/true" (/bin/true always exits with 0). The
program then does a wait() and prints the status.
#include <signal.h>
#include <string.h>
#include <errno.h>
/* additional headers needed for printf, fork/exec, wait and exit */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int rc, pid;
    int status;

    signal(SIGCHLD, SIG_IGN);

    if ((pid = fork()) > 0) {
        /* parent */
        rc = wait(&status);
        if (rc == pid) {
            printf("status is %d (it should be 0)\n", status);
        } else {
            perror("wait");
        }
    } else if (pid == 0) {
        execlp("/sbin/initlog", "initlog", "-c", "/bin/true", (char *)NULL);
        perror("execlp");
        exit(1);
    }
    return 0;
}
Actual Results: From the above program I get the following output:
status is 65280 (it should be 0)
Expected Results: I should've seen the following because initlog is suppose to
exit with the exit code of the command, which is 0:
status is 0 (it should be 0)
Additional info:
Note that the child process, initlog, inherits the ignored signal handling for
SIGCHLD from the parent. If I change the sample program so that the signal
handler is set to SIG_DFL, I get the expected output.
I discovered this bug from an rc script "startup failure" logged
in /var/log/messages. I have a program that ignores SIGCHLD and
executes "/etc/rc.d/init.d/dhcpd restart". dhcpd is successfully restarted but
I get a message logged stating that "dhcpd startup failed"
in /var/log/messages. By changing my program to not ignore SIGCHLD eliminates
failed message. Regardless, initlog should explicitly set the SIGCHLD handler
since it uses waitpid().
This is the same problem as described in report #64603. It's fixed with a simple
1 line addition to initlog.c: signal(SIGCHLD, SIG_DFL).
*** This bug has been marked as a duplicate of 64603 ***
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated. | https://bugzilla.redhat.com/show_bug.cgi?id=71124 | CC-MAIN-2018-13 | refinedweb | 502 | 64.1 |
Patrick Craig
NAG Ltd, Wilkinson House,
Jordan Hill Road, Oxford OX2 8DR, UK
AbstractThis paper is aimed at IRIS Explorer users who want to create their own data types. It is intended to be read in conjunction with the information given in the Creating User-defined Data Types chapter of the IRIS Explorer Module Writer's Guide. A new data type for handling statistical data is specified and the procedure for implementing and using the new type is described.
IRIS Explorer is a powerful scientific visualisation system that is currently aimed at computational physicists, chemists and engineers [1]. The IRIS Explorer data types are therefore designed to hold the data structures used by these workers. However, IRIS Explorer was never intended to be a closed system and as well as being able to create new modules using the existing IRIS Explorer types, users can create their own data types to handle unsupported data structures. The work described in this paper is part of an ongoing project to integrate the functionality of the Genstat statistical package into IRIS Explorer. Genstat is a very general statistics program that includes facilities for data management and manipulation, statistical analysis and graphical display In SS2 the new data type is described and specified as an IRIS Explorer type definition file. Section 3 describes how the type definition file is processed to produce the files required to use the type. In SS4 the automatically generated Application Programming Interface (API) for the type are described and example C code using the API functions is provided.
A data type was required that could hold the basic data structures that are used in the Genstat statistical package. These are variables that consist of an identifier (name) and a one-dimensional array of values. There are three types of variable that differ in the way the values are interpreted. A typical data set is made up of a number of variables of one or more types and each observation in the data set is represented by the values of each variable at a given position in the variable arrays. A data set can therefore be thought of as a variable by observation two-dimensional matrix. In section 2.1 the three types of Genstat variable are described and section 2.2 gives an example of how a data set could be stored in these variable types. The IRIS Explorer type definition file for the data type is described in section 2.3.
variate
The values of a variate are integer or floating point numbers. variates are normally used to store quantitative data.
text
text values are strings that are used as observation identifiers.
factor
factors are used to group points into subsets of the total data set. The values of a factor are therefore restricted to a limited set of possible levels. Each level has an identifier or label.
River Length Continent Nile 6695 Africa Amazon 6570 S.America Mississippi 6020 N.America Yangtze 5471 Asia Ob 5410 Asia 'Huang He' 4840 Asia Zaire 4630 Africa Amur 4415 Asia Lena 4269 Asia Mackenzie 4240 N.America Niger 4183 Africa Mekong 4180 Asia Yenisey 4090 Asia Murray 3717 Oceania Volga 3688 Europe
This data set shows the 15 longest rivers in the world. The three columns in this data set represent three data structures. Each column is headed by its respective identifier, subsequent rows represent observations in the data set, in this case rivers. The first column gives the name of the river and could be stored in a text structure called River. The second column would be stored in a variate called Length. The third column is an example of a factor called Continent with six levels.
The river data set could therefore be stored in the following three variables
Variable 1 Type = Text Identifier = River Values = Nile, Amazon, Mississippi, Yangtze, Ob, 'Huang He', Zaire, Amur, Lena, Mackenzie, Niger, Mekong, Yenisey, Murray, Volga Variable 2 Type = Variate Identifier = Length Values = 6695, 6570, 6020, 5471, 5410, 4840, 4630, 4415, 4269, 4240, 4183, 4180, 4090, 3717, 3688 Variable 3 Type = Factor Identifier = Continent Values = 0,1,2,3,3,3,0,3,3,2,0,3,3,4,5 Labels = Africa, S.America, N.America, Asia, Oceania, Europe
Each of the variable types could be defined as an individual IRIS Explorer data type. However, as many of the modules that would use the new data type would be able to use data in two or all of the above forms and to reduce the number of connections between modules it was decided to create a single data type that could hold all three data structures. The first step in creating a new data type in IRIS Explorer is to create a data type definition file that describes the type in a format that IRIS Explorer can understand. The type definition file for gnBase (Genstat basic type) is shown below.
#include <cx/DataCtlr.h> #include <cx/Typedefs.t> typedef enum { gn_Variate, gn_Factor, gn_Text } gnPrimType; shared typedef struct { long len "Length"; string identifier "Identifier"; gnPrimType gnType "Type"; switch (gnType) { case gn_Variate : double values[len] "Values"; case gn_Text : string values[len] "Values"; case gn_Factor : long values[len] "Values"; long levels "Levels"; string labels[levels] "Labels"; } d; } gnData; shared root typedef struct { long nVar "Num variables"; gnData data[nVar] "Data array"; } gnBase;
The gnBase structure is declared as a shared root structure with two elements, nVar, the number of variables, and data, the variable array. It is declared as a root structure so that it can be used as an input and output port data type in IRIS Explorer. The shared attribute of gnBase means that the data structure will be shared between modules and allocation and deallocation of the memory used for the structure will be controlled in IRIS Explorer by reference counting. The gnBase variable array is an array of gnData which is declared above it.
The gnData structure stores a single variable. Its elements are len, the number of values, identifier, the variable identifier, gnType, variable type, and values, the one dimensional array of values. The switch construct is used to set the type of the values array depending on variable type. The factor type has two additional elements, namely levels, the number of levels, and labels, the labels for the levels. The gnData structure is also shared because it will be shared between modules, but is not a root type because it was decided to only pass the complete gnBase structure between modules.
In this section, the process by which a new type is implemented on a UNIX operating system is described. This process has been simplified for the Windows NT operating system [3].
The type definition file is translated into the files required to use the new type by creating a text file called TYPES containing the single word gnBase in the same directory as gnBase.t and executing the IRIS Explorer makefile creation utility, cxmkmf. This creates the Makefile and executing the make command creates the files listed in section 3.1. To make the new type available to IRIS Explorer, the type has to be installed as described in section 3.2.
The C equivalent of gnBase.t, gnBase.h
#ifndef __GNBASE_H_ #define __GNBASE_H_ /* * Translated by cxtyper Tue Dec 3 17:13:31 1996 */ #include <cx/DataCtlr.h> typedef enum { gn_Variate, gn_Factor, gn_Text } gnPrimType; typedef struct gnData { cxDataCtlr ctlr; long len; char *identifier; gnPrimType gnType; union { struct { double *values; } gn_Variate; struct { char **values; } gn_Text; struct { long *values; long levels; char **labels; } gn_Factor; } d; } gnData; typedef struct gnBase { cxDataCtlr ctlr; long nVar; gnData **data; } gnBase; #endif
The cxDataCtlr elements of gnData and gnBase are used by IRIS Explorer for reference counting. The automatically generated API functions provide sufficient access to the data structures to make direct manipulation of structure elements by the programmer unnecessary.
Before installing a user defined type, the EXPLORERUSERHOME environment variable should be set to a directory in the user's file space. The make install command copies the files that are required to use gnBase to the relevant destination directories as shown below. If a directory did not exist it is created. If the files are created in $EXPLORERUSERHOME/types, the installation process will delete the .type file and gnBase will not be accessible in IRIS Explorer. The type is therefore normally built in a subdirectory of $EXPLORERUSERHOME/types before being installed.
$EXPLORERUSERHOME/types/ gnBase.type $EXPLORERUSERHOME/lib/ libgnBase.a $EXPLORERUSERHOME/include/cx/ gnBase.api.h gnBase.api.inc gnBase.h gnBase.inc gnBase.t $EXPLORERUSERHOME/man/man3/ gnBase.man3
In this section, the automatically generated Application Programmer's Interface (API) to gnBase is described (section 4.1) and examples of their use are provided in the form of user function files for modules that use the type (section 4.2).
Because the generation of the API functions is a general purpose automated process, some of the functions that are generated may be identical to others. For example, the gnBaseDataarrayLen function returns the length of the gnBase data array, i.e. the len element of gnBase, but there is also a function called gnBaseNumvariablesGet which also returns the value of this element.
The last group of API functions provide access to the elements of the gnData structure that are only relevant when the structure type is gn_Factor. The automatically generated API code for these functions performs a check to ensure that the passed structure is of type gn_Factor before accessing the structure elements. If it is of the wrong type an error is generated. For example gnDataLevelsGet contains the following code.
signed long gnDataLevelsGet( gnData *src ,cxErrorCode *ec ) { if (!src) { *ec = cx_err_error; return (signed long) 0; } if (src->gnType != gn_Factor) { *ec = cx_err_error; return (signed long) 0; } *ec = cx_err_none; return src->d.gn_Factor.levels; }
This module reads in Variate data from an ascii file and outputs it in a gnBase structure. The module has a single parameter input port connected to a file browser and a single gnBase output. The format of the ascii file is
Number of variables Number of values for first variable First Variable identifier First variable values Number of values for second variable Second Variable identifier Second variable values etc
Example data file for the Read ascii file module
3 7 Day 0 1 2 3 4 5 6 7 Temperature 10.2 12.7 15.9 13.6 14.4 11.6 12.3 7 Windspeed 25.2 20.6 20.8 22.8 15.3 14.8 15.7
User function file for the Read ascii file module
#include <cx/cxParameter.api.h> #include <cx/cxLattice.api.h> #include <cx/gnBase.api.h> #include <cx/DataAccess.h> #include <cx/DataOps.h> #include <stdio.h> #include <string.h> void MemError (gnBase *gnb) { if (gnb) cxDataRefDec(gnb); cxModAlert ("Unable to allocate memory"); return; } void ReadAscii (char *filename, gnBase **DataOut) { #define MAX 50 /* Maximum identifier length */ FILE *in; int i, j, var, len; float val; gnData **Array; cxErrorCode err; char Buffer[MAX]; char *id; /* Attempt to open file, return if file cannot be opened */ if (*filename == NULL) return; in = fopen(filename, "r"); if (in == NULL) return; /* Read number of variables and allocate new gnBase structure */ fscanf (in, "%d", &var); *DataOut = gnBaseAlloc(var); if (*DataOut == NULL) {MemError(NULL);return;} /* Get pointer to gnData array */ Array = gnBaseDataarrayGet(*DataOut, &err); /* Variable loop */ for (i = 0; i < var; i++) { /* Read length of this variate and allocate new gnData structure */ fscanf (in, "%d", &len); Array[i] = gnDataAlloc(len, gn_Variate, NULL); if (Array[i] == NULL) {MemError(*DataOut);return;} /* Read identifier and store in gnData structure */ fscanf (in, "%s", Buffer); id = (char *) cxDataMalloc (strlen(Buffer)); if (id == NULL) {MemError(*DataOut);return;} strcpy (id, Buffer); gnDataIdentifierSet (Array[i], id, &err); /* Read and store values */ for (j = 0; j < len; j++) { fscanf (in, "%f", &val); ((double *)gnDataValuesGet(Array[i], &err))[j] = val; } } fclose (in); }
This module prints out the contents of a gnBase structure. It has a single gnBase input port.
User function file for Print gnBase module
#include <cx/cxParameter.api.h> #include <cx/cxLattice.api.h> #include <cx/gnBase.api.h> #include <cx/DataAccess.h> #include <cx/DataOps.h> #include <stdio.h> #include <string.h> void PrintAscii (gnBase *DataIn) { #define FWIDTH 15 /* Field width of printed output */ FILE *in; long i, j, var; gnData **Array; cxErrorCode err; gnPrimType type; long maxlen; /* Get number of variables and gnData array pointer */ var = gnBaseNumvariablesGet(DataIn, &err); Array = gnBaseDataarrayGet(DataIn, &err); /* Write variable identifiers and store maximum variable length */ maxlen = 0; for (i = 0; i < var; i++) { printf ("%*s", FWIDTH, gnDataIdentifierGet (Array[i], &err)); if (gnDataLengthGet(Array[i], &err) > maxlen) maxlen = gnDataLengthGet(Array[i], &err); } printf ("\n"); /* Write values depending on type */ for (j = 0; j < maxlen; j++) { for (i = 0; i < var; i++) { type = gnDataTypeGet(Array[i], &err); if (j < gnDataLengthGet(Array[i], &err)) { switch (type) { case gn_Variate: printf ("%*g", FWIDTH, ((double *) gnDataValuesGet(Array[i], &err))[j]); break; case gn_Factor: printf ("%*s", FWIDTH, (char **) gnDataLabelsGet(Array[i], &err) [((long *)gnDataValuesGet(Array[i], &err))[j]]); break; case gn_Text: printf ("%*s", FWIDTH, ((char **) gnDataValuesGet(Array[i], &err))[j]); break; } } else { printf ("%*s", FWIDTH, ""); } } printf ("\n"); } }
If the input from this module came from a read ascii module that had read in the example file in SS4.2.1 the printed output would be
Day Temperature Windspeed 0 10.2 25.2 1 12.7 20.6 2 15.9 20.8 3 13.6 22.8 4 14.4 15.3 5 11.6 14.8 6 12.3 15.7
This module is an example of a gnBase filter that restricts the variate values to lie between a min and max set by the user. The usual way to create a filter module in the Module Builder [3][4] is to pass the parts of the structure that will not be affected by the filter directly from the input to the output port in the connections window and simply connect the parts of the structure to be changed to the function arguments. In this case just the type and values would need to be passed to the function arguments. However, gnBase differs from other IRIS Explorer types in that it contains a double pointer to a reference counted structure (gnData). The module builder is not currently able to create module data wrapper code for such a structure. Instead of casting the pointer as (gnData **), it attempts to cast it to (gnData), which fails. In effect, this means that the complete gnBase structure must be passed to the function arguments.
The module has gnBase input and output ports and two parameter input ports, min and max, that are connected to sliders or dials.
User function file for Filter module
#include <cx/cxParameter.api.h> #include <cx/cxLattice.api.h> #include <cx/gnBase.api.h> #include <cx/DataAccess.h> #include <cx/DataOps.h> #include <stdio.h> #include <string.h> void Filter (gnBase *DataIn, gnBase **DataOut, double min, double max) { long i, j, var; double *val; gnData **Array; cxErrorCode err; gnPrimType type; /* Create duplicate of input gnBase structure */ *DataOut = gnBaseDup(DataIn); if (*DataOut == NULL) return; /* Get number of variables and gnData array pointer */ var = gnBaseNumvariablesGet(DataIn, &err); Array = gnBaseDataarrayGet(*DataOut, &err); /* Variable loop */ for (i = 0; i < var; i++) { /* If this variable is a variate, restrict values */ type = gnDataTypeGet(Array[i], &err); if (type == gn_Variate) { for (j = 0; j < gnDataLengthGet(Array[i], &err); j++) { val = &(((double *)gnDataValuesGet(Array[i], &err))[j]); if (*val < min) { *val = min; } if (*val > max) { *val = max; } } } } }
It has been demonstrated that a new data type can be successfully incorporated into IRIS Explorer. The new data type was taken from an application that was previously completed unrelated to IRIS Explorer. Due to the flexibility of IRIS Explorer typing, the type could be specified to exactly match the required data structure. The automatically generated API functions provide the programmer with a means to manipulate all parts of the data structure, without having to know about the underlying type definition. Examples of how the API functions could be used within modules were provided.
The inability of the module builder to interpret a double pointer to a shared structure within another shared structure meant that module data wrapper code could only be generated by the module builder when the complete data structure was passed between ports and function arguments. This means that when writing filter modules, the programmer has to copy the parts of the data structure that remain unchanged within the user function, rather than leaving this to the module data wrapper.
1. IRIS Explorer User's Guide (1995). The Numerical Algorithms Group Ltd
2. Genstat 5 Release 3 Reference Manual (1993). Genstat 5 Committee of the Statistics Department Rothamsted Experimental Station. Oxford University Press.
3. IRIS Explorer Module Writer's Guide (NT) (1997). The Numerical Algorithms Group Ltd
4. IRIS Explorer Module Writer's Guide (1997). The Numerical Algorithms Group Ltd | http://www.nag.co.uk/doc/techrep/HTML/tr3_97.html | crawl-003 | refinedweb | 2,806 | 52.49 |
Using member functions to render the complex content
Using member functions to render the complex content
I need a component that shows info about offer. The best way i can thin of is to subclass the panel, create a template and set data for the panel. I achieved all that, but the template is not working as I'd like it to:
Code:
new Ext.XTemplate( '<img src="{this.getImageUrl(images)}" width="120" height="120" />', '<div>', '<strong>€ {this.formatPrice(price)}</strong>', '<span class="name">{name}</span>', '<p>{description}</p>', '</div>', '<div class="info">', '<span class="type">{this.renderType(isOffer, isPrivate)}</span>', '<span class="type">{this.renderDistance(distance)}</span>', '</div>', { getImageUrl: function (images) { return CJ.constants.DEFAULT_DEAL_IMAGE; }, formatPrice: function (price) { return price; }, renderType: function (isOffer, isPrivate) { if (isPrivate == 'false') { return CJ.t('Business deal') } if (isOffer == 'true') { return CJ.t('Offer'); } return CJ.t('Search'); }, renderDistance: function (dist) { return (dist - 0) + 'm'; } })
Is there a way to make it work as I want it to?
- Join Date
- Mar 2007
- Location
- Gainesville, FL
- 37,997
- Vote Rating
- 978
You need to surround it with square brackets:
Code:
{[this.renderType(isOffer, is official documentation where I can see these options?
I read everything about XTemplate, but did not find that.
Also there is a problem in the code above. If you're passing values as parameter, you should use
Code:
{[this.renderType(values.isOffer, values.isPrivate)]}
Code:
{[this.renderType(isOffer, isPrivate)]} | http://www.sencha.com/forum/showthread.php?164093-Using-member-functions-to-render-the-complex-content&p=696974&viewfull=1 | CC-MAIN-2015-18 | refinedweb | 238 | 52.05 |
Resetting the scroll position to the top when a page loads is very common in modern websites. By default, Ember actually retains the current scroll position as you navigate between pages. This is a little strange because new pages you navigate to may load halfway down the page.
Create a Mixin
To work around this, you’ll want to create a mixin, which allows the same group of code to be applied to many different classes.
For this example, I’ll use the following file name for the mixin:
reset-scroll-position.js. I chose to tie into the
activate method. According to the Ember docs, “[activate] is executed when the router enters the route. It is not executed when the model for the route changes.”
reset-scroll-position.js:
import Ember from 'ember'; export default Ember.Mixin.create({ activate: function() { this._super(); window.scrollTo(0,0); } });
This mixin will be applied to a route, so you’ll want to choose the routes where you’ll apply it. In my case, I wanted to apply it to all routes. I ended up creating a base route that all routes in my app extend.
base.js:
import Ember from 'ember'; import ResetScrollPositionMixin from 'my-app/mixins/reset-scroll-position'; export default Ember.Route.extend(ResetScrollPositionMixin, {});
some-other-route.js:
import BaseRoute from 'my-app/routes/base'; export default BaseRoute.extend({ ... });
An Option with More Flexibility
In case you are looking for more flexibility, such as having a different scroll position for each route, there’s a nice open-source library called ember-cli-reset-scroll. Here is their usage example:
import Ember from 'ember'; import ResetScrollMixin from 'ember-cli-reset-scroll'; export default Ember.Route.extend(ResetScrollMixin, { // Scrolls to top resetScroll: undefined // Scroll to a specific position (in px) resetScroll: 20 // Scroll to a specific position based on the route name (in px) resetScroll: { 'books.index': 30, 'authors.*': 210 } });
I hope this helps!
By commenting below, you agree to the terms and conditions outlined in our (linked) Privacy Policy1 Comment
Great summary here of a really common problem.
We also needed to handle scrolling to the top of the page on clicking pagination when the route itself doesn’t necessarily change, but the query parameters do. We have a component that simply handles the `click()` event and yields.
Code is available here:
And some example usage here: | https://spin.atomicobject.com/2016/06/06/ember-scroll-to-top/ | CC-MAIN-2019-51 | refinedweb | 396 | 56.66 |
Opened 6 years ago
Closed 15 months ago
#12995 closed New feature (fixed)
"source" exception attribute no longer handled properly by debug exception handler
Description
The "source" attribute on exceptions handled by the debug view was very handy; I set it in my own custom templating system (which coexists with Django's) and it automatically showed up in errors generated by Django's, complete with line numbers, just like they do for when Django templates set that attribute. Duck typing made this work implicitly, which was extremely useful--that's how duck typing is supposed to work.
r12586 broke this, adding an isinstance check which makes it so only errors from Django's own templates can use this feature. This doesn't make sense; duck typing was the correct approach here.
Please revert r12586; that was an unnecessarily destructive change. If you're having namespace collisions, then use a less generic name ("source_with_lines" or something like that).
Change History (13)
comment:1 Changed 6 years ago by Alex
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
- Version changed from 1.1 to SVN
comment:2 Changed 6 years ago by Glenn
I don't care about backwards-compatibility here; I care about being able to do this at all. I can change my source attribute to eg. "source_with_lines" without significant problems, but I can't make all of my exceptions derive from TemplateSyntaxError. (It applies the source attribute to the existing exception and re-throws it without changing the exception type, simply augmenting the exception in-place as it passes out of the template. IMO, this is what Django's templating should be doing, too, to avoid destroying the original exception.)
comment:3 Changed 6 years ago by russellm
If you read up on the history of the original bug, the change to avoid the source attribute was done specifically to support non-Django template languages (in particular Jinja2), and to avoid clashes with other libraries (like PyXML) that use a source attribute but have nothing to do with the templating system.
So - we're not going to simply revert here.
However, I agree that a simple type check isn't ideal -- [and I've said as much on django-dev]. Patches implementing a more sophisticated approach to template exception handling are welcome.
comment:4 Changed 6 years ago by Glenn
That's exactly why I suggested renaming the attribute to a less generic name than "source". I don't see any problems with "source_with_lines" (other than sounding a little awkward). If you're still concerned about namespace collisions--it's inherent with duck typing--try "django_source".
comment:5 follow-up: ↓ 6 Changed 6 years ago by ubernostrum
comment:6 in reply to: ↑ 5 Changed 6 years ago by kmtracey
Replying to ubernostrum:
Is this a duplicate of #12992?
No. That one's about the file name not being known with the new loaders (from r11862). This one is about the source lines only being shown if the exception raised is a Django TemplateSyntaxError (due to r12586), breaking debugging for other templating systems.
comment:7 Changed 6 years ago by mitsuhiko
I think the correct solution would be adding a hook into the DebugView to inject your own code there.
comment:8 Changed 6 years ago by Glenn
Storing source information in the exception is cleaner. It means that any templating engine supporting the interface can supply source context to any exception renderer supporting it, without the user code sitting between the two needing to manually ferry the information across with a hook. (Not to suggest that the interface should be made public as it is; it should also allow getting template source for each stack frame.)
Another reason (the original reason I noticed this) is discussed in ticket #11461. DebugNodeList shouldn't be destroying the original exception to attach information; it should simply be attaching the data to the existing exception. r12586 makes cleanly fixing that impossible.
comment:9 Changed 5 years ago by lukeplant
- Type set to New feature
comment:10 Changed 5 years ago by lukeplant
- Severity set to Normal 15 months ago by aaugustin
- Resolution set to fixed
- Status changed from new to closed
This was fixed in [4397c587]. See also #16770.
django_template_source was chosen as a "less generic name".
This change isn't going to be reverted wholesale. Also, how is changing to the attribute "source_with_lines" any less backwards incompatible for your usecase? I am accepting this ticket as custom templating systems should have a way of providing debug info. | https://code.djangoproject.com/ticket/12995 | CC-MAIN-2016-07 | refinedweb | 764 | 58.92 |
Session: How can we set the inactivity period on a per-session basis?
We can set the session timeout programmatically by using the setMaxInactiveInterval() method of HttpSession.
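For example, a minimal sketch (the servlet name and the 8-minute value are illustrative, and the code must run inside a servlet container):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LoginServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        HttpSession session = req.getSession(true); // create if absent
        session.setMaxInactiveInterval(8 * 60);     // allowed inactivity, in seconds
        res.getWriter().println("Timeout: "
                + session.getMaxInactiveInterval() + "s");
    }
}
```

A negative argument to setMaxInactiveInterval() means the session never times out.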
Session Tracking Basics
Session tracking is a process that servlets use to maintain state about a series of requests from the same user...
Session time
I need to develop an application that has a user id and password to log in, and that maintains the session for 8 minutes. All the pages behind the login should become unavailable if there is no activity for 8 minutes.
Servlets Books
...looks at a variety of techniques for saving session state, as well as showing how Servlets...
What are Servlets?
Servlets are modules that extend request/response-oriented servers...
give the code for servlets session
Please give the code for complete sample examples of servlet sessions.
Session ID - Java Beginners
Session ID: Do we get a new session id for a new domain after clicking...?
HttpSession session = req.getSession(true);
res.setContentType("text/html");
PrintWriter out = res.getWriter();
String title
servlets
getSession(true) will check whether a session already exists for the user. If yes, it will return that session object; otherwise it will create a new session object and return it. getSession(false), by contrast, only checks for the existence of a session and returns null when none exists.
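A short illustrative fragment showing the difference (the servlet name is invented; this runs inside a container):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionCheckServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        // getSession(false): returns the existing session, or null if none exists.
        HttpSession existing = req.getSession(false);
        if (existing == null) {
            // getSession(true): creates a new session when none exists.
            HttpSession created = req.getSession(true);
            res.getWriter().println("New session: " + created.getId());
        } else {
            res.getWriter().println("Existing session: " + existing.getId());
        }
    }
}
```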
servlets
Cookies can store information regarding the user's usage and habits. Servlets send cookies to the browser client... and read the cookie information back using the HTTP request headers. When cookie-based session tracking is used.... In this way the session is maintained. A cookie is nothing but a name-value pair.
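A sketch of cookie handling in a servlet (the cookie name is made up; note that for session tracking proper, the container manages the JSESSIONID cookie itself):

```java
import java.io.IOException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CookieDemoServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        // Send a name-value pair to the browser.
        Cookie visit = new Cookie("lastVisit",
                String.valueOf(System.currentTimeMillis()));
        visit.setMaxAge(60 * 60 * 24); // keep for one day
        res.addCookie(visit);

        // On later requests, read cookies back from the request headers.
        Cookie[] cookies = req.getCookies(); // may be null on the first visit
        if (cookies != null) {
            for (Cookie c : cookies) {
                res.getWriter().println(c.getName() + " = " + c.getValue());
            }
        }
    }
}
```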
servlets
Filters add functionality to servlets apart from the request/response processing paradigm... filters should implement javax.servlet.Filter (it is an interface, so it is implemented rather than extended). Every request in a web application...
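A minimal filter along these lines might look as follows (a timing/logging filter is an assumption for illustration, not from the original answer):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class RequestLogFilter implements Filter {
    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        chain.doFilter(req, res); // pass the request down the chain
        System.out.println("Request handled in "
                + (System.currentTimeMillis() - start) + " ms");
    }

    public void destroy() { }
}
```

The filter is then mapped to URL patterns in web.xml (or with @WebFilter) so that it wraps every matching request.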
Session
What Is a Session?
Hi friends,
A session carries data across subsequent HTTP requests. There is only one session object available to your PHP scripts at any time. Data saved to the session by a script can be retrieved...
Expired session
Servers enforce a session time-out period, so they do not have to maintain sessions indefinitely. How can I recover an expired session? If the session has expired, it means the browser has made a new request that carries an expired session id; the old session data cannot be recovered, and the server simply creates a new session...
servlets
What is URL rewriting?
URL rewriting is used to maintain the session. Whenever the browser sends a request, it is always interpreted as a new request, because HTTP is a stateless protocol.
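In a real servlet you would call response.encodeURL(url), which appends the session id only when cookies are unavailable. The string form it produces can be sketched in plain Java (the session id value here is made up):

```java
public class UrlRewriteDemo {
    // Illustrative only: containers append the id as a ";jsessionid=..." path parameter.
    static String rewrite(String url, String sessionId) {
        return url + ";jsessionid=" + sessionId;
    }

    public static void main(String[] args) {
        System.out.println(rewrite("/shop/cart", "A1B2C3"));
        // prints: /shop/cart;jsessionid=A1B2C3
    }
}
```

Because the id rides along in every rewritten link, the server can recognize the same client on the next request even without cookies.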
Session Related Interview Questions
The default time-out value for a session variable is 20 minutes, which can be changed as per... How do we track a user session in Servlets?
Answer: The interface HttpSession can be used...
servlets
The include mechanism is used in pages to add one or more files into a web page by means of the given directives...
servlets
Even though HttpServlet doesn't contain any abstract method, why is it declared as an abstract class? What benefits do we get by declaring it like this (i.e., declaring a class abstract without it containing any abstract methods)?
servlets
{
res.setContentType("text/html");
PrintWriter out = res.getWriter();
out.println("<...");
if (req.getParameter("withdraw") != null)
{
if (amt >
Servlets
PrintWriter out = res.getWriter();
Connection con = null;
PreparedStatement pstm = null; ...
int count = 0;
String FirstName
Session Bean
The container may destroy a session bean if its client times out. A session bean can neither be shared nor can it persist (meaning its value can...).
Session Beans
What is a Session bean?
A session bean is the enterprise bean that directly...
How do servlets work? Instantiation, session variables and multithreading
Servlet Session
getMaxInactiveInterval(): This method is used to find out the maximum time interval that the session... getSession() is used to find out the HttpSession object; this method gives the current session...
Sometimes it is required to maintain state across a number of requests
Servlet-session
Servlet-session: a step-by-step example of using sessions in servlets.
servlets - JSP-Servlet
:8080/projectname/servleturl
The output is displayed.
Thanks,
Rajanikant
JSP Session Parameter rewrite
The setMaxInactiveInterval() method can be used to set the time-out for each session. The removeAttribute() method is used... The getCreationTime() method returns the session creation time in milliseconds. The getLastAccessedTime() method returns.... The getMaxInactiveInterval() method returns the maximum amount of time, in seconds, that the session will be kept open between client accesses.
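The timing methods mentioned here can be seen together in one servlet fragment (illustrative; java.util.Date is used only to format the millisecond values):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Date;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionTimesServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        HttpSession session = req.getSession();
        PrintWriter out = res.getWriter();
        out.println("Created:       " + new Date(session.getCreationTime()));
        out.println("Last accessed: " + new Date(session.getLastAccessedTime()));
        out.println("Max inactive:  " + session.getMaxInactiveInterval() + " s");
        session.setMaxInactiveInterval(20 * 60); // change the time-out to 20 minutes
    }
}
```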
Servlets - JSP-Servlet
ResultSet rs;
res.setContentType("text/html");
PrintWriter out...
catch (Exception e) {
out.println(e);
}
out.println("Time taken
Session Last Accessed Time Example
This example shows the creation time of a session and the last access time of a session. Sessions are used to maintain state... The getCreationTime() method is used to find the time when the session was created.
servlets - Java Beginners
;
if (daylypay < 8)
    throw new Exception("More time needed...");
response.setContentType("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
try
Session Tracking Servlet Example
Session tracking identifies one client or the other when it tries to interact with the server the next time. Session... data such as creation time, last accessed time, maximum inactive interval, and session id can be read and set...
PrintWriter out = response.getWriter();
HttpSession session = request.getSession();
Date
Session Beans
The container may destroy a session bean if its client times out. A session bean can neither be shared nor can...
What is a Session bean?
A session bean is the enterprise bean...
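A minimal stateful session bean along these lines might look as follows (EJB 3 style; the class and method names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import javax.ejb.Remove;
import javax.ejb.Stateful;

// One instance per client; its conversational state is not shared and
// is discarded when the client removes it or times out.
@Stateful
public class CartBean {
    private final List<String> items = new ArrayList<>();

    public void addItem(String item) { items.add(item); }

    public List<String> getItems() { return items; }

    @Remove // ends the conversation; the container destroys the instance
    public void checkout() { items.clear(); }
}
```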
session object
How can I pass an integer variable from one servlet to another servlet using the session?
Please visit the following link:
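Since the link itself is missing here, one common approach can be sketched: store the integer as a session attribute in one servlet and read it back in another (servlet and attribute names are made up):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// First servlet: stores an int in the session (autoboxed to Integer).
public class StoreServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res) {
        req.getSession().setAttribute("counter", 42);
    }
}

// Second servlet: reads it back on a later request from the same client.
class ReadServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        Integer counter = (Integer) req.getSession().getAttribute("counter");
        res.getWriter().println("counter = " + counter); // null if never set
    }
}
```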
JSTL: Set Session Attribute
Suppose you are using the JSTL and there is a need to set a variable in the session. You all know... The value set in the session is <b><c:out...
Advantages of Servlets over CGI
With CGI a new process is created for each request, which takes a significant amount of time. But in the case of servlets, initialization...
Servlets are server side components that provide a powerful mechanism...
What is the default session time in php and how can I change it?
Servlets Programming
Hi, this is Tanu. This is a code for knowing... {
response.setContentType("text/html");
PrintWriter out = response.getWriter()...
Please visit the following links:
session maintenance
Hi, I am developing a small project using J2EE... I have some problems maintaining the session in my project... in my project, when... Also, when a user logs out, the session should be destroyed...
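Destroying the session on logout is usually done with HttpSession.invalidate(); a sketch (the servlet and page names are assumptions):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LogoutServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        HttpSession session = req.getSession(false); // do not create a new one
        if (session != null) {
            session.invalidate(); // discards all session attributes
        }
        res.sendRedirect("login.jsp"); // assumed login page
    }
}
```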
Introduction to Java Servlets
Java Servlets are server side Java programs that require... to work with servlets. Servlets generally extend the HttpServlet class...
session management
session management i have a problem in sessions.when i login into my... of browser it goes to login page.and with out need of login ,admin page was opened.
iam new to java.
i dont have an idea on session and cookies
can any one give me
Keep servlet session alive - Development process
to server after every specific time interval so that servlet will not invalidate session which is created on user logon. When user explicitly logs out from client...Keep servlet session alive Hi,
I am developing an application.
Session scope
Session scope Hii i m java beginner i just started learning java and i just started the topic of session tracking . I want to know about session scopes that is application ,page ,session etc etc and also their uses if possible
Servlets Program
Servlets Program Hi, I have written the following servlet:
[code]
package com.nitish.servlets;
import javax.servlet.*;
import java.io.*;
import... ByteArrayInputStream(b);
Blob b1=new SerialBlob(b);
PrintWriter out
jsp and servlets
.
Developing a website is generic question you may have to find out the usage
Servlets - JDBC
;); PrintWriter out = response.getWriter(); Connection conn = null; String url
Login/Logout With Session
logged-in.) and session (Session
Time: Wed Aug 01 11:26:38 GMT+05:30 2007...Login/Logout With Session
In this section, we are going to develop a login/logout
application with session
Session Tracking in servlet - Servlet Interview Questions
Session Tracking in servlet
Hi Friend,
Can we use HttpSession for tracking a session
or else 1.URL rewritting 2.Hidden Form... out = res.getWriter();
String contextPath = req.getContextPath();
String
Servlets differ from RMI
.
Servlets are used to extend the server side functionality of a website...Servlets differ from RMI Explain how servlets differ from RMI... by the client. Servlets are modules that run within the server and receive
Can an Interface extend another Interface?
Can an Interface extend another Interface? Hi,
Can an Interface extend another Interface?
thanks
Features of Servlets 2.4
session allows zero or negative values in the
<session-timeout> element to indicate sessions should never time out.
If the object in the session...
Features of Servlets 2.4
php session timeout
php session timeout How to check if the session is out or timeout have occurred in the PHP application
Pre- Existing Session
Pre- Existing Session
In this example we are going to find out whether the
session... a
existing session. It is not always a good idea to create a new session
Destroying the session
(), '', time()-42000, '/');
}
session_destroy();
?>
The Output:
As the time...Destroying Session
Session_destroy() function is used for destroying all of the data associated with the current session. Neither it does not intervene any
change password servlets - JSP-Interview Questions
password servlet. Hi,
I dont have the time to write the code. But i... {
response.setContentType("text/html");
PrintWriter out = response.getWriter();
String...\";";
showTable(driver, url, username, password, query, out);
}
public
Accessing Database from servlets through JDBC!
. But in case of servlets initialization
takes place very first time...
Java Servlets - Downloading and Installation
Java
Servlets are server
session tracking - Ajax
session tracking explain session tracking with example? Hi friend,
Session tracking is a mechanism that servlets use to maintain state... information about a shopping session, and each subsequent connection
can
Java httpsession
and is used for the purpose of session tracking while working with servlets.
Session... some period of time. The session object can be found using
getSession() method... view and manipulate information about a session,
for example, session id
Accessing Session Object
time. To access the session, you need an action class
implementing... a jsp page for viewing the session
object, session context and session time...() provides date and time when the last session is accessed.
GetSession.jsp
servlets - Servlet Interview Questions
servlets Hi i want to create class timetable using servlets.....
if suppose i create this using html
after some time i want to modify this timetable using servlets with colspans and rowspans
becuase this is my
Session management in php - PHP
a session until the user log out.
Thanks in Advance...Session management in php I am creating a simple program in PHP to manage the session of user. It's basically a simple form that will allow a user
Servlet Tutorials Links
of any kind of server. Servlets are most commonly used, however, to extend Web...;
Java
Servlet Technology:
Servlets are the Java platform technology of choice for extending and enhancing Web servers. Servlets provide
Index Out of Bound Exception
Index Out of Bound Exception
Index Out of Bound Exception are the Unchecked Exception that occurs at
run-time errors. This arises because of invalid parameter
servlets
servlets what is the duties of response object in servlets
servlets
servlets why we are using servlets
Is session depend on cookie ???
the cookie then my user logged out that means there is something behind session...Is session depend on cookie ??? Since I created one session & as we say that session store at server side that means if I clear browser cookie
servlet session - JSP-Servlet
the counter if new user logs on and decrement if session times out or user Hi... on and decrement if session times out or user log offs.Thanks
servlets
what are advantages of servlets what are advantages of servlets
Please visit the following link:
Advantages Of Servlets
Session Modification
();
session_register('count');
$count++;
if ($count==1) {
$mess= "one time...Session Modification in PHP
Session modification can be done through incrementing the loop. As the counting of loop increments, the session be modified.
Session creation and tracking
),
it should write the current session start time
and its duration in a persistent file...Session creation and tracking 1.Implement the information... information of last servlet instance:
a) start time, b) duration. On refresh
servlets deploying - Java Beginners
servlets deploying how to deploy the servlets using tomcat?can you...");
PrintWriter out = response.getWriter();
out.println("");
out.println...);
}
}
------------------------------------------------------- This is servlets
servlets - JSP-Servlet
servlets. Hi friend,
employee form in servlets...;This is servlets code.
package javacode;
import java.io.*;
import java.sql...., IOException{
response.setContentType("text/html");
PrintWriterSP-Servlet
servlets and jsp HELLO GOOD MORNING,
PROCEDURE:HOW TO RUN... FOR ME,IN ADVANCE THANK U VERY MUCH. TO Run Servlets in Compand... understand it..
have great time..
all the best
jsp,servlets - JSP-Servlet
that arrays in servlets and i am getting values from textbox in jsp... IOException, ServletException{
PrintWriter out = response.getWriter
servlets - JSP-Servlet
{
response.setContentType("text/html");
PrintWriter out = response.getWriter...
session
session is there any possibility to call one session from another session
Session
Session how to session maintaining in servlet with use of hidden fieds
session
session Session management in Java
Servlets
Servlets How to check,whether the user is logged in or not in servlets to disply the home page
Can a Class extend more than one Class?
Can a Class extend more than one Class? Hi,
Can a Class extend more than one Class?
thanks
session tracking - JSP-Servlet
session tracking hi,
i m working on a real estate web site....which i have to submit as final year project. im having problem regarding session... it working...actually i want to know when a user log in or register and log out from
servlets - JSP-Servlet
first onwards i.e., i don't know about reports only i know upto servlets...("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
try... link:
Thanks
servlets - Servlet Interview Questions
information.
... ServletException, IOException {
response.setContentType("text/html");
PrintWriter out... for more information.
java Servlets - JSP-Servlet
java Servlets Hi
i am having a doubt regarding servlets as i am in learning stage give me any clew how to retrive data from mysql database after..., IOException
{
PrintWriter out = res.getWriter();
//Write your connection
Doubt in servlets - JSP-Servlet
ServletException,IOException{
res.setContentType("text/html");
PrintWriter out=res.getWriter... the following link:
Thanks
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/73724 | CC-MAIN-2016-07 | refinedweb | 2,469 | 58.89 |
Scrape CENTCOM's data - part 222 Jan 2015
All right, now for part two of our quick tutorial on scraping CENCTOM.
Important note: I asked my pal Balto for advices in his field of expertise: Python. Long story short, he corrected a bunch of things in my script, from variables not capitalised to indentation, and also more important things.
Anyway, moving on.
Preparing the variables
BASE_NEWS_URL = "" NEWS_PAGE_URL = BASE_NEWS_URL + "/P"
As we did before with
BASE_URL, we're going to define the basic URLs that we'll use later.
Getting the content
We don't need to change anything in our
get_links() function, which grabs all the links from the index page.
However, we'll need a new function to scrape the press releases.
def get_content(link): print('Scraping press release from %s...' % (link)) soup = make_soup(link) table = soup.findAll("table", "contentpaneopen")[1] paras = table.findAll("p") for text in paras: press_releases.append(text)
So, we call another
soup() on our links, and then, as we did before for
get_link(), we throw in some parameters in, i.e. the DOM elements containing the body of the press releases.
Calling all our stuff
Balto made some adjustments to the boilerplate used to call the functions, so let's just re-use it as is:
if __name__ == '__main__': links, releases = [], [] urls = [BASE_NEWS_URL] + [NEWS_PAGE_URL + str(i) for i in range(0, 165, 11)]
Then I propose we simplify: instead of storing the URLs in a JSON file, let's just use the variable containing these URLs to scrape the press releases directly. Like this:
# Scrape following pages for url in urls: links.extend(get_links(url)) # Scrape press releases for link in links: releases.append(get_content(link)) with open('press-releases.json', 'w') as f: json.dump(releases, f, indent=4) print "Output result in press-releases.json"
Voila. Everything will be inputed in a (messy) JSON:
"[<p>January 9, 2015<br/>Release # 2015009<br/>FOR IMMEDIATE RELEASE</p>, <p><strong>SOUTHWEST ASIA -</strong> U.S. and partner nation military forces continued to attack ISIL terrorists in Syria, Jan. 8, using fighter and bomber aircraft(...)
Now, you noted that there are some HTML tags in there. That's quite good, because we can directly generate an HTML output by replacing
.json by
.html. then, some styling. Because it's 2015, people.
And, as promised, the Github Gist. | https://blog.basilesimon.fr/2015/01/22/scrape-CENTCOM-data-with-python-part-2/ | CC-MAIN-2018-34 | refinedweb | 392 | 66.94 |
A Unity ID allows you to buy and/or subscribe to Unity products and services, shop in the Asset Store and participate
in the Unity community.
Thank you!
Is there anyway i can re-scale the brush size like with a " + " and " - " button
Have you found a way yet? i am also looking for a way to do this
I have problems working with thermal camera, i only have the plug-ins from the Kinect SDK which comes with infrared, is there a way to convert it...
Actually i just found out how to fix it, instead of those sp.writes i did. It should be "sp.write(datalist, 0, datalist.length)" i hope i can help...
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.IO;
using System.IO.Ports;
using UnityEngine.UI;... | https://forum.unity.com/search/141224712/ | CC-MAIN-2020-10 | refinedweb | 136 | 69.58 |
Sorry, I know this is a rather lazy question, my server experience is limited to OS X, I'm hoping a Windows guy can "explain it to me like I'm five"
I'll need to help configure a bunch of iPads/iPhones to use Exchange shortly, and I'm sure some of the users will give me inaccurate authentication details. Rather than send them packing, I'd like to be able to make an educated guess at what it might be based on the info they do know, but I'm still a bit fuzzy on the following:
• do all versions of Windows Server follow the same rules for the AD Domain (eg: is it based on FQDN? NetBIOS name? totally arbitrary?)?
• is an AD Domain case-sensitive?
Edit: I'm not asking what is the difference between the two (yes, we use DNS on the Mac too). The question is rather what is the relationship between the two. Do they need to match, basically.
The DNS suffix of a domain joined computer is the name of the Active Directory domain to which the computer is joined, which is also the DNS namespace for the domain.
So, you have a computer named "computer1" in an AD domain named "mydomain.local":
The NetBIOS name for the computer is computer1
The name of the AD domain that the computer is joined to is mydomain.local
The DNS suffix for the computer is mydomain.local
The AD DNS zone for the domain is mydomain.local
The FQDN of the computer is computer1.mydomain.local.
The NetBIOS name for the domain is mydomain (although it is possible to create a NetBIOS name for the domain that doesn't match the DNS name for the domain).
EDIT
Incidentally, in Windows NT 4 it was possible for a computer to have a different DNS host name than the NetBIOS name (multiple DNS host names in fact), but I don't think that's been possible since Windows 2000, due to AD's integration with DNS.
An active directory domain name is a FQDN. The NETBIOS name is, by default, the shortened version of the FQDN. This can be changed, so it is not always the same.
I have no idea what you mean by the 2nd question.
It is case insensitive.
It is possible to address a Windows 2003 (and maybe 2008) server with multiple hostnames using DNS CNAMEs and adding SPNs to the AD object for the server. You would also need to modify the "DisableStrictNameChecking" registry entry. All three of these are required to address a server by multiple NetBIOS (single-label) hostnames.
Refer to:
By posting your answer, you agree to the privacy policy and terms of service.
tagged
asked
2 years ago
viewed
413 times
active | http://serverfault.com/questions/316675/can-someone-explain-the-relationship-between-a-servers-fqdn-and-active-director | CC-MAIN-2013-48 | refinedweb | 465 | 69.62 |
28 March 2011
By clicking Submit, you accept the Adobe Terms of Use.
To make the most of this tutorial you’ll need previous experience building applications with Adobe Flash Builder as well as some knowledge of development techniques using .NET and Microsoft Visual Studio.
Intermediate
In this tutorial you’ll learn about Remote Shared Objects (RSOs) and how to use them from either the client or server side. You’ll also develop a small application based on a simple word game that will access and modify RSOs using from client-side (ActionScript) and server-side (C#) code using the WebORB Integration Server to marshal the communications between client and server. You’ll need to have an IIS server running and WebORB installed on your server.
RSOs track, store, share, and synchronize data between multiple client applications.
An RSO is an object that lives on the server. It resides in the scope of a messaging application that clients connect to. More than one client can connect to an RSO, and all of them will have access to the data in the RSO. WebORB, in this case, is responsible for managing the RSO and providing access to it for various clients.
RSOs are particularly useful when they are used on several clients at the same time. When one client changes data that updates the RSO on the server, the server sends the change to all other connected clients, enabling you to synchronize many different clients with the same data. RSOs can also be updated and accessed by the server, giving developers more options for application development.
To sum it up, RSOs can be used to:
In this tutorial, you will use RSOs to create a simple online version of the Add-a-word game. The object of this game is to add a word to a sentence, one user at a time, and eventually come up with a very long sentence (that still makes sense).
Consider the following example:
The server-side code for this application keeps track of all connected users, assign turns, and add words to the sentence.
Note: The code snippets included in the steps below are not complete; rather they are used to illustrate the main concepts in the server-side implementation. For the complete code, see WeborbSharpRSO.cs in the sample files for this tutorial.
Follow these steps to create the server-side DLL:
namespace WeborbSharpRSO { public class WeborbSharpRSO : ApplicationAdapter { } }
The ApplicationAdapter class has several methods that let you know when an application is started, when a new room is created, and when a client connects or disconnects. An application can have several rooms running simultaneously. Each room may have one or more clients connected and sharing information. Clients connected to one room share information only with other clients in the same room.
For this example you’ll use the following two methods to detect when a client joins or leaves a room:
public override bool roomJoin(IClient client, IScope room) public override void roomLeave(IClient client, IScope room)
public string sharedObjectName = "addWord";
roomJoin()method, first check if the user can connect to the room. Then check if the room has the Remote Shared Object. If not, create it and add a SharedObjectListener to it. This listener will detect any changes in the RSO.
public override bool roomJoin(IClient client, IScope room){ if (base.roomJoin(client, room)) { ISharedObject so; if (!hasSharedObject(room, sharedObjectName)){ createSharedObject(room, sharedObjectName, false); so = getSharedObject(room, sharedObjectName); so.addSharedObjectListener(new MySharedObjectListener()); } else so = getSharedObject(room, sharedObjectName); … } }
Note: The code in roomJoin could also be placed inside of the appStart method.
The next step is to check and update the values of some attributes of the RSO, such as the number of users connected and whose turn is next. These attributes hold all the information you want to share across clients.
For example, your code will get the existing number of users connected to the room from the shared object (SO) and then increment that number by one. This value is then written back to the SO:
if(so.hasAttribute("totalUsers")) totalUsers = so.getLongAttribute("totalUsers"); totalUsers++; … so.setAttribute("totalUsers", totalUsers);
In addition to
totalUsers, the RSO shares the following data:
userList: List of all the clients connected to the same room
currentUser: id and name of the client whose turn it is to add the next word
sentence: the actual sentence being formed
word: the last word submitted
Each client that connects to a room is assigned unique ID based on the connection order.
roomJoin()function, add the following code, which sends this ID to the client using the
invokeClientsmethod (explained in more detail below):
object[] args = new Object[] {client.getId()}; invokeClients("SetUserId", args, client.getConnections(room));
public class MySharedObjectListener : ISharedObjectListener { }
This class provides methods that can be used to check for different types of changes on the shared object. This tutorial focuses on the
onSharedObjectUpdate() method, which is used to check for changes on the remote object. A change on the
word attribute of the RSO invokes a method that will process the information and set the next user.
onSharedObjectUpdate()as follows:
public void onSharedObjectUpdate(ISharedObjectBase so, string key, object value){ if (key == "word") WeborbSharpRSO.nextUser(so, value.ToString()); }
The
nextUser() method is called when there’s a new word to add to the sentence. Even though you could modify the RSO directly in this function, do not use this approach.
When the server-side code changes the RSO during a client-side request, the client that initiated the request does not get the changes added by the server. The server-side accumulates all the changes as independent events. The accumulated events are sent out to the original sender first and then to all other subscribers. Each event has a corresponding RSO version number. If there are multiple change events all going out with the same version number, Flash Player lets only the first one through. This will cause the client that initiated the request to get only the changes it requested, not the changes added by the server to the RSO on the same request.
public static void nextUser(ISharedObjectBase so,string word) { ThreadPool.QueueUserWorkItem(ChangeCurrentUser, new object[] {so, word}); }
ChangeCurrentUser()function to actually change the RSO on the server. Here’s where you add the word to the sentence and write it back to the RSO:
public static void changeCurrentUser( object state ){ … if( word != "" ){ string text = ""; if( so.hasAttribute("sentence") ) text = so.getStringAttribute("sentence"); so.setAttribute("sentence", text += " " + word); } … }
so.setAttribute("currentUser", newCurrentUser.ToString() );
roomLeave()method to handle clean-up tasks when a user leaves the room. This method updates the list of clients still playing the game.
Note the three-step process: the content of the
userList attribute is retrieved and set to the
users dictionary, the client that is leaving the room is removed from the dictionary, and then a copy of the content of this object is put into a
newUsers dictionary. The reason behind this awkward process is that the server won’t detect the change on the original
users object when an element is removed. As a result, if you send back the same object to the RSO, the changes will not be sent to the clients.
users = so.getMapAttribute("userList"); … if (users.Contains(client.getId())) users.Remove(client.getId()); newUsers = new Dictionary<string, string>(); foreach (DictionaryEntry de in users) newUsers[de.Key] = de.Value; … so.setAttribute("totalUsers", totalUsers);
invokeClients()to the main class. This function uses the
connection.invokemethod to call ActionScript methods on the client. You will need to pass the name of the method to call, any needed parameters, and the list of client connections. The name used in the
functionNamevariable must exist as a function name in the client application:
private void invokeClients(string functionName, object[] args, IList<IConnection> ILconn ){ foreach(IConnection conn in ILconn){ ((IServiceCapableConnection)conn).invoke(functionName, args); } }
A detailed explanation of the invoke() method is outside of the scope of this tutorial. For more information about this method please consult the WebORB documentation or see Invoke ActionScript functions from .NET.
You’ll need to configure WebORB before your application will work. Specifically, you need to add a messaging application so WebORB is aware of its existence and can manage the user connections. Follow these steps:
Open the WebORB management console (see Figure 1) using a web browser. If you installed WebORB using the default settings, the console is available at:
Note: If the MyRSO application doesn’t show up under the list of applications, but you can see the folder in the hard drive, you may need to restart IIS and then reload the WebORB console.
If you cannot see the MyRSO folder in your hard drive you may need to check your permissions. For more information on permissions, refer to the WebORB installation and deployment documentation available through the Help/Resources tab of the WebORB console.
With the server-side code in place, you’re ready to open Adobe Flash Builder and develop the ActionScript code.
Note that you could have made many of the changes to the RSO directly with ActionScript using WebORB’s own SharedObjectsApp messaging application. This tutorial used a more convoluted path to illustrate RSO access and modification from the server.
Note: The code snippets included in the steps below are not complete; rather they are used to illustrate the main concepts in the client-side implementation. For the complete code, see WeborbRSO.mxml in the sample files for this tutorial.
To create your ActionScript code, follow these steps:
<s:states> <s:State <s:State </s:states>
TitleWindowwith two
textInputcontrols (Room Name and User Name) and a Connect button:
<s:TitleWindow <s:layout> <s:VerticalLayout </s:layout> <s:HGroup <s:Label <s:TextInput </s:HGroup> <s:HGroup <s:Label <s:TextInput </s:HGroup> <s:Button </s:TitleWindow>
HGroupthat displays a list of the connected users on the left, the sentence the users are forming on the top right, and a
textareacontrol where the users can type the next word at the bottom right:
<s:HGroup <s:TitleWindow <s:List <!-- We use an item renderer to color red the name of the current user --> <s:itemRenderer> <fx:Component> <s:ItemRenderer <s:Label </s:ItemRenderer> </fx:Component> </s:itemRenderer> </s:List> </s:TitleWindow> <s:TitleWindow <s:layout> <s:VerticalLayout </s:layout> <s:TextArea <s:HGroup <s:TextInput <s:Button </s:HGroup> </s:TitleWindow> </s:HGroup>
The Connect button in the login state invokes the
onConnect() function. This function connects to the server and obtains the remote shared object.
onConnect()to the block:
public function onConnect():void{ roomName = txtRoomName.text; userName = txtYourName.text; SharedObject.defaultObjectEncoding = ObjectEncoding.AMF0; /** * Establish connection * */ nc = new NetConnection(); nc.client = this; nc.objectEncoding = ObjectEncoding.AMF0; nc.addEventListener( NetStatusEvent.NET_STATUS, onNetStatus ); nc.connect( urlServer + "/" + weborbApplicationName + "/" + roomName ); /** * Get Remote Object * */ so = SharedObject.getRemote( sharedObjectName, nc.uri, false ,false); so.client = this; so.addEventListener( SyncEvent.SYNC, onSync ); so.connect( nc ); }
The server URL, name of the WebORB application, and name of the remote object are previously declared in variables. The name of the chat room is obtained from the login window. Users can login to different chat rooms to work on different sentences.
public function setUserId(userId:String):void { this.userId = userId; this.currentState="game"; setName = true; }
onSync()function, which is called each time there’s a sync event on the remote shared object. Use this function to set the client’s name the first time they login. Also use this function to enable/disable the button to submit text as well as to update the list of users logged in. Each time there is a change, this function creates the list and sets the id of the user whose turn it is. (See WeborbRSO.mxml for the full implementation.)
onSendText()function, which send the word to the server. This function is called when the user clicks the Add Word button. It simply sets the
wordproperty on the shared object.
public function onSendText():void { so.setProperty("word",txtAddText.text); txtAddText.text = ""; }
This add-a-word game is just a simple application that showcases some of the possibilities of Remote Shared Objects. Of course, there are several ways to improve this application. To start with, you could limit the content sent by a user to just one word, and not allow any number of words. You could also use the RSO to keep track of several sentences at a time inside the same room if you wanted.
This tutorial demonstrated that RSOs can be used in many different situations, ranging from sharing information between clients to managing and synchronizing real-time online games. Now that you’re familiar with them, you can adapt these techniques for your own applications.
You can try this application at Anden Solutions.
For more information about WebORB for .NET visit its overview page.
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe. | http://www.adobe.com/devnet/flex/articles/flex-dotnet-remote-shared-objects.html | CC-MAIN-2016-07 | refinedweb | 2,169 | 55.24 |
Related guide:
Import Canon XF300 1080i MXF Files to FCP 7 without XF Plug-ins
Deinterlace and Convert Canon C300 1080i MXF files to ProRes for FCP 7
Deinterlace and Convert Canon XF305/300/105 1080i MXF to ProRes 422 for Editing in FCP 7
Import Canon XF100 MXF files to FCP X- Convert XF100 1080p MXF to Apple ProRes for FCP X on Mac
Convert/Transcode MXF files to QuickTime MOV for playback on Mac.
Although Canon try to ensure the widest compatibility with existing industry infrastructure and non-linear editing (NLE), there still a long way to go. The XF Utilities can not help all the XF105 users to import Canon XF105 1080i MXF to FCP XF105 1080i recordings XF105.
Now, follow the step-by-step guide and you will get the way to edit your XF105 1080i files in FCP easily and effortlessly.
Step 1: Import Canon XF100 1080i MXF files to the best MXF to FCP Converter;
Launch MXF to FCP Converter on Mac. Click the button “File” to add MXF files to it. XF105 1080i MXF files in the Video Editor.
Step 4: Start to convert and deinterlace Canon XF105 1080i MXF recordings.
Click the “Convert” button; it will convert XF105 1080i MXF files to ProRes105.
import Canon XF105 1080i MXF to FCP, edit XF105 files in FCP, convert XF105 1080i recordings to ProRes, transcode XF105 MXF to ProRes, transfer mxf files to FCP, MXF to FCP Converter, MXF to ProRes conversion, MXF Converter Mac to FCP, Mac MXF to FCP Converter, MXF Conversion to Mac, play MXF on Mac, MXF Converter for Mac | http://www.brorsoft.com/how-to/import-xf105-1080i-mxf-to-fcp7-mac.html | CC-MAIN-2013-48 | refinedweb | 269 | 70.26 |
The HyperText Transfer Protocol provides three return codes to explain that the requested content is somewhere else. So, instead of replying with a 200 ("Ok") or a 404 ("Not found"), a server can reply with:
- 301: the content you are requesting has been moved permanently.
- 302: the content you are requesting has been moved temporarily.
- 307: the content you are requesting has been moved temporarily. Wait, what? Actually, 302 indicates that you should try the new location with a GET request, while 307 indicates that you should keep the same HTTP verb (keep using POST, for example).
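The practical difference between these codes is whether the redirect is permanent and whether the client keeps its HTTP verb. This is not Varnish code — just a hypothetical Python sketch of that rule; the table and function names are my own, not part of any spec or library:

```python
# Which method a client is expected to use when following each
# redirect status, per the semantics described above.
REDIRECTS = {
    301: {"permanent": True,  "preserve_method": False},  # rewritten to GET
    302: {"permanent": False, "preserve_method": False},  # retry with GET
    307: {"permanent": False, "preserve_method": True},   # POST stays POST
}

def method_after_redirect(status, original_method):
    """Return the verb a client should use on the new location."""
    if REDIRECTS[status]["preserve_method"]:
        return original_method
    return "GET"

print(method_after_redirect(302, "POST"))  # GET
print(method_after_redirect(307, "POST"))  # POST
```

So if your clients POST forms or API payloads and must re-submit them at the new location, 307 is the code you want.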
So, it's really the equivalent of "Sorry Mario, your princess is in another castle", but with one major difference: the "Location" header tells where to look next. For example, here a 30X from google.com:
> GET / HTTP/1.1
> Host: google.com
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Location:
< Content-Length: 258
< Date: Sun, 18 Dec 2016 15:36:19 GMT
And with Varnish, it's super easy to produce such responses.
But, why?
If we already have a way to rewrite URLs seamlessly, this seems like a step backward since the user now needs two requests instead of one to get the desired content. However there are valid cases where you want this.
Notably, such cases include when you want users and applications to switch to the new resource location, for example because you are changing a sub-domain, or because you want to migrate an API. Plus, the new resource may be outside your sphere of influence, preventing you from serving it directly, making HTTP redirection a valid option.
I'll add one more example that is becoming more and more frequent: HTTPS. Redirections allow you to gently redirect clients from HTTP to HTTPS.
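The one subtlety in that HTTPS case is that Location must carry a complete URL, scheme included, not just a path. As a hypothetical illustration (plain Python, not VCL — the function name is mine), building the header value amounts to:

```python
def https_redirect(host, path):
    """Build the Location value for an HTTP -> HTTPS redirect.

    The Location header must be a full URL, scheme included,
    not just the path.
    """
    return "https://" + host + path

print(https_redirect("example.com", "/login"))  # https://example.com/login
```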
Back to basics: return synthetic
So, let's see how we can push users in the right direction. As an example, let's re-create a common behavior: if the request targets a domain we don't know, we redirect to the main one. And like last time, we'll create a VTC to test things:
varnishtest "30X redirections"

server s1 {}

varnish v1 -vcl+backend {
    //VCL logic
} -start

client c1 {
    txreq -hdr "host: varnish"
    rxresp
    expect resp.status == 301
    expect resp.http.location == ""
    expect resp.reason == "Moved Permanently"
} -run
The logic to satisfy is pretty basic, but it's twofold. First, we have to check the need for redirection, this will be done as soon as we have the request, in vcl_recv. Second, we need to generate a synthetic response, in other words, the request will never reach the backends (s1 isn't even started) and we will produce a reply ourselves. In Varnish 3 it was called vcl_error, but it was renamed in Varnish 4 to vcl_synth since it does more than serving errors. The code looks like this:
sub vcl_recv {
    if (req.http.host != "") {
        set req.http.location = "";
        return(synth(301));
    }
}

sub vcl_synth {
    if (resp.status == 301 || resp.status == 302) {
        set resp.http.location = req.http.location;
        return (deliver);
    }
}
Note: I placed the full file here to avoid repeating myself too much, but I encourage you to download it and to run varnishtest on it. Spoiler alert: it passes.
The code is pretty concise and should be readable even to VCL beginners, but there are at least a few points worth noting:
- the Location header isn't about just the path, but rather about the whole URL, including the "http://".
- the status message "Moved Permanently" is never specified in the VCL, yet it appears in the response. In the "synth(301)" call, we omitted the second argument and Varnish intelligently picked the default message corresponding to this code. You can replace the synth call with simply 'synth(301, "Moved Permanently")' and the test will still pass.
- there's sadly a little string copy to be made between req.http.Location and resp.http.Location because in VCL we don't have access to resp.* yet. We could have transferred part of the logic from vcl_recv to vcl_synth, but if you try it, you'll notice the split is uneasy. On the other hand, we are talking of just one header here and it won't make a difference.
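For those default messages, Varnish picks the standard reason phrase for the status code; Python's standard library carries the same mapping, which makes for a quick way to check what a bare synth(NNN) will say:

```python
from http.client import responses

# Standard HTTP reason phrases, keyed by status code.
print(responses[301])  # -> Moved Permanently
print(responses[302])  # -> Found
```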
None shall pass (if unsecure)
One case where HTTP redirection is more or less mandatory is when you want your users to upgrade to HTTPS. You could display a static 404 page saying "Sorry, but no", but that wouldn't be super friendly. Instead, what we can do is systematically redirect any HTTP request to its HTTPS counterpart.
Let's consider a basic setup using Hitch to allow our server to handle both HTTP and HTTPS:
Let's assume varnishd is started with "-a :80 -a 127.0.0.1:8443,PROXY", the first pair telling Varnish to listen to HTTP on port 80 (all addresses) while the second tells it to listen to HTTP via PROXY protocol (kindly decrypted by Hitch) on port 8443 (only localhost).
The VCL is super simple; we just need to "rebuild" the URL and send it back to the user:
sub vcl_recv {
    # the PROXY protocol allows varnish to see
    # hitch's listening port (443) as server.ip
    if (std.port(server.ip) != 443) {
        set req.http.location = "https://" + req.http.host + req.url;
        return(synth(301));
    }
}

sub vcl_synth {
    if (resp.status == 301 || resp.status == 302) {
        set resp.http.location = req.http.location;
        return (deliver);
    }
}
Notice how only the vcl_recv part changed? That unsatisfying header copy actually turned out okay!
Keeping Varnish dumb
The astute reader will have seen the parallel of how we can map new URLs to old ones and provide the same feature as in the first part of the blog, using redirection. A quick VCL would look like:
sub vcl_recv {
    if (req.url ~ "^/cmsa/post/") {
        set req.http.location = regsuball(req.url, "^/cmsa/post/(.*)", "\1");
        return(synth(301));
    } else if (req.url ~ "^/images/") {
        set req.http.location = regsuball(req.url, "^/images/(.*)", "\1");
        return(synth(301));
    } else ...
}

sub vcl_synth {
    if (resp.status == 301 || resp.status == 302) {
        set resp.http.location = req.http.location;
        return (deliver);
    }
}
And it would work. However, that is a bit dumb because Varnish still has to know about the mapping AND the client needs an extra request. One could argue that it offers an opportunity to update their bookmarks, but still, that's a bit of a sad combination.
Of course, there's a better way! The first piece of the solution is to let the backend deal with redirections. Even though they don't represent a resource, redirects are still valid HTTP objects[1], and that's all that matters to Varnish, which will diligently cache them. With this, the smarts can be outside of our VCL, which is nice...
But we can do more! We can instruct Varnish to follow redirections and only cache the "true" resource:
sub vcl_backend_response {
    if (beresp.status == 301 && beresp.http.location ~ "^https?://[^/]+/") {
        set bereq.http.host = regsuball(beresp.http.location, "^https?://([^/]+)/.*", "\1");
        set bereq.url = regsuball(beresp.http.location, "^https?://([^/]+)", "");
        return (retry);
    }
}
Here, we have to do the opposite of the previous VCL: previously we built the Location header from path and host, and now we have to dismantle it to retrieve the other two. It takes some regex-fu, but that's expected.
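The same dismantling is easy to prototype outside VCL; here's a Python sketch of the host/path split, with a guard for values that aren't absolute http(s) URLs (the sample URL is invented):

```python
import re

# One capture for the host, one for everything after it.
LOC = re.compile(r"^https?://([^/]+)(/.*)$")

def split_location(location):
    """Split an absolute Location URL into (host, path) --
    the inverse of building it from req.http.host + req.url."""
    m = LOC.match(location)
    if m is None:
        raise ValueError("not an absolute http(s) URL: " + location)
    return m.group(1), m.group(2)

print(split_location("http://example.com/images/logo.png"))
# -> ('example.com', '/images/logo.png')
```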
Again, here's the vtc file so you can check it passes and use it as a base for your own implementation. One thing to keep in mind is that you have a limited amount of retries per backend request so you may have to increase it a bit if you have a crazy amount of redirection (max_retries is 4 by default and can be changed using varnishadm, or specified in the command line).
Where to, now?
Redirections can be tricky, but Varnish has all the tools to let you decide how to handle them, and to make the process as painless as possible.
The two articles in this "series" were inspired by various questions on IRC, the mailing list and stackoverflow and I wanted to provide a solid answer to most of them, which I hope I did. If that was not the case, let me know, I'd be happy to help!
We get a lot of questions and also addressed some of them in a recent webinar, "Top 10 Varnish Cache mistakes and how you can avoid them", which you can watch on-demand.
[1]the same way a symbolic link doesn't contain data, but is still a file.
Image (c) 2012 astroshots42 used and modified under Creative Commons license. | https://info.varnish-software.com/blog/rewriting-urls-with-varnish-redirection | CC-MAIN-2019-18 | refinedweb | 1,447 | 64.41 |
Heya fellow programmers
I just ran into these two little variables that have been used by convention for some time: argc and argv. How does one get to see the effects of this when running Windows Me? For instance, look at this program I pasted below.
#include <iostream.h>

int main(int argc, char *argv[])
{
    if(argc != 2) {
        cout << "you forgot to type your name";
        return 1;
    }
    cout << "Hello " << argv[1] << '\n';
    return 0;
}
when i execute this tricky one all i get is the if statement output. Can someone just enlighten me a little please.
Have a nice day.
zbap:confused: | http://cboard.cprogramming.com/cplusplus-programming/12930-command-line-arguments-printable-thread.html | CC-MAIN-2013-48 | refinedweb | 102 | 84.57 |
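For comparison, the same mechanism exists in most languages: in Python the runtime fills sys.argv just as the C++ runtime fills argc/argv, and in both cases you only see extra arguments when you start the program from a command prompt with something after its name (e.g. `hello zbap`):

```python
import sys

def greet(argv):
    """Same logic as the C++ program above: argv[0] is the
    program name, argv[1] the first real argument."""
    if len(argv) != 2:
        return "you forgot to type your name"
    return "Hello " + argv[1]

if __name__ == "__main__":
    print(greet(sys.argv))
```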
. My specific purpose for the first wiki was to create an environment where we might link together each other's experience to discover the pattern language of programming. I had previously worked with a HyperCard stack that was set up to achieve the same kind of goal. I knew people liked to read and author in that HyperCard stack, but it was single user.. Discussion groups tend to keep covering the same ground over and over again, because people forget what was said before. I think the invention of the Frequently Asked Questions, the FAQ, was a response to that. A lot of times just reading the FAQ is more valuable than joining the discussion group..
Bill Venners: How does the reader get the big picture of what's all there in a wiki?
Ward Cunningham: The first thing you have to understand is that because we made wiki easier for authors, we actually made it harder for readers. There is an organization there, and the organization can be improved, but it isn't highly organized. So the feeling for a reader is one of foraging in a wilderness for tidbits of information. You stumble across some great ones and you say, "This is fantastic, why doesn't somebody just make a list of all the great pieces so I don't have to look at the rest." In other words, "Why doesn't somebody organize this so I can get answers to my questions quickly?" Sooner or later they realize, "Gee, I could do that." They put in a month or two of finding what they care about, and then they make a page, which is their take on what the organization of wiki is.
I'm not a fan of classification. It's very difficult to come up with a classification scheme that's useful when what you're most interested in is things that don't fit in, things that you didn't expect. But some people decided that every page should carry classification. They came up with a scheme, based on page names, to establish a classification structure for a wiki. And these people who care about classification maintain it. If someone authors a page and fails to classify it, somebody else will say, "Oh, this should be classified as wiki maintenance or design patterns."
Bill Venners: How would they categorize a page as wiki maintenance?
Ward Cunningham: They just make a reference to a page named WikiMaintenanceCategory. You click that link, it goes to the page that explains the category and why the category exists. So to put a page into a category, the convention is to put a link to a page that describes the category. That makes the page tagged. If you want to understand what the category is, you follow the link to the category page. If you want to see what pages are in that category, you search for every page that references that category page.
Bill Venners: I suppose searching is one way I could begin exploring a new wiki. In a sense a wiki is like a very small version of the internet. Everything is all over the place. How will I find what I'm looking for? I could start by searching with keywords.
Ward Cunningham: That's right. People decided that any wiki page whose name ends with "Category" is a search term that's worth searching for. You might look for fiction on Google, but if people didn't label their work fiction, you might not find it. The category system is a set of pages that explain the rationale for the categories, and you can read those pages. They took a small part of the namespace—all those words that end with the word "Category"—and established the precedent that those pages talk about categories of other pages. It's great. It's in balance. If I tried to engineer a solution, it couldn't be as simple, or even as good. And what I love about it is, there is an active community who manages what the set of categories are. Sometimes they get the categories wrong, but then they correct them.
Bill.
Bill Venners: I really like the idea of a wiki, but I find it hard to read many wiki pages. The readability issue is the main reason I've never put a wiki on Artima.com. Artima.com is also a kind of web-based collaborative document, but more structured. In wiki, there's no single editor organizing the material for the reader. All the pages are collaborative. The structure is collaborative. The editing is collaborative. What do you get from the collaboration in wiki that is worth the tradeoff in readability?
Ward Cunningham:.
Bill Venners: How do you tell?
Ward Cunningham: You can tell by whether they talk about things such as "Mary Ann just couldn't get this part to work right." That's not in the scientific tradition. If someone quotes an author and says, "So and so says bla de bla, and you guys are stupid to not listen," there's a guy who admires the books he reads. On the other hand, if someone says, "You know, for the last three projects we've tried to do this and it hasn't worked one single time. We've always been forced to do something else to get it out the door," there's a guy who's got it out the door, and he's telling me something profound. How to interpret that is left to me. It's just his experience. And then you might see a few more paragraphs that say, "Yeah, that happened to me but we got it out the door this other way." Now there are two ways to get it out the door. All of a sudden you're talking to the people who get software out the door, not the people who talk about getting software out the door, and that's a big distinction.
Come back Monday, October 27. News & Ideas Forum topic: Exploring with Wiki.
Solved Problem: A Story About Big Data in a Little SpaceJul 02, 2012 Python Tweet
As part of the API project for this iPad app I'm working on, we have to pull down, filter and ingest thousands of Gb of song data from a major music information source. This data is offered to commercial partners in two formats - a relational database or a collection of plain text files. They don't offer any tools for filtering - you have to take all of the data and then whittle down to just what you need.
The relational database turned out to be not an option. It comes in four separate tables, but includes metadata for EVERYTHING this source provides - apps, songs, books, TV shows, etc. While it would have been easier to manage the filtering via SQL, it would have meant downloading the entire database and then eliminating what we didn't need. In a truly bonehead error in judgement, I did try pulling the database pieces down to our dev db server, not realizing how full it already was. A few bad things happened, mostly tools that broke down, but it was all on staging so no real harm done. Long story short, we didn't have adequate server space to go this route, and putting in a dedicated db server just for this project was not an option due to budget limitations. This may not be a problem for you.
So I took a look at the text-based option. Text files were at least available per object type - that is, I could get text files listing music records only, as opposed to having to take app, book, and other data as well. But the text files are split by region, and as of this writing, this music data source supports 51 different regions (they just added another 32, but that change hasn't been reflected in this flat file output yet). The current text file for each region is about 2Gb zipped, and 10Gb expanded. Again, due to budget and other various limitations, I'm processing these files on a staging box, so space is a concern. I couldn't download and expand and work with all 51 files at once, as much as I would have preferred to do that.
So what I came up with was a rotation system. This data is not something that changes rapidly - at least not the portion of it that we use - and updating our own tables once a month was perfectly acceptable. I set up a dictionary assigning two region codes to each day of the month.
import os, smtplib, datetime
from time import strftime
from shutil import rmtree
from email.mime.text import MIMEText

now = datetime.datetime.today()

allregions = {
    'arg': 1, 'aus': 1,
    'aut': 2, 'bel': 2,
    'bgr': 3, 'bol': 3,
    'bra': 4, 'can': 4,
    'che': 5, 'chl': 5,
    'col': 6, 'cri': 6,
    'cyp': 7, 'cze': 7,
    'deu': 8, 'dnk': 8,
    'dom': 9, 'ecu': 9,
    'esp': 10, 'est': 10,
    'fin': 11, 'fra': 11,
    'gbr': 12, 'grc': 12,
    'gtm': 13, 'hnd': 13,
    'hun': 14, 'irl': 14,
    'ita': 15, 'jpn': 15,
    'ltu': 16, 'lux': 16,
    'lva': 17, 'mex': 17,
    'mlt': 18, 'nic': 18,
    'nld': 19, 'nor': 19,
    'nzl': 20, 'pan': 20,
    'per': 21, 'pol': 21,
    'prt': 22, 'pry': 22,
    'rou': 23, 'slv': 23,
    'svk': 24, 'svn': 24,
    'swe': 25, 'ven': 25,
    'usa': 26,
}

def getregions(regions):
    """
    For each region, check if the current day of the month
    matches the number listed in the region dict
    """
    regionlist = []
    for key, value in regions.iteritems():
        if value == now.day:
            regionlist.append(key)
    return regionlist
Obviously, this is set up via cron to run daily. As each of the current day's regions is identified, I pull down the corresponding zipped text file, unpack it, extract the records I need and then discard the source (as it is too large to store permanently):
def getzipfiles(regionlist):
    """
    Get the zip files for the current region
    Unpack each one, extract artist records to a new file
    Cleanup - remove old files and folders
    """
    # regionlist is the current day's regions, e.g. ['arg', 'aus',]
    for region in regionlist:
        # this savepath var is the base filepath on the server where I'm storing everything
        path = savepath + region + "/"
        # attempt to get all of the current zipped files (extension is .tbz) for this region
        os.system("wget -r -nH -nd -P" + path + " -A.tbz" + region + "/current/")
        """
        One of the unfortunate things I discovered in working with wget is
        that I can identify and pull files with a specific extension, but not
        a specific pattern in the file name. So until I find a better way,
        I'm downloading all of the tbz files in the region directory, then
        identifying and working with only the ones with the expression
        'song-' prepended.
        """
        newregionfile = ''
        for root, subFolders, files in os.walk(path):
            for file in files:
                if file[0:8] == 'song-'+region and file[-4:] == '.tbz':
                    newregionfile = file
        if newregionfile:
            # unpack the file, give it a new, unique name, and move it to a
            # folder where I can work on it without impacting other region files
            os.system("tar xjvf " + path + newregionfile + " -C " + path)
            source = path + newregionfile[:-4] + "/song-" + region + ".txt "
            target = savepath + "song-" + region + "-MYARTIST.txt"
            os.system("mv " + source + " " + target)
            # extract any lines containing the string 'MYARTIST' and put them in a new file
            parsetxtfile(target)
            # remove the folder created by unpacking the tar
            rmpath = path + newregionfile[:-4] + "/"
            rmtree(rmpath)
            # remove the downloaded source files
            os.system("rm -f " + path + "*")
            # email myself to let me know whether the file was processed or not
            region = region.upper()
            sendMail("new file for " + region + " - " + newregionfile)
        else:
            region = region.upper()
            sendMail("no new file for " + region)
This is the parse function referenced above - for each valid source file, it does a few specific tasks - extracts only those records where the third col ('artist') matches the name of the band I need ('MYARTIST' for this example), writes all of those new records to a new file, reformats the release date so that I can compare them to current records in the db, then deletes the original source file:
def parsetxtfile(sourcefile):
    region = sourcefile[-10:-7]
    # name the new files to be created
    targetone = sourcefile.replace('.txt', '-int1.txt')
    targettwo = sourcefile.replace('.txt', '-int2.txt')
    targetfile = sourcefile.replace('.txt', '-final.txt')
    # just reading lines in the source file eats up too much memory,
    # so the first thing that must be done is to strip out just the lines you need
    os.system("grep -w ' MYARTIST ' " + sourcefile + " > " + targetone)
    os.system("sed 's/\"//g' " + targetone + " > " + targettwo)
    ft = open(targetfile, 'w')
    fh = open(targettwo, 'r').readlines()
    for i in fh:
        if i.split('\t')[2] == "MYARTIST":
            r = '\t'.join(i.split('\t')[x] for x in range(17))
            original_release_date = i.split('\t')[17].replace(' ', '-') + " 00:00:00"
            release_date = i.split('\t')[18].replace(' ', '-') + " 00:00:00"
            record = r + '\t' + original_release_date + '\t' + release_date
            record = record + '\t' + '\t' + region.upper() + '\t' + '\n'
            ft.writelines(record)
    ft.close()
    os.system("rm -f " + sourcefile)
    os.system("rm -f " + targetone)
    os.system("rm -f " + targettwo)
    # and again, send myself a notice
    sendMail("files ready for insert: " + targetfile)
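The date munging in the middle of that loop is just a space-to-dash swap plus a midnight timestamp; isolated, it looks like this (the sample date is invented):

```python
def to_mysql_datetime(raw):
    """Turn a space-separated date column value, e.g. '2012 07 02',
    into a MySQL-friendly '2012-07-02 00:00:00'."""
    return raw.replace(" ", "-") + " 00:00:00"

print(to_mysql_datetime("2012 07 02"))  # -> 2012-07-02 00:00:00
```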
And in case you're curious, my simple sendMail function:
def sendMail(message, addmessage='', filename=''):
    """ send notification of file activity """
    emailto = ['my.address@at.com', ]
    fromaddr = "staging"
    toaddrs = emailto
    msg = MIMEText(message + addmessage + " " + filename)
    msg['Subject'] = "Music data build - " + str(now.strftime("%Y-%m-%d"))
    msg['Subject'] = msg['Subject'] + str(" - " + message + addmessage)
    msg['From'] = fromaddr
    msg['To'] = ", ".join(toaddrs)
    server = smtplib.SMTP('localhost')
    server.sendmail(fromaddr, toaddrs, msg.as_string())
    server.quit()
So this runs daily, from the 1st through the 26th of each month. Then on the 27th, I run another series of scripts** that packages all that extracted data, compares it to what we already have in the database, decides what's new, what needs to be updated, etc.
** This last series of scripts has to be run manually for a few reasons - the staging server I have to use to process all this data doesn't have direct write access to the production database server, and in spite of how much of this process is automated, the new inserts and updates still need to be reviewed by a human, someone familiar with this recording artist's music, to make sure there are no errors in the music source data. (You'd be surprised how often I spot errors - mostly songs that are attributed to this artist that shouldn't be. Unfortunately, there's just no substitute for a real, live human in this case.)
The first of these scripts takes a walk through the main folder, identifies all of those target files, and inserts the data in them into a holding table:
import os, smtplib, datetime, MySQLdb as Database
from time import strftime
from email.mime.text import MIMEText

now = datetime.datetime.today()
musicdatafolder = '/your/home/path/musicdata/'
outputfile = "/your/home/path/musicdata_deletions.txt"

def findfiles():
    """
    do a non-recursive walk of the main folder
    identify eligible files
    """
    files = []
    for file in os.listdir(musicdatafolder):
        fullpathname = os.path.join(musicdatafolder, file)
        if os.path.isfile(fullpathname):
            x = len(musicdatafolder)
            name = fullpathname[x:]
            if name[8:] == '-MYARTIST-final.txt':
                files.append(file)
    return files

def do_inserts(files):
    """
    parse the eligible music data files and insert
    the records into a holding table on dev
    """
    columns = []
    columns.extend(['song_name', 'album_name', 'artist_name', 'composer_name'])
    columns.extend(['isrc', 'upc', 'song_price', 'album_price', 'country_code'])
    insertcolumns = ", ".join(columns)
    db = Database.connect("ip", "user", "pwd", "db")
    cursor = db.cursor()
    sqlTruncate = """TRUNCATE MYARTIST.music_data_holding"""
    cursor.execute(sqlTruncate)
    for file in files:
        sourcefile = musicdatafolder + file
        print sourcefile
        fh = open(sourcefile, 'r').readlines()
        for i in fh:
            insertrow = '", "'.join(i.split('\t')[x] for x in range(21))
            sql = """INSERT INTO MYARTIST.music_data_holding (%s) VALUES ("%s")""" % (insertcolumns, insertrow)
            # print sql
            cursor.execute(sql)
        print "rm -f " + sourcefile
        # os.system("rm -f " + sourcefile)
    cursor.close()
    db.close()
Next, I compare what's in the holding table to what's in our existing music data - any new record with an album name that we don't already have is marked for deletion. (When this artist releases a new album, I know about it - but just in case, that's another reason for the manual review.)
def recommend_deletions():
    """
    recommend records that should be deleted from the holding table
    this data needs to be reviewed manually
    """
    db = Database.connect("ip", "user", "pwd", "db")
    cursor = db.cursor()
    sql = """SELECT music_data_holding.id, music_data_holding.song_name,
        music_data_holding.album_name, music_data_holding.country_code
        FROM music_data_holding
        WHERE music_data_holding.album_name NOT IN
            (SELECT DISTINCT album_name FROM music_data)"""
    cursor.execute(sql)
    mismatch = cursor.fetchall()
    ofile = open(outputfile, "wb")
    ofile.writelines('ID'+'\t'+'Song Name'+'\t'+'Album Name'+'\t'+'Country Code'+'\n')
    delete_list = []
    for record in mismatch:
        ofile.writelines(str(record[0])+'\t'+str(record[1])+'\t')
        ofile.writelines(str(record[2])+'\t'+str(record[3])+'\n')
        delete_list.append(record[0])
    ofile.close()
    cursor.close()
    db.close()
    deletions = ','.join(map(str, delete_list))
    message = "DELETE FROM music_data_holding WHERE music_data_holding.id "
    message = message + "IN (" + str(deletions) + ");" + "\n\r"
    message = message + "Review attachment for new songs before deleting." + "\n\r"
    message = message + "After deletions, run musicdata_update.py." + "\n\r"
    send_mail(message)
    return delete_list
The next script identifies records that might be new or are updates to existing songs. This one generates two emails - both with attachments for me to review, both with SQL already printed out for me to run once I've completed the manual review.
import os, smtplib, datetime, MySQLdb as Database
from time import strftime
from email.mime.text import MIMEText

now = datetime.datetime.today()
outfile1 = "/your/home/path/musicdata_newrecords.txt"
outfile2 = "/your/home/path/musicdata_updates.txt"

def find_new_records():
    """
    determine which records are new, not already in music_data
    these need to be reviewed, inserted into the database
    """
    db = Database.connect("ip", "user", "pwd", "db")
    cursor = db.cursor()
    sql = """SELECT music_data_holding.id, music_data_holding.country_code,
        music_data_holding.song_name, music_data_holding.album_name
        FROM music_data_holding
        WHERE CONCAT(music_data_holding.song_name, ' - ',
            music_data_holding.album_name, ' - ',
            music_data_holding.country_code) NOT IN
        (SELECT DISTINCT CONCAT(music_data.song_name, ' - ',
            music_data.album_name, ' - ', music_data.country_code)
            FROM music_data)
        ORDER BY music_data_holding.country_code"""
    cursor.execute(sql)
    newrecords = cursor.fetchall()
    ofile = open(outfile1, "wb")
    ofile.writelines('ID' + '\t' + 'Region' + '\t' + 'Song Name' + '\t' + 'Album Name' + '\n')
    insert_ids = []
    for record in newrecords:
        ofile.writelines(str(record[0])+'\t'+str(record[1])+'\t')
        ofile.writelines(str(record[2])+'\t'+str(record[3])+'\n')
        insert_ids.append(record[0])
    ofile.close()
    cursor.close()
    db.close()
    new_ids = ','.join(map(str, insert_ids))
    message = "INSERT INTO music_data (song_name, album_name, artist_name, composer_name, isrc, upc, song_price, album_price, country_code, active) SELECT song_name, album_name, artist_name, composer_name, isrc, upc, song_price, album_price, country_code, '1' FROM music_data_holding WHERE music_data_holding.id IN (" + str(new_ids) + ");" + "\n\r"
    message = message + "Review attachment before inserting new records to music_data." + "\n\r"
    message = message + "After completing inserts and updates, run musicdata_regions.py" + "\n\r"
    send_mail(message, 'New Records', outfile1)

def find_update_records():
    """
    determine which records are potential updates, versions already in music_data
    these need review only - no mapping necessary
    """
    db = Database.connect("ip", "user", "pwd", "db")
    cursor = db.cursor()
    sql = """SELECT * FROM music_data_holding
        WHERE CONCAT(music_data_holding.song_name, ' - ',
            music_data_holding.album_name, ' - ',
            music_data_holding.country_code) IN
        (SELECT DISTINCT CONCAT(music_data.song_name, ' - ',
            music_data.album_name, ' - ', music_data.country_code)
            FROM music_data)
        ORDER BY music_data_holding.country_code"""
    cursor.execute(sql)
    updaterecords = cursor.fetchall()
    ofile = open(outfile2, "wb")
    ofile.writelines('id'+'\t'+'song_name'+'\t'+'album_name'+'\t'+'artist_name'+'\t')
    ofile.writelines('composer_name'+'\t'+'isrc'+'\t'+'upc'+'\t'+'song_price'+'\t')
    ofile.writelines('album_price'+'\t'+'country_code'+'\t'+'last_modified'+'\t'+'active')
    ofile.writelines('\n')
    update_ids = []
    for record in updaterecords:
        ofile.writelines(str(record[0])+'\t'+str(record[1])+'\t'+str(record[2])+'\t')
        ofile.writelines(str(record[3])+'\t'+str(record[4])+'\t'+str(record[5])+'\t')
        ofile.writelines(str(record[6])+'\t'+str(record[7])+'\t'+str(record[8])+'\t')
        ofile.writelines(str(record[9])+'\t'+str(record[10])+'\t'+str(record[11]))
        ofile.writelines('\n')
        update_ids.append(record[0])
    ofile.close()
    cursor.close()
    db.close()
    new_ids = ','.join(map(str, update_ids))
    message = "SELECT * FROM music_data_holding WHERE music_data_holding.id"
    message = message + " IN (" + str(new_ids) + ");" + "\n\r" + "Review attachment"
    message = message + " before updating records in music_data." + "\n\r"
    send_mail(message, 'Update Records', outfile2)
There is a final script that does some mapping once new records are in place, but it's written in much the same style as these others - pull the data, generate a text file for review, attach that to an email with the sql to run embedded in the body. | http://www.mechanicalgirl.com/post/solved-problem-story-about-big-data-little-space/ | CC-MAIN-2021-17 | refinedweb | 2,374 | 55.03 |
Hey, I'm trying to make a simple net send application with no GUI. The program basically asks for the hostname, the message, then proceeds with filling out the information which is needed for the system command net, command option send. (net send host message).
I have this so far, but it just types out the whole string into the system command, not including the values within the variables. I am quite new to C so I don't have much of an idea of what I should be doing. Here's how it looks:
Code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char host[64], msg[256], cmd[384];
    printf("\nEnter the hostname: ");
    scanf("%63s", host);
    printf("\nEnter your message: ");
    scanf(" %255[^\n]", msg);
    /* system() takes a single string, so build the
       full command line before calling it */
    sprintf(cmd, "net send %s %s", host, msg);
    system(cmd);
    system("pause");
    return 0;
}
Tell me what you think, any help is greatly appreciated.
Gordon. | http://cboard.cprogramming.com/c-programming/69018-help-using-variable-system-command-printable-thread.html | CC-MAIN-2016-18 | refinedweb | 141 | 71.85 |
It is an int.

----- Original Message -----
From: "David Abrahams" <dave at boost-consulting.com>
To: "pysig" <c++-sig at python.org>
Sent: Tuesday, August 06, 2002 4:54 PM
Subject: Re: [C++-sig] non-const arguments

> What is the definition of Integer?
>
> If it's
>
>     typedef X Integer;
>
> where X is some builtin numeric type, well of course you can't publish that
> interface in Python. All of the builtin numeric types are immutable.
>
> Decide on a Python interface that's consistent with Python's immutability
> restrictions, and we can see how to wrap it.
>
> Perhaps
>
>     tuple ran1(Integer x)
>
> would work better for you?
>
> -----------------------------------------------------------
> David Abrahams * Boost Consulting
> dave at boost-consulting.com *
>
> ----- Original Message -----
> From: "Enrico Ng" <enrico at fnal.gov>
> To: <c++-sig at python.org>
> Sent: Tuesday, August 06, 2002 5:41 PM
> Subject: [C++-sig] non-const arguments
>
> > I am new to boost and am attempting to use V2.
> >
> > I get the "TypeError: bad argument type for built-in operation" error
> > message from python. It seems that since the variable "idum" is not
> > const and changes, python can't handle it.
> >
> > I have looked around your documentation and the copy_non_const_reference
> > class seems close to what I need but I am not sure.
> >
> > Here is some of the relevant code:
> >
> > class MathLib {
> > public:
> >     static Real ran1(Integer &idum);   <- idum is modified
> > };
> >
> > ==============================================
> >
> > #include <boost/python/class.hpp>
> > #include <boost/python/module.hpp>
> >
> > namespace python = boost::python;
> >
> > BOOST_PYTHON_MODULE_INIT(mathlib)
> > {
> >     python::module("mathlib")
> >         .def("ran1", &MathLib::ran1)
> >     ;
> > }
> >
> > _______________________________________________
> > C++-sig mailing list
> > C++-sig at python.org
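The tuple suggestion at the top of the thread maps naturally onto Python's calling convention: since Python ints are immutable, a C++ in/out `Integer&` can't be mirrored directly, so the wrapper returns both the value and the updated seed, and the caller rebinds its own name. A sketch of what the Python-side interface would feel like (the formula is a placeholder LCG, not the real ran1):

```python
def ran1(idum):
    """Stand-in for the wrapped generator: takes the seed,
    returns (random_value, updated_seed). Placeholder formula."""
    new_idum = (1103515245 * idum + 12345) % (2 ** 31)
    value = new_idum / float(2 ** 31)
    return value, new_idum

value, idum = ran1(42)     # caller rebinds its own seed name
value2, idum = ran1(idum)  # and threads the state through
assert 0.0 <= value < 1.0
```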
We have all had bugs that we have trouble finding! Finding bugs can involve a lot of detective work and so the more information we can gather about the problem the better. The diagnostics library is a very useful way of outputting runtime values and trapping problems.
Visual Studio provides powerful debugging capabilities. One of the most useful is the ability to break into the code manually or when there is a crash and to examine the variable values at the time of the crash. Simply hover your mouse over the variable. You can also use the call stack window to go back through the calls leading up to the break point and examine variables at those points.
To force a code break you can place a breakpoint in your code either by pressing F9, selecting Debug / Toggle Breakpoint from the menu or left clicking to the left of the code. Note that you must have the cursor on a line of code to be able to insert a break point.
When you now run your code the execution will stop at the breakpoint. You can now examine values or advance a line through the code (F10 - Step Over) and even step into functions (F11 - Step Into) or continue execution (F5 - Continue).
The .NET diagnostics library provides some useful functionality. To use it first of all import it via:
using System.Diagnostics;
To output text to the output pane in Visual Studio at run time you can now write:
Trace.WriteLine("Hello World");
Alternatively you can use Trace.Write and use the '\n' code to get a new line. When you run your code the text will appear in the output pane in Visual Studio. If you cannot see it make sure it is enabled from the Debug / Windows menu.
Outputting the value of a variable is much easier in C# than in C++. In C++ you had to use stringstreams and write a whole function, but in C# you can use the type's ToString() function and add strings together to create an output, e.g.
Trace.WriteLine("The current game time is "+gameTime.ToString());
The ToString function exists for basic types like int, float as well as the library types. For your own types you can implement a ToString function yourself if you wish.
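As an illustration, a custom type might override ToString like this (Player here is a made-up example type, not a framework class):

```csharp
class Player
{
    public string Name;
    public int Health;

    // Called whenever a string representation is needed,
    // e.g. by Trace.WriteLine or string concatenation
    public override string ToString()
    {
        return Name + " (health: " + Health + ")";
    }
}
```

With that in place, Trace.WriteLine("Status: " + player.ToString()); works exactly as it does for the built-in types.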
Outputting text and variable values is a very useful way of observing what your code is doing during a loop without interrupting it.
Asserts are really useful because they can show up bugs before the bugs become a big problem. Basically an assert is you saying 'I assert that something is true at this point in my program'. For example:
void DrawToScreen(int screenX, int screenY)
{
    Trace.Assert(screenX < screenWidth);
    // ... drawing code ...
}
There are two further assert overloads you can call which allow you to put a string that will be displayed when the assert is triggered. e.g.
Trace.Assert(screenX<screenWidth,"Trying to draw outside the screen");
If an assert fails, or your program crashes with some other error, you can use the Visual Studio debugger to help find problems. When the error occurs you will have the option to press Retry. Pressing this drops you into the debugger. Depending on your set-up you will have a number of different debug windows; to change which ones are visible, go to the Debug menu and select from the Windows submenu.
This article is a quick getting started guide for the ESP32-CAM board. We’ll show you how to setup a video streaming web server with face recognition and detection in less than 5 minutes with Arduino IDE.
Note: in this tutorial we use the example from the arduino-esp32 library. This tutorial doesn’t cover how to modify the example.
Related project: ESP32-CAM Video Streaming Web Server (works with Home Assistant and Node-Red)
Watch the Video Tutorial
You can watch the video tutorial or keep reading this page for the written instructions.
Parts Required
To follow this tutorial you need the following components:
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Introducing the ESP32-CAM
The ESP32-CAM is a very small camera module based on the ESP32-S chip.
The ESP32-CAM doesn’t come with a USB connector, so you need an FTDI programmer to upload code through the U0R and U0T pins (serial pins).
Features
Here is a list with the main ESP32-CAM features:

- ESP32-S module
- OV2640 camera
- microSD card slot
- On-board flash LED (connected to GPIO 4)
ESP32-CAM Pinout
The following figure shows the ESP32-CAM pinout (AI-Thinker module).
There are three GND pins and two pins for power: either 3.3V or 5V.
GPIO 1 and GPIO 3 are the serial pins. You need these pins to upload code to your board. Additionally, GPIO 0 also plays an important role, since it determines whether the ESP32 is in flashing mode or not. When GPIO 0 is connected to GND, the ESP32 is in flashing mode.
The following pins are internally connected to the microSD card reader:
- GPIO 14: CLK
- GPIO 15: CMD
- GPIO 2: Data 0
- GPIO 4: Data 1 (also connected to the on-board LED)
- GPIO 12: Data 2
- GPIO 13: Data 3
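Because the card slot is wired to those pins, a minimal sketch along these lines is enough to mount the card (this assumes the SD_MMC library that ships with the arduino-esp32 core, and is only a sketch, not tested here):

```cpp
#include "SD_MMC.h"  // SD/MMC driver that uses the pins listed above

void setup() {
  Serial.begin(115200);
  if (!SD_MMC.begin()) {  // mount the card in SD/MMC mode
    Serial.println("SD card mount failed");
    return;
  }
  Serial.printf("Card size: %llu MB\n", SD_MMC.cardSize() / (1024 * 1024));
}

void loop() {}
```

Keep in mind that GPIO 4 doubles as the flash LED pin, so the LED may glow while the card is in use.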
Video Streaming Server
Follow the next steps to build a video streaming web server with the ESP32-CAM that you can access on your local network.
Important: Make sure you have your Arduino IDE updated as well as the latest version of the ESP32 add-on.

1. CameraWebServer Example Code
In your Arduino IDE, go to File > Examples > ESP32 > Camera and open the CameraWebServer example.
The following code should load.
So, comment all the other models and uncomment this one:
// Select camera model
//#define CAMERA_MODEL_WROVER_KIT
//#define CAMERA_MODEL_ESP_EYE
//#define CAMERA_MODEL_M5STACK_PSRAM
//#define CAMERA_MODEL_M5STACK_WIDE
#define CAMERA_MODEL_AI_THINKER
If none of these correspond to the camera you’re using, you need to add the pin assignment for your specific board in the camera_pins.h tab.
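For reference, the AI-Thinker entry in camera_pins.h defines the pin assignment roughly as follows (copied from the stock example at the time of writing; verify against the camera_pins.h tab in your own IDE):

```cpp
#elif defined(CAMERA_MODEL_AI_THINKER)
  #define PWDN_GPIO_NUM     32
  #define RESET_GPIO_NUM    -1
  #define XCLK_GPIO_NUM      0
  #define SIOD_GPIO_NUM     26
  #define SIOC_GPIO_NUM     27
  #define Y9_GPIO_NUM       35
  #define Y8_GPIO_NUM       34
  #define Y7_GPIO_NUM       39
  #define Y6_GPIO_NUM       36
  #define Y5_GPIO_NUM       21
  #define Y4_GPIO_NUM       19
  #define Y3_GPIO_NUM       18
  #define Y2_GPIO_NUM        5
  #define VSYNC_GPIO_NUM    25
  #define HREF_GPIO_NUM     23
  #define PCLK_GPIO_NUM     22
```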
Now, the code is ready to be uploaded to your ESP32.
3. ESP32-CAM Upload

After uploading the code, disconnect GPIO 0 from GND and press the on-board RST button. Open the Serial Monitor at a baud rate of 115200 to find the board IP address, and enter it in your browser to reach the streaming web server. Press the Start Streaming button to start video streaming.
You also have the option to take photos by clicking the Get Still button. Unfortunately, this example doesn’t save the photos, but you can modify it to use the on board microSD Card to store the captured photos.
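As a sketch of what such a modification could look like, a helper roughly like the one below grabs a frame and writes it to the card (it assumes the camera is already initialised and SD_MMC is mounted; the file path is just an example):

```cpp
#include "esp_camera.h"
#include "SD_MMC.h"

// Capture one JPEG frame and store it on the microSD card
bool savePhoto(const char *path) {
  camera_fb_t *fb = esp_camera_fb_get();  // grab a frame buffer
  if (!fb) {
    return false;  // capture failed
  }
  File file = SD_MMC.open(path, FILE_WRITE);
  bool ok = false;
  if (file) {
    file.write(fb->buf, fb->len);  // raw JPEG bytes and their length
    file.close();
    ok = true;
  }
  esp_camera_fb_return(fb);  // always hand the buffer back to the driver
  return ok;
}
```

Calling savePhoto("/photo.jpg") from loop(), or from a motion-detection trigger, would then produce one file per capture.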
There are also several camera settings that you can play with to adjust the image settings.
Finally, you can do face recognition and detection.
First, you need to enroll a new face. It will make several attempts to save the face. After enrolling a new user, it should detect the face later on (subject 0).
And that’s it. Now you have your video streaming web server up and running with face detection and recognition with the example from the library.
The ESP32-CAM provides an inexpensive way to build more advanced home automation projects that feature video, taking photos, and face recognition.
In this tutorial we’ve tested the CameraWebServer example to test the camera functionalities. Now, the idea is to modify the example or write a completely new code to build other projects. For example, take photos and save them to the microSD card when motion is detected, integrate video streaming in your home automation platform (like Node-RED or Home Assistant), and much more.
We hope you’ve find this tutorial useful. If you don’t have an ESP32-CAM yet, you can grab it here.
If you like this project, you may also like other projects with the ESP32-CAM:
- ESP32-CAM Video Streaming Web Server (works with Home Assistant and Node-RED)
230 thoughts on “ESP32-CAM Video Streaming and Face Recognition with Arduino IDE”
TF card 4GB limit. Will larger capacity cards, i.e. 8GB work, but only 4GB will be usable? Smaller cards are getting harder to find. FAT-16 format required?
Hi Bruce.
I haven’t tested with 8GB sd cards. I’ll need to check if those work too.
It needs to beet FAT-32 format.
Regards,
Sara
Can you help me because my ESP 32cam keeps showing this again and again and it doesnt show the IP address…
Also my ESP 32 CAM doesnt have AI-THINKER marked in it.
Do you have any ideas about this? I badly need your help..
Car
Hi.
It seems that your board is booting constantly. Try pressing the RST button several times to see if it solves the problem.
If it doesn’t, double-check that you’re powering the ESP32-CAM with 5V on the 5V pin (not VCC).
It may also help taking a look at our troubleshooting guide bullet 2 and 3:
I hope this helps.
Regards,
Sara
Thanks very much for this ESP32-CAM project, I am looking forward to learning the camera applications, it is my first.
Unfortunatly I am getting the following error returned to the serial monitor after reset:
SCCB_Write [ff]=01 failed
SCCB_Write [12]=80 failed
[E][camera.c:1085] esp_camera_init(): Camera probe failed with error 0x20001
Camera init failed with error 0x20001
I have updated the arduino IDE to 1.8.9 and ESP32 boards as per instructions, but cant find the problem. If you have any ideas I really appriecate it.
Hi James.
Did you select the right camera module in the code?
Please double check that your camera is well connected to the board.
I also found this issue: github.com/espressif/esp32-camera/issues/5
It seems the same as yours, so it might help.
Regards,
Sara
Hi James. Did you ever get a resolution on this problem? I purchased two units and they are responding the same.
Hi Dan, yes I took Sara’s advice and selected the correct camera module in the code but commenting out the ones that don’t apply. I did also find reducing the upload speed made things more stable. I think my programmer is not the best.
Very happy it works very well. Thanks again
Hi Dan, did you found the solution. I also purchase two units with different brand with same issue. (the first one have successed before but when retry to reupload the issue came).
Try all suggestions here by changing board selection, changging cable, changging programmer device, changging pins selection, try with different PC and all have same problem.
I Had the Same Camera init failed with error 0x20004. I powered the ESP32 with 5v and works great. May try the 5V to see if it goes away.
THANKS For this great site and tutorials!!!!!
Any update on card sizes??? Brand name 4 GB cards are special order. When I find 4GB they are almost as expensive as 16/32GB sizes. Ebay takes forever anymore, and then you don’t know what you are getting. No name brand on Ali Express or Banggood.
Hi David.
You can use SD cards with larger capacity.
Hi. Great tutorial; worked like a charm once used a separate 5V supply.
Any way you know of to see the video stream or stills via a TFT display on another ESP through web browser or otherwise? I’ve used ESPNow between ESP12’s or 32’s for display of thermal cam images but they’re much smaller. Avoids need for phone or laptop tied up….
Thanks
Mel
Hi, thanks for the tutorial, but I’m getting 2 problems with the code :
1. I can’t include the zip file through “Add .ZIP library” from Arduino IDE
2. When I put it manually through extracting the zip file and moved it to my Arduino libraries folder, then compile the code, I got “no headers files (.h) found” error
Any help would be appreciated, thanks again for the tutorial.
Hi Mario.
You don’t need to install any library. You just need to have the ESP32 add-on installed.
The zip file that we provide contains all the code that you need.
You just need to unzip the file, open the CameraWebServer folder and open the CameraWebServer.ino.
Your arduino IDE should open the code and you’ll see three tabs at the top. Then, you just need to upload the code to your board.
Alternatively, if you have the latest updated ESP32 add-on, you should have the code in your examples. Go to File > Examples > ESP32 > Camera and open the CameraWebServer example.
I hope this helps.
Regards,
Sara
Hi, nice tutorial, Can I with ESP32CAM store not manually pictures and for example check if have a car in picture?
I was looking for something like this for my recent project, Thanks! Great tutorial! But I think ESP32-CAM is “unofficial” combination of ESP32 with a camera. I think Espressif themselves released a dedicated “official” ESP32+camera board called ESP-EYE with their own “official” software library called ESP-WHO.
Well I got all the information from here:
Have not tried that board myself. Can you make a tutorial on that as well since that is the “official” hardware and software and would have longer support from Espressif itself.
Also a comparison between the 2 would be great too.
I follow a lot of Random Nerd Tutorials. You guys make easy to follow guides. Cheers! Keep it up!
Thanks!
Hi Ryan.
Thank you for your nice words.
The ESP-EYE is an Espressif release.
We haven’t fully tested the ESP-EYE yet. We’ve played with the example firmware that they provide and we made a blog post about it that you can read here:
At the moment, we don’t have any more tutorials with the ESP-EYE.
Thank you for your interest in our content.
Regards,
Sara
What happened? What's going on? Thx
Hi David.
That’s usually a power issue.
Please see our troubleshooting guide bullet 8:
Regards,
Sara
Hello.
I faced the same problem with a recent FTDI.
Replacing it by an older one (with a large USB connector), the problem has been fixed.
Could be it is the same for you.
Regards.
F.Thomart
use 5V
I can somehow integrate this recognition face to my Home assistant?
Hi.
Follow this tutorial:
Thanks, it will be of great help, recently I was able to integrate my esp32 cam into an MQTT client library, every face detected a publisher is sent to the broker
That’s a great project.
Will you publish your project somewhere? Many people here might be interested.
Regards,
Sara
I have not shared it yet, but I can post here if you wish, I used the pubsubclient library to transform esp32cam into a Mqtt client
Hello Felipe, that would be a great add on to this !!!
Any development ?
It would be very useful if you share your project, also trying to do the same. Thanks in advance Felipe!
Greetings and congratulations for the tutorial. You are a very nice couple.
Is it possible to take this captured image to a server on the internet?
Can I have this camera in my house and see what happens from my work?
Thank you.
Hi Rui & Sarah,
How do you set up face recognition ?
I have the whole thing working as expected, however the Enroll face button does nothing ?
It seems that face recognition is no longer working (at least with the example program) when using the 1.02 ESP core.
Rolling back to the 1.01 core and using the example program belonging to that core, will ‘fix’ it (currently that is the program that Sara and Rui have on their Github
Hi, how do i roll back to the 1.01 core ? i have the same problem of Neil, the Enroll face button does nothing.. can you help?
You need to go to the Boards Manager, search for ESP32, select that version and install 1.0.1.
Hi Guys,
I purchased two units and both fail with the following:
[D][esp32-hal-psram.c:47] psramInit(): PSRAM enabled
[E][camera.c:1085] esp_camera_init(): Camera probe failed with error 0x20001
Camera init failed with error 0x20001
I’ve selected AI Thinker in the code and reduced the upload to 115200. Anyone have some insights? I have a M5Stack Camera which works pretty well with the code but these two are dead.
Thanks
Dan
Hi! good tutorial!, I need to put the upload speed 115200 and the flash frequency in 40Mhz to avoid a Guru Meditation Error: Core 0 panic’ed (InstrFetchProhibited) error if someone have the same problem 🙂
Hi Leonardo.
thank you for sharing.
It will definitely be useful for many people.
Regards,
Sara
Got my cameras today – your tutorial above works perfectly! 🙂
Any idea how to turn on the “flash light” LED?
Thanks much,
ben
There is a small red led (GPIO33 inverted) . The main led is controlled by GPIO4. In the example CamWebServer program there is AFAIK no possibility in the webserver to switch the main LED.
Should be possible though to program Switching the LED and control it via say HASSio or OpenHab, with an MQTT command or something. If there are any unused pins, you could add a switch
How about: arduinodiy.wordpress.com/2019/08/03/turning-led-off-and-on-on-the-esp32-camera-module-using-bluetooth/
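For anyone who wants to experiment, driving that LED is plain GPIO output. Here is an untested sketch, assuming the AI-Thinker wiring described in the article (flash LED on GPIO 4):

```cpp
const int flashPin = 4;  // on-board flash LED on AI-Thinker boards

void setup() {
  pinMode(flashPin, OUTPUT);
}

void loop() {
  digitalWrite(flashPin, HIGH);  // flash on
  delay(500);
  digitalWrite(flashPin, LOW);   // flash off
  delay(500);
}
```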
I am having problems getting errors: camera_probe(): Detected camera not supported.
esp_camera_init(): Camera probe failed with error 0x20003.
That occurs selecting AI Thinker. The other two options give me the 0x20001 error. I bought the esp camera from DIYMORE.CC. The description in their ad prints AI Thinker on the chip, but my actual device does not have AI Thinker printed. It just has DM-ESP32-S.
Any ideas?
Did you find a solution or the correct IDE setting for your DM ESP32?
I have the same modules but haven’t used them yet.
I’d appreciate your input.
i have the same DM board, used the same IDE settings as mentioned here, no problem with the arduino sample, except must use 5v power otherwise will keep getting brownout error
Thanks Great Job
But i have almost the same problem as Neil.
face recognition works very bad i get almost no yellow square
how to ficks that?
Hi.
Face recognition is a bit slow, however we managed to make it work fine.
Please make sure that you have proper lighting to make the face recognition process easier and more efficient. Also, when enrolling a new face, you need to be steady and don’t move much, so that it properly saves your face features and can recognize it in the future.
Regards,
Sara
Hi,
Got my hardware last week from banggood. It had the issue “Brownout detector was triggered”. Seaching the web i found this video where they say to feed by 5v not 3.3v. ~2:30
This solved the brownout issue for me.
Then the web service did not appear in google chrome browser. Error message was something about too much header lines or so. In MS Edge it was ok. But i have no image from the cam. Cam must be broken. So i have to wait another month to get this as spare part. Have also ordered another ESP board with an external antenna hoping to get better connection to the router.
Hi Patrick.
I’m sorry you’re getting trouble using your ESP32-CAM.
The brownout detector error usually means that the ESP32 is not being powered properly. You can read more about this on our troubleshooting guide, bullet 8:
Our camera worked flawlessly following the steps we describe in our tutorial.
The ESP32-CAM should work fine being powered either with 3.3V through the 3.3V pin or 5V through the 5V pin. You’re probably not providing enough current.
Also, we didn’t have any trouble accessing the web server on Google Chrome.
After you get a new camera, let us know how it went.
Regards,
Sara
Any ideas what would cause a 20003 error? I have tried all three camera types. The AI Thinker gives 20003. The other two cause a 20001 error
Hi John.
I’m sorry you’re having that issue.
Those errors usually mean that the camera is not properly connected. So, or your camera module is faulty or it is not properly connected.
If these are not the reasons, it is very difficult for us to understand what is going on.
Can you try using a new camera probe?
Regards,
Sara
Thanks. The camera came installed. I bought 2 of them, and they both fail. I decided to buy from another source and see if that works.
I am not sure what you are referring to regarding a new camera probe.
Hi John.
I’m talking about the camera only (without the ESP32 board)
Regards,
Sara
Dear ALL
ESP32 doesn´t connect with mit Network and no text in Serial Monitor is being printed. SID and PW changed in coding. Any Ideas?
Message in Arduino 1.8.8:
The sketch uses 2233514 bytes (71%) of program storage space. The maximum is 3145728 bytes.
Global variables use 50692 bytes (15%) of dynamic memory, leaving 276988 bytes for local variables. The maximum is 327680 bytes.
esptool.py v2.6-beta1
Serial port COM9
Connecting…….
Chip is ESP32D0WDQ6 (revision 1)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
MAC: cc:50:e3:b6:e5:90
Uploading stub…
Running stub…
Stub running…
Configuring flash size…
Auto-detected Flash size: 4MB
Compressed 8192 bytes to 47…
Writing at 0x0000e000… (100 %)
Wrote 8192 bytes (47 compressed) at 0x0000e000 in 0.0 seconds (effective 4096.1 kbit/s)…
Hash of data verified.
Compressed 17664 bytes to 11528…
Writing at 0x00001000… (100 %)
Wrote 17664 bytes (11528 compressed) at 0x00001000 in 1.0 seconds (effective 138.4 kbit/s)…
Hash of data verified.
Compressed 2233680 bytes to 1788374…
Wrote 2233680 bytes (1788374 compressed) at 0x00010000 in 158.5 seconds (effective 112.7 kbit/s)…
Hash of data verified.
Compressed 3072 bytes to 134…
Writing at 0x00008000… (100 %)
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.0 seconds (effective 768.0 kbit/s)…
Hash of data verified.
Leaving…
Hard resetting via RTS pin…
Hi.
It seems that your code was uploaded successfully.
Make sure you open the serial monitor at a baud rate of 115200, so that you can see the text on the serial monitor.
After uploading the code, you should disconnect GPIO from GND. Open the Serial monitor, and then press the ESP on-board reset button.
Please make sure you’ve inserted the right network credentials.
Can you access the web server when you insert the IP address on your browser?
Regards,
Sara
Dear Sara,
may I ask you please to advise on the issue below.
I purchased an AI Thinker, but it is not printed on the chip.
This product contains the OV2640 Camera Module.
Can you please advise on which Camera Model to use?
The use of #define CAMERA_MODEL_AI_THINKER refers to the error. Also the other led to issues.
Thanks
Hi Christian.
That’s probably a power issue.
Please read bullet 8 of our ESP32 troubleshooting guide:
Regards,
Sara
Check the camera pinout here: github.com/m5stack/m5stack-cam-psram/blob/master/README.md
I had to change pins 22 and 25 in camera_pins.h for the M5STACK_PSRAM
Hi!
I had same issue and that fixed but does not understand why.
if I set it like this:
#define CAMERA_MODEL_AI_THINKER
why should it g to the case:
#elif defined(CAMERA_MODEL_M5STACK_PSRAM)
???
Can you explain?
thanks!
Thank you very much for sharing. Using M5STACKcam I didn’t had image. After troubleshooting and comparing with other codes I changed setting for Y2_GPIO_NUM to 17. Now it works like a sharm 🙂 using ESP32 DevModule with Huge APP for partition scheme.
Hi, Problem solved. Any ideas to improve the video quality?
Regards
I faced different problems getting the module working. Since I am using the 5V-supply pin (instead of the 3.3V on the CAMERA_MODEL_AI_THINKER) everything is OK.
Hi . I have an esp32-cam and i went throught all the process to program the board and everything was going fine . At the end i’ve got the message telling me the ip adress to connect my board so i did in my browser and i ‘ve got the viewer that appeared in the screen but but when i press start stream or get still i don’t have any image on the screen !
I tried with 2 boards and still the same problem . The only things in common is the software …
Any idea ?
Thanks .
Hello Patrick, unfortunately I can’t replicate that error on my end… The default CameraWebServer scripts works fine for me out of the box.
Regards,
Rui
See my post 20/-4/2019, maybe this will also solve your problem
Patrick,
I have the same problem. After I hit the “Start Stream” button, no image shown on the screen.
Have you resolved the problem ?
Regards,
Ong Kheok Chin
Hi Patrick,
I have the same problem. I have a windows machine using Windows 10. I think the problem might be in the windows firewall. My camera streams fine on my android phone. Maybe someone can help us setup the windows firewall. If I can get it figured out, i’ll let you know.
Hi, i’m stuck right at the beginning with Arduino IDE 1.8.9 I have to select the board before i see any ESP32 examples – chose ESP32 Wrover module, however examples do not include Camera – any ideas? Thanks
Hi Mike.
I’m sorry you’re facing that problem. I don’t know why that is happening. But you can try to download the example from our repository:
Regards,
Sara
Just for anybody else having the same problem: Choose board ‘AI thinker ESP32-Cam’
Then in Examples go to “ESP32” and then “Camera”
After that you can alter the selected board again
update your ESP32 board driver version up to 1.0.2 or above .
Hello. Thanks for the tutorial – the camera is working 🙂
Nice tutorial, everything worked. Could you please show us how we can broadcast the video stream to the internet (so that we can see the video from any computer)? Maybe using port forwarding of the ESP32-cam or using a dedicated service? It would also be great to have an example working offline to record the video on a SD card (I haven’t managed to do that). Thanks!
Hi Oli.
At the moment, we don’t have any tutorial about that subject.
We’ve also been trying to use the SD card to save photos and record video, but at the moment, without success.
Regards,
Sara
Howdy Folks,
I am getting this major bug in my serial monitor after disconnecting the GPIO0 cable and resetting it:
Guru Meditation Error: Core 0 panic’ed (LoadProhibited). Exception was unhandled.
Core 0 register dump:
PC : 0x4012fea1 PS : 0x00060031 A0 : 0xca400000 A1 : 0x3ffe3ac0
A2 : 0x3ffaff7c A3 : 0x00000080 A4 : 0x3ffbf0ec A5 : 0x40090858
A6 : 0x02ffffff A7 : 0x00000c00 A8 : 0x4008f290 A9 : 0x3ffe3a90
A10 : 0x3ffbf0ec A11 : 0x000000fe A12 : 0x00000001 A13 : 0x00000000
A14 : 0x00000000 A15 : 0x00000000 SAR : 0x0000001d EXCCAUSE: 0x0000001c
EXCVADDR: 0x03000283 LBEG : 0x4000c2e0 LEND : 0x4000c2f6 LCOUNT : 0xffffffff
Backtrace: 0x4012fea1:0x3ffe3ac0 0x4a3ffffd:0x3ffe3ae0 0x400dea6d:0x3ffe3ba0 0x400de992:0x3ffe3bc0 0x40083ec3:0x3ffe3bf0 0x400840f4:0x3ffe3c20 0x40078f2b:0x3ffe3c40 0x40078f91:0x3ffe3c70 0x40078f9c:0x3ffe3ca0 0x40079165:0x3ffe3cc0 0x400806da:0x3ffe3df0 0x40007c31:0x3ffe3eb0 0x4000073d:0x3ffe3f20
Rebooting…
unhandled.
Guru Meditation Error: Core 0 panic’ed (StoreProhibited). Exception was unhandled. {Note that are about 60 of these in my Log}
Guru Meditation E⸮ets Jun 8 2016 00:22:57
rst:0x7 (TG0WDT_SYS
Any Ideas?
Hello Kurt, here's what the "Brownout detector was triggered" error means: the ESP32 is not being powered properly, so the chip keeps resetting itself.
Rui, I am using a USB CH340 and also a USB FTDI serial boards that connect directly to a Computer USB port, there are no cables, other than the Jumper wires. I have tried this on 3 different computers and about 3 to 4 USB ports on each one. I have also tested 2 CAM boards with the exact same results.
The Brownout is the only thing listed on my previous post, there’s also the:
“Guru Meditation Error: Core 0 panic’ed (StoreProhibited). Exception was unhandled. {Note that are about 60 of these in my Log}”
Which spawn 60 TO 100 Messages before it Reboots.
Hi.
Some of our readers reported that when they power the ESP32-CAM with 5V, they don’t have the brownout error or guru meditation error anymore.
Regards,
Sara
When I powered either one of them with 5V through the USB Serial dongle the LED on the ESP board lights up and stays on, while the Serial monitor shows nothing.
HI all,
I purchased a ESP32-Cam. I have had a lot of problems trying to get it to work.
I could nbot get the sketch to upload and a couple of other small issues.
What I found was (
Its all to do with the voltages…..
and the pin configuration is different on my usb-TTl compared to the pics on the web. ) –
1. Set the usb-TTl to 3.3V.
2. connect it to the ESP32-CAM as shown in all the diagrams, (but put the 3.3V from the usb-Tl to 3.3V on the ESP32-CAM.)
3. Strap the Io0 and gnd.
.. make sure the pins you have cables on are correct… very important.
4. Power up and upload the sketch.
Now to test the ESP32-CAM.
1. Remove the IO0 and gnd jumper.
2. Change the usb-TTl to 5v (changing the pin)
3. Change the voltage on the ESP32-CAM to 5V pin.
4. Power up.
5. Open up the serial monitor.
6. Press the reset button on the ESP32-CAM.
7. get the IP address.
Enter the IP address in your browser. Go to the bottom to Start streaming data.
And It works like a charm.
If I do not change the voltage on the pins (3.3V for uploading the sketch and 5V for operating) then I could not get anything to work.
I hope this helps other people who are having Issues.
Wonderful tutorial, quick set up….I have 1 little issue…Stills OK, Steaming NOT OK…. Everything seems to work well and good but when I press Start Steam, nothing streams. I can tell through the Monitor, and TTL connection that the Steaming mode is going, and when I stop the monitoring shifts down to lower FPS. Still captures work just fine. Am I missing something? Do I need an SD card installed to allow streaming? Arduino 1.8.9, ESP32 Espressif v1.0.2
Hi.
You don’t need SD card to see the streaming.
I don’t known what can be the problem. Please note that you can only see the streaming on one client at a time. So, make sure that you don’t have any other browser tab making requests to the streaming URL.
I’m sorry that I can’t help much.
Regards,
Sara
I am facing the following error while uploading code. Please help
A fatal error occurred: Failed to connect to ESP32 cam: Timed out waiting for packet header
You probably don’t have the right connections to the FTDI programmer.
Also GPIO 0 needs to be connected to GND while uploading the code.
Regards,
Sara
hello, have you solved this error?
I edit code to use esp32 as accesspoint.
on serial monitor show:
IP address: 192.168.4.1
Starting web server on port: ’80’
Starting stream server on port: ’81’
Camera Ready! Use ‘’ to connect
E (5687) wifi: addba response cb: ap bss deleted
Hi.
Unfortunately, I don’t know what that message means.
If you find out, please share with us.
Regards,
Sara
Hi, has anyone had the following error?

[E][sccb.c:154] SCCB_Write(): SCCB_Write Failed addr:0x30, reg:0x23, data:0x00, ret:-1
20:59:56.233 -> [E][camera.c:1215] camera_init(): Failed to set frame size
20:59:56.233 -> [E][camera.c:1270] esp_camera_init(): Camera init failed with error 0x20002

I don't know how to solve it, I'd appreciate your help. Regards.
Hi Arturo.
Next time, post your questions in english so that everyone can understand.
Which camera board are you using?
Hello, sorry for my previous message in Spanish.
The problem occurs on an ESP32-S AI-Thinker board.
Hi Arturo.
That error you were referring to usually means that the camera is not properly connected or the ESP32 is not able to recognize the camera. That can be due to the following issues:
– Camera not connected properly: the camera has a tiny connector and you must ensure it’s connected in the the right away and with a secure fit, otherwise it will fail to establish a connection
– Not enough power through USB source: Some ESP32-CAM boards required 5V power supply to work properly. We’ve tested all our examples with 3.3V and they worked fine. However, some of our readers reported that this issue was fixed when they power the ESP32-CAM with 5V.
– Faulty FTDI programmer: Some readers also reported this problem was solved by replacing their actual FTDI programmer with this one:
– The camera/connector is broken: If you get this error, it might also mean that your camera or the camera ribbon is broken. If that is the case, you may get a new OV2640 camera probe.
Also, sometimes, unplugging and plugging the FTDI programmer multiple times or restart the board multiple times, might solve the issue.
I hope this helps.
regards,
Sara
Hi I did everything as explained and if I get ip and I can enter and start the camera but when selecting the face dectector does not work does not happen nothing does not detect the faces, I have remained still to see if it detects the face and does not work , esp32 I have it connected to the 5v pin because when I tried it with 3.3v I did not want to load the code
I had the same problem and the camera needs to be in good lighting conditions to get it to do any of the recognition functions…….
hello I did everything as established, I charge the code and it gives me the ip and the entry in my browser and if it enters the platform of the camera and I can start the camera only that when selecting for the face detector it does not work I have been still to see if it detects but nothing appears, and if you notice that the quality of the camera is somewhat low and I do not know if that could be the cause, there is no way to turn on the led that includes the esp32 cam to work as flash
Hi Jesus.
What is the camera module that you’re using?
If the camera board doens’t have PSRAM, it won’t be able to do face recognition and detection.
Regards,
Sara
hello i have esp32-s Ai thinker PSRAM IPS6404LSQ
The face recognition and detection should work with that camera.
Did you follow Neil suggestions?
You really need to have good lighting, otherwise it won’t be able to recognize faces.
Regards,
Sara
Hello! Excellent tutorial, got me started real easy with the ESP32-Cam. I did got a bit stuck though:
– First time I uploaded the CameraWebServer sample sketch, the upload process worked fine, though I could not see any traces back in the serial, even removing the GPIO 0 to GND jumper and resetting.
– Then I tried to upload *again* and got only an error back:
esptool.py v2.6
Serial port COM5
Connecting…….._____….._____….._____….._____….._____….._____….._____
A fatal error occurred: Failed to connect to ESP32: Timed out waiting for
packet header
A fatal error occurred: Failed to connect to ESP32: Timed out waiting for packet header
– Since I had a second ESP32-CAM, I repeated the steps above, and the results were the same: first upload from the IDE succeeded, the next one failed with the error above.
– I did try to change the upload speed to 115200 bps, but it did not change the results
– I did not (yet) try pressing the ‘reset’ button in the board, because it is in the back side and I have the module in a protoboard. Since the first upload worked without pressing reset, I’m not sure I need to do it this time, but I’m open for suggestions 🙂
Thoughts?
*Update*: it was indeed the ‘reset’ button underneath; if anyone is facing the same problem, just remember to briefly hit the reset button as you’re about to upload the compiled firmware.
Everything is working fine here now, thanks again for this nice tutorial.
Hi Claudio,
Yes, you need to press the reset button, otherwise you won’t be able to upload code.
Regards,
Sara
Hi, I want to thank you for all your articles, I learned a lot on this site.
Following this tutorial my ESP32 Cam worked the first try.
Now the part where I have some problems: I would like to connect some device through I2C like a BME280, a stepper motor and 2 relay but I have some difficult to locate the right pins (if available).
Could you help me?
TIA,
Vince
Hi Vince.
If you intend to use the SD card, there aren’t pins left (at least accessible pins).
If you don’t use the SD card, there are some pins available, but I haven’t experimented with those yet.
You can see the datasheet to check the internal connections to the pins: (page 4)
You can see the pinout here:
I hope this helps.
Regards,
Sara
Thank you for your reply.
My need is to understand how many pins are left unused.
It seems that GPIO 0–4 and GPIO 12–16 are already used by the cam,
so no other device could be used.
Maybe some other GPIOs could be used by connecting directly to the ESP32 pins.
Regrds,
Vince
Hi again.
GPIO16, GPIO 2 and GPIO 3 are not being used by the camera.
However, GPIO 2 and GPIO 3 are used for serial communication. But you can try with those.
Regards,
Sara
I have 2 boards and cams and with both i have the same problem:
……………………………………………………….
I get endless dots , that’s it. Camera does not init. If I remove the cams from the boards that is detected and an error is printed.
Hi Mirko..
Some readers reported that powering the ESP32-CAM board with 5V solved the problem.
Regards,
Sara
Thank you. That solved the problem.
I was always thinking the ESP32 is opening up a own WiFi hotspot and so inserted credentials for that.
I did not realize that it wants to connect to my Wifi and needs that credentials.
I did not even think about that, because I thought that the ……. is a part of the camera initialisation 😉
So again, thanx for the hint.
Mirko
Hello, I still have the same problem: facial recognition does not work. When I start it, the Arduino IDE Serial Monitor starts printing this:
MJPG: 8205B 209ms (4.8fps), AVG: 210ms (4.8fps), 134+61+0+0=196 0
MJPG: 8220B 208ms (4.8fps), AVG: 210ms (4.8fps), 133+61+0+0=195 0
MJPG: 8234B 207ms (4.8fps), AVG: 210ms (4.8fps), 133+61+0+0=195 0
MJPG: 8253B 208ms (4.8fps), AVG: 210ms (4.8fps), 133+61+0+0=195 0
MJPG: 8258B 239ms (4.2fps), AVG: 211ms (4.7fps), 136+62+0+0=198 0
MJPG: 8244B 282ms (3.5fps), AVG: 215ms (4.7fps), 134+62+0+0=196 0
but nothing appears in the camera view, and if I click Enroll Face it sometimes throws this error:
Guru Meditation Error: Core 0 panic’ed (LoadProhibited). Exception was unhandled.
Core 0 register dump:
PC : 0x40132f33 PS : 0x00060c30 A0 : 0x801333fb A1 : 0x3ffd5090
A2 : 0x3ffc73fc A3 : 0x00000000 A4 : 0x00000000 A5 : 0x00000000
A6 : 0x00000008 A7 : 0x00600002 A8 : 0x80132ea4 A9 : 0x3ffd5070
A10 : 0x00000000 A11 : 0x0000000b A12 : 0x00000005 A13 : 0x00000020
A14 : 0x00000020 A15 : 0x3ffbe140 SAR : 0x00000020 EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000001 LBEG : 0x4000c2e0 LEND : 0x4000c2f6 LCOUNT : 0xffffffff
Backtrace: 0x40132f33:0x3ffd5090 0x401333f8:0x3ffd50c0 0x401334a0:0x3ffd50f0 0x40133755:0x3ffd5120 0x40094c89:0x3ffd5150 0x4008dae1:0x3ffd5190
It is also worth saying that powering the unit just from the serial converter leads to problems (for me), because the module needs more/quicker power than my serial-converter module is able to deliver, as you can sometimes see on the Serial Monitor when there is a "Brownout detector…" message.
I now power it from another "good" source to cope with the inrush current the module apparently needs to kick in with Wi-Fi.
Hello, thank you for posting this material, it is very explanatory. I would like to report a problem with the ESP32-CAM I’m using. The image was stuck and locked. So I switched the voltage to 5V and now it works fine. Thank you
Hi Tiago.
Thanks for sharing.
We now have a troubleshooting guide with the most common problems and how to fix them:
Regards,
Sara
Hi guys. Thanks a lot for this tutorial. I'm using the ESP32-CAM without problems. The only question I have for you is: is there any way to rotate the image by 90º?
Thanks again!
Hello,
I am having trouble with my diymore esp32 cam. I believe it is a dev module so this is what I pick under boards (there is nothing that says diymore). I am getting connection timeouts using my adafruit programmer friend wired up the same way as the diagram. Using 3v3. Any suggestions?
Hi David.
Can you try powering your board through the 5V pin and see if it solves the problem?
Regards,
Sara
I was able to get my sketch uploaded to the DIY more board using 3v and 40mhz. The 5v to run the sketch
Thanks for sharing.
Has anyone had any luck in integrating this tutorial with MQTT? I’d like to be able to publish a notification via MQTT to a topic when a recognised face is detected so I can integrate this with my Home Automation System – Thanks
Hi Jonathan.
We intend to work on something like that in the future. But at the moment we haven’t experimented with it yet.
Meanwhile you can take a look at our MQTT tutorial:
Regards,
Sara
Thanks for your post! I'm from Brazil and I'm trying to use a DIYmore ESP32-CAM board, but it doesn't work. My first project with an ESP32-CAM used an AI-Thinker board and it works fine.
But when I use the DIYmore ESP32-CAM, it doesn't work.
Maybe the DIYmore ESP32-CAM has a different pinout?
Hi Stefano.
I have no idea why one works and the other doesn’t.
Some users reported that some boards required 5V to operate.
That can be the case.
You can also take a look at our troubleshooting guide and see if it helps:
Regards,
Sara
My problem is this:
A fatal error occurred: Failed to connect to ESP32: Timed out waiting for packet header
A fatal error occurred: Failed to connect to ESP32: Timed out waiting for packet header
any solutions pleaseee , thanks
Hi.
Please check our troubleshooting guide, bullet 1:
I hope this helps
Regards,
Sara
Hi!
Hello,
My board is behaving little strange. Did anybody have this kind of message:
sptool.py v2.6
Serial port /dev/ttyUSB0
Connecting….
Chip is ESP32D0WDQ6 (revision 1)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
MAC: cc:50:e3:b6:db:f1
… (134.8 kbit/s)…
A fatal error occurred: Timed out waiting for packet header
A fatal error occurred: Timed out waiting for packet header
Best Regards,
Milan
Hi Milan.
That errors means that your ESP32-CAM is not in flashing mode.
Please read our troublehsooting guide bullet 1:
I hope this helps.
Regards,
Sara
Hi Guys !
Thanks very much for this tutorial !!, pretty straight forward and concise.
I’ve got my cameras from Aliexpress, they look very much alike to AI’s one. DM instead of AI is the brand that appears on the rfshield.
I’ve got a Raspberry Pi to serve as a WiFi HotSpot, assign the same IP to the ESP’s MAC address and from my mobile accessing the streaming.
A bonus: Checking the schematics, I saw that it operates with 3.3v, so the 5v go to a LM1117-3.3v voltage regulator, and this 3.3v regulator is rated up to 15V input !!!. Long story short, I’ve cramped 4 AAA batteries (6v) and the ESP32-CAM inside a GoPro-like waterproof enclosure and VOILA !!!.. .it worked… 🙂 Underwater. at least surrounded by 3 ft of water :-). I had to lower the res down to 320×240 to keep the 23fps but still 🙂
Guys, you’re awesome !.
thanks again
Gabriel
Hi Gabriel.
That’s awesome! Thank you for sharing your project!
It would be great if you could send us some photos of your setup as well as how the images look underwater.
Use our contact page and just say that you want to send your photos:
Regards,
Sara
Hi, excellent tutorial! I can start the camera, but when I select the face detector it does not work. Any idea? Thanks. I use 5V/2A, the image is very good at 320×240, and I use this ESP32-CAM module: , but the face detector doesn't work. Thank you.
Hi fabian.
The example should work with your board.
To be able to get face recognition, you should have good lighting conditions so that it can detect the faces.
Without further information, it is very difficult to understand what might be the issue.
You can also take a look at our troubleshooting guide and see if it helps in some way:
Regards,
Sara
It is my understanding that face recognition does not work in the 1.0.2 core of the ESP32. It does work in the 1.0.1 core.
If you revert back to the 1.0.1 core, make sure you also use the camera example that comes with that core.
For this problem:
solution apply 5V to the card, to the 5v pin
Hi Leo.
Thanks for sharing that tip.
We’ve made a compilation with the most common problems and how to fix them:
And that is included in our guide.
Regards,
Sara
Hi, mine won’t detect faces for some reason. Do you have to install a MicroSD card for facial recognition?
Hi Trevor.
No, you don’t need to install a microSD card.
Regards,
Sara
hello,
I am having the "camera not supported" issue.
Hi Sunny.
Please read our troubleshooting guide and see if it helps:
Regards,
Sara
can i access it from internet any where in the world?
You would need to create a secure tunnel to your home network or setup router port forwarding.
Is there any other way without port forwarding? I need some help about this . Can u help me about this?
hello rajesh
Did you find a solution?
Can I send images from ESP 32 CAM to smartphone via bluetooth or USB so that I don’t have to connect to a network?
It’s possible, but I don’t have any tutorials on that exact subject at the moment.
Can you suggest where I should start in order to send image from ESP 32 CAM to smartphone via bluetooth or USB?
Is there any other way without port forwarding? I need some help about this . Can u help me about this?
Hello Rui and Sara,
there is a litte led on the board. Do you know if it is possible to put it ON via gpio ? The camera will be installed in birdhouse (almost dark) and I woul’d like to have a little bit more light inside. Otherwise I wil use other ports to lit external leds.
Thank you for your great job and in advance for your answer.
Hi Bernard.
The LED is connected to GPIO 4.
So, you just need to follow the usual procedure to set a GPIO high:
pinMode(4, OUTPUT);
digitalWrite(4, HIGH);
Regards,
Sara
Thanks a lot, but this little led have not enough power to give good light.
I used extra leds strips to do the job via a wemos d1.
Regards,
Bernard
Hi Bernard.
Yes, that LED is not enough for a good light.
That’s a good idea.
Thank you for sharing.
Regards,
Sara 😀
Hi everyone.
Nice tutorial you’ve got here.
I’m working on a door security system that would require a cam to take a picture of a face, compare it with already registered images on a database and have it trigger a lock mechanism on successfull validation. (without streaming or accessing via wifi.)
Would this be possible with esp32 cam?
Thanks.
Hello everyone,
Can any one help i am getting following error while uploading the code.
Arduino: 1.8.9 (Windows 10), Board: “ESP32 Wrover Module, Huge APP (3MB No OTA), QIO, 80MHz, 921600, None”
Sketch uses 2241942 bytes (71%) of program storage space. Maximum is 3145728 bytes.
Global variables use 52696 bytes (16%) of dynamic memory, leaving 274984 bytes for local variables. Maximum is 327680 bytes.
esptool.py v2.6
Serial port COM8
Connecting…..
Chip is ESP32D0WDQ5 (revision 1)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
MAC: 24:0a:c4:bb:65:c3
… (69.0 kbit/s)…
A fatal error occurred: Timed out waiting for packet header
A fatal error occurred: Timed out waiting for packet header
Hi.
It seems that your board is not in flashing mode, so it is not able to upload the code.
Please take a look at our troubleshooting guide, bullet 1:
I hope this helps.
Regards,
Sara
Hello, your site and your instructions are amazing. I believe I did everything like you said in the video, but it stops at this point: it doesn't show me that it connects to the internet. I tried both with an antenna and without one. Please help me get through this if you can.
Hi Ilias.
Please take a look at our troubleshooting guide and see if it helps:
Regards,
Sara
Hi guys…
I am tired of facing this problem — can anyone please help me solve it? I have got two Espressif ESP32-CAM modules, but I am unable to connect the camera, and I did not get any IP address with this module.
thanks in advanced
manets
Hi.
Please take a look at our ESP32-CAM troubleshooting guide and see if it helps:
Regards,
Sara
Hi Sara Santos. In a comment from May you mention that you have tried taking photos and saving them to the SD card, but failed. I managed to do this. Do you want me to dig out the code and show it to you?
I also managed to take photos when an “intruder” is detected from a sensor. The only problem with that is that I did not manage to connect the sensor directly to the camera module. I had to use an auxiliary Arduino board with the sensor, and make it then send a command to the ESP32 module to make it take a picture. I am pretty sure there are much better ways of doing this, ideally without needing an arduino board.
Hi Antonio.
Thank you so much for taking the time to read and answer to our comments.
Actually, one of our readers also shared a solution for that, and we end up writing a new tutorial about it.
Here is the tutorial:
Regards,
Sara
Hi all.
On a DIYMore Esp32-cam all I get from Arduino is
board esp32 (platform esp32 package esp32) is unknown
I installed the Esp32 addon, and tryed all the Esp32 boards on Arduino with the same result.
I am missing something, I am sure..
Thanks!!!!
Hi Federico.
I’ve never faced that issue.
I’ve found this discussion: github.com/espressif/arduino-esp32/issues/2388
See if some of the suggestions can help with your issue.
Regards,
Sara
Hi Sara!
Solved by removing all the esp32 stuff and reinstalling.
Thanks!!!
Have a nice WE!!!
Federico
Hi All,
Nice Tutorial !!! Have not seen this issue posted anywhere. So here Goes:
Followed tutorial, all worked perfectly until ESP32-Cam was removed from power. Then it acted like it had never been Flashed when power was restored. Even tried RST button, nothing shows up in the Serial Monitor. Can set back up to Flash and all goes well (all works) until power is removed then restored, again acts like it had never been Flashed. Bought 2 of these and both act the exact same way. Any help would be great. Thank You in Advance !!!
CharlieBob
Did you remove GPIO 0 from GND? If you leave that connection, the ESP32-CAM starts in FLASHING mode and it will not run your code…
Hi Sara
I connect esp32 cam with Lora but it cann’t be initiatized.
It seems that deinit(); of the esp32_cam doesn’t work as commented in esp32_camera.h. Please kindly suggest how to coexist cam and lora on this esp32_cam module.
Many thanks, PP
Hi PP.
Most of the GPIOs exposed on the ESP32-CAM are either being used by the camera or by the microSD card.
So, it will be very difficult to interface a LoRa module with this board.
Regards,
Sara
Hi,
Where i should buy this product? I am living in Denmark. I could not find suppliers for this product in my contry.
Kind regards
Salam
Hi Salam.
I have no idea.
We usually buy our electronics components and boards from stores like eBay, Banggood, Aliexpress, Amazon, etc…
Regards
Sara
I can't run the ESP32-CAM. I tried defining all the camera models and nothing works — help. My module has nothing written on the board, so I don't know what manufacturer it is. How can I detect which module it is?
What error do you get? Or you don’t get any error at all?
How can I turn on the flash? I tried with "digitalWrite(4, HIGH)" but it doesn't work.
Hi.
That should light up the flash. Have you defined the pin as an output?
pinMode(4, OUTPUT);
digitalWrite(4, HIGH);
Regards,
Sara
Oops, you're right — I forgot to set the pin as an output, hehe. It's been a long time since I last used an Arduino…
How can I use that URL from another network? I want to access the camera from my mobile network — how can I do that? Please answer.
You’ll need to do some router port forwarding. Search for “router port forwarding” and you’ll find how to make a web server accessible from anywhere.
hello omkar
Did you find a solution?
Arduino IDE 1.8.10
The hardware (ESP, USB serial etc.) is the same as yours.
Linux Mint (Xfce)
I followed each and every step and this is what I get:
Arduino: 1.8.10 (Linux), Board: “ESP32 Wrover Module, Huge APP (3MB No OTA), QIO, 80MHz, 921600, None”
Traceback (most recent call last):
File “/home/swift/.arduino15/packages/esp32/tools/esptool_py/2.6.1/esptool.py”, line 37, in
import serial
ImportError: No module named serial
Multiple libraries were found for “WiFi.h”
Used: /home/swift/.arduino15/packages/esp32/hardware/esp32/1.0.3/libraries/WiFi
Not used: /opt/arduino-1.8.10/libraries/WiFi
exit status 1
Error compiling for board ESP32 Wrover Module.
This report would have more information with
“Show verbose output during compilation”
option enabled in File -> Preferences.
I’ve installed “pyserial” and I don’t get the error “No module named serial” but I get this:
Arduino: 1.8.10 (Linux), Board: “ESP32 Wrover Module, Huge APP (3MB No OTA), QIO, 80MHz, 921600, None”
Sketch uses 2097154 bytes (66%) of program storage space. Maximum is 3145728 bytes.
Global variables use 53516 bytes (16%) of dynamic memory, leaving 274164 bytes for local variables. Maximum is 327680 bytes.
esptool.py v2.6
Traceback (most recent call last):
File “/home/swift/.arduino15/packages/esp32/tools/esptool_py/2.6.1/esptool.py”, line 2959, in
_main()
File “/home/swift/.arduino15/packages/esp32/tools/esptool_py/2.6.1/esptool.py”, line 2952, in _main
main()
File “/home/swift/.arduino15/packages/esp32/tools/esptool_py/2.6.1/esptool.py”, line 2652, in main
esp = chip_class(each_port, initial_baud, args.trace)
File “/home/swift/.arduino15/packages/esp32/tools/esptool_py/2.6.1/esptool.py”, line 222, in __init__
Serial port /dev/ttyUSB0
self._port = serial.serial_for_url(port)
File “/home/swift/.local/lib/python2.7/site-packages/serial/__init__.py”, line 88, in serial_for_url
instance.open()
File “/home/swift/.local/lib/python2.7/site-packages/serial/serialposix.py”, line 268, in open
raise SerialException(msg.errno, “could not open port {}: {}”.format(self._port, msg))
serial.serialutil.SerialException: [Errno 13] could not open port /dev/ttyUSB0: [Errno 13] Permission denied: ‘/dev/ttyUSB0’
An error occurred while uploading the sketch
When I run python -m serial.tools.list_ports in terminal I get this:
/dev/ttyUSB0
1 ports found
Just simply run it as root and it worked. Now I’ve got a problem with Brownout detector but nothing seems to be working :/
The FTDI programmer wasn't able to supply 3.3V (only 2.7V), and 5V seemed to be too much for the ESP32. Now it works.
The dl_lib.h header is related to the face recognition capabilities (esp-face), and it was removed in version 1.0.3 of the Arduino core. That said, just comment it out and it should compile and work perfectly if you are using the Arduino IDE. The other option is to revert to version 1.0.2 of the Arduino core.
Regards
Sorry, but how do you comment it out for version 1.0.3 and above?
Hi, the web server opens, but when I press the "Start Stream" button the image fails to open, and this message appears in the serial monitor. I already tried 3 browsers. Can anyone help me?
[E][camera.c:1344] esp_camera_fb_get(): Failed to get the frame on time!
Camera capture failed
Raphael, I added a solution over in another thread, but here it is in case you don't see it…
A solution to the “esp_camera_fb_get(): Failed to get the frame on time!” message….
I'm using the "ESP32-CAM Module 2MP OV2640 Camera Sensor Module Type-C USB" module from AliExpress. Although not mentioned, it doesn't have the extra PSRAM the other M5 models do, AND the camera has one changed IO pin. See here… and scroll down to Interface Comparison. The CameraWebServer Arduino example we're probably all using doesn't have this ESP32-CAM model defined. You need to add it yourself, e.g. in the main tab add #define CAMERA_MODEL_M5STACK_NO_PSRAM, and in the camera_pins.h tab add…
Also note that the max resolution of the bare ESP32-CAM module is XGA 1024×768, I assume also because of the lack of PSRAM.
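As a format reference only (the DIYmore/M5STACK pin values must come from that board's own schematic), each model in camera_pins.h is just a block of GPIO defines guarded by the model macro. This is the well-known AI-Thinker block, the board used in this tutorial; a new model entry follows exactly the same shape:

```cpp
// camera_pins.h excerpt — AI-Thinker ESP32-CAM pin map (the board used in
// this tutorial). A CAMERA_MODEL_M5STACK_NO_PSRAM entry has the same shape,
// but with that board's own GPIO numbers taken from its schematic.
#if defined(CAMERA_MODEL_AI_THINKER)
  #define PWDN_GPIO_NUM   32
  #define RESET_GPIO_NUM  -1
  #define XCLK_GPIO_NUM    0
  #define SIOD_GPIO_NUM   26
  #define SIOC_GPIO_NUM   27
  #define Y9_GPIO_NUM     35
  #define Y8_GPIO_NUM     34
  #define Y7_GPIO_NUM     39
  #define Y6_GPIO_NUM     36
  #define Y5_GPIO_NUM     21
  #define Y4_GPIO_NUM     19
  #define Y3_GPIO_NUM     18
  #define Y2_GPIO_NUM      5
  #define VSYNC_GPIO_NUM  25
  #define HREF_GPIO_NUM   23
  #define PCLK_GPIO_NUM   22
#endif
```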
Thanks for sharing that.
We need to add this to the troubleshooting guide.
REgards,
Sara
Hello Sara and Rui.
Tried the Esp32 camera for the first time today.
Sketch upload is only possible with 5 Volts.
Also works very nicely and reliably.
Thank you for your work.
Greetings from the Netherlands from Bert.
Hello Sara and Rui,
I finished this beautiful project. Everything works well, but when I forward the port and connect to the camera via the internet, Get Still works but Video Stream does not. Maybe you know what the problem is?
Thanks in advance, 73 de 9A3XZ, Mikele, Croatia
Hi Mikele.
I don’t know what can be the problem.
In this example, video streaming only works on one client at a time. This means that if you have the web server opened in another tab, it will not work. Just one tab at a time.
Thanks for following our work.
Regards,
Sara
Hi again. Great tutorial again. My HiLetgo ESP32-CAM runs as an AI-Thinker. I noticed the image is mirrored (reversed right to left). The module design should have had the reset button on the camera's side, or a reset pin available, so it can work in a breadboard.
Is it possible for facial recognition to send a signal to turn on a servo / LED? depending if it is intruder or subject
Hi.
Yes, it is possible.
However, at the moment, we don’t have any tutorial about that.
Regards,
Sara
Hi Sara,
I am getting below error, please help…!
Sketch uses 2100647 bytes (66%) of program storage space. Maximum is 3145728 bytes.
Global variables use 53552 bytes (16%) of dynamic memory, leaving 274128 bytes for local variables. Maximum is 327680 bytes.
esptool.py v2.6
Serial port COM12
Connecting….
Chip is ESP32D0WDQ5 (revision 1)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
MAC: 24:6f:28:46:97:64
Uploading stub…
Running stub…
Stub running…
… (5041.2 kbit/s)…
A fatal error occurred: Timed out waiting for packet header
A fatal error occurred: Timed out waiting for packet header
Hi Ravi.
Read our troubleshooting guide, bullet 1:
Regards,
Sara
Just received my ESP32-CAM Ai-Thinker board. Everything works fine except no ‘Toggle settings’ pane on the webpage. Perhaps I received a hacked firmware in mine or did I do something wrong?
I’ve backed up the firmware with esptool. Does anyone have a .bin file from a board that shows the toggle settings pane?
Thanks Rui and Sara for your work.
I installed esp-idf and esp-who (), then built the example 'camera web server' demo. I now have the settings pane.
[E][sccb.c:154] SCCB_Write(): SCCB_Write Failed addr:0x30, reg:0xff, data:0x01, ret:263
[E][sccb.c:154] SCCB_Write(): SCCB_Write Failed addr:0x30, reg:0x12, data:0x80, ret:263
[E][sccb.c:119] SCCB_Read(): SCCB_Read Failed addr:0x30, reg:0x0a, data:0x00, ret:263
[E][sccb.c:119] SCCB_Read(): SCCB_Read Failed addr:0x30, reg:0x0b, data:0x00, ret:263
[E][camera.c:1049] camera_probe(): Detected camera not supported.
[E][camera.c:1249] esp_camera_init(): Camera probe failed with error 0x20004
I get this error, any solution?
Hi.
The 0x2004 error means the camera is not supported.
On your camera ribbon, which label do you have? Ours is LA AF2569 0927XA
What label do you have in your camera model?
Regards,
Sara
my camera is OV2640.
Hello. The image disappears or freezes after 2 seconds. What may be failing?
Hi Antonio.
It may be the Wi-Fi signal.
Are you using the on-board antenna or an external antenna?
If you’re using the on-board antenna, you need to be close to your router. The best way is to have an external antenna.
Regards,
Sara
Hi Sara,
I cannot get the IP address.
Here is what I am getting at 115200.
I moved close to the router, and added an external antenna.
Please help.
I had a similar problem.
The fact that you get occasional words suggests that the baud rate is right.
I forget which problems that I had were solved by what, but I started out powering the module on the Vin pin and having a power line connected from the TTL and I ended up cutting all the power lines to the TTL and powering the module on the 5V pin.
Cancel that.
I can upload by using 5V only.
Hi Antonio,
Thanks for this example. I would want to know how to capture and send a base64 encoded image to external server.
Thanks, worked like a charm.
Bought 4 of these clones for 15€, took me 10 minutes to set all of them up.
Hello Mr Rui Santos
I am using the CameraWebServer example in the Arduino IDE.
My OV2640 camera gets stuck at high resolution.
At UXGA resolution I get: "Please select CIF or lower resolution before enabling this feature!"
When I then try a low resolution, QVGA (320 x 240),
face detection and face recognition work.
1. How can I use face detection and face recognition with the OV2640 camera at high resolution?
When I open the ESP32-CAM's IP address in the browser there is a "Get Still" button,
but when I check the program I can't find where that button is handled.
2. How can I make the "Get Still" button save the photo to the SD card?
3. How can I read the image back and check whether a face was detected (detected=true)?
Like this web page: when the camera detects someone, show an intruder alert (from the microSD card).
Hi.
I’m sorry but I don’t have answers to all your questions.
I recommend taking a look at all our ESP32-CAM projects and see if you find something that you can modify to use in your own projects.
See all the projects here:
Regards,
Sara
Hi, thanks for your tutorial.
I followed all the steps above and got the IP address, but when I open it in the browser the page just can't be reached.
What's going wrong? I have no idea what happened.
Hi.
Does your ESP32-CAM have an external antenna?
Or are you close to your router?
If you don’t have an external antenna, the ESP32-CAM needs to be close to your router, so that it is able to catch the wi-fi signal.
Read the section about the antenna “7. Weak Wi-Fi Signal” on our troubleshooting guide:
Regards,
Sara
Is it necessary to connect to the ESP32-CAM using the FTDI programmer? I have a USB/TTL cable, meaning one side is USB and on the other are red (VCC), black (GND), white (TX), and green (RX) wires, which I use frequently to upload to the ESP8266 without the FTDI programmer.
With the ESP32-CAM I tried connecting the USB/TTL cable as follows:
Red > VCC
Black > Gnd
White > U0T
Green > U0R
No luck yet: there's still the issue of getting the ESP32 into bootloader mode.
There's the reset button, but I'm used to the ESP32 and ESP8266 where you need two buttons.
I've read you can get the ESP into bootloader mode by grounding certain pins.
Overall the question is: can I flash the ESP32-CAM using a USB/TTL cable?
Answering my own question: the ESP32-CAM can be flashed without the FTDI programmer, using a USB/TTL cable wired as described above with one change:
Red > 5V (Thanks to RandomNerdTutorial diagram above / link below)
Black > Gnd
White > U0T
Green > U0R
Getting into boot mode, thanks to the same diagram, was about grounding GPIO0, tapping Reset, then releasing GPIO0.
Great tutorial!
Diagram mentioned:
Hi.
That’s right! It doesn’t have to be an FTDI programmer. It can be a USB/TLS cable, as long as you have the right wiring.
Regards,
Sara
Hey Sara i want Arduino to take pictures when i am out for walks for example and send them to a web server do you think this is possible?
Hi John.
Take a look at this tutorial:
Regards,
Sara
Hey Sara and Rui in this example how you connect the ftdi programmer with the computer ?
Hi.
The FTDI programmer we’re using has a mini-USB port. So, we just connect a mini-USB to USB cable to the FTDI programmer and then to the computer.
Regards,
Sara
Hellow sara santos
I have a problem — can you help me please? I connected the ESP32-CAM and uploaded the code from the Arduino IDE. It uploaded, but when I opened the Serial Monitor the IP address did not appear. Instead it writes: camera_probe(): detected camera not supported
esp_camera_init(): camera probe failed with error 0x20004
So I can't solve this problem — can anyone help me please?
Hi.
Please take a look at our troubleshooting guide.
I’m sure it will help:
Regards,
Sara
Hello, I have the FTDI programmer, the ESP32-CAM, and female jumper wires. My problem is: how do I connect the FTDI programmer to my PC?
Hi Sara,
After uploading the file [CameraWebServer] to the ESP32-CAM board, the following message is shown on the Serial Monitor.
After the ESP32-CAM IP address is typed on the browser, no any video. Pressing the Start Streaming button, also no video .
I try it on Win10 & Win7 machine, same!
I follow all your steps; AI Thinker board and ‘CAMERA_MODEL_AI_THINKER’ in the file are chosen. How to solve it ?
message on Serial Monitor (an SD card is inserted):
[E][sccb.c:154] SCCB_Write(): SCCB_Write Failed addr:0x30, reg:0x91, data:0xa3, ret:-1
[E][sccb.c:154] SCCB_Write(): SCCB_Write Failed addr:0x30, reg:0xff, data:0x00, ret:263
[E][sccb.c:154] SCCB_Write(): SCCB_Write Failed addr:0x30, reg:0xff, data:0x01, ret:-1
.
WiFi connected
Starting web server on port: ’80’
Starting stream server on port: ’81’
Camera Ready! Use ‘’ to connect
Hello.
I recently started using this ESP32cam programmer.
This works a lot better and faster than the loose wires.
I have no connection with this company.
Greetings Bert.
Hello.
When I Verify/Compile I get this space error. I’ve not even connected my ESP32.
Sketch uses 2053883 bytes (156%) of program storage space. Maximum is 1310720 bytes.
Your troubleshooting guide mentions (for a different space error):
Tools > Partition Scheme, select “Huge APP (3MB No OTA)“.
but I don’t have this in Tools.
I’m using IDE 1.8.12
Thanks.
Solved: I uninstalled the board and reinstalled it and this time I saw Huge APP… and it has now compiled.
Thanks for the good tutorial, very helpful!
I read that the logic level for the ESP should be 3.3 V!
Not sure if you can fry your board with 5V logic levels.
I had success using a FT232RL USB to TTL Serial Converter, using 5V from the side to power the ESP with cam and having the jumper set to 3.3V for the logic levels.
Hi Thomas.
You are right that you should use 3.3V with the ESP32-CAM.
However, many of our readers had troubles when using 3.3V and those were solved when they used 5V instead.
We didn’t have any problems when using one option or the other.
Regards,
Sara
Its stil not working, need help.
Dear Sir
Very good work — it works for me, but the problem is that it is not saving recognized faces to the microSD card. Each time it powers on, we have to start the recognition (enrollment) again.
To make sure there is no problem with the microSD card or the board, I tried this "" and it works and saves photos to the microSD card.
So where is the problem ???
Hi.
This particular example doesn’t save faces on the microSD card.
Regards,
Sara
Ok
Thanks for your concern. Do you know any other code that saves and reads faces from the microSD card?
Or a web resource that helps to do that?
OK, I need a small bit of help ==> in the web server code, where is the reference to the "Start Stream" button?
Why? ===> I need the stream to start on device startup, without clicking the "Start Stream" button.
How can I do this?
Hi,
the installation went fine and when I enter the IP address (in chrome) I had video; also the integration with Home assistant went smoothly – so far so good.
I however don’t have the “camera streaming server” with the config buttons and sliders and I can figure out how to fix that.
Ideas / clues would be very appreciated.
Best Regards,
Ko (Netherlands, The Hague) | https://randomnerdtutorials.com/esp32-cam-video-streaming-face-recognition-arduino-ide/?replytocom=370938 | CC-MAIN-2020-29 | refinedweb | 11,301 | 74.9 |
Overview and the excellent Pygame framework.
We will build a version of the classic Breakout game. When all is said and done, you'll have a clear understanding of what it takes to create your own game, you'll be familiar with Pygame's capabilities, and you'll have a sample game.
Here are the features and capabilities we'll implement:
- simple generic GameObject and TextObject
- simple generic Game object
- simple generic button
- config file
- handling keyboard and mouse events
- bricks, paddle, and ball
- managing paddle movement
- handling collisions of the ball with everything
- background image
- sound effects
- extensible special effects system
What you should not expect is a visually pleasing game. I'm a programmer and not an artist. I worry more about the esthetics of the code. The result of my visual design can be quite shocking. On the plus side, if you want to improve how this version of Breakout looks, you have tons of room for improvement. With that dire warning out of the way, here is a screenshot:
The full source code is available here.
Quick Introduction to Game Programming
Games are about moving pixels on the screen and making noise. Pretty much all video/computer games have most of the following elements. Out of scope of this article are client-server games and multi-player games, which involve a lot of network programming too.
Main Loop
The main loop of a game runs and refreshes the screen at fixed intervals. This is your frame rate, and it dictates how smooth things are. Typically, games refresh the screen 30 to 60 times a second. If you go slower, objects on the screen will seem jerky.
Inside the main loop, there are three main activities: handling events, updating the game state, and drawing the current state of the screen.
Handling Events expires after 10 seconds).
Updating State
The core of each game is its state: the stuff it keeps track of and draws on the screen. In Breakout, the state includes the location of all the bricks, the position and speed of the ball, and the position of the paddle, as well as lives and the score.
There is also the auxiliary state that helps manage the game:
- Are we showing a menu now?
- Is the game over?
- Did the player win?
Drawing
The game needs to display its state on the screen. This includes drawing geometrical shapes, images, and text.
Game Physics
Most games simulate a physical environment. In Breakout, the ball bounces off objects and has a very crude rigid-body physics system in place (if you can call it that).
More advanced games may have more sophisticated and realistic physics systems (especially 3D games). Note that some games like card games don't have much physics at all, and that's totally fine.
AI (Artificial Intelligence)
There are many games where you play against an artificial computer opponent or opponents, or there are enemies that try to kill you or worse. These figments of the game's imagination often behave in a seemingly intelligent way in the game's world.
For example, enemies will chase you and be aware of your location. Breakout doesn't present an AI. You play against the cold, hard bricks. However, the AI in games is often very simple and just follows simple (or complex) rules to achieve pseudo-intelligent outcomes.
Playing Audio
Playing audio is another important aspect of games. There are in general two types of audio: background music and sound effects. In Breakout, I focus on sound effects that play briefly when various events happen.
Background music is just music that plays constantly in the background. Some games don't use background music, and some switch it every level.
Lives, Score, and Levels
Most games give you a certain amount of lives, and when you run out of lives, the game is over. You also often have a score that gives you a sense of how well you're doing and a motivation to improve next time you play or just brag to your friends about your Breakout mad skills. Many games have levels that are either completely different or raise the level of difficulty.
Meet Pygame
Before diving in and starting to implement, let's learn a little about Pygame, which will do a lot of the heavy lifting for us.
What's Pygame? you need something else, follow the instructions in the Getting Started section of the Wiki. If you run macOS Sierra as I do, you may run into some trouble. I was able to install Pygame with no trouble, and the code seemed to run just fine, but the game window never showed up.
That's kind of a bummer when you run a game. I eventually had to resort to running on Windows in a VirtualBox VM. Hopefully, by the time you read this article, the issue will have been resolved.
Game Architecture
Games need to manage a lot of information and perform similar operations on many objects. Breakout is a mini-game, yet trying to manage everything in one file would be overwhelming. Instead, I opted to create a file structure and architecture that would be suitable for much larger games.
Directory and File Structure
├── Pipfile ├── Pipfile.lock ├── README.md ├── ball.py ├── breakout.py ├── brick.py ├── button.py ├── colors.py ├── config.py ├── game.py ├── game_object.py ├── images │ └── background.jpg ├── paddle.py ├── sound_effects │ ├── brick_hit.wav │ ├── effect_done.wav │ ├── level_complete.wav │ └── paddle_hit.wav └── text_object.py
The Pipfile and Pipfile.lock are the modern way of managing dependencies in Python. The images directory contains images used by the game (only the background image in this incarnation), and the sound_effects directory contains short audio clips used as (you guessed it) sound effects.
The ball.py, paddle.py, and brick.py files contain code specific to each one of these Breakout objects. I will cover them in depth later in the series. The text_object.py file contains code for displaying text on the screen, and the background.py file contains the Breakout-specific game logic.
However, there are several modules that form a loose, general-purpose skeleton. The classes defined there can be reused for other Pygame-based games.
The GameObject Class
The GameObject represents a visual object that knows how to render itself, maintain its boundaries, and move around. Pygame actually has a Sprite class that has a similar role, but in this series I want to show how things work at a low level and not rely on too much prepackaged magic. Here is the GameObject class:
from pygame.rect import Rect class GameObject: def __init__(self, x, y, w, h, speed=(0,0)): self.bounds = Rect(x, y, w, h) self.speed = speed @property def left(self): return self.bounds.left @property def right(self): return self.bounds.right @property def top(self): return self.bounds.top @property def bottom(self): return self.bounds.bottom @property def width(self): return self.bounds.width @property def height(self): return self.bounds.height @property def center(self): return self.bounds.center @property def centerx(self): return self.bounds.centerx @property def centery(self): return self.bounds.centery def draw(self, surface): pass def move(self, dx, dy): self.bounds = self.bounds.move(dx, dy) def update(self): if self.speed == [0, 0]: return self.move(*self.speed)
The GameObject is designed to serve as a base class for other objects. It exposes directly a lot of the properties of its self.bounds rectangle, and in its
update() method it moves the object according to its current speed. It doesn't do anything in its
draw() method, which should be overridden by sub-classes.
The Game Class
The Game class is the core of the game. It runs the main loop. It has a lot of useful functionality. Let's take it method by method.
The
__init__() method initializes Pygame itself, the font system, and the audio mixer. The reason you need to make three different calls is because not all Pygame games use all components, so you control what subsystems you use and initialize only those with their specific parameters. It creates the background image, the main surface (where everything is drawn), and the game clock with the correct frame rate.
The self.objects member will keep all the game objects that need to be rendered and updated. The various handlers manage lists of the handler function that should be called when certain events happen.
import pygame import sys from collections import defaultdict class Game: def __init__(self, caption, width, height, back_image_filename, frame_rate): self.background_image = \ pygame.image.load(back_image_filename) self.frame_rate = frame_rate self.game_over = False self.objects = [] pygame.mixer.pre_init(44100, 16, 2, 4096) pygame.init() pygame.font.init() self.surface = pygame.display.set_mode((width, height)) pygame.display.set_caption(caption) self.clock = pygame.time.Clock() self.keydown_handlers = defaultdict(list) self.keyup_handlers = defaultdict(list) self.mouse_handlers = []
The
update() and
draw() methods are very simple. They just iterate over all the managed game objects and call their corresponding methods. If two game objects overlap, the order in the objects list determines which object will be rendered first, and the other will partially or fully cover it.
def update(self): for o in self.objects: o.update() def draw(self): for o in self.objects: o.draw(self.surface)
The
handle_events() method listens to events generated by Pygame, like key and mouse events. For each event, it invokes all the handler functions that are registered to handle this type of event.
def handle_events(self): for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() elif event.type == pygame.KEYDOWN: for handler in self.keydown_handlers[event.key]: handler(event.key) elif event.type == pygame.KEYUP: for handler in self.keydown_handlers[event.key]: handler(event.key) elif event.type in (pygame.MOUSEBUTTONDOWN, pygame.MOUSEBUTTONUP, pygame.MOUSEMOTION): for handler in self.mouse_handlers: handler(event.type, event.pos)
Finally, the
run() method runs the main loop. It runs until the
game_over member becomes True. In each iteration, it renders the background image and invokes in order the
handle_events(),
update(), and
draw() methods.
Then it updates the display, which actually updates the physical display with all the content that was rendered during this iteration. Last, but not least, it calls the
clock.tick() method to control when the next iteration will be called.
def run(self): while not self.game_over: self.surface.blit(self.background_image, (0, 0)) self.handle_events() self.update() self.draw() pygame.display.update() self.clock.tick(self.frame_rate)
Conclusion
In this part, you've learned the basics of game programming and all the components involved in making games. Then, we looked at Pygame itself and how to install it. Finally, we delved into the game architecture and examined the directory structure, the
GameObject class, and the Game class.
In part two, we'll look at the
TextObject class used to render text on the screen. We'll create the main window, including a background image, and then we'll learn how to draw objects like the ball and the paddle.
Additionally, please
| https://code.tutsplus.com/tutorials/building-games-with-python-3-and-pygame-part-1--cms-30081 | CC-MAIN-2019-35 | refinedweb | 1,848 | 67.15 |
Bummer! This is just a preview. You need to be signed in with a Pro account to view the entire video.
What is ASP.NET Core?7:22 with James Churchill
ASP.NET Core is a new open-source and cross-platform web framework from Microsoft. In this video, we'll take a closer look at ASP.NET Core, how if differs from previous versions of ASP.NET, and an overview of the features and benefits it offers.
Additional Learning
- 0:00
[MUSIC]
- 0:04
Hi there.
- 0:05
This is James.
- 0:06
I'm a developer and teacher at tree house.
- 0:09
In this workshop we'll be taking a look at ASP.NET core,
- 0:13
Microsoft's latest web framework.
- 0:15
We'll start with answering the question, what is ASP.NET Core?
- 0:20
From the official documentation, ASP.Net Core is a new, open-source and
- 0:25
cross-platform framework for building modern cloud based Internet connected
- 0:30
applications, such as web apps, IoT apps, and mobile backends.
- 0:35
ASP.Net Core is a significant redesign of ASP.Net.
- 0:40
It's the biggest release of ASP.NET since version 1.0.
- 0:44
While some parts of ASP.NET Core will feel familiar such as MVC Controllers and
- 0:49
Views, other parts will seem completely new.
- 0:52
ASP.NET Core is a complete rewrite of the ASP.NET web framework.
- 0:58
It's no longer based on the System.Web assembly.
- 1:01
Breaking away from ASP.NET legacy code was necessary in order for
- 1:06
the ASP.NET development team to meet their stated design goals
- 1:10
of producing a fast cross-platform cloud friendly web framework.
- 1:14
ASP.NET Core places the focus on MVC and
- 1:18
Web API which have been merged into a single API.
- 1:22
We'll take a closer look at the convergence of MVC and
- 1:25
Web API later in this workshop.
- 1:27
Web forms and webpages are not currently available in ASP.NET Core,
- 1:32
nor are they likely ever to be.
- 1:35
Microsoft has made it clear that web forms and web pages will
- 1:40
remain as part of the full .NET framework and not brought into ASP.NET Core.
- 1:45
The ASP.NET team does have a new feature on the roadmap named view pages
- 1:50
that is similar to the functionality of web pages.
- 1:53
See the teacher's notes for more information.
- 1:55
ASP.NET Core runs on either .NET Core or the full .NET framework.
- 2:01
This gives you the flexibility to choose the target framework
- 2:04
that makes the most sense for your situation.
- 2:06
.NET Core is a new cross-platform version of .NET that runs on Windows,
- 2:12
Linux or Mac OS.
- 2:14
.NET Core has a flexible deployment model that gives you the option to deploy it
- 2:19
along with your application in addition to the more traditional side-by-side user or
- 2:24
machine-wide deployment options.
- 2:27
Only a subset of the .NET frameworks API surface has been implemented in .NET Core.
- 2:32
For instance,
- 2:33
the system drawing namespace is currently only partially implemented in .NET Core.
- 2:39
If your application needs to manipulate bitmaps
- 2:41
you'll need to use a third party library that's compatible with .NET core.
- 2:46
Or target the full .NET Framework.
- 2:49
.NET Core only supports a single app model, console apps.
- 2:53
While that might seem limiting,
- 2:55
it's possible to build other app models on top of it which is what ASP.NET Core does.
- 3:01
We'll see it later in this workshop how that works.
- 3:04
The .NET command line interface or CLI shipped as part of the .NET Core SDK.
- 3:10
The .NET CLI is a set of commands that allows you to create, build, run,
- 3:16
publish, test and package .NET Core apps all from the command line.
- 3:21
Having the CLI available, ensures that you can develop .NET Core apps
- 3:25
including ASP.NET Core apps, regardless of what platform or tools you're using.
- 3:30
For instance,
- 3:31
you can develop apps on Linux using the text editor of your choice.
- 3:36
CLI is even used by Visual Studio.
- 3:39
When using Visual Studio to develop .NET Core apps Visual Studio
- 3:43
delegates to the .NET CLI to build, run and publish your app.
- 3:48
ASP.NET Core and .NET Core are fully open source projects being hosted on GitHub.
- 3:54
Development is being done completely out in the open.
- 3:58
You can monitor or
- 3:59
contribute to the teams ongoing development discussions via GitHub issues.
- 4:04
Or you can fork any of the repos, fix a bug or
- 4:07
implement a feature and issue a poll request against the main repo.
- 4:11
ASP.NET Core is another example of the new Microsoft.
- 4:16
A Microsoft that is embracing open source development.
- 4:20
Unlike previous versions of ASP.NET,
- 4:22
ASP.NET Core is not a single monolithic assembly.
- 4:27
Instead it's delivered as a set of granular and well factored NuGet packages.
- 4:32
This gives you a true pay-for-what-you-use-model.
- 4:35
You only reference and deploy the packages that your application needs.
- 4:40
In order to realize ASP.NET Cores cross-platform design goal,
- 4:44
Microsoft needed a cross-platform server for running ASP.NET Core apps.
- 4:49
Kestrel is that server.
- 4:51
Kestrel is a cross-platform, managed web server based on libuv.
- 4:56
libuv is a multi-platform support library with a focus on
- 5:00
asynchronous IO that was developed primarily for
- 5:04
NodeJS, but is used by other projects including now, Kestrel.
- 5:09
Kestrel is the only supported web server for running ASP.NET Core apps.
- 5:14
IIS is no longer directly supported,
- 5:17
meaning that IIS does not host ASP.NET Core apps within its own process.
- 5:23
Instead IIS is used as a reverse proxy to Kestrel using
- 5:28
the ASP.NET Core module, HTTP module.
- 5:32
This is the same overall approach used for hosting NodeJS apps in IIS.
- 5:37
ASP.NET.Core apps running on Castro have
- 5:41
been able to achieve amazing performance benchmarks.
- 5:44
In February 2016 ASP.NET.Core achieved 1.15 million requests per second.
- 5:52
In a sense exceeded that number.
- 5:54
To put that number into perspective,
- 5:58
1.15 million requests per second represents a 2300%
- 6:03
gain over ASP.NET 4.6 or 800% gain over NodeJS.
- 6:08
The second decimal place, 0.05 million or
- 6:12
50,000 is around the total number of requests per second that
- 6:16
ASP .NET 4.6 could perform of the same type on the same hardware.
- 6:22
Most applications will never need this kind of throughput.
- 6:25
But having this kind of headroom will help ensure that your applications feel fast
- 6:30
and responsive.
- 6:32
As previously mentioned, ASP.NET Core runs on Windows, Linux, and macOS.
- 6:38
This is the first time that this is possible
- 6:41
using Microsoft supported runtimes and frameworks.
- 6:44
Cross-platform support opens up new development and deployment scenarios,
- 6:49
including being able to support mixed environment development teams.
- 6:53
And deploying your applications on to cloud hosted Linux virtual machines or
- 6:58
containers.
- 6:59
In this workshop I'll be working with both Windows and macOS.
- 7:04
To get started with ASP.NET Core development visit the new dot.net website,
- 7:10
where you can find detailed installation instructions and
- 7:13
downloadable installers for the platform of your choice.
- 7:17
In the next video we'll use the .NET CLI to create our first project. | https://teamtreehouse.com/library/what-is-aspnet-core | CC-MAIN-2018-13 | refinedweb | 1,385 | 77.74 |
#include <db.h>
int db_env_set_pageyield(int pageyield);
Yield the processor whenever requesting a page from the cache. Setting pageyield to a non-zero value causes Berkeley DB to yield the processor any time a thread requests a page from the cache. This functionality should never be used for any other purpose than stress testing.
The db_env_set_pageyield interface affects the entire application, not a single database or database environment.
The db_env_set_pageyield interface may be used to configure Berkeley DB at any time during the life of the application.
The db_env_set_pageyield function returns a non-zero error value on failure and 0 on success.
The db_env_set_pageyield function may fail and return a non-zero error for the following conditions:
The db_env_set_pageyield function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the db_env_set_pageyield function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way. | http://pybsddb.sourceforge.net/api_c/env_set_pageyield.html | crawl-001 | refinedweb | 169 | 50.77 |
Pike 7.8.866 Release Notes
Changes since Pike 7.8.700 (fourth 7.8 release):
Core
- Improve robustness against outputting wide strings.
master()->handle_error() now survives if, for example, an object has an _sprintf() that returns a wide string.
master()->compile_{error,warning}() now survive messages about wide symbols.
Made a similar fix in Hilfe to survive compiler warnings and errors about wide symbols.
Fixes [bug 6805].
- Avoid recursion on the C-stack, to avoid running out of C-stack when counting huge argument lists.
Fixes [bug 6860].
- Do not use alloca in sprintf()
This fixes a stack corruption bug which occurs when using certain compilers (e.g. msvc).
- Pike.gc_parameters(): Added some gc callbacks.
Adds callbacks that are called from the gc to simplify debugging of memory leaks and similar.
- Improvements in searches on wide strings.
- Fixed padding bug in string_builder_append_integer().
The support for left padding was broken, and would always add the full padding string and also would add erroneous left padding if the field was full.
This bug also affected sprintf().
- Fixed Coverity Scan IDs [SCAN 742690] [SCAN 742691] [SCAN 742477]
Debug
- Activate __CHECKER__ and CLEANUP when --with-valgrind
- Get rid of some harmless valgrind warnings
Build
- Fixed --enable-dlmalloc on systems with struct mallinfo.
- Improved support for building with clang.
- New RPM definition for Pike 7.8 originally based on spec from repoforge.
- Build improvements on Windows/sprshd (avoiding I/O redirection).
- Enable the full address space on NT.
The default address space on NT is just 31 bits. Attempt to get the linker to enable the full addressspace.
- Don't execute $CFLAGS as a command
- Add -mcpu=niagara to CFLAGS on sun4v
- Disable machine code when compiling using GCC 4.6.0 and above, to avoid a broken machine code generator.
Threads
- Fixed hang in co_wait_interpreter().
co_wait_interpreter() would hang (waiting for threads to be reenabled) if called in a disabled_thread context. This happens on OSes using USE_WAIT_THREAD (eg Solaris) if a process is waited on in a disabled_thread context.
Backend
- Improved thread safety in find_call_out() et al.
backend_find_call_out() called is_eq() (which may call Pike code and release the interpreter lock) in a PROTECT_CALL_OUTS() context. This could cause call_out operations performed in other threads to either (no debug) mess with the hash table being traversed or (with debug) cause the fatal "Recursive call in call_out module.".
- Fixed typo in out of band data handling.
Fixes [LysLysKOM 20481103]/[Pike mailing list 13683].
Bug-fixes and New Methods in Modules
ADT
- Fix initialization bug in CircularList.allocate().
Fixes LysLysKOM 20179471/Pike mailinglist 13520.
- Working int32/SWord in Struct.
Calendar
- Improve reentrancy of Timezone.compile().
The runtime timezone compiler was not thread safe, and could fail with the compiler error "Undefined identifier forever." when multiple concurrent threads compiled the same timezone.
Potentially fixes [bug 6816] #1:1.
- Fixed bug where the month was lost with %a to Calendar.parse.
Crypto
- RSA: generate_key() now ensures that the key has the correct size.
Fixes [bug 6620].
- Added module Crypto.Password, for easy handling of password hashes.
- MD5: Added crypt_hash().
- SHA: Added the crypt_hash() function from SHA-crypt.
This implements the hashing function used in modern POSIX operating systems. Implemented from the reference document
Database
- Enable Oracle 11
- Improved compatibility with newer versions of FreeTDS ODBC driver. Fetch date types as fixed length as a workaround for bugs in FreeTDS.
- Support the MariaDB client library.
The MariaDB client library is a forward port of the LGPL mysql client library from MySQL 3.23 to support modern MySQL and MariaDB.
- Potential workaround for race-condition in Mysql.create().
It seems mysql_real_connect() and/or mysql_close() aren't fully thread-safe. The bug has been observed as recently as in MySQL 5.5.30.
Errors
- Errors: Improved emulation or arrays in object errors.
Implement _sizeof(), _indices() and _values() in the generic error class.
Fixes "Index 2 is out of range 0..1." from describe_backtrace().
GTK2
- Add connect_before option to signal_connect so you can connect the signal before or after the default hooks.
- Fixed infinite loop in encode_truecolor_24_rgb_al32().
- Fixed various issues in get_doc() et al.
- Change gobject signal connect to before instead of after.
- Fix a refcounting crash in GDKEvent.
- Pass arguments to accel_group callbacks separately rather than as one array.
- Call the correct callback when an accelerator is hit.
- TreePath: Query the depth for get_indices() rather than looking for a terminator.
- Add a signal_stop() method to prevent signal propagation.
Image
- Fonts: Add PS_NAME attribute in info mapping if possible.
Needed to facilitate compatibility with code relying on behaviour of older versions of FreeType.
- Fonts: Enhanced compatibility with newer versions of FreeType.
- JPEG: Disable the module if empty
- Add basic CMYK/YCCK support to Image.JPEG.decode(). Fixes [bug 6163].
- ColorTable: Fixed some memory leaks in add().
- XPM: Fixed memory zapping bug in _xpm_write_rows().
- Handle orientation information contained in JPEG EXIF information.
Since the default processing of JPEG in Image.JPEG was changed to take EXIF Orientation into account, let's update Image.Dims to also take that into account, so it correctly predicts the dimensions that will result from loading the image.
Added Pike-level wrappers to Image.JPEG.decode() and Image.JPEG._decode() in order to flips/rotates the decoded image based on EXIF Orientation information, if such information is present. Also added an Image.JPEG.raw_decode() to decode an image without rotating/flipping, which works much like Image.JPEG._decode() did before overloading the EXIF handling.
- Add support for native PSD files in Standards.IIM.
- Do not crash when decoding certain PNG files [TURBO2-80].
Java
- Improve diagnostics on failure on NT.
- Attempt to support loading of Java 6 and 7 on NT.
- Use SetDllDirectory() to find required dlls. Fixes [bug 6471].
Oracle's jvm.dll has dependencies on runtime libraries that it doesn't install in the global dll path or in the same directory. This patch adds the directory where they do install the required dlls to the dll search path, with a fallback to using the current directory on older NT.
The main change in this patch is to fix some calling-convention bugs in earlier attempts, and to use the Unicode APIs.
Parser.XML
- Tree: Fixed several issues in namespace handling.
- Tree: Fix some regressions. Fixes InfoKOM 731715.
- Tree: Improved namespace handling in default mode.
Process
- Fixed multiple issues with search_path().
- Unified handling of $PATH.
- Added path_separator.
- search_path() now invalidates the cached path if $PATH is changed.
- search_path() now uses locate_binary() to scan the path.
- Moved an __NT__ special case from locate_binary() to search_path().
- spawn_pike() now uses search_path().
Protocols.DNS
- Destruction of server now results in port closure.
- Support multiple strings per TXT record in the client, via new txta mapping entry.
Protocols.HTTP
- Don't modify the mapping sent to response_and_finish.
Unsuspecting users that pass a constant mapping to response_and_finish when a particular error occurs (or when a particular URL is requested) can fail if response_and_finish alters the mapping. For example, future requestes may retain a 416 error if one request uses the Range: header.
- Support async keep-alive in Protocols.HTTP.Query. Fixes [bug 7143].
Protocols.SMTP
- send_message() now punicodes the hostname.
Potential fix for [bug 6531].
- Fixed error in GetRequest variable bindings.
The variable value should be ASN1 "Null" rather than a bogus integer. C.f. RFC 1905 section #3.
Sql.rsql
- Implemented generic proxy of functions.
- Implemented support for all big_query() variants.
This implements support for big_typed_query(), streaming_query() and streaming_typed_query().
- Implement {get,set}_charset().
- Default to not reconnect on broken connection.
This behaviour is in line with the other SQL modules, and is necessary to avoid corruption due to lost state.
- Implement ping() API.
ping() now signals if any part of the connection was reconnected (1), or has been broken (-1).
- Fix reconnect code.
The state-machine on the client side when the rsqld server died was broken in several ways. It now seems to work.
- Implemented insert_id().
SSL
- Attempt to protect against some timing attacks.
Move around some code and attempt to get it to execute in constant time. This is in an attempt to alleviate the "Lucky Thirteen" TLS attack.
- Avoid rescheduling the ssl_read_callback on no read_callback.
This could lead to call_out loops taking 100% cpu, since no data would be read from the read_buffer.
Potential fix for [bug 6582].
- Added linger().
Implemented linger() API. The linger time is propagated to the raw socket, and additionally a linger time of zero inhibits sending of the close packet.
Stdio
- cp() detects ouroboros and avoids infinite loops and file truncation.
- Reduce number of system calls in mkdirhier().
- Call fd_select() before fd_accept() in my_socketpair() to make sure it's ready.
Added support for poll() to the fd_accept() check in my_socketpair() and lowered the select() timeout.
- Added linger(), to change the linger time on sockets.
- Fixed a bug in Stdio.FakeFile::read_function() where read data wasn't returned.
- Open files in binary mode, for systems that care (OS/2, etc).
Standards
- Standards.EXIF supports rationals with a zero denominator.
The denominator may be zero to indicate infinites.
Fixes [bug 6729].
ZXID
- Update the zxid_conf::path_len field.
Fixes [roxen.com #16333] where assertion data was lost after the redirect.
Web
- Web.CGI.Request: rest_query may be zero. Fixes [bug 6685]. | http://pike.lysator.liu.se/download/notes/7.8.866.xml | CC-MAIN-2020-40 | refinedweb | 1,533 | 61.73 |
Hello programmers, today's article is all about the Matplotlib contourf() function in Python. The contourf() function in the pyplot module of the Matplotlib library helps plot filled contours. Contour plots, also termed level plots, are tools for doing multivariate analysis and visualizing 3-D data in 2-D space. We will look into examples and implementations of the contourf() function. But before that, let me just brief you about its syntax and parameters.
Syntax of contourf() function:
matplotlib.pyplot.contourf(*args, data=None, **kwargs)
Call Signature: contourf([X, Y,] Z, [levels], **kwargs)
Parameters of Matplotlib Contourf:
X, Y: The coordinates of the values in Z. Both may be 2-D arrays with the same shape as Z, or 1-D such that len(X) is the number of columns in Z and len(Y) is the number of rows in Z.
Z: Height values over which the contour is drawn. (Array-like)
levels: Determine the numbers and positions of the contour lines/regions. For integer n, use n data intervals, i.e., draw n+1 contour lines. For arrays, draw contour lines at the specified levels. The values must be in increasing order.
Return type:
Returns a QuadContourSet object describing the filled contour regions drawn for the given parameters; it can be passed to functions such as fig.colorbar() to add a colorbar for the plot.
Example of Matplotlib contourf()
import matplotlib.pyplot as plt
import numpy as np

feature_x = np.linspace(-5.0, 3.0, 70)
feature_y = np.linspace(-5.0, 3.0, 70)

# Creating 2-D grid of features
[X, Y] = np.meshgrid(feature_x, feature_y)

fig, ax = plt.subplots(1, 1)

Z = X ** 2 + Y ** 2

# plots filled contour plot
ax.contourf(X, Y, Z)

ax.set_title('Filled Contour Plot')
ax.set_xlabel('feature_x')
ax.set_ylabel('feature_y')

plt.show()
Output:
Explanation:
In the above example, the NumPy meshgrid() function creates a 2-dimensional grid containing the coordinates of the values in Z, so X and Y have the same dimensions as Z. The Z array contains the height values over which the contour is drawn. Thus, passing X, Y, and Z as arguments to the contourf() function, we get a filled contour plot. The title of the plot is set to 'Filled Contour Plot', and the x- and y-labels are set to 'feature_x' and 'feature_y', respectively.
Setting Colorbar Range with Matplotlib contourf() in Python
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(20)
y = np.arange(20)
data = x[:, None] + y[None, :]
X, Y = np.meshgrid(x, y)

vmin = 0
vmax = 15

fig, ax = plt.subplots()
contourf_ = ax.contourf(X, Y, data, 400, vmin=vmin, vmax=vmax)
cbar = fig.colorbar(contourf_)
# Clamp the color limits on the mappable returned by contourf();
# Colorbar objects in current Matplotlib have no set_clim() method
contourf_.set_clim(vmin, vmax)
Output:
Explanation:
In the above example, vmin and vmax set the color bounds used when drawing the contours, but by themselves they do not change the colorbar's displayed range. Passing the contour set returned by contourf() to the colorbar() method attaches a colorbar to the plot, and calling set_clim() clamps the displayed range to the desired bounds.
Plotting 3D contour with Matplotlib contourf() in Python
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

a = np.array([-3, -2, -1, 0, 1, 2, 3])
b = a
a, b = np.meshgrid(a, b)

fig = plt.figure()
# fig.gca(projection="3d") was removed in newer Matplotlib releases
axes = fig.add_subplot(projection="3d")
axes.plot_surface(a, b, a ** 2 + b ** 2, cmap="rainbow")
plt.contour(a, b, a ** 2 + b ** 2, cmap="rainbow")
plt.show()
Output:
Explanation:
In this example, the numpy and matplotlib libraries are imported. A numpy array is created and stored in a, and b is set to the same values. After that, the meshgrid function is used, with a and b passed inside it. The contour is then plotted by passing three arguments: a, b and a**2+b**2. For three-dimensional contour plotting, the Axes3D module from mpl_toolkits.mplot3d needs to be imported specifically.
Matplotlib contourf() v/s contour()
Both contourf() and contour() functions of the Matplotlib library are used for contour plotting in Python. The only difference between them is that the contourf() is used to plot filled contours while contour() only plots the contour lines. Below is an example to demonstrate the Matplotlib contour() function in Python.
import matplotlib.pyplot as plt
import numpy as np

feature_x = np.arange(0, 50, 2)
feature_y = np.arange(0, 50, 3)

# Creating 2-D grid of features
[X, Y] = np.meshgrid(feature_x, feature_y)

fig, ax = plt.subplots(1, 1)
Z = np.cos(X / 2) + np.sin(Y / 4)

# plots contour lines
ax.contour(X, Y, Z)
ax.set_title('Contour Plot')
ax.set_xlabel('feature_x')
ax.set_ylabel('feature_y')
plt.show()
Output:
Conclusion:
In this article, we discussed contour plots with examples and implementations. Contour plots are widely used to visualize density, altitude or height data by representing a three-dimensional surface on a two-dimensional plane. Unlike the MATLAB version, contourf() does not draw polygon edges. The contourf() function fills intervals that are closed at the top; each filled region includes its upper boundary, and the lowest interval also includes the lowest values. You can refer to this article for clear and concise knowledge on Matplotlib contourf() in Python.
However, if you have any doubts or questions do let me know in the comment section below. I will try to help you as soon as possible.
Happy Pythoning! | https://www.pythonpool.com/matplotlib-contourf/ | CC-MAIN-2021-43 | refinedweb | 856 | 61.22 |
Originally published at rossta.net
I recently encountered a Rails app at work that was spending nearly seven minutes precompiling assets:
I looked in the
Gemfile and found the project was using Webpacker. My spidey sense started to tingle.
I've seen this before.
Leaning on prior experience, I found the problem, moved some files around, and pushed a branch with the fix up to CI.
The build step dropped from nearly seven minutes to less than one. Big improvement! When I heard from the team, the fix also greatly improved the local development experience; before, re-compiling Webpack assets on page refreshes would take a painfully long time.
So what were the changes?
A Common Problem
First, let's take a step back. If you're new to Webpack and Webpacker for Rails, chances are you may be making some simple mistakes.
I know this because I was once in your shoes struggling to learn how Webpack works. I've also spent a lot of time helping others on my team, on StackOverflow, and via
rails/webpacker Github issues.
One of the most frequently-reported issues I've seen is slow build times. This is often coupled with high memory and CPU usage. For Heroku users on small dynos, resource-intensive asset precompilation can lead to failed deploys.
More often than not, the root cause is a simple oversight in directory structure—a mistake I call "overpacking".
Overpacking explained
Here's the layout of the
app/javascript directory in the Rails app before I introduced the fix:
rake assets:precompile — 6:56
app/
  javascript/
    packs/
      application.js
      components/    # lots of files
      images/        # lots of files
      stylesheets/   # lots of files
      ...
Here's what the project looked like building in under a minute:
rake assets:precompile — 0:44
app/
  javascript/
    components/
    images/
    stylesheets/
    ...
    packs/
      application.js  # just one file in packs/
See the difference?
The primary change here was moving everything except
application.js outside of the
packs directory under
app/javascript. (To make this work properly, I also had to update some relative paths in
import statements.)
Webpack Entry Points
So why did this matter?
Webpack needs at least one entry point to build the dependency graph for produce the JavaScript and CSS bundles and static assets (images, fonts, etc).
The Webpacker project refers to entries as packs.
"Entry" is listed as the first key concept on Webpack's documentation site.
Webpack will build a separate dependency graph for every entry specified in its configuration. The more entry points you provide, the more dependency graphs Webpack has to build.
Since Webpack*er*, by default, treats every file in the
packs directory as a separate entry, it will build a separate dependency graph for every file located there.
That also means, for every file in the
packs directory, there will be at least one, possibly more, files emitted as output in the
public directory during precompilation. If you're not linking to these files anywhere in your app, then they don't need to be emitted as output. For a large project, that could be lot of unnecessary work.
Here's a case where Rails tries to make things easier for you—by auto-configuring entry files—while also making it easier to shoot yourself in the foot.
A Simple Rule
Is your Webpacker compilation taking forever? You may be overpacking.
If any file in Webpacker's "packs" directory does not also have a corresponding
javascript_pack_tag in your application, then you're overpacking.
Be good to yourself and your development and deployment experience by being very intentional about what files you put in your "packs" directory.
Don't overpack. At best, this is wasteful; at worst, this is a productivity killer.
Interested to learn more about Webpack on Rails? I'm creating a course.
Cover photo by Brandless on Unsplash
SystemJS has been added as a module kind for TypeScript, so you should be able to use SystemJS out of the box and compile with the module flag:
tsc --module system app.ts
This was added alongside the ES6 module importing syntax, so you should be able to use that style and have it compiled to what you need.
import * as $ from "jquery";
If you are using the SystemJS syntax, you can declare the parts you need like this:
systemjs.d.ts
interface System {
  then: (cb: Function) => void;
}

interface SystemStatic {
  import: (name: string) => System;
}

declare var System: SystemStatic;
export = System;
You should then be able to use it like this:
/// <reference path="jquery.d.ts" />
import System = require('systemjs');

System.import('jquery').then(($: JQueryStatic) => {
  $('#id').html('Hello world');
});
Streaming media has become ubiquitous on the Web. It seems like everyone—from news sites to social networks to your next-door neighbor—is involved in the online video experience. Due to this surge in popularity, most sites want to present high-quality video—and often high-quality bandwidth-aware video—to their consumers in a reliable and user-friendly manner.
A key element in the online media delivery experience is the player itself. The player is what the customer interacts with, and it drives every element of the user's online experience. With so much attention centered on the player, it's no surprise that modern, Web-based media players have become a great deal more complicated to implement than they were even a couple years ago. As a result, developers need a robust framework on which they can build their players. The Silverlight Media Framework (SMF), an open source project from Microsoft, provides just such a foundation.
This article will explain the basic elements of SMF, demonstrate how you can integrate SMF into your own player projects and walk you through a simple project that uses SMF to create a custom player experience. I’ll show you how to use the logging, settings, and event-handling features of SMF. Finally, I’ll create a player application that displays suggested videos for further viewing when the current video ends.
Getting Started with SMF
To get started, the first thing you’ll want to do is download the framework from Codeplex (smf.codeplex.com). You also need to download the Smooth Streaming Player Development Kit (iis.net/expand/smoothplayer) and reference it in any projects using SMF. The Smooth Streaming Player Development Kit is not part of SMF—it’s a completely separate, closed-source component. However, SMF leverages a core set of functionality from the kit, in particular the video player itself. As of the writing of this article, the Smooth Streaming Player Development Kit is in beta 2.
SMF consists of a number of Microsoft .NET assemblies (as shown in Figure 1), each a different functional part of the overall framework.
Figure 1 The Silverlight Media Framework Assemblies
The core assembly is Microsoft.SilverlightMediaFramework.dll, which comprises a number of utility classes and types referenced throughout the rest of the framework. When using any aspect of SMF, you must also reference the Microsoft.SilverlightMediaFramework.dll assembly.
The Microsoft.SilverlightMediaFramework.Data namespace provides helper classes for consuming data external to the player and for encapsulating data within the player. The data can be general, with any form, but it can also be settings information for the player itself. There’s another namespace, Microsoft.SilverlightMediaFramework.Data.Settings, for types representing and dealing with player settings.
Apart from data used for settings, the type within the Data namespace you’ll most likely interact with is the out-of-stream DataClient class, which can retrieve data from an external source. You reference this assembly if you want to download and use data external to the player.
The SMF player includes the robust Microsoft.SilverlightMediaFramework.Logging framework that uses a callback-style paradigm in which writing to the logging infrastructure raises events. You register your own callback methods with the logging system, and these callbacks carry out additional operations once invoked—such as posting information to a Web service or displaying information to a text box. You reference this assembly if you wish to use the built-in logging facilities of SMF.
The Microsoft.SilverlightMediaFramework.Player assembly implements the player itself. It also provides a number of controls the player relies on, such as a scrubber, volume control and timeline markers. The default SMF player is sleek and clean, a great starting point for any project requiring a Silverlight player. However, central to all controls defined within SMF is the notion of control templating, so each control can be themed by using tools such as Expression Blend or Visual Studio.
Building and Referencing SMF
SMF downloads as a single .zip file in which you’ll find a solution file, a project for each output library, and test projects for running and verifying the player itself.
SMF relies on the Smooth Streaming Player Development Kit. To reference the kit, move the Smooth Streaming assembly (Microsoft.Web.Media.SmoothStreaming.dll) into the \Lib folder of the SMF project.
Next, open the SMF solution in Visual Studio and build it, creating all the assemblies needed to leverage the framework. To verify that everything executes as expected, press F5 to begin debugging. The solution will build and the Microsoft.SilverlightMediaFramework.Test.Web target will execute, presenting you with the default SMF player streaming a “Big Buck Bunny” video (see Figure 2). Note how complete the default player already is, with a position element for scrubbing, play/stop/pause buttons, volume controls, full screen controls and so forth.
Figure 2 The SMF Player and the Big Buck Bunny Video
The next step is to create your own separate Silverlight project and leverage SMF from within it. In Visual Studio click File | New | Project | Silverlight Application. Call the solution SMFPlayerTest and click OK. A modal dialog will pop up, asking whether you wish to host the Silverlight application in a new Web site. Click OK and you’ll see a basic Silverlight application solution consisting of two projects, SMFPlayerTest and SMFPlayerTest.Web.
The final step is to reference the Smooth Streaming Player Development Kit and SMF assemblies from your newly created project. Copy the output SMF assemblies and Smooth Streaming Player Development Kit from the SMF solution’s Debug folder and paste them into your new project as shown in Figure 3. Your new solution now includes all the assembly references required to take full advantage of the SMF.
Figure 3 Referencing the Required Assemblies
Displaying the Player
To begin using the SMF, include the SMF player’s namespace within your MainPage.xaml page. This ensures that all references resolve properly:
Now insert player’s XAML within the page’s LayoutRoot Grid control:
Pressing F5 will launch the project and bring up the SMF player. However, because the player hasn’t been told what to play, it does nothing. All you get is a player with no content to play.
SMF uses SmoothStreamingMediaElement (from the Smooth Streaming Player Development Kit) to play video. From SmoothStreamingMediaElement, SMF inherits its own player, called CoreSmoothStreamingMediaElement. This object is required if you want the player to stream content. Be sure to set the SmoothStreamingSource property to a valid smooth streaming media URL:
As mentioned earlier, Microsoft provides the “Big Buck Bunny” sample video stream, which developers can use to test Silverlight projects. To use this test stream, set the SmoothStreamingSource property on the CoreSmoothStreamingMediaElement to:
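A sketch of the resulting markup follows. The manifest URL shown is a placeholder, not the actual address of the "Big Buck Bunny" test stream, and the exact nesting of the media element inside the Player may differ:

```xml
<Core:Player>
  <Core:CoreSmoothStreamingMediaElement
      x:
      AutoPlay="True"
      SmoothStreamingSource=
          "http://example.com/BigBuckBunny.ism/Manifest" />
</Core:Player>
```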
Once again, press F5 to build and run the project. The browser will execute with the same player as before, but this time the “Big Buck Bunny” video will begin streaming moments after the player has fully loaded. If your task was to create a basic Silverlight player to stream content, you’ve done it.
However, the SMF offers quite a bit more than we’ve seen thus far. Let’s add some basic logging.
Logging in the Player
Logging in SMF is simple—whenever an event is logged, it raises a LogReceived event. You register an event handler for this event, and thereby receive a notification for each logging event as it’s raised. What you do with the notification is up to you; you can display it in a new window within the player, filter the events and notify a Web service whenever a certain event gets raised, or do whatever is necessary for your scenario.
The LogReceived event is statically defined on the Logger class itself (defined within Microsoft.SilverlightMediaFramework.Logging.dll), so it’s possible to register for logging events anywhere within the project. Here’s an example of registering for and defining the event handler within the MainPage.xaml file of the SMFPlayerTest project:
public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();
        Logger.LogReceived +=
            new EventHandler<SimpleEventArgs<Log>>(Logger_LogReceived);
    }

    void Logger_LogReceived(object sender,
        Microsoft.SilverlightMediaFramework.SimpleEventArgs<Log> e)
    {
        throw new NotImplementedException();
    }
}
SMF raises quite a few events out of the box. To see them, create a breakpoint within the Logger_LogReceived method and run the player once again in Debug mode. Almost immediately your breakpoint will get hit, allowing you to step through the method’s parameters and see the information passed to it.
Log event data is packaged within a special messaging object whose type must inherit from an abstract class named Log. This abstract Log type has three properties: Sender, Message and TimeStamp. Sender references the object that raised the event. Message is an object of type System.String that holds the text for the logging event. TimeStamp simply holds the date and time at which the logging object was first instantiated. The SimpleEventArgs<> object passed as the second parameter to your event handler holds a reference to the Log object through its Result property.
To raise a log event, all that’s required is to instantiate a type that inherits from the Log base class, then pass this type to the statically defined Log method on the Logger type. The framework supplies a DebugLog class that already inherits from the Log base type. What’s special about the DebugLog type, however, is that if the libraries being referenced by your Silverlight project were created under a Debug build of the SMF, passing a DebugLog type to the SMF logging framework will raise a corresponding logging event (and therefore invoke your event handlers). On the other hand, a Release build of the SMF will ignore any call to the Log method that gets passed the DebugLog class. In short, if you have debugging statements you only want to use Debug builds, with the DebugLog object as the log event argument; otherwise you will need to construct your own type that inherits from the abstract Log type.
Here’s an example that raises a Listening event through the SMF event system by instantiating a DebugLog object and passing it to the Logger’s static Log method (be sure your Smooth Streaming Player Development Kit files were built under Debug settings):
Inheriting from the Player Class
Although logging is a central feature of the player, the SMF playback features are only accessible when you inherit from and begin extending the SMF Player type itself.
To see how this works, you need to create a new class called SMFPlayer that inherits from the Player type.
The new SMFPlayer class looks like this:
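A sketch of the derived class:

```csharp
using Microsoft.SilverlightMediaFramework.Player;

namespace SMFPlayerTest
{
    public class SMFPlayer : Player
    {
        public override void OnApplyTemplate()
        {
            // A breakpoint here confirms the derived player runs
            base.OnApplyTemplate();
        }
    }
}
```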
Every FrameworkElement type (such as Player in SMF) has an OnApplyTemplate method that is called whenever the ApplyTemplate event is raised. This method often serves as a useful starting point when initializing a FrameworkElement type.
In this case, I override the default OnApplyTemplate method from within the new SMFPlayer class. To demonstrate that the new SMFPlayer type is executed instead of the default Player type, you can set a breakpoint within the override. When you debug the player in Visual Studio, this breakpoint will be encountered when Silverlight executes the SMFPlayer.
Now update the MainPage.xaml file to use the new player class. First, include the player’s namespace in the list of namespaces already referenced (just as you did the player namespace earlier):
Then simply update the Player tags within the XAML to use SMFPlayer instead of Player:
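In sketch form (the local prefix name is arbitrary):

```xml
<UserControl ...
    xmlns:

  <Grid x:
    <local:SMFPlayer />
  </Grid>
</UserControl>
```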
Next, instantiate a DebugLog class and pass it to the Log method as shown earlier. Doing so will fire the event for which you previously registered an event handler.
To listen specifically for this event from within the event handler, filter the Message property of the DebugLog object itself. In this example, look for any message that contains “OnApplyTemplate”:
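For example, a sketch of the revised handler:

```csharp
void Logger_LogReceived(object sender,
    SimpleEventArgs<Log> e)
{
    if (e.Result.Message != null &&
        e.Result.Message.Contains("OnApplyTemplate"))
    {
        // React to the template-applied log entry here
    }
}
```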
Using Settings Data
A mature framework for dealing with settings is crucial to most large-scale software projects. The code for handling settings in SMF is built on the Microsoft.SilverlightMediaFramework.Data.dll assembly, which allows you to download generic, external data. The settings layer of SMF uses this infrastructure to reach out and download a specially formatted XML settings file hosted on a Web server. Once the settings data has been successfully downloaded and read, the SMF settings layer encapsulates it with a SettingsBase object whose methods are then used to retrieve the settings values.
The SettingsBase class, as the name suggests, serves as a base for a more specific class that can provide strongly typed access to your settings values. Here’s an example of a class that inherits from SettingsBase. It has two properties, one for retrieving a video player source URL and another for retrieving a Boolean value that indicates whether the video player should start automatically or wait for the viewer to press the play button:
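A sketch of such a class; the GetParameterValue accessor name is an assumption about the SettingsBase API:

```csharp
public class SMFPlayerTestSettings : SettingsBase
{
    public Uri SmoothStreamingSource
    {
        get
        {
            // Accessor name is an assumption
            return new Uri(
                GetParameterValue("SmoothStreamingSource"));
        }
    }

    public bool AutoPlay
    {
        get { return bool.Parse(GetParameterValue("AutoPlay")); }
    }
}
```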
The property methods use functions implemented by the SettingsBase class to inspect the underlying collection of settings name/value pairs loaded into the type (through a mechanism discussed shortly). This provides a type-safe and IntelliSense-friendly method of retrieving settings information.
Now create a new XML file in the SMFPlayerTest.Web project, name it SMFPlayerSettings.xml, and add the following to it:
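The original file contents are not reproduced here; the following is a sketch whose element and attribute names are assumptions. The real file must use whatever schema the SMF settings parser expects, with values matching the property names in SMFPlayerTestSettings:

```xml
<Settings>
  <Setting Name="SmoothStreamingSource"
           Value="http://example.com/BigBuckBunny.ism/Manifest" />
  <Setting Name="AutoPlay" Value="True" />
</Settings>
```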
Next, create a SettingsClient object into which you’ll load the settings XML. SettingsClient takes a URI pointing to the settings file:
The process of retrieving the settings data is asynchronous, so a callback method must be assigned to the RequestCompleted event on the SettingsClient object:
The last step is to invoke the parameterless Fetch method on the SettingsClient object. When the data is retrieved, the settingsGetter_RequestCompleted event handler will be invoked and a SettingsBase object will be passed to it:
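Putting the three steps together in sketch form (the settings file URL is a placeholder):

```csharp
SettingsClient settingsGetter = new SettingsClient(
    new Uri("http://localhost/SMFPlayerSettings.xml",
            UriKind.Absolute));
settingsGetter.RequestCompleted +=
    settingsGetter_RequestCompleted;
settingsGetter.Fetch();
```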
The SettingsBase object passed to the settingsGetter_RequestCompleted method is loaded with the name/value pairs parsed for you by the underlying framework from the file SMFPlayerSettings.xml. In order to load this data into your SMFPlayerTestSettings object, you simply call the Merge method, which merges settings information from one SettingsBase-derived object with that of another:
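A sketch of the callback; the event-argument type and the mediaElement member name are assumptions:

```csharp
void settingsGetter_RequestCompleted(object sender,
    SimpleEventArgs<SettingsBase> e)
{
    SMFPlayerTestSettings settings = new SMFPlayerTestSettings();
    settings.Merge(e.Result);

    // Member name "mediaElement" is an assumption
    mediaElement.AutoPlay = settings.AutoPlay;
    mediaElement.SmoothStreamingSource =
        settings.SmoothStreamingSource;
}
```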
You no longer have to hard-code the AutoPlay and SmoothStreamingSource properties on the CoreSmoothStreamingMediaElement within the page XAML, because the player settings are being downloaded from within the OnApplyTemplate method. This is all you need for the player XAML:
When you run the player, all the settings data will load, the callback will load the values into the player’s media element, and the video will begin to stream just as it did before.
Extending the SMF Player
On many popular video sites, when video playback has completed, you see a list of similar or recommended videos. To illustrate how easy it is to extend the SMF player, let’s walk through the steps to build a similar suggested-viewing feature into the SMFPlayerTest project.
Start by adding an x:Name attribute to the Player element in the MainPage.xaml file:
This makes it easier to refer to the SMFPlayer object by name within both Visual Studio and Expression Blend.
Now, right-click on the MainPage.xaml file in Solution Explorer and select Open in Expression Blend. Expression Blend 3 will launch and display a design interface to the SMF player. In the Objects and Timeline section, you’ll find a myPlayer node in the tree of visual objects that corresponds to the name given to the SMFPlayer object previously. The goal is to create a template for SMFPlayer, then to add three Suggestion buttons to the template. By using a template in Expression Blend, you can add, edit or remove controls built into the player itself.
To create a template, right-click myPlayer in the Objects and Timeline window and select Edit Template | Edit a Copy. A Create Style Resource dialog will be displayed, click OK. To insert the three buttons on top of the video player, double-click the button icon in the Tools window for each button you want to add. Three buttons should now be visible in the tree of controls that make up the player template (see Figure 4).
Figure 4 Button Controls Added to the Control Tree
Select all three buttons in the tree, go to the properties window for the controls and set the horizontal and vertical alignment to be centered (see Figure 5), thus aligning the buttons down the center and middle of the video player.
Figure 5 Setting Button Control Alignment
The buttons are the default size and lie on top of each other. Set the width of each button to 400, and the height to 75. Next, adjust the margins so that one button has a 175-pixel offset from the bottom, another 175-pixel offset from the top and the last has no margin offsets at all. The end result will look like Figure 6.
Figure 6 The Centered Buttons in Expression Blend
To verify the buttons have been properly placed on the player, save all open files in Expression Blend and return to Visual Studio. Visual Studio may prompt you to reload documents that were changed by Expression Blend. If so, click OK. From within Visual Studio, press F5 to relaunch the SMF player in Debug mode. The player should now appear with three buttons aligned down the center of the video screen as shown in Figure 7.
Figure 7 The Centered Buttons in the SMF Player
Hooking up Event Handlers
Event handlers must now be associated with the buttons. To reference the buttons from code, you need to assign names to them, which you do via the Name text box in the Properties tab. For simplicity, name the buttons Button1, Button2 and Button3. When you’re done, the Objects and Timeline window should update and display the button names adjacent to the button icons in the visual tree.
Within the Properties tab for each button you’ll find an Events button that’s used to assign event handlers for a visual component. Select one of the buttons, click the Event button within the Properties tab, and double-click the Click text box to auto-generate an event handler within the MainPage.xaml.cs. The properties window for each button will now have an event handler assigned to its Click event (see Figure 8), and the MainPage.xaml.cs file will have event handlers assigned to each button’s Click event.
Figure 8 Setting the Event Handler
You can now debug the player. Clicking any of the buttons on the screen will raise a Click event, which is now handled by the auto-generated methods within MainPage.xaml.cs.
Suggested Videos
Now let’s use these buttons to enable the suggested video feature. The following XML will represent the suggestions:
The value of the Url attribute will specify the video the player is to load when the button is clicked, and the DisplayName attribute is the text to be written on the button. Save this file with the name Suggestions.xml in the SMFPlayerTest.Web project.
The DataClient type (within the Microsoft.SilverlightMediaFramework.Data namespace) will be used to download the XML document and to represent the content in a type-safe manner. To represent each Suggestion read from the XML file in a strongly typed fashion, create a class called SMFPlayerTestSuggestion in your Silverlight project:
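Its shape follows from the object initializer used later in OnRequestCompleted:

```csharp
public class SMFPlayerTestSuggestion
{
    public string DisplayName { get; set; }
    public Uri Url { get; set; }
}
```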
DataClient, like SettingsBase, is intended to be derived from by a class that enables a strongly typed representation of the data from the XML content (in this case, an array of SMFPlayerTestSuggestion objects).
Create another class file within the SMFPlayerTest project called SMFPlayerTestDataClient:
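A sketch of the class declaration; whether DataClient accepts the source URI through its constructor is an assumption:

```csharp
public class SMFPlayerTestDataClient
    : DataClient<SMFPlayerTestSuggestion[]>
{
    public SMFPlayerTestDataClient(Uri sourceUri)
        : base(sourceUri)
    {
    }

    protected override void OnRequestCompleted(
        object sender, SimpleEventArgs<string> e)
    {
        // Parse the downloaded XML here; a fuller
        // implementation appears below.
    }
}
```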
SMFPlayerTestDataClient inherits from DataClient and sets its template argument to an array of SMFPlayerTestSuggestion types. The DataClient base class provides all the necessary asynchronous networking logic to go online and download the external XML file. Once the content has been downloaded, however, the DataClient base will invoke OnRequestCompleted and expect all processing of the XML data to take place then. In other words, the DataClient base class downloads the content, but the implementer is responsible for doing something with it.
Here’s a more complete implementation of OnRequestCompleted:
protected override void OnRequestCompleted(
    object sender, SimpleEventArgs<string> e)
{
    XDocument doc = XDocument.Parse(e.Result);
    List<SMFPlayerTestSuggestion> suggestions =
        new List<SMFPlayerTestSuggestion>();

    foreach (XElement element in doc.Descendants("Suggestion"))
    {
        suggestions.Add(new SMFPlayerTestSuggestion
        {
            DisplayName = element.Attribute("DisplayName").GetValue(),
            Url = element.Attribute("Url").GetValueAsUri()
        });
    }

    base.OnFetchCompleted(suggestions.ToArray());
}
For the sake of simplicity, I’ve used LINQ to XML in this implementation to parse the required elements and attributes in the XML. Once the DisplayName and Url attribute values from each Suggestion node have been retrieved, a SMFPlayerTestSuggestion object is instantiated and the values are assigned.
The final step is the invocation of the OnFetchCompleted method, which raises the FetchCompleted event. Outside consumers of SMFPlayerTestDataClient may register event handlers to the FetchCompleted event to be notified when the suggested video data has been downloaded. Because OnRequestCompleted has packaged the XML data in a type-safe manner, each event handler will receive a handy array of SMFPlayerTestSuggestion objects, one for each Suggestion element in the XML document the DataClient base class downloaded.
The underlying DataClient provides a method called Fetch that, once invoked, begins the process of asynchronously downloading content. To begin downloading the suggestion data when the video has ended, attach an event handler called mediaElement_MediaEnded to the MediaEnded event on the MediaElement object:
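In sketch form (the suggestions URL is a placeholder):

```csharp
void mediaElement_MediaEnded(object sender, RoutedEventArgs e)
{
    SMFPlayerTestDataClient suggestionClient =
        new SMFPlayerTestDataClient(
            new Uri("http://localhost/Suggestions.xml",
                    UriKind.Absolute));
    suggestionClient.FetchCompleted += suggestion_FetchCompleted;
    suggestionClient.Fetch();
}
```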
The mediaElement_MediaEnded method creates an instance of the SMFPlayerTestDataClient type, assigns another event handler to the FetchCompleted event, and then invokes Fetch to begin the download process. The FetchCompleted handler will be invoked by the call to OnFetchCompleted implemented previously within OnRequestCompleted (which is invoked by the DataClient base type once the content has downloaded).
The implementation of suggestion_FetchCompleted, registered within mediaElement_MediaEnded, takes the strongly typed array of Suggestion data and assigns one Suggestion to each button:
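A sketch of the handler, relying on the GetTemplateChild lookup described below:

```csharp
void suggestion_FetchCompleted(object sender,
    SimpleEventArgs<SMFPlayerTestSuggestion[]> e)
{
    string[] names = { "Button1", "Button2", "Button3" };
    for (int i = 0;
         i < names.Length && i < e.Result.Length; i++)
    {
        Button button = GetTemplateChild(names[i]) as Button;
        if (button != null)
        {
            button.Content = e.Result[i].DisplayName;
            button.Tag = e.Result[i].Url;
        }
    }
}
```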
GetTemplateChild, a method on the underlying FrameworkElement type, gets a reference to each of the buttons defined in the MainPage XAML. For each button, the display text is assigned to the Content property, and the URI is assigned to the Tag property. Each button’s click event handler can then pull the URI from the Tag property and assign the URL to the player’s MediaElement to play the stream:
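Each click handler can then follow this sketch; the mediaElement member name is an assumption:

```csharp
void Button1_Click(object sender, RoutedEventArgs e)
{
    Uri source = ((Button)sender).Tag as Uri;
    if (source != null)
    {
        // Member name "mediaElement" is an assumption
        mediaElement.SmoothStreamingSource = source;
    }
}
```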
Showing the Buttons
The final step is to hide the buttons until the currently streaming video has ended, at which point the buttons become visible. Once a user clicks a button, the buttons are hidden again.
Within Visual Studio, edit the SMFPlayer class by decorating it with two TemplateVisualState attributes:
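Using the group and state names that appear in Expression Blend later in this walkthrough:

```csharp
[TemplateVisualState(GroupName = "SuggestionStates", Name = "Hide")]
[TemplateVisualState(GroupName = "SuggestionStates", Name = "Show")]
public class SMFPlayer : Player
{
    // ...
}
```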
TemplateVisualState is a fascinatingly powerful attribute that defines visual states under which an object may exist. Once a visual state becomes active, Silverlight will update properties of visual elements belonging to the class as instructed—such as the visibility of a child button control.
To set the current visual state, use the static GoToState method of the VisualStateManager class (a native Silverlight type). The GroupName property of the TemplateVisualState groups like states together, whereas the Name property of the TemplateVisualState specifies the individual state.
Return to Expression Blend. In the myPlayer template, click myPlayer directly above the designer window, then click Edit Template | Edit Current. Click the States tab and scroll down to SuggestionStates, as shown in Figure 9.
Figure 9 Visual States for SuggestionStates
The two SuggestionStates created by the attributes appear as Hide and Show. If you click on Hide, a red circle appears just to the left, indicating Expression Blend is recording any property changes made within the designer. Expression Blend continues to record property changes until Hide is clicked again, which causes the red recording circle to disappear.
With Expression Blend actively recording for the Hide visual state, set the buttons to Collapsed. Select all three buttons under the Objects and Timeline window and choose Collapsed as their Visibility in the Properties tab. Stop recording for the Hide visual state by clicking the Hide button once again. Now click Show so that a red circle appears to the left of the Show visual state. This time explicitly record Visible as the visibility status by clicking the Advanced Property Options button just to the right of the Visibility drop-down and selecting Record Current Value. Save all open documents and once again return to Visual Studio.
The native Silverlight class, VisualStateManager, is used to explicitly set a currently active visual state. From within the OnApplyTemplate method of the player, set Hide as the currently active visual state:
Within suggestion_FetchCompleted, set Show as the currently active state to display the buttons once the stream has ended and the Suggestion data download has completed:
To hide the buttons once a button is clicked (or the original stream is replayed), create a new event handler for the MediaElement’s MediaOpened event, and set the visual state to Hide.
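Pulling the three state changes together, a hedged sketch (field and handler names are assumptions):

```csharp
public override void OnApplyTemplate()
{
    base.OnApplyTemplate();
    // Buttons start out hidden.
    VisualStateManager.GoToState(this, "Hide", true);
}

void suggestion_FetchCompleted(object sender, FetchCompletedEventArgs e)
{
    // ...assign suggestions to buttons, per the article...
    VisualStateManager.GoToState(this, "Show", true); // reveal the buttons
}

void mediaElement_MediaOpened(object sender, RoutedEventArgs e)
{
    // Hide the buttons again whenever a new stream starts playing.
    VisualStateManager.GoToState(this, "Hide", true);
}
```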
Launch and debug the player one final time. You’ll see the buttons are invisible until the very end of the video, at which point they become visible. Clicking a button navigates the player to whatever URL was specified in the button’s corresponding Suggestion setting.
The SMF project space on Codeplex gives you access to the code base, documentation, discussions and the issue tracker. Take a look and contribute what you can. The more creative minds applied to the project, the better the result for everyone.
Ben Rush is an 18-year veteran software developer specializing in the Microsoft .NET Framework and related Microsoft technologies. He enjoys smart code and fast bike rides.
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/en-us/magazine/ff646972.aspx | CC-MAIN-2019-35 | refinedweb | 4,234 | 51.38 |
Hi, I'm new in the forums, so nice to meet you guys! I'm starting to work in Unity, so at the moment I'm watching some tutorials on the net and trying to learn something; so far I did a 2D endless-run game for Android which is trash, hehe.
Well, now I'm working on a new project, but I have a problem with the script. This is the script:

using UnityEngine;
using System.Collections;

public class controladorPersonaje : MonoBehaviour {

    public float fuerzaSalto = 100f;
    public bool enSuelo = true;
    public Transform comprobadorSuelo;
    float comprobadorRadio = 0.07f;
    public LayerMask mascaraSuelo;
    private Animator animator;

    void Awake() {
        animator = GetComponent<Animator>();
    }

    // Use this for initialization
    void Start () {
    }

    void FixedUpdate() {
        enSuelo = Physics2D.OverlapCircle(comprobadorSuelo.position, comprobadorRadio, mascaraSuelo);
        animator.SetBool("isGrounded", enSuelo);
    }

    // Update is called once per frame
    void Update () {
        if (enSuelo && Input.GetTouch(0).phase == TouchPhase.Began) {
            rigidbody2D.AddForce(new Vector2(0, 200));
        }
    }
}
It allows the character to jump, but the jumps are never the same height: some are higher or lower than others. I tried changing the Input.GetTouch check to Input.GetMouseButtonDown and then the jumps are always the same, but I want the character to keep jumping while I hold the screen, not once per touch, and with Input.GetMouseButtonDown that doesn't work.

So please, if anyone knows a solution I will be very appreciative, because I want to continue with my project as soon as possible; classes are starting again and I'm bored as hell, hehe.
Edit: this code works too (an admin posted it):

if (enSuelo && Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)

but what I want is that if I hold the screen the character keeps jumping, always with the same jump height.

Thanks, and sorry for the long post.
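One plausible cause of the uneven jumps (offered as a sketch, not a confirmed fix) is that the force is applied inside Update, which runs at a variable frame rate, while the physics step runs in FixedUpdate. A hypothetical rework of the script polls input in Update, applies the force in FixedUpdate, and treats any held touch as a jump request, so holding the screen keeps the character hopping at a consistent height:

```csharp
// Hypothetical variation on the original script: poll input in Update,
// apply the force in FixedUpdate so the impulse is frame-rate independent.
using UnityEngine;

public class controladorPersonaje : MonoBehaviour
{
    public float fuerzaSalto = 200f;
    public bool enSuelo = true;
    public Transform comprobadorSuelo;
    float comprobadorRadio = 0.07f;
    public LayerMask mascaraSuelo;

    private bool saltar; // set in Update, consumed in FixedUpdate

    void Update()
    {
        // Any held touch counts, so holding the screen re-jumps on landing.
        saltar = Input.touchCount > 0;
    }

    void FixedUpdate()
    {
        enSuelo = Physics2D.OverlapCircle(comprobadorSuelo.position,
                                          comprobadorRadio, mascaraSuelo);
        if (saltar && enSuelo)
            rigidbody2D.AddForce(new Vector2(0f, fuerzaSalto));
    }
}
```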
Finance
Dear Friend,
1-) A working capital shortage can cause a growing company to go bankrupt despite lots of profitable sales. Let's talk about the current liability side of our balance sheet. How would you go about getting vendors to sell to you without demanding COD or COO (cash on order) terms? How would you approach a banker for a working capital loan? Hint: perhaps you are seeing a use for the forecasting discussions we have had in earlier weeks?
The answer to this is somewhat broad in nature. It is correct that poor planning of working capital requirements can harm or even ruin one of the most profitable companies, so proper planning, and obtaining the required amount of working capital, is of great importance. Coming to the point: one of the easiest and most flexible ways of addressing the need for working capital is to obtain a vendor line of credit. To get vendors to issue lines of credit, the company can adopt several methods. These include promising a commitment of guaranteed business for a longer term in exchange for favorable terms. The company can also offer a price somewhat higher than COD or COO terms, up to the extent of its cost of funds. Guaranteed business commitments and higher prices (limited to the cost of funds) are the most common methods used to obtain working capital from vendors.
If a vendor line of credit is not available, the second step is to approach banks for the working capital requirement. While approaching a bank, the method is slightly different. You need an exact estimate of your working capital requirements, and for this you will need your sales forecasting models in place, which will in turn help you determine the requirement. Cash flows also matter here: factors such as the credit you extend to customers and the credit you do (or do not) receive from your vendors count as well. So a bank is approached only after a detailed estimate of working capital requirements has been made using sales forecasting methods.
2-)Continued New Topic: Let's assume you have a meeting with a potential new banker. Your company has had lots of profitable sales, but each month you have problems making payroll, and some of your vendors are threatening to put you on COD. (By the way, this was a true situation I faced with a manufacturing company I sold in 2005!)How would your knowledge of your company's CCC give you a starting point for your loan request? What I am asking you to consider is what the CCC (from the other thread) tells you about your need for permanent NOWC?
CCC, the cash conversion cycle, is a great metric for understanding and estimating your working capital requirements. It helps you strategically align your manufacturing, accounts payable, accounts receivable, and customer contract negotiations. The key here is to know how long cash is tied up between paying suppliers and collecting from customers. The CCC is especially important when determining the amount of external financing you may require to grow your company, as opposed to using cash generated by the company to finance the growth.
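In symbols, the cash conversion cycle combines the inventory, receivables, and payables periods:

```latex
\mathrm{CCC}
= \underbrace{\frac{\text{Inventory}}{\text{COGS}/365}}_{\text{inventory conversion period}}
+ \underbrace{\frac{\text{Receivables}}{\text{Sales}/365}}_{\text{receivables collection period}}
- \underbrace{\frac{\text{Payables}}{\text{COGS}/365}}_{\text{payables deferral period}}
```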
To give a starting point for your loan request, you must begin by measuring the CCC and making the necessary alignments to make it as short as possible.
Coming to NOWC (net operating working capital), this is essentially operating current assets minus operating current liabilities, and the length of the CCC tells you how large a permanent NOWC balance the business must carry.

In a nutshell: while applying for working capital, one needs to determine the CCC, shorten it as much as possible, and precisely determine NOWC, so that one does not end up requesting or raising more working capital than required.
Hope this helps...Warm Regards,
Please, could you help me with this project? It is due by Saturday. Thank you.
Sales = $10,000,000; original inventory turnover ratio = 2; new inventory turnover ratio = 5.

Let us now calculate the amount of cash that will be freed up.

Inventory = Sales / Inventory turnover ratio
Old inventory value = $10,000,000 / 2 = $5,000,000
New inventory value = $10,000,000 / 5 = $2,000,000

Freed-up cash = old inventory value - new inventory value = $5,000,000 - $2,000,000 = $3,000,000
=========================================================================
2-) Receivables Investment
Medwig Corporation has a DSO of 17 days. The company averages $3,500 in credit sales each day. What is the company’s average account receivable?
Accounts receivable = DSO x daily credit sales = 17 x $3,500 = $59,500
3-) Cost of Trade Credit
What is the nominal and effective cost of trade credit under the credit terms of 3/15, net 30?
Nominal cost of trade credit = [discount % / (100 - discount %)] x [365 / (days credit is outstanding - discount period)]
= (3/97) x [365 / (30 - 15)]
= 0.03093 x 24.33
= 0.7526, i.e. about 75.26%.

The effective (compounded) cost is found the same way. As a worked example with terms of 1/15, net 45:

Periodic rate = discount % / (100 - discount %) = 1/99 = 0.01
Periods per year = 360 / (days allowed - discount period) = 360 / (45 - 15) = 12
Effective cost of trade = (1 + periodic rate)^(periods per year) - 1 = (1.01)^12 - 1 = 0.1268, or 12.68%.

What are the average accounts payable (APP)?
Average accounts payable = purchases per day x days in payment period = $500,000 x 15 = $7,500,000.
10 -) Corporate Valuation
The financial statements of Lioi Steel Fabricators are shown below—both the actual results for 2010 and the projections for 2011. Free cash flow is expected to grow at a 6% rate after 2011. The weighted average cost of capital is 11%.
Free cash flows (FCF):               Actual 2010    Projected 2011
Net operating working capital           127.20          134.90
Net plant & equipment                   375.00          397.50
Net operating capital                   502.20          532.40

Investment in operating capital (2011) = 532.40 - 502.20 = 30.20
NOPAT = EBIT x (1 - tax rate):           61.50           65.16
Free cash flow (2011) = NOPAT - investment in operating capital = 65.16 - 30.20 = 34.96

Horizon value: HV = FCF(2011) x (1 + g) / (WACC - g) = (34.96 x 1.06) / (0.11 - 0.06) = 741.15

Value of operations = PV(HV) + PV(FCF), both discounted at the WACC:
PV(HV) = 741.15 / 1.11 = 667.70
PV(FCF) = 34.96 / 1.11 = 31.50
Value of operations as of 12/31/11 = 667.70 + 31.50 = 699.20

Total company value = value of operations + non-operating assets (marketable securities)
= 699.20 + 49.90 = 749.10

Value of equity = total company value - debt - preferred stock
Debt = notes payable + long-term bonds = 210.70
Preferred stock = 35.00
Value of equity = 749.10 - 210.70 - 35.00 = 503.40

Price per share = value of equity / number of shares outstanding = 503.40 / 10 = 50.34
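The steps above follow the standard corporate valuation model, which in compact form is:

```latex
V_{\mathrm{op}} = \sum_{t=1}^{N} \frac{\mathrm{FCF}_t}{(1+\mathrm{WACC})^{t}}
 + \frac{HV_N}{(1+\mathrm{WACC})^{N}},
\qquad
HV_N = \frac{\mathrm{FCF}_N\,(1+g)}{\mathrm{WACC}-g}
```

with equity value then obtained as the value of operations plus non-operating assets, minus debt and preferred stock.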
Please, I need help with this final project; it is timed at 3 hours. Do you have time to help me now? Thank you. Please, I only have 3 hours to complete it; could you explain the longer calculation questions? Thank you.

I am connected, waiting for the final answers.
1.Which of the following statements is NOT correct?
The corporate valuation model discounts free cash flows by the required return on equity.
2. Which of the following statements is correct?
If a project with normal cash flows has an IRR greater than the WACC, the project must also have a positive NPV.
3. Calculations:
Year 1 dividend: 1.55 x 1.015 = 1.57325
Year 2 dividend: 1.57325 x 1.015 = 1.5968
Stock price after year 2: (1.5968 x 1.08) / (0.12 - 0.08) = 43.11

The current stock price is these amounts discounted back to the present at the 0.12 required rate of return:

1.57325/1.12 + 1.5968/1.12^2 + 43.11/1.12^2 = $37.05
Current month sales collected: $3,000 x 40% x (100% - 2%) = $1,176
Add: prior month sales collected: $3,000 x 60% = $1,800
Less: purchases: $1,500
Less: other expenses: $700
= $776 average cash gain during a typical month.
5. ?%
Answer = d. $119.9 Calculations:
Last year's sales = S0 = $350
Sales growth rate = g = 30%
Forecasted sales = S1 = S0(1 + g) = $455
Change in sales = dS = S1 - S0 = S0 x g = $105
Last year's total assets = A0 = A* (operating at full capacity) = $500
Forecasted total assets = A1 = A0(1 + g) = $650
Last year's accounts payable = $40
Last year's notes payable = $50 (not spontaneous, so it does not enter the AFN calculation)
Last year's accruals = $30
L* = payables + accruals = $70
Profit margin = M = 5.0%
Target payout ratio = 60.0%; retention ratio = 1 - payout = 40.0%

AFN = (A*/S0) dS - (L*/S0) dS - M x S1 x (1 - payout)
= $150 - $21 - $9.1 = $119.9
6.The Dewey Corporation has the following data, in thousands. Assuming a 365-day year, what is the firm's cash conversion cycle?
Annual sales = $45,000
Annual cost of goods sold = $31,500
Inventory = $4,000
Accounts receivable = $2,000
Accounts payable = $2,400
d. 35 days
Calculations:
Sales $45,000.00
COGS $31,500.00
Inventories $4,000.00
Receivables $2,000.00
Payables $2,400.00
Days/year 365
Actual CCC = (Inventory ÷ COGS/365) + (Receivables ÷ Sales/365) – (Payables ÷ COGS/365)
= $4000 ÷ ($31500/365) + $2000 ÷ ($45000/365) - $2400 ÷ ($31500/365) = 46.34921 + 16.22222 - 27.80952
= 46.3 + 16.2 - 27.8
Actual CCC = 34.8
7.)
d. 23.45%. EAR = (1 + 2/98)^(365/35) - 1 = 1.2345 - 1 = 0.2345 = 23.45%.
8)?
e. 8.79%. Calculations:

First find the before-tax cost of debt: N = 20, PMT = 80, FV = 1000, PV = -1050, which gives I = 7.51%.
After-tax cost of debt = 7.51% x 60% = 4.51%.
Cost of equity = 4.5% + 1.2 x (5.5%) = 11.1%.
WACC = 35% x 4.51% + 65% x 11.1% = 1.58% + 7.22% = 8.79%.
9.?
b. $167. Calculations:

FCF1 = -$10M, FCF2 = $20M. We also need the terminal value, sometimes called the horizon value (the value after which the cash flows grow at a constant rate). FCF3 = FCF2 x (1 + g) = 20 x 1.04 = $20.8M, so the terminal value at the end of year 2 is:

TV = 20.8 / (0.14 - 0.04) = $208M

Now discount each cash flow at the WACC, adding the terminal value to the year-2 cash flow:

Value = -10/1.14 + (20 + 208)/1.14^2 = -8.77 + 175.44 = $166.7M, which rounds to $167 million.
10.?
c. $28.40. Calculations:

Total market value = value of operations + non-operating assets = $900 + $30 = $930 million.
Market value of equity = total market value - (value of debt + preferred stock) = $930 - ($90 + $20 + $110) = $710 million.
Price per share = market value of equity / number of shares = $710 / 25 = $28.40.
= $710 / 25 = $28.4., Please Could you help me with this Project, Thank you.
Let us first put down the figures that are given.
Net income increased by 8 percent: $15,000,000 x 1.08 = $16,200,000

Total dividend payout in dollars: $3,000,000 x 1.08 = $3,240,000
2. What is the 2012 dividend payout ratio if the company increases its dividends at 8%?
The 2012 payout ratio is $3,240,000 / $16,200,000 = 0.20, i.e. 20%.
3. If the company follows a residual dividend policy, and maintains its 35% Debt level in its capital structure, and invests in the $12.0 Million capital budget in 2012, what would be the Residual Dividend level (in Dollars) in 2012? What would be this Residual Dividends payout ratio?
Residual dividend level (in dollars):

Equity portion of the capital budget = 0.65 x $12,000,000 = $7,800,000
Residual dividends = $16,200,000 - $7,800,000 = $8,400,000

The residual dividend payout ratio:

$8,400,000 / $16,200,000 = 0.52, i.e. about 52%.
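Restating the residual computation above in one line:

```latex
\text{Dividends} = \text{Net income} - w_{e}\times(\text{Capital budget})
= \$16.2\text{M} - 0.65 \times \$12\text{M} = \$8.4\text{M}
```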
4. How much additional capital (Debt and/or Equity) will the company have to raise from outside sources in 2012 if it invests in this capital project, and follows a residual dividend policy?
Additional capital needed = debt portion = 0.35 x $12,000,000 = $4,200,000

(equivalently, $12,000,000 - $7,800,000 = $4,200,000)
5. What would be the prudent dividend policy for 2012?: Pay dividends at the current dividend growth rate of 8%, or pay the residual dividend amount.
It would be prudent to pay the dividend at the current growth rate of 8%, since the residual payout (about 52%) would be an unsustainable jump from the current 20% payout, and investors value stable dividends.
----------------------------------
Maese Industries Inc. has warrants outstanding that permit the holders to purchase 1 share of stock per warrant at a price of $25.
a. Expiration value = current price - striking price (or zero if negative).

Current price   Striking price   Expiration value
$20             $25              -$5, i.e. $0
$25             $25              $0
$30             $25              $5
$100            $25              $75
b. V(Package) = $1,000 = V(B) + 50($3), so V(B) = $1,000 - $150 = $850.

With a 20-year maturity and a 12% required return on straight debt:

$850 = I x PVIFA(12%, 20) + $1,000 x PVIF(12%, 20)
$850 = I(7.4694) + $1,000(0.1037) = I(7.4694) + $103.70
$746.30 = I(7.4694)
I = $746.30 / 7.4694 = $99.91, approximately $100.
Therefore, the company would set a coupon interest rate of 10 percent, producing an annual interest payment I = $100.

Good morning, please could you help me with this assignment? It is due this coming Saturday, 09/15/2012.
Please, I need help with a homework assignment. Once I open it I cannot close it until I finish; the time limit is two hours. Thank you. Please, I need great help: could you give me the answers and explanations for these 5 questions? I have two hours at maximum. Thank you.

I only have 43 minutes left.
1 Which of the following statements concerning common stock and the investment banking process is NOT CORRECT?
.
Explanation: This cannot be called a tender offer. A tender offer is generally a takeover bid in the form of a public invitation to shareholders to sell their stock, usually at a price above the market price.
2 Europa Corporation is financing an ongoing construction project. The firm will need $5,000,000 of new capital during each of the next three years. The firm has a choice of issuing new debt or equity each year as the funds are needed, or issue only debt now and equity later. Its target capital structure is 40 percent debt and 60 percent equity, and it wants to be at that structure in three years, when the project has been completed. Debt flotation costs for a single debt issue would be 1.6 percent of the gross debt proceeds. Yearly flotation costs for three separate issues of debt would be 3.0 percent of the gross amount. Ignoring time value effects, how much would the firm save by raising all of the debt now, in a single issue, rather than in three separate issues?(c) $88,006
Explanation: Debt needed = $15 million x 40% debt fraction = $6 million in net proceeds.

Single debt issue: gross proceeds needed = $6,000,000 / (1 - flotation cost)
= 6,000,000 / (1 - 0.016) = 6,000,000 / 0.984 = $6,097,561

Three separate issues: gross proceeds needed = $6,000,000 / (1 - flotation cost)
= 6,000,000 / (1 - 0.03) = 6,000,000 / 0.97 = $6,185,567

Therefore, net savings = 6,185,567 - 6,097,561 = $88,006
3 New York Waste (NYW) is considering refunding a $50,000,000, annual payment, 14 percent coupon, 30-year bond issue that was issued five years ago. It has been amortizing $3 million of flotation costs on these bonds over their 30-year life. The company could sell a new issue of 25-year bonds at an annual interest rate of 11.67 percent in today's market. A call premium of 14 percent would be required to retire the old bonds, and flotation costs on the new issue would amount to $3 million. NYW's marginal tax rate is 40 percent. The new bonds would be issued when the old bonds are called.What will the after-tax annual interest savings for NYW be if the refunding takes place?(c) $768,900
4. (TCO E) Which of the following statements is most CORRECT? . (Points : 20)
5 (TCO E) Sutton Corporation, which has a zero tax rate due to tax loss carry-forwards, is considering a 5-year, $6,000,000 bank loan to finance service equipment. The loan has an interest rate of 10 percent and would be amortized over five years, with five end-of-year payments. Sutton can also lease the equipment for 5 end-of-year payments of $1,790,000 each. How much larger or smaller is the bank loan payment than the lease payment? Note: Subtract the loan payment from the lease payment.(c) $207,215
Using Excel: PMT(10%, 5, 6000000, 0) = $1,582,785 annual loan payment.

Lease payment - loan payment = $1,790,000 - $1,582,785 = $207,215
Please could you help me with this homework? Thank you.
No. 1 - Exchange rates
If one U.S. dollar buys 1.64 Canadian dollars, how many U.S. dollars can you purchase for one Canadian dollar?
1/1.64 = 0.61
No. 2 - Currency Appreciation
Suppose 144 yen could be purchased in the foreign exchange market for one U.S. dollar today. If the yen depreciates by 8.0% tomorrow, how many yen could one U.S. dollar buy tomorrow?
144*1.08 = 155.52
No. 3 - Eurobonds versus domestic bonds?
9% / (1 - 0.28) = 12.5%
No. 4 – Credit & Exchange Rate Risk
Suppose DeGraw Corporation, a U.S. exporter, sold a solar heating station to a Japanese customer at a price of 143?
143.5 / 154.4 = $0.9294 million
No. 5 – Forward Market Hedge.50?
Actual payment to be made = 200,000 x 5.5 = 1.1 million pesos.
Cost if bought forward = 1,100,000 / 5.45 = $201,835
Cost at the spot rate in 90 days = 1,100,000 / 5.3 = $207,547
Saving in dollars = 207,547 - 201,835 = $5,712
No. 6 :Relate Purchasing Power Parity issues to East Asian currencies vs. the U.S. Dollar.
Purchasing power parity (PPP) can be defined as an economic theory that estimates the adjustment needed in the exchange rate between two countries in order for the exchange to be equivalent to each currency's purchasing power.

Having said this, the PPP gap between the USD and the East Asian currencies has widened. The purchasing power of the East Asian countries has decreased, owing to depreciation against the US dollar that is greater than the dollar's depreciation against other currencies.
No. 7: Relate Purchasing Power Parity issues to Central and South America currencies vs. the U.S. Dollar.
Having said this, the PPP relationship of the USD vis-à-vis South American currencies has been a little different. The exchange rates have been fairly constant and, in some cases, slightly stronger than the USD, so PPP in these areas has held more or less in line with the USD.
Please, I need to do the last project. It will be 12 questions with their respective explanations, and I will have 3 hours. Please could you help me with this project? Once I open it I must finish within 3 hours. Thank you. Could you help me?

Please, I need a full explanation for every single question. I have 2 hours and 50 minutes left. Thank you, I really appreciate your help.
1-)Which of the following statements concerning the MM extension with growth is NOT CORRECT?(d) For a given D/S, the WACC is less than the WACC under MM's original (with tax) assumptions.
2. Which of the following is generally NOT true and an advantage of going public?(e) Makes it easier for owner-managers to engage in profitable self-dealings.
3.? (e) $6,972
4-). Suppose hockey skates sell in Canada for 105 Canadian dollars, and 1 Canadian dollar equals 0.71 U.S. dollars. If purchasing power parity (PPP) holds, what is the price of hockey skates in the United States?(c) $74.55
5-)?(d) $785,714
6-)?(d) 7.88%
7-) Which of the following statements is CORRECT, holding other things constant?(e) An increase in the corporate tax rate is likely to encourage a company to use more debt in its capital structure.
8-)Which of the following statements is most CORRECT? . (e) To a large extent, the decision to dissolve a firm through liquidation versus keeping it alive through reorganization depends on a determination of the value of the firm if it is rehabilitated versus the value of its assets if they are sold off individually.
9-)?
(c) $10,250
10-) Which of the following statements about valuing a firm using the APV approach is most CORRECT?(c) The value of operations is calculated by discounting the horizon value, the tax shields, and the free cash flows before the horizon date at the unlevered cost of equity.
11-) Call options on XYZ Corporation's common stock trade in the market. Which of the following statements is most correct, holding other things constant?(a) The price of these call options is likely to rise if XYZ's stock price rises.
12-) A swap is a method used to reduce financial risk. Which of the following statements about swaps, if any, is NOT CORRECT?(d) A problem with swaps is that no standardized contracts exist, which has prevented the development of a secondary market.
Yes, it helped, but I need the explanations or calculations showing how you arrived at every single answer. Please, I only have 1 hour 30 minutes left. Without the working I cannot justify the answers, and some questions are incomplete.
Please could you help me with a homework assignment I have? I need every single question explained. Also, please explain the scenario-based last question. Thank you.
(TCO A) Which of the following is not considered a derivative type of security: (Points : 5)
Common stock
2. (TCO A) Of the following investments which is considered a cash equivalent? (Points : 5)
Money market funds
3. (TCO A) Of the following derivative securities which one provides the contract holder the right to sell a stock to the contract seller at a specific price? (Points : 5)
Put option
4. (TCO A) When a company directly issues short term debt the security is called: (Points : 5)
Commercial Paper
5. (TCO A) If you contribute $400 to your company’s 401K program and you are in the 25% tax bracket how much is your take home income reduced? (Points : 5)
$300. Since you are in the 25% tax bracket, your take-home pay is reduced by only 75% of the contributed amount: $400 x 0.75 = $300.
The last question I meant was this scenario, not the last question of the multiple choice.

This is the question (there is no length requirement or other specific restriction):
If you are a 65-year-old investor with the following demographics: retired but you do some part-time consulting, you own your home, kids are grown, you are married, you and your spouse are on medicare, you have $2,000 a month disposable income, and a portfolio of $250,000. What are your likely investment objectives and what constraints could there be that would keep you from accomplishing these objectives?
Please could you help me out with one activity? Thank you. Here are the questions: the first two are short-answer questions and the rest are multiple choice.
Dear Friend, here are your answers. I hope this helps.
1-) Which markets are showing the least correlation and could be good candidates for delivering international diversification to a US investor?
Presuming this pertains to international financial and capital markets, the markets of Hong Kong and the rest of the Far East show relatively low correlation with the US markets. Although stocks in these markets (the Far East in general and Hong Kong in particular) are more volatile than US stocks, their systematic component of risk is relatively low because of the low correlation with the U.S. market. The net result is that the systematic risk (beta) of the average Hong Kong or Far East (e.g., Singapore) stock from a U.S. perspective is only 0.85, compared with a beta of 1.0 for the average U.S. stock. In other words, diversifying into Hong Kong and Far East stocks will reduce the riskiness of a portfolio currently concentrated in U.S. stocks.
2-) Which markets would you select to achieve the desired international diversification with the US?
We would choose financial markets to achieve the desired level of international diversification. This is because the amounts invested can easily be measured, and the diversification can be measured and controlled using scientific tools such as beta. All U.S. companies are more or less subject to the same cyclical economic fluctuations, whereas foreign securities involve claims on economies whose cycles are not perfectly in phase with the U.S. economic cycle. Thus, just as movements in different stocks partially offset one another in an all-U.S. portfolio, movements in U.S. and non-U.S. stocks also cancel each other out somewhat.
3. (TCO D) If you purchase a three month $10,000 T-Bill for $9,800, what is the actual return for the three months? (Points : 5)
8% (the $200 gain on a $9,800 purchase is about 2.04% for the three months, or roughly 8% annualized)
4. (TCO D) The price of a stock at the beginning is $40.00 and ends at $45.00. If the stock paid a dividend of $3.00, what is the holding period return for a year? (Points : 5)
20%: holding period return = (45 - 40 + 3) / 40 = 8/40 = 20%
5. (TCO I) An investment has an expected return of 15 percent, a standard deviation of 30 percent, and the risk free rate is five percent. What is the reward-to-variability ratio? (Points : 5)
33%: reward-to-variability (Sharpe) ratio = (15% - 5%) / 30% = 0.33
6. (TCO I) Calculate the risk free rate in the CAPM model for the following data: expected return on the market 20 percent, expected return on a stock 25 percent, and the stock beta is 1.5. (Points : 5)
10%
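Working shown for the CAPM question: substitute into r = r_f + beta(r_M - r_f) and solve for the risk-free rate:

```latex
25 = r_f + 1.5\,(20 - r_f)
\;\Longrightarrow\; 25 = 30 - 0.5\,r_f
\;\Longrightarrow\; r_f = 10\%
```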
7. (TCO I) The holding period return on a stock is equal to _________. (Points : 5)
the capital gain yield over the period plus the dividend yield
Warm Regards,
Please, I need help with homework: the first 2 questions are short analysis questions and the rest are multiple choice. Thank you.
1-) Are the markets efficient? If the markets were completely efficient, how would you explain the dot-com bubble of the late 1990s and the subsequent bear market? Compare and contrast this episode with the current housing market.
The Efficient Market Hypothesis, popularly known as EMH, maintains that all stocks are fairly priced according to their inherent investment properties, knowledge of which all participants possess equally. The rationale is that when an unexploited profit opportunity arises on a security (so called because, on average, people would be earning more than they should, given the characteristics of that security), investors rush to buy until the price rises to the point that returns are normal again.
However, much evidence has been found against market efficiency, and some of it helps explain the dot-com bubble and the subsequent bear market. One such factor is market overreaction: recent research suggests that stock prices may overreact to news announcements and that the pricing errors are corrected only slowly. When corporations announce a major change in earnings (say, a large decline), the stock price may overshoot, and after an initial large drop it may climb back to more normal levels over a period of several weeks. If we contrast this with the current housing market, the fall in prices after the Lehman crisis was sudden, but the recovery has been very slow and gradual.

A second piece of evidence that explains this is excessive volatility: fluctuations in stock prices may be much greater than is warranted by fluctuations in fundamental value. This is exactly what happened with dot-com stocks. Extreme volatility led to exorbitant prices far beyond what their fundamental valuations supported, which then led to a sudden crash. Contrasting this with the current housing market: housing assets and their lenders were excessively valued, leading to overvaluation and then a sudden decline in prices.
2-) As an investment advisor, you have done your due diligence and determined that a stock is undervalued and you want to buy it. Now you can analyze the stock price using technical analysis. Technical analysis is based on the assumption that markets are driven more by psychological factors than by fundamental values. Behavior finance, or market psychology, asserts that history and patterns tend to repeat. Numerous indicators are used in technical analysis. Go to a technical analysis website such as and access the descriptions for "MACD," "Call/Put Ratio," "TRIN," and "Support/Resistance." Are these valuable tools?Look at the company's charts that you are tracking for this course. Are there patterns that are predictive?
YES… All these are very valuable tools. MACD – i.e. Moving Average Convergence Divergence, is one of the most useful Momentum Indicators. The MACD turns two trend-following indicators, moving averages, into a momentum oscillator by subtracting the longer moving average from the shorter moving average. As a result, the MACD offers the best of both worlds: trend following and momentum. The interpretation is that if the MACD trades above the signal line, it is a bullish sign and the stock is expected to show momentum on the upside.
Call/Put Ratio – often known as the PCR, this is the total number of puts traded relative to calls. The put/call ratio is an indicator that shows put volume relative to call volume. Put options are used to hedge against market weakness or bet on a decline; call options are used to hedge against market strength or bet on an advance. The higher the ratio of put trades to calls, the more resistance would be seen at a given strike price. This helps determine potential topping or bottoming out of the markets.
TRIN – this is a breadth indicator, essentially an oscillator, that identifies oversold or overbought conditions in the markets, greatly helping to determine entry or exit points.
Support and Resistance – These are the points where supply and demand meet. Supply is synonymous with bearish, bears and selling. Demand is synonymous with bullish, bulls and buying. When supply and demand are equal, prices move sideways as bulls and bears slug it out for control. Support is the price level at which demand is thought to be strong enough to prevent the price from declining further. Similarly, Resistance is the price level at which selling is thought to be strong enough to prevent the price from rising further. Support and resistance are like mirror images and have many common characteristics.
3-) The weak form of the EMH states that ________ must be reflected in the current stock price.
all publicly available information
4-) The tendency when the ______ performing stocks in one period are the best performers in the next, and the current ________ performers are lagging the market later is called the reversal effect.
worst, best
5-) If you believe in the __________ form of the EMH, you believe that stock prices reflect all publicly available information but not information that is available only to insiders.
semi-strong
6-) Choosing stocks by searching for predictable patterns in stock prices is called ________.
technical analysis
7-) Behavioralists point out that even if market prices are ____________ there may be _______________.
distorted; fundamental efficiency
Please, in the homework that you helped me with yesterday, the answers to two of the questions were wrong: question 1 and question 5.
1-) The weak form of the EMH states that ________ must be reflected in the current stock price.
all past information including security price and volume data all publicly available information all information including inside information all costless information
2-) The tendency when the ______ performing stocks in one period are the best performers in the next, and the current ________ performers are lagging the market later is called the reversal effect.
worst, best worst, worst best, worst best, best
3-) If you believe in the __________ form of the EMH, you believe that stock prices reflect all publicly available information but not information that is available only to insiders.
semi-strong strong weak perfect
4-) Choosing stocks by searching for predictable patterns in stock prices is called ________.
fundamental analysis technical analysis index management random walk investing
5-) Behavioralists point out that even if market prices are ____________ there may be _______________.
distorted; limited arbitrage opportunities distorted; fundamental efficiency allocationally efficient; limitless arbitrage opportunities distorted; allocational efficiency
The professor gave me a second chance with another homework assignment. Could you help me out with this new assignment?
I could not get access to the new question. I will post it later. Thank you kindly.
Please could you help me out with a 3-question homework? It is just your own well-reasoned opinion.
1-) If we begin with the notion that investors do not process information correctly, then what information is represented by market prices? As we know, this price information represents the current level of market participants' expectations regarding future prices. If this current level of expectations is flawed, it will still be reflected in the market prices of equities, irrespective of how poorly investors processed the information; the prices will still reflect those expectations, whether they are flawed or not. And the markets will be changing to reflect changes in these expectations, won't they?
2-) How sensitive is this industry,
Aggregate demand (AD) is the total demand by domestic and foreign participants for an economy's scarce resources, less the demand by domestic participants for resources from abroad. This industry is extremely sensitive to,
Please could you help me out with a project???
Please I only have two hour.., please could you explain the question,, thank you ... I only have two hour left,, This is the project, i Have only 1 hour left,, please could you explain the question??
1. (TCO D) Find the required return for a stock, given that the current dividend is $4.45 per share, the dividend growth rate is 6.5 percent, and the stock price is $101.00 per share. (Points : 5)
10.91%
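The 10.91% comes from rearranging the constant-growth model into dividend yield plus growth, applying the current dividend directly as the marked answer does; a quick sketch:

```python
# Required return = dividend / price + growth rate
dividend = 4.45      # current dividend per share
price = 101.00       # current stock price
growth = 0.065       # dividend growth rate

required_return = dividend / price + growth
print(f"{required_return:.2%}")  # 10.91%
```

(A stricter treatment would grow the dividend one period first, which gives a slightly higher figure.)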
2. (TCO D) Find the next dividend on a stock given that the required return is 9.78 percent, the dividend growth rate is 7.77 percent, and the stock price is $94.89 per share. (Points : 5)
$1.91
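This follows from rearranging P = D1 / (r - g) into D1 = P * (r - g); a quick check:

```python
price = 94.89
required = 0.0978
growth = 0.0777

next_dividend = price * (required - growth)  # D1 = P * (r - g)
print(f"${next_dividend:.2f}")  # $1.91
```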
3. (TCO D) A company has cash of $500, accounts receivable of $200, and inventory of $400. The company also has current liabilities of: accounts payable $300 and notes payable $600. What is the company's current ratio? (Points : 5)
1.22
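The current ratio is simply current assets divided by current liabilities; the given figures check out:

```python
current_assets = 500 + 200 + 400     # cash + accounts receivable + inventory
current_liabilities = 300 + 600      # accounts payable + notes payable

current_ratio = current_assets / current_liabilities
print(round(current_ratio, 2))  # 1.22
```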
4. (TCO B) Behavioralists point out that even if market prices are ____________ there may be _______________. (Points : 5)
distorted; limited arbitrage opportunities
5. (TCO B) You can earn abnormal returns on your investments via macro forecasting ______. (Points : 5)
if you can forecast the economy better than the average forecaster
6. (TCO A) _____ is considered to be an emerging market country. (Points : 5)
Brazil? (Points : 5)
$1,690,000
8. (TCO A) You earn six percent on your corporate bond portfolio this year, and you are in a 25 percent federal tax bracket and an eight percent state tax bracket. Your after tax return is _____. (Assume that federal taxes are not deductible against state taxes and vice versa). (Points : 5)
4.14%
9. (TCO I) CAPM is one of the more popular models for determining the risk premium on a stock. If the Expected Return on the Market Portfolio is 9.10%, the Risk-Free Rate is 2.0%, and the Beta for Stock i is 0.9. Find the Expected Return on the Stock using the CAPM model. Show your work. (Points : 34)
= 2 + 0.90 (9.1 - 2)
= 2 + 6.39
= 8.39%
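As a sanity check, the CAPM arithmetic can be scripted; note that 0.90 * (9.1% - 2%) = 6.39%, so the expected return works out to 8.39%:

```python
risk_free = 0.02
market_return = 0.091
beta = 0.90

# CAPM: E(Ri) = Rf + beta * (E(Rm) - Rf)
expected_return = risk_free + beta * (market_return - risk_free)
print(f"{expected_return:.2%}")  # 8.39%
```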
10. (TCO D) XYZ company paid a dividend of $4.66 in the past 12 months. The annual dividend growth rate is 6.93 percent, and the required rate of return on the stock is 10.25 percent. Calculate the current price of the stock. Do not use a financial calculator or an online calculator. You must show your work. (Points : 34)
Stock Price = Next Dividend (D1) / (Required Return (R) - Dividend Growth Rate (G))
D1 = D0 x (1 + G) = 4.66 x 1.0693 = 4.9829
Stock Price = 4.9829 / (0.1025 - 0.0693)
= 4.9829 / 0.0332
= $150.09 (approximately)
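A short sketch of the same model, assuming the $4.66 is the dividend just paid (D0), so that next year's dividend is D0 grown one period; on that assumption the price comes to about $150, somewhat higher than the figure obtained from the unadjusted dividend:

```python
d0 = 4.66          # dividend paid over the past 12 months
growth = 0.0693
required = 0.1025

d1 = d0 * (1 + growth)            # next year's expected dividend
price = d1 / (required - growth)  # P0 = D1 / (r - g)
print(f"${price:.2f}")  # $150.09
```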
12. (TCO E) In the past 10 years, Behavioral Finance has begun to explain the qualitative side of market movements and investor decisions. Explain the concept and the value it can provide to the investment markets. (Points : 34)
It is fairly correct to say that behavioural finance has begun to explain the qualitative side of market movements and investor decisions. Behavioural finance is the study of the influence of psychology on the behaviour of financial practitioners and the subsequent effect on markets. Behavioural finance is of interest because it helps explain why and how markets might be inefficient. It is a relatively new field that seeks to combine behavioural and cognitive psychological theory with conventional economics and finance to provide explanations for why people make irrational financial decisions.
The key concepts cover anchoring, mental accounting, confirmation and hindsight bias, herd behavior, overconfidence, overreaction and availability bias, prospect theory, etc.
It also offers tremendous value to the investment markets. Behavioural finance is becoming increasingly popular as an approach to capital investments. But what lies behind it, and how can it be used by advisers to benefit clients?
The basic premise of behavioural finance is that investor behaviour often exhibits "anomalies", that is, behavioural patterns or tendencies that have no rational explanation. This is reflected in investors' decisions and in the prices of securities on the stock exchanges. There are no purely rational investment decisions that, as the rational-markets theory would have it, lead to efficient markets. On the contrary, there are always inefficiencies.
It adds value to the investment markets as it teaches investors to avoid leverage and to diversify their trading. Following behavioural finance, an investor presses for more study of management and fundamentals, and it makes him seek contrary opinions. He would not be guided solely by historical prices.
13. (TCO B) Although the Efficient Markets Hypothesis is a popular theory, there are several limitations. Identify and explain two of those limitations. (Points : 34)
The Efficient Market Hypothesis is popularly known as EMH.
Having said this, there are several limitations as well. The basic limitation is that it rests on weak foundations: the effectiveness of the hypothesis depends upon the validity of at least one of three conditions, namely rational investment decisions, independent deviations from rationality, and arbitrage. In practice, none of these three conditions holds.
For example, let us take two of these points, i.e., rationality and independent deviations from rationality.
Rationality: All investors in the market should be rational. When relevant information is released in the market by a firm, all investors will adjust their estimates of stock prices of the firm in a rational way. E.g., the relevant information could be the announcement of new product development by a firm.
Independent deviations from rationality: Deviations from rationality are not random; thus they are not likely to cancel out across a whole population of investors. It is not uncommon in the marketplace for investors to overvalue an upcoming new sector like the internet, information technology, or biotechnology. In these cases, investors believe that the current performance of these sectors is representative of their future performance. This behavior of representativeness leads to bubbles in the markets, which is not explained by the efficient market hypothesis. On the other hand, there are some conservative investors who are too sluggish to adjust to new information. This results in a slow process of price adjustment to new relevant information, which is again against the concept of the efficient market hypothesis.
Please could you help me with a project? Thank you. In the last 5 questions I need your own opinion, well explained. Thank you.
1. When discussing bonds, convexity relates to the _______. (Points : 5)
shape of the bond price curve with respect to interest rates
2. A __________ bond is a bond where the bondholder has the right to cash in the bond before maturity at a specific price after a specific date. (Points : 5)
puttable
3. A pension fund has an average duration of its liabilities equal to 15 years. The fund is looking at 5-year maturity zero coupon bonds and four percent yield perpetuities to immunize its interest rate risk. How much of its portfolio should it allocate to the zero coupon bonds to immunize, if there are no other assets funding the plan? (Points : 5)
33%
4. The duration of a portfolio of bonds can be calculated as _______________. (Points : 5)
the value-weighted average of the durations of the individual bonds in the portfolio
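The weighted-average rule also lets one back out the immunizing weight asked about in question 3; a sketch, assuming the duration of a level perpetuity is (1 + y) / y, i.e. 26 years at a 4 percent yield:

```python
d_zero = 5.0              # duration of a 5-year zero-coupon bond
y = 0.04
d_perp = (1 + y) / y      # duration of a 4% perpetuity: 26 years
target = 15.0             # duration of the liabilities

# Solve w * d_zero + (1 - w) * d_perp = target for w
w = (d_perp - target) / (d_perp - d_zero)
print(f"{w:.1%} in the zero-coupon bond")  # 52.4% in the zero-coupon bond
```

This works out to 11/21, roughly 52%, rather than the 33% given above, which is consistent with the later note that question 3 was marked wrong.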
5. Find the Yield-to-Maturity on a Semiannual Coupon Bond with a Price of $974, a Face Value of $1000, an Annualized Coupon Rate of 8.7 percent, and four years remaining until Maturity. (Points : 5)
9.5%
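The 9.5% can be verified by pricing the bond as eight semiannual $43.50 coupons plus the face value, and bisecting for the yield that gives a $974 price; a sketch:

```python
def bond_price(annual_ytm, face=1000.0, coupon_rate=0.087, years=4, freq=2):
    """Present value of a level-coupon bond at the given annualized yield."""
    r = annual_ytm / freq
    c = face * coupon_rate / freq
    n = years * freq
    return sum(c / (1 + r) ** t for t in range(1, n + 1)) + face / (1 + r) ** n

# Bisection: price falls as yield rises
lo, hi = 1e-6, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if bond_price(mid) > 974.0:
        lo = mid   # price too high, so the yield must be higher
    else:
        hi = mid

print(f"{mid:.1%}")  # 9.5%
```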
6. Did anybody see indications as to what the lag is between major trends in PPI and CPI?
Yes, it can rightly be said that there is a lag between major trends in PPI and CPI. There are reasons that can be attributed to it.
7. Consumer credit has started growing again. Do you think we may return to the reckless days of pre-crisis household debt-to-equity ratios?
Yes, it can definitely be said that consumer credit has started growing again. However, in the same breath, it cannot be said that we may return to the reckless days of pre-crisis household debt-to-equity ratios. There is solid logic and reasoning behind this.
In fact, growing consumer credit is a good sign of an expanding and improving economy. Since the crisis, lenders have come up with better and stricter credit appraisals, better eligibility criteria, and far more elaborate screening processes. This has resulted in better asset quality for the lenders. So consumer credit growth is not occurring alone but is accompanied by better asset quality, which is certainly a good sign and a positive factor. It simply cannot be said that mere growth in consumer credit will take us back to the credit-crisis period.
8. Would you be using the principle of portfolio immunization and adjusting the portfolio duration to match your target period?
The principle of portfolio immunization is an investment strategy used to minimize the interest rate risk of investments by adjusting the portfolio duration to match the investor's investment time horizon. It does this by locking in a fixed rate of return during the amount of time an investor plans to keep the investment without cashing it in.
Yes, one would certainly use this method/strategy, because it helps create a portfolio that is duration-matched, with the added constraint that it be cash-matched in the first few years.
9. Say, if you were to invest into bonds, what kind of indexes would you be looking at?
If one were to invest in bonds, one would look at a US corporate bond index such as the Dow Jones Corporate Bond Index. The Dow Jones Corporate Bond Index is an equally weighted basket of 96 recently issued investment-grade corporate bonds with laddered maturities. The index intends to measure the return of readily tradable, high-grade U.S. corporate bonds. It is priced daily.
10. If you were planning investing into bonds, what maturities or what average duration would you be focusing on?
The average maturity would depend on the investment horizon. It is normally calculated by adding together the times to maturity of the holdings and dividing by the number of debt securities in the mutual fund. The shorter the average maturity, the less the fund's share price will fluctuate with changes in interest rates. The average maturity that one would look for here would be 20 years.
Question number 3 was marked wrong. Could you fix that issue?
Please could you help me out with this project? The two open questions must be your opinion.
Thank __________.
LEAPS.
cash; actual,
You may select a specific strategy for your example. underlying a stock.
Could you help me out with a homework assignment?
Thank you, ***** *****. It is due tomorrow. The homework is 4 open questions; there is no length requirement, but quality is a must. It is based on your opinion. This is the link:
Thank you.
Here youunderlying astock.
This is not the question that I asked, the link is :,, thank you
1-) I would suggest applying the Sharpe ratio and the alpha coefficient in the specific setting of a fund, not as a general definition.
The Sharpe ratio, derived by William Sharpe in 1966, measures how much excess return you are receiving for the extra volatility that you endure for holding a riskier asset. It is worked out as:
S(x)=(Rx-Rf) / StdDev(x)
Where:
X is the investment, Rx is the average rate of return, and Rf is the risk-free rate of return
StdDev is the Standard Deviation
Let us try to put this into a specific example and see how it works.
Assume portfolio A had, or is expected to have, a 10% rate of return with a standard deviation of 0.10. In the United States, US Treasury bills are often used as the benchmark for risk-free interest rates. During the 20th century, Treasury bills averaged a return of about 0.9%. In that case, R would be 0.10, Rf would be 0.009, and s would be 0.10. The equation would be set up to read (0.10 – 0.009)/0.10, which calculates to 0.91. In other words, the Sharpe ratio for portfolio A would be 0.91.
If portfolio B shows more variability than portfolio A but has the same rate of return, it will have a greater standard deviation but the same R. Assuming the standard deviation for portfolio B is 0.15, the equation would read (0.10 – 0.009) / 0.15. The Sharpe ratio for portfolio B would be 0.61, so portfolio B has a lower Sharpe ratio relative to portfolio A. This is not a surprising result, considering that both investments offered the same return but B carried greater risk. Obviously, the one with less risk but the same return would be the preferred option.
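The two worked examples can be reproduced in a couple of lines:

```python
def sharpe(avg_return, risk_free, stdev):
    return (avg_return - risk_free) / stdev

portfolio_a = sharpe(0.10, 0.009, 0.10)   # lower volatility
portfolio_b = sharpe(0.10, 0.009, 0.15)   # same return, higher volatility
print(round(portfolio_a, 2), round(portfolio_b, 2))  # 0.91 0.61
```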
Let us now discuss the alpha coefficient. This is a measure of the difference between an investment's actual returns and its expected return, given its level of risk as measured by beta. Beta is a measure of the volatility, or systematic risk, of an investment. It is the component of risk that is correlated with market movements and that is not eliminated through diversification. Let us again see this in an example:
As an example, consider the 3-year statistics for a small-cap growth fund. If alpha is calculated using the S&P 500 as the market measure, the fund has a high beta (1.53) and a negative alpha (-2.86%). Based on this you would
TABLE 1
             S&P 500   Russell 2000 Growth
R-squared         70                    89
Beta            1.53                  0.80
Alpha          -2.86                  2.36
conclude this is a volatile fund and the manager has actually destroyed value. Investors would be better off in a passive S&P 500 index fund or ETF than in this fund. But this is a small-cap fund with a significantly higher R-squared to the Russell 2000 Growth Index. Against that benchmark, the fund's alpha is a positive 2.36% and the beta is actually below that of the index. This suggests the manager has added value with below-average volatility, the exact opposite of the initial conclusion.
Which is correct? They both are. They are both accurate applications of Equations 1 – 3, simply using different values for the market return (Rm).
======================================================
2-) I was looking for the use of alternative methods of evaluating mutual fund performance management. For instance, Global Investor (Oct 2004) discusses the strength and the weaknesses of the Sharpe ratio by stating as follows: "As a robust indicator, which does not depend on any choice of benchmark, the Sharpe ratio is considered to be an absolute portfolio performance measure, but this strength probably leads to its principal weakness. The fund with the best Sharpe ratio is not necessarily the one that performed best in comparison with the precise risk. The concern to relativise the performance with regard to the risk taken by the manager has led researchers to attempt to go beyond market risk analysis and favour more sophisticated models. These allow all of the portfolio risks to be highlighted and the normal returns arbitrated by the market to be evaluated. Consequently, the excess performance (abnormal return or alpha) achieved by stock picking or market timing could be measured in relation to the risks taken by the manager. The Europerformance/Edhec rating attribution methodology is based on the following steps: 1. return-based style analysis and calculation of alphas, 2. persistence analysis, 3. extreme risk analysis, and 4. rating attribution." References: Assessing mutual fund performance. (2004). Global Investor, , 1-51. Retrieved from
I read the above description. WHAT IS THE QUESTION ?? What do you want me to explain ??
==========================================================
3-) Which areas or countries of the world would offer a higher level of diversification for a US investor? That is, which countries would be less correlated with the US equity markets?
Historically, US investors have looked abroad with the purpose of achieving balanced diversification while mitigating the resultant risk.
For this, one would need to invest in the markets of foreign countries that are less correlated with the US over the long term.
=============================================================
4-) But aren't the globalized markets becoming more correlated which undermines the main notion of international diversification of the domestic asset portfolio? Your opinion?
Yes, that is correct. It is beyond doubt that an international portfolio and the resultant diversification reduce portfolio risk to a large extent. However, as time goes by, the low correlation between countries' markets, which provided reduced risk through balanced diversification, is slowly fading away as the world and its countries get more globalized. The risks that came with international diversification, such as currency exchange-rate risk, geopolitical risk, and economic and credit risk, are now being managed more efficiently.
The main reason markets are becoming more correlated is that businesses are getting globalized. The growth of one geographical set of countries depends upon another, and so on. For example, if you own Brazil, you own a derivative of China, and you can't like China unless you believe its best customers (the US and Europe) can remain somewhat healthy.
Please, I need help with the last important project of this semester. It is 3 hours once I open it, and I must finish. Could you help me out? Thank you.
This is my last project. I only have 3 hours to finish, and I need every single question explained in detail and accurately. Thank you:
Did you receive the link to the project that I just sent you? I have that doubt. Thank you.
Dear Friend, I am terribly sorry. I have been facing constant problems with my connection and just had a power failure. I am currently on backup power, which will not last long. I am opting out so that other experts can help you in time. Hope to assist you again in the future. Warm Regards
Please, this is a timed project. Is there any way I can solve this problem? It is worth a lot of points. Please help me out with an alternative.
Why did you not tell me that before? I am going to lose a lot of points. Thank you anyway.
Please could you help me out with two questions? Thank you.
Question 1-) Can you find some examples where the legal and regulatory process successfully stopped a merger or acquisition? What was the rationale for this action?
(Your focus should be primarily the legal and regulatory process of the US Federal Government; however, you can also find examples of actions taken by States (in the US) and actions taken by foreign governments, such as Canada, the European Union, etc.)
Your examples should be from the 1995 - 2012 period.
2-) The AOL – Time Warner Merger: What was right with this merger? What was wrong with it? What was the industry and competitive environment like internationally when this merger was announced?
It is an open question asking for your own opinion. Thank you.
Hello. Do you have the answers to my two questions, the opinions? Thank you.
Please could you help me out with this project , it is due by the end of the day: the link is: you.. Could you help me with this project?? | http://www.justanswer.com/finance/720tn-please-help-out-question-no-citation.html | CC-MAIN-2016-36 | refinedweb | 8,818 | 64.61 |
Hello,
Have you looked at TextField.addKeyPressHandler(...)?
Yes, but it's not as easy as it seems :))
Ah. It wasn't clear where you were in your development. This works nicely, if unreasonably specific. The key is event.preventDefault(), since you don't want the original character to render; without this line, both characters will render:
Code:
public class TextFieldKeyPressTest implements EntryPoint
{
@Override
public void onModuleLoad()
{
ContentPanel cp = new ContentPanel();
cp.setPixelSize(400, 400);
final TextField tf = new TextField();
tf.addKeyPressHandler(new KeyPressHandler()
{
@Override
public void onKeyPress(KeyPressEvent event)
{
event.preventDefault();
char val = event.getCharCode();
char swapChar = val;
String current = tf.getText();
if (val == 'a')
{
swapChar = '\u00E4';
}
tf.setText(current + swapChar);
}
});
cp.setWidget(tf);
RootPanel.get().add(cp);
}
}
Thank you.
Your reply helped me.
but I have a question:
what is the difference between setText()/getText() and setValue()/getValue() methods?
You're welcome. I believe get/set Value have to do with focus, i.e. getValue won't return the actual, displayed value (which is what getText does) until a blur event is fired.
I attempted to verify this but getValue is always returning null because it appears that setText does not update the internal value reference. I am still working on this and don't have an answer yet.
The difference is that getValue only returns the actual displayed value when an onBlur event is fired. Since we're calling preventDefault(), this event is not propagated which means getValue will never return anything but null (assuming an initially empty text field).
setValue updates the internal value field and fires a value change event if the new value is different from the old value. Essentially, setValue gives you the opportunity to do more event handling control, e.g., preventing updates to the field from firing events.
Using setValue to set the field's value in this example will actually cause the text field's focus to be lost because of the underlying call to redraw(). Some workarounds would be to call focus() again, but then you'd have to set the correct cursor position; or you could skip preventDefault(), but then you'd need to get and use the current selection range to replace the extra character.
But bottom line, if you don't have any other handlers on this text field, don't care about any other event propagation, and are sure you are consistent in calling getText instead of getValue, there's nothing wrong with keeping the preventDefault() and calling setText.
Thank you icfantv. Everything is clear :) | https://www.sencha.com/forum/printthread.php?t=255419&pp=10&page=1 | CC-MAIN-2017-13 | refinedweb | 422 | 57.98 |
nice - set program scheduling priority
Standard C Library (libc, -lc)
#include <unistd.h>
int
nice(int incr);
This interface is obsoleted by setpriority(2).
The nice() function obtains the scheduling priority of the process from
the system and sets it to the priority value specified in incr. The priority
is a value in the range -20 to 20. The default priority is 0;
lower priorities cause more favorable scheduling. Only the super-user
may lower priorities.
Children inherit the priority of their parent processes via fork(2).
Upon successful completion, nice() returns the new nice value minus
NZERO. Otherwise, -1 is returned, the process' nice value is not
changed, and errno is set to indicate the error.
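As an aside, the same semantics can be observed from Python on Unix-like systems, where os.nice() wraps this call; passing 0 simply reports the current nice value, and a positive increment is always permitted:

```python
import os

current = os.nice(0)   # incr of 0: no change, returns the current nice value
raised = os.nice(1)    # raise niceness by 1 (allowed even without privileges)
print(current, "->", raised)
```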
The nice() function will fail if:
[EPERM] The incr argument is negative and the caller is not
the super-user.
nice(1), fork(2), setpriority(2), renice(8)
The nice() function conforms to X/Open Portability Guide Issue 4.2
(``XPG4.2'').
A nice() syscall appeared in Version 6 AT&T UNIX.
BSD February 16, 1998 BSD | https://nixdoc.net/man-pages/NetBSD/man3/nice.3.html | CC-MAIN-2020-45 | refinedweb | 174 | 50.53 |
In XML 1.0, element and
attribute names were treated as atomic tokens with no interior structure.
Namespaces in XML
introduced the concept of element and attribute names existing in
namespaces. Namespaces are identified by URIs and bound to
namespace prefixes.
It is also possible to bind a default namespace to the empty
prefix. This namespace will then apply to all elements that have no prefix.
For example, the XSLT elements exist in the namespace, which is traditionally bound to the xsl namespace prefix:
<xsl:transform xmlns:
  <xsl:variable ... />
  ...
</xsl:transform>
A namespace-aware XML processor will internally resolve these element names
into tuples containing the namespace URI, the namespace prefix, and the
local name:
{, xsl, transform }
{, xsl, variable }
The particular namespace prefix used is supposed to be irrelevant, but in
practice people agree on common namespace prefixes for clarity, as
it would be very confusing if everyone used different ones.
Here are some familiar namespace prefixes from the W3C:
Prefix   URI
xml      
xsl      
fo       
xsd      
html     
svg      
Looking at this list one might wonder why it should be necessary to
specify the namespace URI at all, considering that these namespaces already
have a standard prefix that is far more concise and easy to remember.
Using URIs to identify namespaces is a problematic approach with many
usability flaws, all of which would be solved if namespaces were identified
by the namespace prefix instead.
As seen in the table above, namespace URIs tend to be long and cryptic, with
lots of punctuation and case-sensitive text.
In this instance the W3C has compounded the problem by adding dates
to ensure that the namespace URIs are unique, as if it were likely that
the W3C would create another "XSL/Transform" or
"xhtml" namespace in the future.
While namespace URIs may be guaranteed to be unique, they are also
guaranteed to be impossible to remember. Quick, without checking, can you
remember if the namespace URI for W3C XML Schema ends with
"xmlschema", "XML/Schema", or
"XMLSchema"? Was the namespace URI for SVG allocated in 1999,
2000, or 2001?
The opaque nature of these namespace URIs is inconvenient for users,
who must begin each new XML document with a ritual of carefully copying and
pasting all of the namespace declarations from the last document that they
were working on. If the namespace URIs are typed slightly wrong, the XML
document will lose its intended meaning and software will fail to process it
HTTP URIs are often used as namespace URIs. However, most software
treats HTTP URIs as resource locators, not identifiers. For example,
the requirement to type namespace URIs exactly as they appear is at odds with
the standard practice for HTTP URIs, which usually have many equivalent forms:
All of these HTTP URIs will return the same web page if entered into a
browser, but only the last one is the correct namespace URI for XSLT.
This clashes with user expectations, to put it mildly.
The one potential advantage of using HTTP URIs would be that they could act
as links to useful resources, but in practice most people don't bother doing
this. This disinterest is most strikingly observed with the XSLT and XSL-FO
namespaces, which point to brief documents saying "Someday a schema for
XSL Transforms will live here" and "This is another XSL namespace"
respectively.
There was an effort to develop RDDL
(Resource Directory Description Language) expressly for creating documents
to sit at the end of HTTP namespace URIs and direct XML tools to associated
resources such as style sheets, schemas, and documentation.
It is not used by any tools on the Web and with good reason: there are better
ways to associate resources with individual XML documents.
Aside: Why were URIs chosen over better alternatives?
It is not difficult to construct a better syntax than HTTP URIs for unique
identifiers. A good existing example is the syntax used to identify Java
packages:
org.w3.xsl.transform
Look at the difference. The identifier is all lowercase to make it easier
to remember, the redundant that wastes the first 11 characters of so many namespace URIs is gone, as are all the slashes.
Given that Java predated the XML Namespaces specification, one can only
assume that URIs were chosen to identify namespaces for reasons other than
syntactical convenience, such as their intended use in the RDF/XML syntax.
Namespace URIs don't help people to read XML documents either.
They add an unnecessary level of indirection that makes XML documents
harder to interpret, as looking at an element name is no longer enough
to tell you exactly what that element is.
When you read an XML document beginning with
<html>, or <svg>, or
<xsl:transform>, or <xsd:schema>,
should it really be necessary to carefully check that the namespace prefix
is bound to the correct namespace URI?
Since namespace URIs don't help people to read or write XML documents, why
should XML tools complain if they are omitted? Namespace URIs do not fit in
with the goals of XML, which has been designed to be produced and/or consumed
by people as well as software.
If namespace URIs were removed and namespaces were identified solely by
namespace prefixes instead, namespaces would still make sense and existing
XML specifications would only require minor alterations.
XSLT is one of the few XML languages that actually relies on namespaces for
disambiguation, specifically to distinguish XSLT elements that are processed
specially from other elements, which are output verbatim. XSLT also has
the requirement to perform namespace rewriting in order to be able to output
elements that are in the XSLT namespace without actively processing them,
similar to quoting or escaping in other programming languages.
However, XSLT has no need for namespace URIs. An XSLT processor could instead
treat any element with an xsl prefix as being in the XSLT
namespace and process it accordingly. Elements with a different prefix or no
prefix would be output verbatim in the usual manner, and namespace prefix
rewriting would also take place as normal using the existing XSLT
aliasing mechanism:
<xsl:namespace-alias
Removing all of the namespace URIs from an XSLT transform will make it
easier to read and write but will not affect the way it is processed,
so why require namespace URIs for XSLT?
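For reference, the aliasing mechanism looks like this in standard XSLT 1.0 (a sketch; the out prefix and its urn:example-alias URI are placeholders):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:out="urn:example-alias">
  <!-- Elements written with the out prefix are emitted with the xsl
       prefix in the output, instead of being executed as instructions. -->
  <xsl:namespace-alias stylesheet-prefix="out" result-prefix="xsl"/>
  <xsl:template match="/">
    <out:template match="x"/>
  </xsl:template>
</xsl:stylesheet>
```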
XHTML documents rarely use namespace prefixes, as many web browsers are
not XML-aware and do not expect to see them. In any case, a root element
of <html> should be sufficient to identify an XHTML document;
there is no pressing need to add the namespace URI as well.
Current W3C practice encourages XHTML documents to accumulate the namespace URIs
for XHTML, SVG, MathML, XForms, XML Schema, XML Events, and who knows what else.
All of these have simple prefixes that are sufficient to identify the
namespace in question, so there is no reason to place this burden on users.
XHTML does not need namespace URIs.
RDF/XML, the XML syntax for RDF that seems to have been the driving force
for the adoption of namespace URIs, does not need namespace URIs.
Or to be more accurate, it would be trivial to define a method of binding
URIs to namespace prefixes specifically for RDF/XML, without forcing it
to be a standard that applied to all XML documents.
Given that RDF/XML is not an ideal syntax for representing RDF anyway
(numerous superior alternatives exist), it is unfortunate that it has
imposed such a clumsy namespace mechanism on the wider XML community.
There are some occasions such as modular XHTML, where people may wish to
write elements without namespace prefixes that are nonetheless in a
namespace. This could be done with an attribute like xmlns;
let's call it xml:ns, just for fun:
<blockquote xml:
...
</blockquote>
An explicit namespace prefix is probably a better choice though, as it
makes each element stand alone, with a fixed meaning that cannot be changed
at the whim of its ancestors.
One of the uglier architectural warts that namespaces has introduced to XML
is the use of qualified names in text content:
<foo:message status="foo:severe" ...
The problem, of course, is that according to the current specification of
XML Namespaces, namespace prefixes are supposed to be irrelevant and may be
changed without altering the meaning of the document. Unless it uses namespace
prefixes in text content, in which case the namespace prefixes become very
significant indeed. Why not just drop the URIs, admit that the namespace
prefixes are significant, and end the whole pointless charade?
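This is easy to see with a namespace-aware parser: the prefix on the element name is resolved to its URI, but the same prefix inside an attribute value is left as opaque text. A minimal Python sketch (urn:example is a placeholder URI):

```python
# The element name's prefix is resolved; the prefix in the attribute value
# is not -- so renaming the prefix would silently break the document.
import xml.etree.ElementTree as ET

msg = ET.fromstring('<foo:message xmlns:foo="urn:example" status="foo:severe"/>')

print(msg.tag)            # {urn:example}message  (prefix resolved to the URI)
print(msg.get("status"))  # foo:severe            (prefix kept as plain text)
```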
Use
XML Namespaces with Care, where Uche Ogbuji provides some more handy hints
for effective namespace usage.
A Plea for
Sanity, where Joe English defines the useful concepts of neurotic,
borderline, psychotic, normal, and sane use of namespaces in XML documents.
If namespaces are necessary, choose namespace URIs that are concise
and easy to remember. It helps if they are all lowercase and don't include
unnecessary information.
Try to get by with only one namespace if you can. There is not much to gain
by multiplying namespaces unnecessarily except trouble and complexity.
If you must use more than one namespace, at least ensure that the namespace
URIs follow a consistent pattern.
Agree on standard namespace prefixes for your XML vocabularies; they will
help people to read and write your XML without confusion. If you find yourself
using the default namespace rather than the prefixes, consider whether you
actually need a namespace at all.
Following these steps will help to keep namespace URIs under control
in your XML documents.
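As an illustration of these guidelines (the urn:example URIs are placeholders), a small document that needs two vocabularies might look like:

```xml
<!-- Short, lowercase namespace URIs following one consistent pattern,
     with an agreed-upon prefix for the secondary vocabulary. -->
<catalog xmlns="urn:example:catalog"
         xmlns:inv="urn:example:inventory">
  <item inv:stock="42">Widget</item>
</catalog>
```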
© O’Reilly Media, Inc.
Changing HTML in QtWebView
Hi,
I'm currently trying to build an interactive book using Qt. To achieve that, I want to display rendered HTML with keywords. If the user clicks on a keyword, a menu shows up displaying various ways to interact with said keyword.
Depending on what you choose, the story will change, so it is crucial that the displayed text also changes.
Since I'm using HTML and want the app to be multi-platform, my only option for rendering HTML is QtWebView. This is where the trouble starts.
I can import QtWebView 1.1 in my QML file, but the WebView never appears in the
QML Types section. If I put it in by editing the QML file textually, I get a
functional browser, but the Designer claims "Item could not be created" when
hovering over it in the navigator. There's also an exclamation mark next to the
WebView and next to the root object; when hovering over the one next to the
root object, Qt claims that module "QtWebView" is not installed. However, this
is not the case: QtWebView exists both in my GCC and in my ARM7 installation of Qt.
Even though it's ugly and not really intuitive, I could live with that if the
rest were working. However, I can't access the methods of QtWebView in my C++
code. When I try to #include <QtWebView>, it claims that there is no such file
or directory. Thus I can't even call loadHtml().
I already thought of putting the HTML into separate files and editing those
using fopen(), but Android doesn't allow you to change assets after the APK
has been created.
I've been reading the docs over and over again but can't seem to come up with a solution. Any help would be really awesome.
Thanks in advance! | https://forum.qt.io/topic/64085/changing-html-in-qtwebview | CC-MAIN-2017-47 | refinedweb | 307 | 73.37 |
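For reference, a minimal sketch of how a WebView is typically wired up (assumptions: a qmake project and Qt 5.6-era QtWebView; the exact initialization order may differ by Qt version):

```cpp
// The .pro file needs the module declared, otherwise
// #include <QtWebView> fails with "no such file or directory":
//     QT += webview
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtWebView>

int main(int argc, char *argv[])
{
    QtWebView::initialize();   // must run before any WebView is instantiated
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

// In QML, loadHtml() can then replace the displayed text at runtime,
// sidestepping the read-only Android assets problem entirely:
//     WebView {
//         id: view
//         anchors.fill: parent
//         Component.onCompleted: view.loadHtml("<p>chapter text</p>")
//     }
```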
In this article we will discuss delegates, what multicast delegates are, their role in asynchronous communication, and how to work with long-running processes.
In the previous session we discussed delegates and their basic features. Now I would like to introduce some of the more interesting features of delegates.

Multicast delegates

As we discussed in the earlier session, delegates represent function pointers. When a delegate represents multiple methods, it is a multicast delegate. Multicast delegates should represent void methods. By adding += I am making this delegate multicast. The moment I add +=, it becomes, in effect, an array: it will hold as many methods as you specify.

// Normal delegate
Calculator c = new Calculator();
BusinessControl bc = new BusinessControl();
CalculatorHandler ch = new CalculatorHandler(c.Add);
MessageBox.Show(bc.CallMe(ch).ToString()); // calls the Add method -> this technique is called IoC

// Multicast delegate
Calculator c = new Calculator();
BusinessControl bc = new BusinessControl();
CalculatorHandler ch = new CalculatorHandler(c.Add);
ch += new CalculatorHandler(c.Add);
ch += new CalculatorHandler(c.Multiply);
MessageBox.Show(bc.CallMe(ch).ToString());

Here is how it works: when you invoke the delegate, the Add method is called first, followed by Multiply, on a single thread, and control returns to you only once all the methods in the multicast delegate have run. Note that you get a response back only after both tasks, Add and Multiply, are complete. This is the reason it is suggested that multicast delegates should represent only void methods. Hopefully this gives you an idea of multicast delegates. Now let us look at what synchronous and asynchronous communication are, and how delegates support them.

Synchronous communication

In our previous example, when we call the Add() method of the Calculator class using oCalc.Add(), we are making a synchronous call; we wait for the Calculator object to complete its Add method. To test this, add Thread.Sleep(10000) to the Add method.
When we call this method, the current thread sleeps for 10000 milliseconds. In other words, we are forced to wait until the runtime is done with the Add method.

public int Add(int a, int b)
{
    Thread.Sleep(10000);
    Console.WriteLine((a + b).ToString());
    return (a + b);
}

It is not possible to call any other method until the 10000 ms have elapsed. This is called synchronous communication. Practical example: think of a complex stored procedure that takes 10000 ms to execute. We cannot proceed to the next step until the procedure completes and control returns to our code.

Asynchronous communication

Asynchronous methods do not block the current thread, so we can proceed with other work. It is as if we delegate the work to a third person to do the task on our behalf. Having said this, how about getting a notification from that person once he completes the task? Sounds good, doesn't it? We will discuss this later. To try asynchronous methods, we add Thread.Sleep(10000) to both the Add() and Multiply() methods of the Calculator class, simply to simulate a long-running operation. Please refer to the sections below.

Synchronous call - normal

Calculator c = new Calculator();
BusinessControl bc = new BusinessControl();
CalculatorHandler ch = new CalculatorHandler(c.Add);
MessageBox.Show(bc.CallMe(ch).ToString());

You will notice that until the call completes we cannot do anything else, and the entire application is locked. Try typing something in the textbox during this invocation.

Converting to an asynchronous call

Calculator c = new Calculator();
CalculatorHandler ch = new CalculatorHandler(c.Add);
ch.BeginInvoke(100, 25, null, null);
ch = new CalculatorHandler(c.Multiply);
ch.BeginInvoke(25, 4, null, null);

Now type something in the textbox - the UI stays responsive.

What is BeginInvoke?

The moment you call BeginInvoke, you are creating a fork: each delegate is executed on a different thread.
The current thread is split into three: one is the main thread from which the delegates are called, one runs the c.Add method, and the other runs the c.Multiply method.

What is EndInvoke?

EndInvoke is the opposite of BeginInvoke: it creates a join. That is, it waits for the worker to complete, collects the result, and lets the main thread proceed.

Getting results from an asynchronous call - long-running processes

Here I would like to introduce a new member, the IAsyncResult interface. It takes responsibility for tracking the work done by the delegate and bringing the result back to you. IAsyncResult.IsCompleted returns true when the asynchronous job is completed; until then it is false, and you can continue with your own work. Hence we can use it to check whether a long-running process has completed.

private void button4_Click(object sender, EventArgs e)
{
    Int32 intResult;
    Calculator c = new Calculator();
    CalculatorHandler ch = new CalculatorHandler(c.Add);
    IAsyncResult ar1 = ch.BeginInvoke(100, 25, null, null);
    int Counter = 0;
    while (!ar1.IsCompleted)
    {
        // Do other work
        Counter++;
    }
    // Now we know the call is completed, as IsCompleted has returned true
    textBox1.Text = Counter.ToString() + " times I was doing my other work";
    intResult = (int)ch.EndInvoke(ar1);
    MessageBox.Show(intResult.ToString());
}

Now what about out parameters? How are we supposed to deal with out parameters in our method? Consider an Add method that takes an out int parameter: int Add(int, int, out int). The generated BeginInvoke then becomes BeginInvoke(int, int, out int, AsyncCallback, object), and the matching EndInvoke becomes EndInvoke(out int, IAsyncResult).

Callback mechanism - completion notification

The idea is to have the calculator come back and say, "I have finished my work, and this is the result." This mechanism is called a callback. So far, the last two parameters of BeginInvoke have been passed as null.
What were they all about? Are they what make delegates responsive? Yes; we will discuss this in the next part. Please share your suggestions and comments.
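As a preview, here is a sketch of what such a callback might look like (assuming the article's Calculator class and CalculatorHandler delegate; note that delegate BeginInvoke/EndInvoke is supported only on the .NET Framework, not on .NET Core and later):

```csharp
using System;
using System.Runtime.Remoting.Messaging; // for AsyncResult (.NET Framework)

// Inside some method:
Calculator c = new Calculator();
CalculatorHandler ch = new CalculatorHandler(c.Add);

// The third argument is the callback that fires when the work completes.
ch.BeginInvoke(100, 25, new AsyncCallback(ar =>
{
    // Recover the delegate from the IAsyncResult and collect the result.
    CalculatorHandler handler = (CalculatorHandler)((AsyncResult)ar).AsyncDelegate;
    int result = handler.EndInvoke(ar);
    Console.WriteLine("Calculator finished: " + result);
}), null);
```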
08 August 2012 10:37 [Source: ICIS news]
SINGAPORE (ICIS)--SABIC is planning to shut a 700,000 tonne/year monoethylene glycol (MEG) unit at Al-Jubail in Saudi Arabia.
The maintenance at the unit, which is located within the complex of affiliate firm Eastern Petrochemical Co (SHARQ), will last about 10 weeks, SABIC said in a statement on the Saudi bourse Tadawul.
The turnaround at the unit will also include “some repairs”, it said.
“This temporary shutdown will not preclude the company's commitment to its customers and will not have any major impact on the company,” SABIC said.
“Any new developments on this issue will be announced at the appropriate time,” the company added.
SHARQ is a joint venture between SABIC and Saudi Petrochemical Development Co (SPDC), a consortium of Japanese companies led by | http://www.icis.com/Articles/2012/08/08/9584935/sabic-to-shut-al-jubail-meg-unit-for-turnaround-early-september.html | CC-MAIN-2014-49 | refinedweb | 136 | 58.11 |
The sample code is given below:
1 public class Section611 {
2 public static void main(String[] args) {
3 convert("illlegal");
4 }
5 public static void convert(String s) {
6 try {
7 float f = Float.parseFloat(s);
8 } catch (NumberFormatException ne) {
9 f = 0;
10 } finally {
11 System.out.println(f);
12 }
13 }
14 }
What is the result of compiling and running the above code?
1. 0.0
2. Compilation fails
3. illegal
4. It 'throws' number format exception.
(2)
Compilation fails because the variable f referenced at line 9 is not in scope: the variable f declared at line 7 is local to the try block, so it cannot be seen in the catch block (line 9) or in the finally block (line 11).
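For comparison, here is a corrected version (a hypothetical rewrite): declaring f before the try block puts it in scope for the catch and finally blocks, and the program then prints 0.0:

```java
public class Section611Fixed {
    public static float convert(String s) {
        float f = 0;  // declared outside try, so catch and finally can see it
        try {
            f = Float.parseFloat(s);
        } catch (NumberFormatException ne) {
            f = 0;
        } finally {
            System.out.println(f);
        }
        return f;
    }

    public static void main(String[] args) {
        convert("illlegal");  // prints 0.0
    }
}
```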