How much will my game cost?

The best way to find the cost of custom printed components for your game is to build a quote in our system.

1. First, click on "Make" at the top of the screen. Then, select "Pricing" from the drop-down list.
2. Next, click on "Build a Quote."
3. You will be prompted to sign in, create an account, or proceed with making the quote as a guest. (Signing in or creating an account is the best way to go, as it is a quick and easy process and will make it easier to proceed with creating a game.)
4. Next, you'll be prompted to enter a name for your quote. You can then click on the "Start Quote" button.
5. Once in the Game Quote screen, you can select the components that you wish to include in your game. Along the left side of the page, you'll see different component categories listed. Click on a category to see components from that category.
6. To add a component to your quote, click on the "+ Add to Quote" button next to the component.
7. You will then enter the quantity of this component that you wish to include in your game. For some select components, you may also be able to choose a finishing option for that component only.
8. When adding items such as cards, you'll be asked to enter the total number of cards of that type for the game. (For example, if your game will contain two decks of 20 poker cards each, you would enter "40" in the Quantity field.)
9. When you're done adding components to your quote, you can select surfacing options that will be applied to all eligible products in your game. Click on the "Surfacing" category on the left side of the screen, then change the fields to "Yes" for the options you wish to select.
10. You can see the total cost of your game at any time by clicking on "Quote" on the left side of the screen.
11. On the Quote page, you'll also see options to "Create this Game" or "Download this Quote." If you click on "Create this Game," our system will create the game for you and take you directly to that game under your account. If you click on "Download this Quote," the system will automatically download a spreadsheet to your device containing information about your quote. The quote will include the type of each component you selected, the quantity, the component's dimensions, and a direct link to that component's product page on our site. It will also show the total cost for one game copy.

Some things to note:
- The quote will not include the cost of any stock components. You can find pricing for all of our Stock Components in the parts shop.
- The quote will not include the cost of laser movement/cutting for products such as custom punchouts, custom cardstock, and acrylic shapes.
- The quote will assume that all sides of custom parts (such as meeples and dice) will be printed/engraved.

You can watch a tutorial about how to download templates for your game here: https://youtu.be/VggD1_mTlmY
You can watch a tutorial about how to design images using our templates here: https://youtu.be/8IDecwxMakY
You can watch a tutorial about how to load your images into a game here: https://youtu.be/fFQcLH5yTiE
You can watch a tutorial about how to order your game here: https://youtu.be/ual0FnfQcMI
This is a topic that keeps coming back when you start to talk about Windows 365 and Cloud PCs. "This sounds really cool, but there are sooooo many licenses to choose from. Which one should I get?"

The answer is just as hard as the question. It depends. It depends mostly on what your users will use their Cloud PCs for, and what you consider to be a fair machine to provide your users with.

Licenses are rarely someone's favorite subject (I know there are some people who do favor them, though), but for Windows 365 it's really simple and straightforward, and this is something you as an admin should care about. The license is what defines how powerful your Cloud PC will be, meaning how many vCPUs, how much RAM, and how many GB of storage you will get. Licenses for Cloud PCs are also subscription based, billed on a monthly basis (this could vary depending on your agreements), meaning that you can easily adjust your volumes.

There are a few different ways to look at this, but the image below is a pretty good pointer on when to use what. Like the example below states, you could classify this into three categories: depending on what your users' workloads are, you can quite easily find a good place to start using this simple matrix.

If you want to expand even further, there are a lot more variations available than just those three. There are today 11 different licenses available for Cloud PCs, divided into 4 different vCPU and RAM categories, where storage adds the variation. Depending on your vCPU and RAM configuration, you can select between 64 GB and 512 GB of storage.

But where do I start?

Going back to the initial question, which licenses should you get? I kind of have to stick to my initial statement, "it depends", because this really comes down to what these machines will be used for. If you are looking at users running mostly productivity and Line of Business (LOB) applications, the 4 vCPU/8 GB RAM/128 GB storage option is a pretty sweet deal. Since most of us today are using OneDrive, local storage isn't that important anymore for many users on a Cloud PC, and this license is usually where I recommend many to start, since it would fill the basic needs of their user base. The 4 vCPUs and 8 GB of RAM will provide you with a great user experience and won't at the same time cost you a fortune. The step up from this license is the 8 vCPU with 16 GB RAM if you need a more powerful machine running heavier workloads. If this size is too small or too big, you can always scale up or down using the resize feature (which I've written about in this post).

But what about disk space? To be honest, this is probably the least of your concerns; the performance from RAM and CPUs is more important. Local storage is not too important in a world where most documents are stored in OneDrive or SharePoint; that leaves disk space to be used by applications. For most scenarios you won't need more than a 128 GB disk, but of course there are always use cases where local storage is key. And disk space is like all the other parts: it can be expanded. HOWEVER, you cannot decrease disk space. This means that you can move from a 128 GB disk to 512 GB, but not the other way around. This is an important thing to keep in mind!

The most important part is to get started somewhere if you want to utilize the awesome benefits that Cloud PC brings, and you can always adjust along the way! I hope this gave some kind of pointer on where to start!
I calculated that we have 53 relevant first moves. Is that right? I worked out the symmetry: board without diagonals divided by 8, plus long diagonals divided by 8, plus tengen. By relevant I mean moves with the same position if you move the board. Not only good ones.

The number of different first moves is 55. Two different ways to calculate it without resorting to numbering like I did above:

There are 361 intersections total.
1 of them (tengen) is unique.
4x9 + 4x9 = 72 of them (the moves along midlines and diagonals) come in groups of 4.
The other 361 - 1 - 72 = 288 come in groups of 8.
So we have 1 + 72/4 + 288/8 = 55 unique moves.

If we know that they come in a triangle shape, we can also calculate like this: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = (1+10) + (2+9) + (3+8) + (4+7) + (5+6) = 5 x 11 = 55. Hope this is helpful!

Forgot the middle lines, am I silly ^^ Wanted to explain to a chess colleague why go is more complex than chess.

Chess is more complex than go bc go does not support horses.

How about keima? (Sensei's Library, page: Keima.) And there is even a large knight's move: ogeima. (Sensei's Library, page: Large knight's move.)

Still think go is better than … Or the extra large knight's move, with the fun name of daidaigeima.

How do you measure complexity? Game tree complexity (of which the initial number of moves is an input) and state space complexity are pretty standard, and go is indeed bigger than chess. But increased complexity doesn't necessarily make a better game. Here's a game with infinite complexity that's really boring: Player A thinks of any number. Player B guesses what it was. If they are right they win this round, else they lose. Swap roles and repeat an even number of times, tallying score. (I brush over the difficulty of a human brain thinking of arbitrary transcendental numbers.)

I guess we have haengma too! How one corner influences another corner, for example… Or the difficulty of reading who has the better position. And lastly, how much longer it took until a computer defeated a strong player.
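A quick way to double-check the counting argument above: enumerate all 19x19 intersections and collapse them under the board's 8 symmetries. A small sketch of mine (not from the thread):

```python
# Count unique first moves on a 19x19 board under the 8 symmetries
# of the square (4 rotations and 4 reflections).
N = 19

def images(x, y):
    """All images of (x, y) under the dihedral group of the board."""
    pts = set()
    for _ in range(4):
        x, y = y, N - 1 - x      # rotate 90 degrees
        pts.add((x, y))          # rotation
        pts.add((y, x))          # rotation followed by a reflection
    return pts

# Each orbit is represented by its lexicographically smallest member.
orbits = {min(images(x, y)) for x in range(N) for y in range(N)}
print(len(orbits))  # -> 55
```

The three orbit sizes (1 for tengen, 4 on the midlines and diagonals, 8 everywhere else) fall out of the set sizes returned by `images`.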
I have created a standby database on the same server (Windows XP), using Oracle 11g. I want to synchronize my standby database with the primary database, so I tried to apply redo logs from primary to standby as follows. My standby database is open and the primary database is not started (instance not started), because only one database can run in Exclusive Mode as DB_NAME is the same for both databases. I ran the following command on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

It returns "Database altered". But when I checked the last archive log on the primary database, its sequence is 189, while on the standby database it is 177. That means archived redo logs are not being applied to the standby database. The tnsnames.ora file contains entries for both the primary and standby database services, and the same service has been used to transmit and receive redo logs.

1. How do I resolve this issue?
2. Is it compulsory to have the primary database open?
3. I created the standby control file by using the command:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'D:\APP\ORACLE\ORADATA\TESTCAT\CONTROLFILE\CONTROL_STAND1.CTL';

So the database name in the standby control file is the same as the primary database name (PRIM), and hence the init.ora file of the standby database also contains the DB_NAME = 'PRIM' parameter. I can't change it because it returns a database name mismatch error on startup. Should I have different database names for the two, or is the existing setup correct? Can anybody help me get out of this?

Thanks & Regards

Thank you Girish. It solved my redo apply problem. I set the log_archive_dest parameter again and then checked the archived redo log sequence number. It was the same for both the primary and standby databases. But a table on the standby database is still not being refreshed. I went through the following scenario:
1. Inserted 200,000 rows into the emp table of the Scott user on the primary database and committed the changes.
2. Then I synchronized the standby database using the ALTER DATABASE command, and I also verified that the archive log sequence number is the same for both databases. That means the archived logs from the primary database have been applied to the standby database.
3. But when I count the number of rows in the emp table of the Scott user on the standby database, it returns only 14 rows, even though the redo logs have been applied.

So my question is: why are changes made to the primary database not reflected on the standby database, although the redo logs have been applied?

When I googled (without double quotes) "standby sync primary" I found many good links. From what you have posted it is not clear to me where the problem is, i.e. in redo shipping or applying, which type of standby you have, how you configured it, etc. So I would suggest searching Google with the mentioned keywords, and if you still have a problem, show us a SQL*Plus cut-and-paste so that we can see what you are doing and how Oracle is responding.
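For anyone debugging a similar setup, two standard checks run on the standby (a sketch using Oracle's dynamic performance views; exact output depends on your configuration) can show whether logs are being shipped but not applied:

```sql
-- Which archived logs have been received, and which have been applied:
SELECT SEQUENCE#, APPLIED
FROM   V$ARCHIVED_LOG
ORDER  BY SEQUENCE#;

-- Is the managed recovery process (MRP) actually running?
SELECT PROCESS, STATUS, SEQUENCE#
FROM   V$MANAGED_STANDBY;
```

If logs show APPLIED = 'NO' while the sequences match on both sides, the gap is in redo apply rather than redo transport.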
add role management

This is a prerequisite for sharing NG to clarify roles and permissions management. Permissions are represented as a list of resource actions and conditions that must be met for the action to be allowed.

Resource Actions

Resource actions are a set of tasks that can be performed on a resource. The following is the schema for resource actions:

{Namespace}/{Entity}/{PropertySet}/{Action}

For example: libre.graph/applications/credentials/update

- {Namespace} - The service that exposes the task. For example, all tasks in libre graph use the namespace libre.graph.
- {Entity} - The logical features or components exposed by the service in libre graph. For example, applications, servicePrincipals, or groups.
- {PropertySet} - Optional. The specific properties or aspects of the entity for which access is being granted. For example, libre.graph/applications/authentication/read grants the ability to read the reply URL, logout URL, and implicit flow property on the application object in libre graph. The following are reserved names for common property sets:
  - allProperties - Designates all properties of the entity, including privileged properties. Examples include libre.graph/applications/allProperties/read and libre.graph/applications/allProperties/update.
  - basic - Designates common read properties but excludes privileged ones. For example, libre.graph/applications/basic/update includes the ability to update standard properties like display name.
  - standard - Designates common update properties but excludes privileged ones. For example, libre.graph/applications/standard/read.
- {Action} - The operation being granted. In most circumstances, permissions should be expressed in terms of CRUD operations or allTasks. Actions include:
  - create - The ability to create a new instance of the entity.
  - read - The ability to read a given property set (including allProperties).
  - update - The ability to update a given property set (including allProperties).
  - delete - The ability to delete a given entity.
  - allTasks - Represents all CRUD operations (create, read, update, and delete).

The most interesting part IMO is how we will represent CS3 permissions. I took the liberty to map them to unifiedRolePermissions:

| CS3 ResourcePermission | action | comment |
| --- | --- | --- |
| stat | libre.graph/driveItem/basic/read | basic because it does not include versions or trashed items |
| get_quota | libre.graph/driveItem/quota/read | read only the quota property |
| get_path | libre.graph/driveItem/path/read | read only the path property |
| move | libre.graph/driveItem/path/update | allows updating the path property of a CS3 resource |
| delete | libre.graph/driveItem/standard/delete | standard because deleting is a common update operation |
| list_container | libre.graph/driveItem/children/read | |
| create_container | libre.graph/driveItem/children/create | |
| initiate_file_download | libre.graph/driveItem/content/read | content is the property read when initiating a download |
| initiate_file_upload | libre.graph/driveItem/upload/create | uploads are a separate property; postprocessing creates the content |
| add_grant | libre.graph/driveItem/permissions/create | |
| list_grant | libre.graph/driveItem/permissions/read | |
| update_grant | libre.graph/driveItem/permissions/update | |
| remove_grant | libre.graph/driveItem/permissions/delete | |
| deny_grant | libre.graph/driveItem/permissions/deny | uses a non-CRUD action deny |
| list_file_versions | libre.graph/driveItem/versions/read | versions is a driveItemVersion collection |
| restore_file_version | libre.graph/driveItem/versions/update | the only update action is restore |
| list_recycle | libre.graph/driveItem/deleted/read | reading a driveItem deleted property implies listing |
| restore_recycle_item | libre.graph/driveItem/deleted/update | the only update action is restore |
| purge_recycle | libre.graph/driveItem/deleted/delete | allows purging deleted driveItems |

This is in fact a 1:1 mapping of the CS3 resource permissions to unifiedRolePermission actions as they are used in ms graph.

Conditions

Optional constraints that must be met for the permission to be effective. For example, a requirement that the principal be an owner of the target resource. The following are the supported conditions:

- Self: `@Subject.objectId == @Resource.objectId`
- Owner: `@Subject.objectId Any_of @Resource.owners`
- Grantee: `@Subject.objectId Any_of @Resource.grantee` - does not exist in MS Graph, but we use it to express permissions on shared resources.

Permissions

The following is an example of a role permission for a Viewer role on shared resources.

```json
{
  "id": "7ccc2a61-9615-4063-a80a-eb7cd8e59d8",
  "description": "Allows reading the shared file or folder",
  "displayName": "Viewer",
  "rolePermissions": [
    {
      "allowedResourceActions": [
        "libre.graph/driveItem/basic/read",
        "libre.graph/driveItem/permissions/read"
      ],
      "condition": "@Subject.objectId Any_of @Resource.grantee"
    }
  ]
}
```

The following is an example of a role permission for a Space Editor role on (co-)owned resources.

```json
{
  "id": "7ccc2a61-9615-4063-a80a-eb7cd8e59d8",
  "description": "Allows editing the co-owned file or folder",
  "displayName": "Space Editor",
  "rolePermissions": [
    {
      "allowedResourceActions": [
        "libre.graph/driveItem/basic/read",
        "libre.graph/driveItem/permissions/read",
        "libre.graph/driveItem/upload/create"
      ],
      "condition": "@Subject.objectId Any_of @Resource.owners"
    }
  ]
}
```

aces are currently hardcoded to a CS3 permissions set:

```go
// grantPermissionSet returns the set of CS3 resource permissions representing the ACE
func (e *ACE) grantPermissionSet() *provider.ResourcePermissions {
	p := &provider.ResourcePermissions{}
	// r
	if strings.Contains(e.permissions, "r") {
		p.Stat = true
		p.GetPath = true
		p.InitiateFileDownload = true
		p.ListContainer = true
	}
	// w
	if strings.Contains(e.permissions, "w") {
		p.InitiateFileUpload = true
		if p.InitiateFileDownload {
			p.Move = true
		}
	}
	// a
	if strings.Contains(e.permissions, "a") {
		// TODO append data to file permission?
		p.CreateContainer = true
	}
	// x
	// if strings.Contains(e.Permissions, "x") {
	//	TODO execute file permission?
	//	TODO change directory permission?
	// }
	// d
	if strings.Contains(e.permissions, "d") {
		p.Delete = true
	}
	// D ?
	// sharing
	if strings.Contains(e.permissions, "C") {
		p.AddGrant = true
	}
	if strings.Contains(e.permissions, "c") {
		p.ListGrants = true
	}
	if strings.Contains(e.permissions, "o") { // misuse o = write-owner
		p.RemoveGrant = true
		p.UpdateGrant = true
	}
	if strings.Contains(e.permissions, "O") {
		p.DenyGrant = true
	}
	// trash
	if strings.Contains(e.permissions, "u") { // u = undelete
		p.ListRecycle = true
	}
	if strings.Contains(e.permissions, "U") {
		p.RestoreRecycleItem = true
	}
	if strings.Contains(e.permissions, "P") {
		p.PurgeRecycle = true
	}
	// versions
	if strings.Contains(e.permissions, "v") {
		p.ListFileVersions = true
	}
	if strings.Contains(e.permissions, "V") {
		p.RestoreFileVersion = true
	}
	// ?
	if strings.Contains(e.permissions, "q") {
		p.GetQuota = true
	}
	// TODO set quota permission?
	return p
}
```

@butonic My only question: "should we use a dot in the namespace of the permissions or not?"

The 'action' is just a string that follows the same pattern as the ms graph api. They use microsoft.graph as the namespace, so I chose to mimic it as libre.graph. I don't see the action string being part of object properties.

@micbar I would add the weight property to the @libre.graph.permissions.roles.allowedValues annotation, which is part of listing permissions:

```json
{
  "@libre.graph.permissions.roles.allowedValues": [
    {
      "name": "read",
      "displayname": "Viewer",
      "weight": 30
    },
    {
      "name": "write",
      "displayname": "Editor",
      "weight": 20
    },
    {
      "name": "owner",
      "displayname": "Manager",
      "weight": 10
    },
    {
      "name": "foo",
      "displayname": "Fooer",
      "description": "Can foo things, but not bar them.",
      "weight": 5
    }
  ],
  "value": [
    {
      "id": "67445fde-a647-4dd4-b015-fc5dafd2821d",
      "roles": [ "read" ],
      ...
```

but that is not part of this role management PR

ah got it ... yeah we also need a way to set the weight.
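As a side note for reviewers, here is a tiny sketch of what consuming the `{Namespace}/{Entity}/{PropertySet}/{Action}` format described above could look like (a hypothetical helper, not part of this PR):

```go
package main

import (
	"fmt"
	"strings"
)

// ResourceAction holds the segments of a resource action string.
type ResourceAction struct {
	Namespace   string // e.g. "libre.graph"
	Entity      string // e.g. "driveItem"
	PropertySet string // e.g. "basic", "allProperties", or a named property
	Action      string // e.g. "read", "update", "allTasks"
}

// ParseResourceAction splits an action string into its segments.
// The property set is optional, so both 3- and 4-segment forms are accepted.
func ParseResourceAction(s string) (ResourceAction, error) {
	parts := strings.Split(s, "/")
	switch len(parts) {
	case 4:
		return ResourceAction{parts[0], parts[1], parts[2], parts[3]}, nil
	case 3: // no property set, e.g. "libre.graph/driveItem/delete"
		return ResourceAction{Namespace: parts[0], Entity: parts[1], Action: parts[2]}, nil
	default:
		return ResourceAction{}, fmt.Errorf("malformed resource action: %q", s)
	}
}

func main() {
	ra, _ := ParseResourceAction("libre.graph/driveItem/permissions/read")
	fmt.Printf("%+v\n", ra)
}
```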
SQL aggregate SUM function not outputting correctly?

I have three tables: crash_table contains the numbers of car crashes on each day from 2009 to 2024, date_table contains date information (which I'm using to get the year out of the date values in crash_table), and population_table contains the total population for each year. The crash table looks like this:

| Date | Severity | Crashes |
| --- | --- | --- |
| 1/1/2009 | Property Damage Only | 66 |
| 1/1/2009 | Major Injury | 2 |
| 11/5/2017 | Minor Injury | 10 |

and so on. The date table looks like this:

| Date | Year | Month | Day of Week |
| --- | --- | --- | --- |
| 1/1/2009 | 2009 | January | Thursday |
| 1/2/2009 | 2009 | January | Friday |

and so on. The population table looks like this:

| Year | Population |
| --- | --- |
| 2009 | 3,032,870 |
| 2010 | 3,050,819 |

and so on. My goal is to get a table which shows each year and the estimated per-capita car crash rate for that year. To that end, I've written the following SQL query:

```sql
SELECT p.Year, (SUM(c.Crashes) / ANY_VALUE(p.Population)) AS `Crashes per Capita`
FROM crash_table c
INNER JOIN date_table d ON c.Date = d.Date
INNER JOIN population_table p ON d.Year = p.Year
WHERE p.Year != 2024
GROUP BY p.Year;
```

I put ANY_VALUE() around population in the SELECT statement to avoid errors caused by the ONLY_FULL_GROUP_BY setting. The table output I'm getting from this query is clearly incorrect; the 'Crashes per Capita' column values are all in the tens of thousands, while I'm expecting values far less than one. The result table looks like this:

| Year | Crashes per Capita |
| --- | --- |
| 2009 | 18498 |
| 2010 | 18132 |

I'm expecting the per-capita values to land in the realm of 0.01 to 0.02, or so. Is the issue with the aggregate function in the SELECT statement, or am I missing something else? Apologies if this is something obvious - I'm a relative beginner and I couldn't find any similar questions with answers that worked for me.

Please post sample data and the expected result.

If there are multiple matching rows in date_table or population_table, the sum will be multiplied by the number of matches. E.g. if population_table has a row for every day of the year, the sum will be multiplied by 365. And you'll pick a random date's population with ANY_VALUE().

I added the sample data and results. I hope I was clear enough on what I'm looking for. I can see how the JOIN statement might be causing problems, but I'm unsure of how to get to where I'm trying to go.

If the population column has a comma as thousands separator, then that column is a string, not a number. Convert it to a number by removing the thousands separators and I guess it should work.

That seems to have done it, Shadow! I did not notice that SQL had imported population as a text variable. Thank you so much!

Thanks to Shadow, the answer was just that the population variable was cast incorrectly as text instead of as an int. Thank you everyone!
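To make the accepted fix concrete, here is a sketch of the corrected query (MySQL syntax, assuming Population was imported as text with comma separators; a more permanent fix would be to convert the column itself to a numeric type):

```sql
SELECT p.Year,
       SUM(c.Crashes) / ANY_VALUE(CAST(REPLACE(p.Population, ',', '') AS UNSIGNED))
           AS `Crashes per Capita`
FROM crash_table c
INNER JOIN date_table d ON c.Date = d.Date
INNER JOIN population_table p ON d.Year = p.Year
WHERE p.Year != 2024
GROUP BY p.Year;
```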
min_error issues umbrella

After our discussion about min_error on Wednesday, I thought I would collate some of the issues here. So far my thoughts are:

- There should be some sort of error/warning message when you pass arguments to both min_error and calculate_min_error. I did this by mistake today, and they produce conflicting behaviour: calculate_min_error overrides min_error, but it's not clear this is what's happening when you pass both in the .ini file.
- That being said, I think the behaviour when min_error = 10.0 or similar is passed in the .ini file is as expected. The problem I've had is with verifying this: the min_model_error variable and attribute are just the single value passed (or calculated by the residual/percentile method) repeated nmeasure times. I think it would be more informative to have the output variable report 'epsilon', which is the error actually passed to the likelihood in L310 of inversion_pymc.py. This does vary with time, although in practice if a large min_error is specified it will default to this most of the time.
- My issue is coming from somewhere else: I'm fairly confident the likelihood is sufficiently broad, but some other step means that my modelled obs are coming out with very narrow error bars. I'm looking into what that might be now...

Feel free to add other issues with min_error to this one - just thought I'd summarise here.

Thanks for opening this issue Ben! I'd like to add that maybe a good way to add the min_error to the rhime output would be as a variable with the dimension (nsite), instead of (nmeasure), and also not as an attribute. Maybe a 'min_error' variable with dimension (nsite) that shows the input min error, and a 'model error' variable with dimensions (nsite, nmeasure) that holds the error values actually passed to the inversion for each site (which is the larger of the min_error and the pollution-scaled error)?

Thanks Ben! I'm working on incorporating some of the PARIS formatting code into inversions, which I think will help with this. (E.g. in PARIS formatting, epsilon is used explicitly in the outputs.) min_error was originally a single value, so I put it in the attributes of the output (since we put info about the priors and so on there). If you specify min_error and make calculate_min_error = None, that is still the case. Otherwise, min_error is actually an array, but still stored in the attributes of the output. It turned out that this worked with the PARIS formatting code without having to modify anything, so I left it as is, since the long-term plan was to move a lot of that code into inversions. So, if you use calculate_min_error, then the min_error attribute will actually be a vector with the same length as, say, Y (so it could have nmeasure as coordinates). Going forward, reporting min_error as a data variable, and reporting epsilon, sounds like a good plan. Also a warning for passing both min_error and calculate_min_error seems like a good idea. Alternatively, we could just have one parameter min_error in the ini that could either be a float value, or "percentile", or "residual" (this would probably mean fixedbasisMCMC would need minor changes). I think having a single parameter would be far more elegant for min_error.
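A minimal sketch of the warning idea discussed above (the function and its argument handling are hypothetical; in the real code this would live wherever the .ini options are resolved):

```python
import warnings

def resolve_min_error(min_error=None, calculate_min_error=None):
    """Pick the min-error setting, warning when both .ini options are given.

    Hypothetical helper: `min_error` is a float from the .ini file,
    `calculate_min_error` is a method name such as "residual" or "percentile".
    """
    if min_error is not None and calculate_min_error is not None:
        warnings.warn(
            "Both 'min_error' and 'calculate_min_error' were passed; "
            "'calculate_min_error' overrides 'min_error'.",
            UserWarning,
        )
    if calculate_min_error is not None:
        return calculate_min_error
    return min_error
```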
in category SmartHome

I'm building a house. I want to make it "smart". The goal is to save as much of my (or my family's) time as possible in the future. I needed a plan for how to do it. After some research I came up with the following architecture.

Firstly, I needed a base. The base system that will be the heart of everything. I had a few core requirements for it.
- It should be easy to use for the end user. I don't expect that my wife or kid will learn some more in-depth details in order to set up basic things themselves.
- No cloud required. I want to keep all my data locally. No one should ever be able to reach it from outside if I don't allow them to do so.
- Have good support. I'm not an expert in the area so I'll need help. It's about my home. If I make a huge fuckup it will be hard to revert it.
- Be extensible. I'm fine with choosing the manufacturer's featured devices, but there should be a way to plug in third-party devices as well.

After my research, I decided to choose Loxone. I've been to their partner training and I liked it a lot. I may write more about the company and its devices. Please let me know in the comments section below if you want to read more about it.

Loxone didn't develop everything I'd expect, so I need a way to communicate with third-party devices or even those I'll build myself. Loxone has a feature called Virtual Inputs and Virtual Outputs. Thanks to them, I can read or change a state using HTTP requests. So I needed a hub. I chose a Raspberry Pi 4 for that. Any Arduino or ESP8266 devices will communicate with the Raspberry Pi, and it will communicate with the Loxone Miniserver.

This decision has some weaknesses. The biggest is the fact that if the Raspberry Pi goes down, I won't be able to communicate with the other devices. That's why I set up some observability around the Raspberry Pi and will probably buy more of them to get better availability if something goes wrong. I set up a few Grafana dashboards with Prometheus metrics to make sure everything's OK. Here's one of them.

I'll cover topics like the devices I chose, technologies, and automation I have in mind, and a detailed description of how I build my custom devices as well. I'm extremely happy that I started the journey and cannot wait to show you what I've built!

Tags: #IoT #Loxone #raspberry #ESP8266 #arduino
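To make the hub idea concrete, here's a minimal sketch of a relay script running on the Raspberry Pi. The hostname, credentials, and virtual input name are placeholders, and the exact virtual input URL scheme and authentication depend on your Miniserver firmware, so check the Loxone docs before relying on this:

```python
import requests

MINISERVER = "http://miniserver.local"   # placeholder address
AUTH = ("user", "password")              # placeholder credentials

def push_virtual_input(name: str, value: float) -> None:
    """Forward a sensor reading (e.g. from an ESP8266) to a Loxone virtual input."""
    # Loxone virtual inputs can be written over HTTP; the path below is an
    # assumption and may differ on your firmware version.
    url = f"{MINISERVER}/dev/sps/io/{name}/{value}"
    requests.get(url, auth=AUTH, timeout=5).raise_for_status()

if __name__ == "__main__":
    push_virtual_input("temperature_livingroom", 21.5)
```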
I ran into this callback ordering issue while working on a Rails project recently and decided to investigate. After writing some test Rails apps I am pretty confident this is a bug in Rails 3.0.x. If you believe otherwise, please leave a comment or explanation.

Taking a step back from the actual code, in an ORM (Object-Relational Mapper) I would expect that callbacks from objects are consistent in their ordering. That is, if I have a callback that fires when an object has been saved to the database, that callback will fire at the same time (after the object has been saved to the database) regardless of any dependent objects or where the callback has been declared. (The original code snippets were lost; a reconstruction appears at the end of this post.)

Simple, right? Almost. The problem in Rails is that the after_create callback doesn't exactly work that way. When it fires is actually dependent on the order in which it was defined. What's more - this isn't actually the case with the after_destroy callback. The reconstructed code below demonstrates the issue.

Take a second and look at that - do you see what's wrong? The User#after_create callbacks are called at opposite ends of the Post#after_create callbacks, but the User#after_destroy callbacks are called at the end. What does this mean? Well, basically that after_create is not actually called after the record is created (and saved to the database) - it's called "after the record is created and any dependent relations defined before it are also created". Now that just seems wrong to me. Especially when the after_destroy callback behaves in a different way.

It seems that the flow should work like this:
- The user is saved. User#after_create callbacks are executed.
- The post is saved. Post#after_create callbacks are executed.

Especially when the opposite is true for destruction:
- The post is destroyed. Post#after_destroy callbacks are executed.
- The user is destroyed. User#after_destroy callbacks are executed.

The problem right now is that the User#after_create callbacks are being called in two places - once after the user is created (if they are defined before the has_many :posts declaration) and once after the posts are created (if they are defined after the has_many :posts declaration). If you're into TDD, you would check that the first case actually produces this result:

=> User - first after create
=> User - second after create
=> Post - first after create
=> Post - second after create

So why is this a problem? Why not just define all your callbacks before your relations? The first problem with this is that it is inconsistent - after_destroy and after_create should behave in the same manner. The second problem is that your code might fail and it's not obvious why. While this example is trivial, it illustrates how easy it is to mix up the order of definitions in Ruby. Unbeknownst to us, simply looking at the User class, this would actually cause our :do_something callback to be executed in the wrong order.
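Since the original snippets did not survive, here is a minimal reconstruction consistent with the post's description (Rails 3-era syntax; the class layout and callback messages are illustrative, not the author's exact code):

```ruby
class User < ActiveRecord::Base
  after_create  { puts "User - first after create" }   # defined BEFORE the association
  after_destroy { puts "User - after destroy" }

  has_many :posts, :dependent => :destroy

  after_create  { puts "User - second after create" }  # defined AFTER the association
end

class Post < ActiveRecord::Base
  belongs_to :user
  after_create  { puts "Post - after create" }
  after_destroy { puts "Post - after destroy" }
end

# Creating a user together with a dependent post shows the ordering issue
# described above (Rails 3.0.x behaviour):
user = User.new
user.posts.build
user.save!
# => User - first after create   (fires before the post is saved)
# => Post - after create
# => User - second after create  (fires after the post is saved)

user.destroy
# => Post - after destroy
# => User - after destroy        (both User callbacks fire at the end)
```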
CTRL+SHIFT+F opens the Format Cells dialog box with the Font tab selected.
CTRL+G Displays the Go To dialog box. F5 also displays this dialog box.
CTRL+H Displays the Find and Replace dialog box, with the Replace tab selected.
CTRL+I Applies or removes italic formatting.
CTRL+K Displays the Insert Hyperlink dialog box for new hyperlinks or the Edit Hyperlink dialog box for selected existing hyperlinks.
CTRL+N Creates a new, blank workbook.
CTRL+O Displays the Open dialog box to open or find a file. CTRL+SHIFT+O selects all cells that contain comments.
CTRL+P Displays the Print dialog box. CTRL+SHIFT+P opens the Format Cells dialog box with the Font tab selected.
CTRL+R Uses the Fill Right command to copy the contents and format of the leftmost cell of a selected range into the cells to the right.
CTRL+S Saves the active file with its current file name, location, and file format.
CTRL+T Displays the Create Table dialog box.
CTRL+U Applies or removes underlining. CTRL+SHIFT+U switches between expanding and collapsing of the formula bar.
CTRL+V Inserts the contents of the Clipboard at the insertion point and replaces any selection. Available only after you have cut or copied an object, text, or cell contents.
CTRL+W Closes the selected workbook window.
CTRL+X Cuts the selected cells.
CTRL+Y Repeats the last command or action, if possible.
CTRL+Z Uses the Undo command to reverse the last command or to delete the last entry that you typed. CTRL+SHIFT+Z uses the Undo or Redo command to reverse or restore the last automatic correction when AutoCorrect Smart Tags are displayed.

Key Description
F1 Displays the Microsoft Office Excel Help task pane. CTRL+F1 displays or hides the ribbon. ALT+F1 creates a chart of the data in the current range. ALT+SHIFT+F1 inserts a new worksheet.
F2 Edits the active cell and positions the insertion point at the end of the cell contents. It also moves the insertion point into the Formula Bar when editing in a cell is turned off. SHIFT+F2 adds or edits a cell comment. CTRL+F2 displays the Print Preview window.
F3 Displays the Paste Name dialog box. SHIFT+F3 displays the Insert Function dialog box.
F4 Repeats the last command or action, if possible. CTRL+F4 closes the selected workbook window.
F5 Displays the Go To dialog box. CTRL+F5 restores the window size of the selected workbook window.
F6 Switches between the worksheet, ribbon, task pane, and Zoom controls. In a worksheet that has been split (View menu, Manage This Window, Freeze Panes, Split Window command), F6 includes the split panes when switching between panes and the ribbon area. SHIFT+F6 switches between the worksheet, Zoom controls, task pane, and ribbon. CTRL+F6 switches to the next workbook window when more than one workbook window is open.
F7 Displays the Spelling dialog box to check spelling in the active worksheet or selected range. CTRL+F7 performs the Move command on the workbook window when it is not maximized. Use the arrow keys to
709 Killian St. SE Atlanta, GA 30312
404.316.3749 | firstname.lastname@example.org

Georgia Institute of Technology - Atlanta, GA
B.S. in Electrical Engineering, Fall 2012 - Spring 2015

Berry College - Rome, GA
B.A. Dual-Degree Program, Fall 2009 - Spring 2012

Co-founder & Chief Technical Officer, FIXD Automotive
August 2015 - August 2023 | Atlanta

FIXD Automotive is a company that makes car ownership as easy and affordable as possible for the everyday driver. The flagship product is a Bluetooth OBD-II sensor and mobile app that diagnoses car problems, explaining error codes in simple language. While there, the company grew rapidly, being the fastest-growing lower middle-market company in Georgia in 2018. As of Q2 2023 it has grown to 40 employees, graduated from ATDC, and sold around 2 million units, all while remaining profitable and never raising any outside capital.

- Wrote the initial production versions of both Android and iOS native mobile applications, and the RESTful API backend in Ruby on Rails to power them. Also worked on projects in Node.js, Go, Python, Svelte, and Flutter.
- Oversaw the release of several iterations of the FIXD Sensor, a dual-band Bluetooth device. The latest-generation model reduced the original cost of goods by ~75%. Worked closely with our manufacturer in Shenzhen, China to ensure product quality, having visited the facility on multiple occasions.
- Built out the cloud infrastructure on AWS using established best practices for cloud deployments, leveraging Docker containers and the Convox PaaS for application deployments. Provisioned using infrastructure-as-code practices via Terraform. Services include EC2, RDS, S3, ECS, CloudFormation, Lambda, Athena, and CloudWatch. Grew to 125 vCPUs across 5 separate organizational accounts.
- Reverse-engineered diagnostic protocols used by vehicles, information that ranges from sparse to proprietary. Researched, documented, and coded interfaces to access both standardized and proprietary data for tens of thousands of year-make-models. As the odometer is not mandated by OBD-II, created a proprietary algorithm for predicting the mileage of a vehicle based on other outputs from the car.
- Recruited, trained, and grew the Engineering team to a size of 16, split across 6 teams. Focused on developing talent over recruiting experience. Steered culture through one-on-ones, code review, and the creation of an Engineering Handbook. Defined software development processes and practices based on Agile methodologies. Combined git flow with a robust CI/CD pipeline to enable developers to deploy code constantly while still ensuring the product met our quality standards.
- Built a data platform, and later a team to maintain it, around the open-source "modern data stack." Extracted GBs of data per week from dozens of data sources into Snowflake, applied business logic, reported KPIs for the company, and provided tools and training for employees to perform their own analyses. The stack included Meltano, DBT, Dagster, and Metabase. Personally was a major contributor to these young tools, sending dozens of pull requests and playing a part in design and direction.
- Built and migrated to our own in-house e-commerce platform from Shopify in only 12 weeks, which outperformed the previous one in KPIs. Included a highly optimized direct-to-checkout experience, with support for one-click post-purchase up-sells and subscription-based products. The company plans to productize the platform and begin offering it to other companies in 2024.
- Set policies and ensured compliance around IT security and user privacy. As Data Protection Officer, brought the company into compliance with GDPR and CCPA, and automated customer data deletion requests.

Lead Software Engineer, TechJect
May 2015 - August 2015 | Atlanta

Returned to TechJect after completing my undergrad program. Led a team of 3 software engineers. The team overcame several technical challenges:
- Implemented a control interface via SPI to allow a drone's flight control board to communicate with the Android SoC board. This interface was used to send flight control commands and receive sensor and telematics data.
- Optimized the performance of the live video streams, allowing 20-25 FPS at 1080p resolution while maintaining low latency, simultaneously with the video feed from the small ground-facing flight control camera.
- Passed raw sensor data from the flight control board to the Android board, and made that available for telemetry and diagnostics in the client app.
- Implemented a Kalman filter, a sensor fusion algorithm which combined GPS and sensor data from the Android board with the sensor data from the flight control board, providing a more accurate determination of the drone's position.
- Configured and administered an on-site build server for improving Android OS and firmware build times, reducing the build time by over a factor of 18.

May 2015 - August 2015 | Atlanta

After completing my undergraduate degree, I founded a technology development consultancy. Embedded electronics and software development, specializing in IoT and Bluetooth electronics products, pro-audio projects, and full-stack web design.

Electrical & Computer Engineer, TechJect
May 2014 - December 2014 | Atlanta

TechJect designed consumer drones, including the Robot Dragonfly, which was the first crowd-funded project to break $1 million in funding. The company was run and largely staffed by Electrical Engineering PhDs. Joined the company as the only non-firmware software engineer while still in undergrad.
- Assisted in the development of an embedded ARM SoC development board with a quad-core CPU, a basic sensor suite, GPS, WiFi, Bluetooth, and dual 12MP cameras. Interacted with off-shore manufacturers and maintained the firmware and kernel drivers for the board, which ran a custom build of the Android operating system.
- Created micro-UAV control Android applications, including system code running on the drone and an end-user application for controlling it without the need for an RC remote. In addition to providing complete drone flight controls over WiFi or Bluetooth, the application was capable of streaming live video from two cameras over WiFi with sub-200 millisecond latency. It also contained a basic PID control algorithm, allowing the drone to maintain its altitude automatically.
- Developed and administered the email server and a collection of WordPress sites, including a WooCommerce e-commerce site and customer support sites, and wired the office with CAT 6.
- Introduced peers to git and implemented version control practices for the company.

January 2014 - June 2014 | Atlanta

Joined an Atlanta Ventures-funded startup during their progress through the Flashpoint accelerator at Georgia Tech. The company offered crowd-sourced audio-to-text transcription as a service. Worked directly with the CTO to build and manage a web application in the Zend PHP Framework. Implemented new application features and patched bugs in a rapid development cycle.
Advised on technologies and company direction, led initial research into machine-learning algorithms for automated transcription. Through the Flashpoint program, learned a rigorous customer discovery process.

Distributed Systems/AIX Co-Op, Norfolk Southern Corporation
September 2012 - August 2013 | Atlanta

Worked on a number of different projects over two co-op sessions.
- Developed an application using Visual Basic to connect to a database of datacenter inventory and generate a Visio document for visualizing datacenter layout and rack contents. While working on that project, shadowed administrators of the 10,000 sq. ft. datacenter, and worked with the diagnostic reporting and resource inventory database team to collect and correct data.
- Developed several internal department applications in Microsoft SharePoint. There was no documentation in the organization for how to create applications for SharePoint, so after determining best practices, created an extensive guide and documentation for developing such applications.
- Deployed and solely administered a DokuWiki server for departmental documentation. This included writing several custom DokuWiki plug-ins in PHP and training employees on proper usage.
- Wrote server automation scripts, both shell and Perl, for server administration and worked closely with AIX administrators managing QA and Production servers.

Student Supervisor, Technical Support Desk, Berry College OIT
Aug 2009 - April 2012 | Mount Berry, Georgia

Provided technical support for faculty, staff, and students via phone, email, and in person. Created and managed service requests and directed departmental calls.

Forbes 30 Under 30, 2019
Included along with co-founders on the 30U30 list in the Manufacturing & Industry category in recognition of the growth of FIXD Automotive.

Finalist, Static Showdown 2015
Solo submission for Static Showdown 2015, a 48-hour hackathon for static apps: QuickComments.js, a drop-in comments system for websites. Built the comments system, Grunt tasks, documentation, and demo website. Roughly three times faster than Disqus.

Winner, Music Hack ATL 2014
As part of a team of 4, in 24 hours, created Rockscör, an aggregator of data for bands to estimate their audience size and influence in different markets for marketing and tour planning. Wrote the entire application as the only technical team member, integrating data from 5 REST APIs.

Best Hardware, HackBurdell 2014
Solo submission for the Georgia Tech Invention Studio hackathon that served as a precursor to HackGT. Built an MP3 player for attachment to a car stereo, with a driver-friendly interface.

Volunteer, Code for Atlanta
January 2017 - March 2020
Code for Atlanta, a local brigade of Code for America, is a group of civic-minded technologists, designers, and topic experts using our skills to improve Atlanta and the world. As a Project Lead volunteer, led teams working on various projects during our regular hack nights.

Organizer, Startup Exchange
August 2013 - May 2015
Startup Exchange is a student organization to foster entrepreneurship and hacker culture at Georgia Tech. Co-founded the Maker team. Developed and maintained the website, taught classes on web development with Ruby on Rails to students, organized three hackathons and attended over a dozen.
- “Kotlin Coroutines in Android”, Atlanta Android Club - Atlanta, GA, March 2018 - “Kotlin Coroutines in Android”, Connect.Tech - Atlanta, GA, September 2017 - “Custom modular synthesizer”, Atlanta Tech Demo Night - Atlanta, GA, May 2015 - “Error Correction Over Noisy Channels”, Berry College - Rome, GA, March 2012
Is it a good idea to use lambda expressions instead of delegates?

Lambda expressions can be used instead of delegates. I'm not sure that it is a good way; I think it is more handy than delegates, but I'm not sure that it is a good idea. Is it a good idea to use lambda expressions instead of delegates? Why?

Why do you think lambdas are not "a good way" or not "a good idea"?

You have an interesting use of formatting, like I somehow lack reading comprehension.

It depends on the case. Don't you have an example where both ways make sense and where you are not sure which is better?

@Stefan, there is no specific example in my mind. I just work on web applications and I'm addicted to using lambda expressions. I'm really afraid that overusing them will cause issues in my application; that's why I asked this question.

I don't really understand the question; it's a bit like asking "which is a better pet, a dog or a mammal?" Well, a dog is a mammal, so that's not really a choice. Lambda expressions are used because they are convertible to delegates, so asking which is better doesn't really present a choice; if you're using lambdas, you're already using delegates. Can you ask the question such that there's a clear choice? Do you mean is it better to use the lambda syntax for an anonymous method than the old C# 2 syntax?

@Eric: I think that's what he means. It's what I got from the question.

From a compiler perspective it makes no difference. From a developer perspective lambda expressions improve readability in most cases. Write code for yourself and other developers, not for your computer. If the lambda is easier to understand in your case, it's a good choice. Most lambda expressions will be compiled to delegates anyway. (Excluding expression trees.)

You can create a delegate instance in several different ways:

Using a C# 3 lambda expression:

Func<int> getFive = () => 5;

Using the C# 2 anonymous method syntax:

Func<int> getFive = delegate { return 5; };

Using a non-anonymous method (available since C# 1):

int GetFive() { return 5; }
…
Func<int> getFive = GetFive;

I think each of these has its uses. The delegate { } syntax has the advantage that you don't have to declare the parameters if you don't need them. Lambda expressions are very succinct and can be translated to expression trees instead of delegates, which is very useful for LINQ-to-some-DB, and means you can use the same syntax for querying in-memory structures and databases. Using non-anonymous methods means you can easily reuse them, which is also suitable for long methods. All the cases above are translated to equivalent code. The only difference is whether the method is visible to you. Both lambda expressions and anonymous methods can also be closures, i.e. they can capture local variables.

Yes. You can do delegates by lambda, reduce the number of lines of code, and make the code more readable.

Lambada? Haven't heard that in a while.

I know that I can do this, I am currently doing this; I asked whether it is a good idea. I think it may reduce performance or cause another problem.

@Nasser Hadjloo, no performance problem. For the compiler both are the same.
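For completeness, a small stand-alone illustration of the closure point from the answer above (my example, not from the thread):

```csharp
using System;

class ClosureDemo
{
    static void Main()
    {
        int counter = 0;

        // Both forms capture the local variable 'counter':
        Action lambdaIncrement   = () => counter++;
        Action delegateIncrement = delegate { counter++; };

        lambdaIncrement();
        delegateIncrement();

        Console.WriteLine(counter); // prints 2
    }
}
```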
Have you increased your understanding of AI along with me? Are you encouraged or interested in studying more about artificial intelligence? Where do you do that? Where does it begin? Where does it end?

My colleague Steven Mc Auley came to AI through psychology. Does that seem counterintuitive? I guess it does a bit. Why does psychology matter when it comes to AI? Daniel Goldman has a few thoughts in his article "The Psychology of Artificial Intelligence."

"True AI, also known as artificial general intelligence or AGI…And that subfield would benefit greatly from greater interest by psychologists, and better communication between computer scientists and psychologists."

Why? Well, if AI is on par with humans, then does it stand to reason that it will also suffer the same ailments as humans?

"Mental illness in machines will be just as much a problem as it is in humans." — Daniel Goldman

I never thought of that! Did you? So no matter where you start, or what your industry or specialty or education is, there is a way to connect it to the advancement of AI. Maybe you aren't going to be studying AI, but you can use it to improve your business. If you are in the service industry, how can you serve your customers better? If you are an accountant, how can you implement the best practices of AI in your programs? There is an overlap somewhere.

How do I start studying artificial intelligence?

I took this from the perspective of someone new to the topic — just like me. There are a ton of free resources for beginners to the topic of AI who want to learn its foundational elements. Devon Sun shares his top ten books as illustrated here, as well as sharing his reviews and recommendations. Now, I am a bit biased, as "Prediction Machines: The Simple Economics of Artificial Intelligence" is written in part by my former professor Ajay Agrawal, for whom I have a great deal of respect, alongside Joshua Gans and Avi Goldfarb.

Steven Mc Auley suggests the following book as the first step to understanding: "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark, as it addresses his topic of human-centred AI and why this matters to and for everyone.

Or you can start with the website AI Playbook — intended to help newcomers (both non-technical and technical) begin exploring what's possible with AI. Another good resource that covers the general concepts of AI on YouTube is "AI, Deep Learning, and Machine Learning: A Primer and The Promise of AI."

But perhaps my best find? You can learn from Iron Man himself! True! There is actually a new YouTube show "The Age of AI" hosted by Robert Downey Jr., and while it felt very showy at first, it contains some interesting highlights.

For those that intend to go into programming or developing, it is a broad field, so where do you start? AI covers many areas and very deep areas of computer science, mathematics, hardware design, and even biology, anthropology, and psychology. If you want to be a creator then….

Which university is best for artificial intelligence?

Traditionally we would consider offline universities, but if we are joining Industry 4.0 then it is best to go online, where courses can be accessed by anyone and everyone. A great start is "AI for Everyone" by Andrew Ng on Coursera to kick things off. The core point here, and one that resonates for me, is that "AI is not only for engineers". Business — all business — leaders will be affected by the rise of AI. So every business leader needs to know the basics.
Incidentally, this is the course that I signed up for. It is broken down into four relevant segments and you can learn:
- What is AI — here I had a head start thanks to the AI journal!
- Building AI projects — relevant to my experience as a project manager
- Building AI in your company — how to translate what AI means to your business and your projects
- AI and society — how it impacts our every day and what you need to know

It covers all the basics of understanding and allows the learner to go away with some knowledge that can propel AI implementation within their own company, but only as it aligns with the overall strategy. Not arbitrarily, as is often the case.

How long does it take to learn AI?

The course is approximately five hours and can give you a basic foundation, but realistically, like any learning, it can take much longer to gain a fundamental understanding and application of AI. The standard rule for creators, programmers, and developers to gain expertise is anywhere between 1–5 years. The issue here is that AI is developing at such a rate that university courses — even online courses — are having trouble keeping up and sharing relevant use cases. Most content has already changed by the time you are in year 2! As with any subject, how much time you commit will impact how fast you learn, and so if your industry is about to undergo change and adopt AI, faster is better!

What skills do you need for AI?

The distinction here: it really depends on whether you want to develop AI or implement AI. For developing AI and being a part of the programming game, the three areas or skills that consistently came up were as follows:
- Math: statistics, probability, predictions, calculus, algebra, Bayesian algorithms, and logic.
- Science: physics, mechanics, cognitive learning theory, language processing.
- Computer science: data structures, programming, logic, and efficiency.

Which is the best programming language for AI?

Again, if we are talking about programming, there are a few languages that seem to come up in my research. It seems the top language commonly used for AI projects is Python. Python is considered to be first on the list of AI development languages due to its simplicity.

I am not a programmer, nor do I intend to be, but if I had to have a conversation about it, I now know that it is a combination of math, science, and computer science, and that Python is the most commonly used programming language.

The impact of AI on business, everyday business, is what keeps me poised to keep learning. When digital transformation first came into play, it was the same. I just about got on top of how digital can simplify and increase our impact in the workplace. Now digital tools are integrated into every aspect of our lives, and that took less than 10 years. AI takes it to the next level, and attention must be paid or any of us can easily fall behind.

As I am a project manager by trade, this is especially useful when looking for experts to help my projects succeed, to analyze possible solutions, and to know where we may need to hire experts or specialists.

That's it for section 3. Section 4, starting Day 14, is a deep dive into these same learnings. Stay tuned — I will continue to share my progress as I complete the journal!
Type: Posts; User: talemu

September 9th, 2011, 12:56 PM
I have a project for plotting an electric field strength profile on a certain map. The electric field strength is to be calculated based on user inputs - I have Java code for that. I need to get...

June 22nd, 2011, 02:40 AM
Is it possible to have a multi-dimensional array of double type, with a size which holds nearly 10 Mbits of data? Is it possible to transfer this amount to an applet, from a servlet?

June 17th, 2011, 12:30 PM
myArray[i] = myscanner.nextDouble();
myscanner.next() expects char inputs

June 16th, 2011, 08:11 AM
Can I use JNI to process my files at the server side - with servlets(?) - and link it with applets? I happen to have C code for my computational needs.

June 15th, 2011, 12:16 PM
What about using other URLs like www.xxxxx.com? May firewalls block the transfer of text files from web server to applets? I am letting them access text files from web servers without servlets.

June 15th, 2011, 10:01 AM
I wrote an applet accessing a text file from a server for further processing. Right now, it's working fine - the applet and the server running on the same PC - "localhost:8080...".

I have a program which accepts inputs such as latitude, longitude, towerHeight, signalPower, and frequency. I need my program to do this task. - each time a user clicks an "Add" button, an...

Due to the size of my file on the server (nearly 80MB), I am now compelled to consider server-side processing, because the applet is not able to process this much data on the browser side.

When I finish the applet program, can I compile it as a .jar or .war file so that it's accessible via the web? I need some files (*.txt) to be loaded at applet start-up - at the init() function...

This forum is such an active and awesome one; I am amazed by your immediate and precise help, mommy next door(?)! Probably my last query will be about the idea of signing. The whole concept of...

Thanks for your swift reply. I have another question. I am now reading and writing files to/from disk in an applet program. But if I deploy the applet on a web server, is it possible that my...

I am new to Java programming. I am assigned to write an applet simulator (originally in Matlab code, but now it needs to be a Java applet for web interactivity) which draws graphics on a...
OPCFW_CODE
Table join with multiple conditions I'm having trouble writing the condition for joining the tables. The highlighted parts are the 3 conditions that I need to solve. Basically, there are some securities where, for their effective term, if the value is between 0 and 2 it has score 1, if the value is between 2 and 10 it has score 2, and if the value is bigger than 10 it has score 4. For the first two conditions, I handle them in the query's WHERE clause like this; however, for the third condition, where DescriptSec is empty, I'm not quite sure what I can do. Can anyone help? You should not post screenshots to give us code and data. Post the actual text and some DML. Also, your screenshots might reveal potentially sensitive user or company information. I have submitted an edit to remove them, but you should remove them ASAP. You can check for an empty value and set it to either NULL or some significant integer value using a CASE statement (a sketch follows at the end of this exchange). You should read this: https://meta.stackoverflow.com/a/285557/3043 Can you change the lookup table ([Risk].[dbo].[FILiquidityBuckets]) you are using? If yes, do this: Add bounds so that the table looks like this: Metric - DescriptLowerBound - DescriptUpperBound - LiquidityScore Effective term - 0 - 2 - 1 Effective term - 2 - 10 - 2 Effective term - 10 - 9999999 (some absurdly high number) - 4 Then your join condition can be this: ON FB3.Metric='Effective term' AND CAST(sa.effectiveTerm AS INT) BETWEEN CAST(FB3.DescriptLowerBound AS INT) AND CAST(FB3.DescriptUpperBound AS INT) Please note that BETWEEN is inclusive, so in the edge cases (where the value is exactly 2 or 10) the lower score will be captured. I can see some problems: the effective term in the table with the sa alias is a float, so you should consider rounding up or down. Overall, a lot of things can be changed/improved, but I tried to offer an immediate solution. Hope this helps. But this is exactly what I did. My problem is still that I can't include the case when effectiveTerm is bigger than 10. You did not do this. Take a test case: the first row, where effectiveTerm is 19.273... In your current logic it returns NULL because you are not capturing this record with any condition. You have an empty string in DescriptSec. Replace that string with a really high number and it will work. I was just pointing at the bad design of the lookup table and the join condition.
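A minimal sketch of the CASE-based approach mentioned above (sa.effectiveTerm and the bucket boundaries come from the question; the source table name is a placeholder assumption):

SELECT sa.*,
       CASE
           WHEN sa.effectiveTerm > 10 THEN 4   -- open-ended top bucket, no upper bound needed
           WHEN sa.effectiveTerm > 2  THEN 2   -- between 2 and 10
           WHEN sa.effectiveTerm >= 0 THEN 1   -- between 0 and 2
           ELSE NULL                           -- falls outside every bucket
       END AS LiquidityScore
FROM dbo.SecurityAttributes AS sa;             -- hypothetical table name

Expressing the top bucket as a plain inequality sidesteps the empty-string upper bound in the lookup table entirely, at the cost of hard-coding the boundaries in the query.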
STACK_EXCHANGE
Following the global financial crisis which erupted in 2007, various rates of return in Europe started to become negative. Since 2014, negative rates have become persistent and widespread. Initially, many cash investors have been reluctant to accept negative rates, including parties to repo transactions being remunerated on deposits of cash margin and on income due on securities they have given as collateral. Before the crisis, repo was the only financial instrument which paid a rate of return that could become negative under normal market conditions. Negative repo rates can happen when a particular collateral security is subject to exceptional borrowing demand and/or reduced supply in the repo market. In order to borrow these securities, buyers have to tempt potential sellers with cheap cash. 'Cheap' means a repo rate less than the GC repo rate. When the repo rate on a particular collateral asset falls below the GC repo rate (see question 8), that asset is said to have gone 'on special' (see question 9). In the case of very special collateral, the repo rate can fall so far that it becomes negative. This naturally happens more frequently when the GC repo rate is already close to zero, as there is less distance for a special repo rate to fall in order to become negative. During periods of financial stress in Europe, GC repo rates in several currencies became negative. This meant that most, if not all, securities in a particular currency were subject to exceptional demand. Typically, these securities were the government bonds of strong economies and were strongly sought after because they were seen as 'safe haven' assets. Since 2014, negative rates have also been driven by the exceptional lending extended by the ECB and other European central banks in order to try to head off deflation, as well as regulatory disincentives to wholesale deposit-taking by banks (who try to deter depositors by quoting negative interest rates). What does a negative repo rate mean? A negative repo rate means that the buyer (who is lending cash) effectively pays interest to the seller (who is borrowing cash). For example, consider a one-week repo with a purchase price of EUR 10 million at a repo rate of -0.50%. The repurchase price will be: 10,000,000 x (1 + (-0.50/100) x (7/360)) = 9,999,027.78 The buyer (cash lender) pays the purchase price of 10,000,000 and receives the repurchase price of 9,999,027.78, therefore making a loss; whereas the seller (cash borrower) receives the purchase price of 10,000,000 and pays the repurchase price of 9,999,027.78, therefore making a gain. (A small worked sketch of this arithmetic appears at the end of this FAQ section.) Problems caused by negative rates for repo transactions These problems fall into two categories: - Difficulties arising from the fact that standard repo contracts, such as the GMRA, have been drafted under the implicit assumption that GC repo rates would only ever be positive. When GC repo rates are negative, problems arise: - In the case of the early termination of a buy/sell-back following a default, or in calculating the exposure on the transaction for the purpose of variation margining, where the payment of a coupon, dividend or other income on collateral is assumed to be reinvested at the repo rate on the transaction before being passed to the seller by means of a reduction in the repurchase price (in lieu of a manufactured payment). - Where parties have agreed to use a repo rate as the interest rate to be paid on cash margin.
- Because a negative repo rate creates a perverse incentive for the seller to fail to deliver collateral on the purchase date. - Initial disagreements between parties, due to the novelty of negative interest rates in general, over the interest rate to be paid on cash margin. When income is paid on collateral in a repo, it is paid to the buyer, who is the legal owner. But the buyer is obliged to make an equivalent payment to the seller. In a repurchase transaction, the payment is due immediately and is often called a 'manufactured payment' (see question 22). But in a buy/sell-back, this payment is deferred until the repurchase date, when it is deducted from the repurchase price. In the interim, the buyer is obliged to reinvest the value of the payment in order to compensate the seller for the delay in reimbursement. If (1) such a buy/sell-back is terminated because of a default by one of the parties or (2) the exposure on the transaction is being calculated for the purpose of variation margining, a reinvestment rate has to be assumed. The reinvestment rate is given in the formula for the Sell Back Price (which is equivalent to the repurchase price) in the Buy/Sell-Back Annex of the GMRA (see paragraph 2(a)(iii)(y)): (P + AI + D) − (IR + C) where: P = the Purchase Price, i.e. the clean price of collateral in the case of a buy/sell-back. AI = an amount equal to Accrued Interest at the Purchase Date, paid under paragraph 3(f) of the Buy/Sell-Back Annex, which is coupon interest accrued on the collateral security since the last income payment date. D = the Sell Back Differential (equivalent to repo interest). IR = the amount of any coupon income in respect of the Purchased Securities payable by the issuer on or, in the case of registered Securities, by reference to, any date falling between the Purchase Date and the Repurchase Date, which is a coupon, dividend or other income paid during the term of the buy/sell-back. C = the aggregate amount obtained by daily application of the Pricing Rate for such Buy/Sell-Back Transaction to any such income from (and including) the date of payment by the issuer to (but excluding) the date of calculation, which is the reinvestment income on the income payment calculated at the repo rate on the buy/sell-back. If the repo rate used to calculate C is negative solely because the collateral is special, it is not appropriate to use it as a cash reinvestment rate. However, unless the parties agree to amend this formula, they will be obliged to follow it. In practice, this problem may not be significant for parties who are active dealers in buy/sell-backs, given the likely alternation in the direction of underlying positions and payments of income, as well as the likely infrequency of income payments. Where the interest rate to be paid on cash margin is a repo rate Under paragraph 4(f) of the GMRA, parties holding cash margin are obliged to pay interest "at such rate, payable at such times, as may be specified in Annex I… or otherwise agreed between the parties…" Parties could have agreed to use the repo rate on the underlying transaction, particularly where that transaction is being margined in isolation. In that case, if the agreed repo rate goes on special, in other words, if it falls below the GC repo rate, that rate is no longer representative of the going rate for cash reinvestment. The spread between a special repo rate and the GC repo rate represents a borrowing fee for the specific collateral asset.
Using a special repo rate as a cash investment rate is therefore implicitly charging a fee that has nothing to do with the value of cash. Accordingly, the use of a special repo rate violates the principle that the use of a security as collateral in a repo should not cause the seller to gain or lose on his investment in that security as a consequence of having repoed it out. However, whatever the economic argument, a party cannot unilaterally change the cash reinvestment rate previously agreed with its counterparty. It must seek to negotiate a new interest rate with the counterparty. The perverse incentive created by negative repo rates to sellers to fail to deliver on the purchase date If a seller fails to deliver collateral on the purchase date of a repo, he will not receive or be able to retain the purchase price until he does deliver. However, the seller will remain obliged to pay repo interest to the buyer, even if he delivers the collateral late and therefore has delayed use of the cash. Having to pay interest without having the use of cash is a cost that provides an incentive to the seller to remedy a failure to deliver as well as providing compensation to the buyer. However, if the repo rate on a particular transaction is negative (whether this is because the collateral is on special or because GC repo rates have gone negative), the automatic cost of failing to deliver collateral becomes a perverse incentive to fail. This is because the repo interest due to be paid is negative, which means it has to be paid by the buyer, despite the fail being caused by the seller. Thus, the seller will be rewarded for his failure!* To eliminate the perverse incentive arising from negative repo rates, the ICMA issued a recommendation in November 2004 on behalf of the then European Repo Council (ERC) that, when the seller fails to deliver on the purchase date of a negative rate repo, the repo rate should automatically reset to zero until the failure is cured, while the buyer has the right to terminate the failed transaction at any time. Subsequently, this recommendation has been included as an optional supplementary condition in Annex I of the GMRA 2011. For parties using the GMRA 2000, it is best practice to adopt the ICMA recommendation by an agreed amendment to the GMRA or, if that is not practicable, by inclusion in confirmations. Disagreements between parties due to the novelty of negative interest rates The negative interest rates that appeared following the crisis that erupted in 2007 were historically unusual, episodic in appearance and not expected to persist. Many parties therefore felt that it was inappropriate to apply negative rates to cash margin paid under repo agreements and to the reinvestment of income payments on collateral in buy/sell-back. However, as already explained, whatever the economic argument, a party cannot unilaterally change the cash reinvestment rate previously agreed with its counterparty. It must seek to negotiate a new interest rate with the counterparty. Since 2014, it has become apparent that negative interest rates are likely to persist for some time in many currencies. They have become a ‘new normal’. It is now no longer possible to sustain an argument that negative interest rates are some sort of aberration. What is the most appropriate cash investment rate for use in repo transactions? The most appropriate rate for the reinvestment of cash margin and collateral income in buy/sell-backs is the GC repo rate for the currency. 
In the case of cash margin, this should be the overnight GC repo rate, given that margin can change daily. In the case of the reinvestment of collateral income in buy/sell-backs, the theoretical choice would be a GC rate for a tenor equal to the interval until the repurchase date (the reinvestment period). However, GC repo rates for some tenors may be difficult to agree, in which case the next best choice would also be the overnight GC repo rate (depending on the perceived roll-over risk). If it is not possible to agree on the fixing of an overnight GC repo rate, the most pragmatic alternative would be to use a recognized overnight unsecured interbank deposit rate benchmark. Under normal market conditions, there should not be much difference between overnight secured and unsecured rates. And in practice, such overnight indexes are already commonly used in the repo market as cash reinvestment rates. *Even at zero or low positive repo rates, there is a perverse incentive on the Seller to fail, inasmuch as a failure to deliver creates a free option on the repo rate. If the repo rate rises subsequently, the Seller can cure the fail with collateral borrowed through a separate reverse repo. He will owe interest at the original repo rate on the cash he receives on the repo on which he has just delivered, but will receive interest at the new higher rate on the cash he gives on the reverse repo.
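To make the negative-rate arithmetic in the worked example above concrete, here is a minimal sketch of the calculation (illustrative only, assuming the ACT/360 money-market day-count convention used in the example):

# Repurchase price of a repo under the ACT/360 convention (illustrative sketch)
def repurchase_price(purchase_price, repo_rate_pct, days):
    return purchase_price * (1 + (repo_rate_pct / 100.0) * (days / 360.0))

# The one-week EUR 10 million repo at -0.50% from the example above
price = repurchase_price(10_000_000, -0.50, 7)
print(f"{price:,.2f}")  # prints 9,999,027.78: the buyer gets back less cash than it lent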
OPCFW_CODE
KPIT PAPER ON 1st MARCH 2008 Hi everyone. I am Rahul A Pardeshi from Rajarambapu Institute of Technology, Sangli. Before starting I want to thank God, my parents and my friends. The aptitude test started at 10.00 am. Students from 9 different colleges appeared for the test. The aptitude results were out by 12.30. The first name announced was mine. I was very happy, but the pressure was building. They shortlisted 76 out of about 550. After the aptitude test they had the PPT. The G.D. was cancelled as only 76 of us were there. After that, there were the T.I. and P.I. They need students who are strong technically and good at soft skills. One piece of advice: be confident and keep smiling throughout the day. Don't lie anywhere throughout the process; be simple. Aptitude test pattern: the test is of 35 marks for 30 minutes, with no sectional cutoff. There is 1/4 negative marking, so keep in mind that for every right answer you get +1 mark and for every wrong answer -0.25 marks. There are two sections: 1) Quantitative (20 questions) 2) English (15 questions). In the quantitative section there are 20 questions. They are really time-consuming. Don't try to solve all the questions. Solve English first and try to solve all the questions in English; they are easy. While solving, don't guess on any question; solve only those questions which you are confident about. Out of the 20 questions in quant, even if you solve 10 questions correctly, that would be enough. Topics to work on: 1. Permutation and Combination. 2. Profit and Loss. 3. Time, Distance and Speed. 4. Pipes and Cisterns. 5. Alligation and Mixture. Solve the English section first. There are 15 questions on sentence correction, but remember that there is negative marking of 0.25. There are two separate interviews. My interview lasted about 25 minutes. They are really cool people. Just be confident and keep a smile. ME: Please, may I come in, Sir? INT: Yes Rahul, please come in. (I was shocked; he knew my name already. Then I saw that he had my resume in his hands, which they had collected during the PPT.) ME: Good evening, Sir. INT: Good evening. INT: Sit, be comfortable. ME: Thank you, Sir. INT: How was the day? ME: Fantastic, Sir. (With a huge smile) INT: OK, good. He looked at my resume. INT: So, you are from St. Jude School. Where is this school? ME: Sir, it's in Pune. INT: Did your school have any branches? Because I am also from St. Jude School (Calcutta). ME: No Sir, no branches. INT: Explain your project to me. (I had written in my resume about the mini project that I am currently doing and a future project.) INT: So, which phase are you in now? ME: Sir, we have prepared the SRS and are currently working on the FP. INT: What is SRS? ME: The Software Requirement Specification document. (Explained its contents) INT: I think SRS is the System Requirement Specification. (Just to check your confidence.) ME: No Sir, it's the Software Requirement Specification. (With a smile) INT: Explain the SDLC. INT: So which language do you like? ME: C and C++. INT: What are the features that C++ contains and C doesn't? ME: Explained the features of C++. INT: OK, can you write a program for the FIBONACCI series? INT: Process models in software engineering. ME: Explained two of them. INT: Thank you, Rahul. (He shook hands with me.) ME: Thank you, Sir. Have a good day. As I came out, they told me to wait in the waiting room. After 10 minutes they told me that I could go for the second round (P.I.). I was the first person for HR. The interview was a bit of a stress interview. They test only confidence and communication skills. INT: Tell me about yourself, something that's not in your resume. INT: What are your hobbies?
ME: Playing snooker, listening to music, and I like to think, to think in all dimensions. INT: What are your weaknesses? INT: So what do you do to overcome them? INT: How do you do that? INT: I will ask you two questions. ME: Yes, Sir. INT: If a raw egg is dropped on a concrete floor, it will crack. What will you do so that it doesn't crack? ME: (thought for a while) Sir, I will not drop the egg. INT: Do you know the Bay of Bengal? ME: Yes, Sir. INT: In which state is the Bay of Bengal? ME: Sir, it's a sea; it cannot be in any state. INT: I want the state. ME: West Bengal. (The expected answer was: the Bay of Bengal is in the LIQUID state.) INT: OK. You may go. ME: Thank you, Sir. After that I came back to the waiting room, and within one hour the results were declared. Then came my name in the list, and I was the happiest person on Earth.
OPCFW_CODE
/*
 * Copyright 2015 Your Name <your@email.address>
 * All rights reserved. Distributed under the terms of the MIT license.
 */

#include <storage/Directory.h>
#include <storage/Entry.h>
#include <storage/File.h>
#include <storage/FindDirectory.h>
#include <storage/Path.h>
#include <translation/TranslationUtils.h>

#include <string.h>
#include <stdio.h>

#include "MessageEditor.h"

MessageEditor::MessageEditor()
	:
	BApplication(APP_SIGNATURE)
{
	//RegisterMime();
	mWindow = new MessageEditorWindow(100.0, 100.0, 600.0, 500.0);
	mWindow->CenterOnScreen();
}

MessageEditor::~MessageEditor()
{
	// The window deletes itself when it is quit; see QuitRequested().
}

void MessageEditor::ReadyToRun()
{
	// Create the settings folder (~/config/settings/MessageEditor) if needed.
	BPath settings;
	if (find_directory(B_USER_SETTINGS_DIRECTORY, &settings, true) == B_OK) {
		BDirectory settingsDir(settings.Path());
		// CreateDirectory() returns B_FILE_EXISTS if the folder is already there.
		settingsDir.CreateDirectory("MessageEditor", NULL);
	}
	mWindow->Show();
}

/**
 * @todo request the Quit... don't simply quit everything without asking (so that
 * there is a chance to save or abort when the document has changed)
 */
bool MessageEditor::QuitRequested()
{
	// BWindows must be destroyed with Quit(), not with delete.
	if (mWindow != NULL && mWindow->Lock())
		mWindow->Quit();
	mWindow = NULL;
	return true;
}

void MessageEditor::MessageReceived(BMessage *message)
{
	switch (message->what) {
		/* case MENU_FILE_OPEN: { break; } case MENU_FILE_NEW: { break; } */
		default:
			BApplication::MessageReceived(message);
			break;
	}
}

void MessageEditor::RefsReceived(BMessage *msg)
{
	uint32 type;
	int32 count;
	entry_ref ref;

	msg->GetInfo("refs", &type, &count);
	// Not an entry_ref? Nothing to do.
	if (type != B_REF_TYPE)
		return;

	if (msg->FindRef("refs", 0, &ref) == B_OK) {
		BFile messageFile(&ref, B_READ_WRITE);
		if (messageFile.InitCheck() == B_OK) {
			BMessage *toLoad = new BMessage();
			if (toLoad->Unflatten(&messageFile) == B_OK)
				mWindow->SetMessage(toLoad);
			else {
				printf("could not parse BMessage\n");
				delete toLoad;	// avoid leaking the message on failure
			}
		} else
			printf("ERROR initializing file\n");
	}
}

void MessageEditor::AboutRequested()
{
	// TODO: implement an About window
}

void MessageEditor::ArgvReceived(int32 argc, char **argv)
{
	if (argc > 1) {
		BEntry ref(argv[1]);
		if (ref.Exists()) {
			// TODO: try to load the message from the given path
		} else
			printf("Could not load %s. File doesn't exist.\n", argv[1]);
	}
}

int main()
{
	new MessageEditor();
	be_app->Run();
	delete be_app;
	return 0;
}
STACK_EDU
WITH a broadband connection, it seems inconceivable that you can still stare at a blank browser screen while waiting for a page to load. Yet there are times when your browser seems to bog down and you wonder just how fast your broadband connection really is. Many factors can affect browsing speed. A website that is mostly text, for example, should load faster than a graphics-intensive one. Lately, however, I found that my browser was taking much longer than it should to load Google, which is probably one of the least graphics-heavy pages you can imagine. When I saw that the browser seemed to be spending an inordinate amount of time at "Looking up www.google.com," I suspected it might be the DNS server of my Internet service provider (ISP) that was tripping me up. DNS, short for Domain Name System, is like a phone book for the Internet that translates the Web addresses you type into the browser (like www.google.com) into a numeric "phone number" that computers can use. By default, your router and the computers connected to it are set up to use the DNS servers of your ISP, but sometimes these might not be performing up to snuff. Fortunately, there is an easy fix: simply use somebody else's DNS server. There are quite a number of free and public DNS servers, but it's advisable to stick to trusted services such as OpenDNS (http://www.opendns.com/) and Google Public DNS (https://developers.google.com/speed/public-dns/). Whichever you choose, you're likely to experience a dramatic improvement in your browsing speed. That's because both OpenDNS and Google have the robust infrastructure and the technology to reduce latency in the address lookups and make the results more reliable. If you're feeling adventurous, you can change the DNS servers on your router, thus enabling all computers and devices that connect to it to use the new DNS servers. While this can be quite efficient (you need to make only one change), there are several disadvantages to this approach. First, you will need to find instructions for doing this that are specific to the brand of router you're using. Second, if something goes wrong, it will go wrong for everybody connected to that router. A less complicated approach is to simply change the DNS servers used by your computer or device. Whichever way you decide to go, it's a good idea to write down the previous settings before making changes so you can revert to the old setup if need be. Here are the numbers you need to remember: Free and Public DNS Servers: OpenDNS - 208.67.222.222 and 208.67.220.220; Google Public DNS - 8.8.8.8 and 8.8.4.4. To change the DNS server on an Ubuntu Linux (14.04) PC, go to Settings and then Network Connections. In the window that opens, choose the connection for which you want to change DNS servers, then click on the Edit button. In the next window, click on the IPv4 Settings tab. In the drop-down menu next to "Method," choose "Automatic (DHCP) addresses only." Then, in the space next to DNS servers, type in the appropriate addresses from the list above, separated by a comma (for example, 8.8.8.8, 8.8.4.4 if you want to use Google Public DNS). Click on the Save button and reboot. To change the DNS server on a Mac (Mavericks), click System Preferences from the Apple menu. Choose Network. Select the connection for which you want to change DNS servers. Click Advanced and select the DNS tab. Click on the "+" to add the appropriate addresses, then click Apply and OK. You can also change the DNS server on your Windows machine.
Just follow the detailed instructions on the OpenDNS and Google Public DNS websites. How do you choose between OpenDNS and Google? You can try both and try to figure out which seems faster, or you can use a utility like Namebench (https://code.google.com/p/namebench/), which will hunt down the fastest DNS servers available for your computer to use. Versions are available for Windows and the Mac. I also found instructions for installing the utility on Ubuntu at the Ubuntu Geek website (www.ubuntugeek.com). The disadvantage of using public DNS servers is that most of them log information, including your IP address, the domain name you looked up, the name of your ISP and your approximate location. Because of this, it's a good idea to read the privacy policies of these companies and find out beforehand what they do with the information they gather. Chin Wong Column archives and blog at: http://www.chinwong.com
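A quick way to compare resolvers yourself is from a terminal, using the standard dig tool (included with Linux and OS X). The commands below are a minimal sketch; the reported query times will vary with your location and network:

# Ask each resolver for the same name and compare the reported "Query time"
dig @8.8.8.8 www.google.com | grep "Query time"
dig @208.67.222.222 www.google.com | grep "Query time"
# Repeat each a few times; the first lookup is often slower while the resolver's cache is cold.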
OPCFW_CODE
Gosu class vs enhancement I want to know the difference between a Gosu class and an enhancement, because whatever we can do in an enhancement we can also do in a Gosu class, so what is the need for Gosu enhancements? A Gosu class is just like a Java class. What confuses you is the enhancement. Enhancements are extended properties of an OBJECT and are available only on the particular type for which they are written. For example, let's say I need to write a function to check whether a number entered is greater than 10 or not. Using a Gosu class, we write the code like this:

class MyInteger {
  static function isNoGreaterThan10(no : int) : boolean {
    return (no > 10)
  }
}

and we call the function like: MyInteger.isNoGreaterThan10(34) // returns a boolean value So basically, the class and the method which we wrote are available anywhere in our application. Here comes the use of an enhancement:

enhancement MyIntegerEnhancement : int {
  function isNoGreaterThan10() : boolean {
    return (this > 10) // "this" represents the object upon which we are calling this enhancement
  }
}

The above enhancement is available for integer objects only, and all the functions inside this enhancement become properties of any integer object. var number = 14 number.isNoGreaterThan10() // returns true The call can be made even simpler, like 36.isNoGreaterThan10() // returns true "my_name".isNoGreaterThan10() // is not possible, as "my_name" is not an integer. Similarly, let's see an enhancement for a string (say, to get the length of a string):

enhancement MyStringEnhancement : String {
  property get Length() : int {
    return this.length()
  }
}

and the property Length will be available for all string objects. "Hello boss".Length // returns 10 Hope this helps. Aravind :) So, similar to extension methods in C# then? The differences are as follows. In an enhancement, it is not allowed to define any variables (no chance for logging). Thus, enhancements should be used for simple aggregate calculations only. The advantage of an enhancement is that the new method is visible from the entity itself; if you define it in a Gosu class, you must know the class name. The purpose of an enhancement may be to extend a class which I do not have control over. For example, I am working on a customization, and I need one more method on the class Contact. It's Guidewire's class, and I cannot modify it. Also, it does not make much sense to create a subclass and add the new method there, since such a method would not be seen in existing Contact subclasses. But if I create an enhancement and add the method there, it will promptly be available in the whole Contact subclass tree, and I can use it anywhere. You can think of enhancements as extended attributes of an object (fine-tuning operations that you wish to perform on top of an object or property). If the logic is reusable and generic, we can place it in a Gosu class; if the logic is intrinsically related to an entity, we can place it in an enhancement.
STACK_EXCHANGE
The development of complex open source solutions is generally done by adapting and integrating multiple existing components. The resulting application (or "solution") may look like a single program from the user's point of view, but it is in fact a combined work. The different components may be covered by different licences. Are they compatible and legally interoperable? Determining whether the various licences involved are compatible is important when the aim is to redistribute the application to third parties: under what open source licence should it be distributed? Is this always possible? The licences for distributing free or open source software (FOSS) are divided into two families: permissive and copyleft. Permissive licences (BSD, MIT, X11, Apache, Zope) are generally compatible and interoperable with most other licences, tolerating the merging, combining or improving of the covered code and its re-distribution under many licences (including non-free or "proprietary" ones). On the contrary, the "copyleft" licences impose the use of the same licence as soon as the distributed work is a derivative of the covered work. To avoid software appropriation by third parties, a majority of open source projects have adopted copyleft licensing terms: the two GNU GPLs and the EUPL are "copyleft". Some licences like the LGPL or the MPL try to compromise between the permissive and copyleft requirements: covered components (and their specific derivatives) will always keep their primary licence, but the combined application "as a whole" (even if it may be considered globally as a derivative) or its executable binary (a single program from the user's point of view) can be distributed under any licence. Licence incompatibility exists when a program is a derivative of components licensed under two different copyleft licences (for example, the GPLv2, which is still the most used copyleft licence, is not compatible with the GPLv3, and vice versa). A logical incompatibility issue may be resolved through dual licensing: the original licensor (owning full copyright) may provide the same program under two or more licences, even if these licences are not compatible. The most frequent cases apply to licensors distributing their work under the "GPL" (without mentioning the version number): in such a case, recipients can use the work under any of these licences (GPLv2 or GPLv3), which is in practice a dual licensing. When the licences are mutually incompatible, the risk of dual licensing is the forking of the program into various legally incompatible releases, which even the original licensor will not be able to reunify. An alternative way to resolve incompatibility issues without the risk of forking is the constitution of an exception list. The advantage is to maintain the licensed component under a single licence, including its specific derivatives, while allowing combined derivative works, where the component is integrated or merged, to be licensed under an alternate licence. The difference with the more permissive LGPL system is that exception lists specify which licence(s) are accepted (not "any" licence). Exception lists can be implemented at licence level: this is done by the EUPL, where an appendix of "compatible licences" specifies which licences can be used in case of a "combined derivative" (when the program is a derivative of both a EUPL-licensed component and another component licensed under a compatible licence).
The disadvantage of such a practice is that it does not accommodate very frequent updates: adapting the list (for example when a compatible licence is updated with a new version number) modifies the licence (producing a new version that will not be automatically OSI-approved) and impacts its whole community of users (some of them may disagree with the extension…). Exception lists can also (and in addition) be implemented by a specific licensor. This is especially recommended when a licensor distributes a library of components under a copyleft licence (such as the GPLv2 or v3) instead of using the more permissive LGPL or MPL (which are generally the most adapted licences for such components). For example, Oracle distributes MySQL component libraries under the GPLv2, but has implemented the "MySQL FOSS Exception List" authorising the distribution of combined derivatives under about 20 other FOSS licences (including the copyleft GPLv3 and the EUPL). Similarly, a licensor could distribute a library of components under the EUPL and implement an exception list for some licences that are not in the EUPL compatibility list (for example the GPLv3). Illustration: the EUPL compatibility The EUPL upstream (incoming components) and downstream (distribution of the resulting work) compatibility can be illustrated as follows: For a complete analysis of the EUPL compatibility (with all OSI-approved licences) please refer to the EUPL compatibility matrix. The EUPL v1.2, published in May 2017 (OJ 19/05/2017 L128 p. 59-64), has extended the EUPL compatibility to the GPLv3, the AGPL and other licences. In many cases, simply using or combining a component (even if the combined software is perceived by the end user as a single program) does not produce a "derivative work" according to the applicable copyright law. For example, such a combined work could be distributed under the EUPL even if one of its components was obtained under the GPLv3. In general, the distribution of a combined work does not change the licence of its covered components (which would produce a fork). If the combined work is a derivative of its components (because their source codes are merged), you cannot distribute it under the EUPL if at least one of these components is covered by the GPLv2 and there is no exception list including the EUPL. This is the main case where you must distribute under the GPLv2 (a similar solution applies for components covered by the CeCILL licence, which is occasionally used in France). The case of linking In our opinion, linking two programs, or linking existing software with your own work, does not (at least in Europe) produce a derivative or extend the coverage of the linked software's licence to your own work. This opinion is based on Directive 2009/24/EC on the legal protection of computer programs. This 2009 version codifies, without modifications, the original text written in 1991 (Directive 91/250/EEC). The aim of this Directive is to facilitate interoperability, making it possible to connect all components of a computer system, including those of different manufacturers or authors, so that they can work together. Therefore the parts of the programs known as 'interfaces', which provide interconnection and interaction between elements of software and hardware, may be reproduced by a legitimate licensee without any authorisation from their rightholder ("Whereas" (10) & (15) of the Directive). This implies that such interfaces escape the copyleft provision of any licence, open source (like the GPL or EUPL) or proprietary.
The technical manner of linking for interoperability (static or dynamic) should not make any difference. For this purpose, the Directive (Article 6) authorises the legitimate licensee to decompile the licensed program in order to obtain the information necessary to achieve interoperability. This information can then be reproduced for the purpose of writing the interface. At the time the 1991 Directive was written, the open source option was not considered: indeed, decompilation targets the case where the legitimate licensee has received the licensed program in object code only. However, it looks obvious that the exception implemented by the Directive could also be used when the licensee has access to the source code: in such a case, the code needed for interoperability is directly available, without the need for decompilation. It remains that such an exception to the author's exclusive rights must be strictly limited to the parts that are needed for interoperability, and may not be used in a way which prejudices the legitimate interests of the rightholder or which conflicts with a normal exploitation of the program. A question to clarify at an early stage The sooner you examine licence compatibility, the better. If the intention of the software producer is to distribute the work under a specific licence, it is advisable to declare this licence in the specifications (if the software developers are contractors, they will be responsible for using new or compatible components) or in a contributor's agreement (if the software is produced by a FOSS community of developers).
OPCFW_CODE
Tutorial 3: Designing a Web Page - Working with Fonts, Colors, and Graphics The 16 Basic Color Names: a figure (not reproduced here) shows the 16 basic color names that are recognized by all versions of HTML. Partial List of Extended Color Names: a second figure shows a partial list of these additional color names. The extended color name list allows you to create color schemes with greater color variation. A more complete list is provided in Appendix A, "Extended Color Names." A web site for color designations is linked from the ITOM 2308 web site. The color yellow has the RGB triplet (255,255,0) and is represented by the hexadecimal string FFFF00. A further figure lists the RGB triplets and hexadecimal equivalents for the 16 basic color names presented earlier. HTML code for using color in a web page: <BODY BGCOLOR="color" TEXT="color" LINK="color" VLINK="color" ALINK="color"> where BGCOLOR sets the background color and VLINK sets the previously visited hyperlink color. <FONT SIZE="size" COLOR="color" FACE="face"> text </FONT> Example <FONT> tag with modified properties: <FONT FACE="Arial, Helvetica, sans-serif">Arcadium</FONT> The process of tiling the background image: when a browser retrieves your image file, it repeatedly inserts the image into the background, in a process called tiling, until the entire display window is filled up. Setting an image file as the page's background produces this tiled effect. Finding the right background image is a process of trial and error. You won't know for certain whether a background image works well until you actually view it in a browser. See page 3.28 for details about alignment options. Using the ALT attribute and controlling image size: <IMG SRC="graphics.gif" VSPACE="value" HSPACE="value"> controls the vertical and horizontal space around an inline image. Alternate image text is important because it allows users who have nongraphical browsers to know the content of your graphics. Alternate image text also appears as a placeholder for the graphic while the page is loading. This can be particularly important for users accessing your page through a slow dial-up connection. Using the BORDER attribute, you can set the width of the border drawn around an image. A line break can also be made to clear floated images; for example, using <BR CLEAR="left"> starts the next line when the left page margin is clear.
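Pulling these fragments together, a minimal page using the tags described above might look like this (an illustrative sketch in the older HTML style the tutorial covers; the color choices and file names are arbitrary):

<HTML>
<BODY BGCOLOR="#FFFFCC" TEXT="black" LINK="blue" VLINK="purple" ALINK="red">
  <FONT FACE="Arial, Helvetica, sans-serif" SIZE="5" COLOR="#FFFF00">Arcadium</FONT>
  <!-- VSPACE/HSPACE add breathing room; ALT describes the image for nongraphical browsers -->
  <IMG SRC="graphics.gif" ALT="Arcadium logo" VSPACE="10" HSPACE="10" BORDER="0">
  <BR CLEAR="left">
</BODY>
</HTML>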
OPCFW_CODE
I should add that the menu thing was nowhere to be seen this morning. Gone. (That was before the fixlog). Thank you. Weird login screen behavior from Windows 10 Posted 23 March 2020 - 03:02 AM Posted 23 March 2020 - 07:36 AM That's good to hear. What I think happened is when it saw the path to Opera Browser Assistant HKU\S-1-5-21-2490165305-1638453623-257508744-1001\...\Run: [Opera Browser Assistant] => C:\Users\David Jackson\AppData\Local\Programs\Opera\assistant\browser_assistant.exe [3024920 2020-03-12] (Opera Software AS -> Opera Software) it got as far as C:\Users\David and then thought it was at the end of the path, so it looked for a file named David. I'm not sure where it found the data that it showed you - possibly some temp cache. Did this start happening after installing Oracle? Best practice when writing a path to the registry is to put quotes around the whole path, just in case there are spaces (like in your user name), and it is possible they forgot to do it. I checked the equivalent section in my registry and some of the entries have quotes and some don't. But my user name is Ron, so there aren't any spaces and it doesn't matter. This used to be a big problem back in the Windows 2000 days, but somewhere along the line Windows got a bit smarter and stopped interpreting a space as the end of the path most of the time - though there are still some instances where it reverts back to its old behavior. I don't know what happened to your f.lux entry, but the following fixlist should put it back so that it will show up next time you reboot. This fixlist will just make one change, then read it back to make sure it took. No reboot required. Just post the fixlog. fixlist.txt (678 bytes, 218 downloads) Posted 24 March 2020 - 04:47 PM Hello again. Thank you very much. To the best of my knowledge, I haven't installed Oracle unless I did so unwittingly at some point (not impossible, hehe). Thank you for sharing the bit of revision history: I guess you've been involved with computers since their inception, right? Seen a lot of changes and breakthroughs. I appreciate your expertise and help. Thank you very much. Here's the fixlog: Posted 24 March 2020 - 06:23 PM That didn't work for some reason. Thought I had it right, but I used REG_DWORD instead of REG_SZ. My mistake. Let's try again: fixlist.txt (672 bytes, 221 downloads) Posted 27 March 2020 - 03:54 AM Hello and thank you for your help. Log is below: Posted 27 March 2020 - 04:51 AM Appears to have worked. Did your f.lux come back after a reboot? Posted 27 March 2020 - 04:45 PM Hi. Thank you. Some weirdness. Before seeing your message, there was a quick Opera notification which flashed up in the bottom right corner of the screen (I apologise but I don't recall what it said). Anyway, after that happened I noticed that an Opera icon had mysteriously appeared in the hidden icon box, and then when I saw your message, I realised that f.lux wasn't automatically running (what I mean is it wasn't dimming the blue light), but I unpinned it from the taskbar and somehow managed to make its icon also appear in the hidden icons area where it always used to live. Then I rebooted, as per your message. What took me by surprise was that the menu with the question "How do you want to open this file?" had reappeared again. I hadn't seen that on reboot for a good few days. Could it be connected to the Opera message? I can start f.lux manually from the Start menu, but it used to auto-start before. Thank you very much. Posted 27 March 2020 - 09:11 PM Opera has a task that checks for updates.
I expect it wanted to update. If you don't use Opera, I would suggest you uninstall it. If the file is back, we can easily undo the f.lux registry entry, or you can just use Autoruns to uncheck it. Posted 28 March 2020 - 02:34 PM Hello. Thank you. I uninstalled Opera. I found f.lux in pink, listed as users/david, so I unchecked that. I was unsure whether I should uncheck all the yellow stuff in Autoruns as you suggested previously, so I didn't do that. I'm unsure if I have completed the instruction correctly. Thank you. Posted 28 March 2020 - 03:58 PM Yellow items indicate the files are not there, so it doesn't hurt to uncheck them. Sounds like my attempt at replacing the f.lux entry didn't work. Probably needed extra quote marks. Oh well, you say you have a workaround, so I guess we won't worry about it. Do we have any problems left? Posted 31 March 2020 - 05:33 AM Hi. Menu's gone. I can manually start f.lux. All good, thank you very much. Take care. Posted 06 April 2020 - 03:52 AM Hello - again. I feel a bit bad jumping back on this thread yet again; however, for some reason that I can't fathom, my fan is going a bit crazy. I remember the last time it happened you recommended running Process Explorer and repairing Avast, so I just ran it, but I couldn't see an Avast replication - lots of open Brave stuff which didn't correlate to the actual open tabs (don't know if they should). Today's a warm day, so it's not firing because there's ambient cold (again, don't know if it ever would). So here I am again seeking your help. You've already given me an enormous amount of your time and help, and so if you feel I've already reached my limit, then I quite understand. If, on the other hand, there's still a bit of goodwill left, I would very much appreciate your expertise in resolving this. As a particularly sound-sensitive soul (unlike my ex-wife, who could sleep through anything), it's doing my head in a bit. Hope you're safe and well. Thank you very much. Posted 06 April 2020 - 04:11 AM Disregard the last post. After I posted that, I thought I'd run an Avast scan and so opened its icon. I was met with an upgrade alert, which I ran, and it updated some apps (don't know which). Then I remembered that over the past couple of weeks, sporadically, I'd seen not an Avast but a Dell alert which said there were 3 updates, and to click the box to update. The weird thing is that each time I clicked the box to action the update, NOTHING happened, so I was left thinking it was a glitch. What the relationship is between the Avast and Dell updates, I don't know - maybe they are totally independent. Anyway, I thought it had fixed the fan noise, but it just kicked in again, and has now stopped again (normally it's always pretty quiet, like it is now). Don't want to waste your time. Thank you. Posted 06 April 2020 - 04:15 AM Just to be clear: it's all good now. I don't currently require your help. Thank you. Have a great day. Posted 06 April 2020 - 06:19 AM No problem with coming back. Glad to have something to do. Next time, please make a Process Explorer log as we did before. The screenshot doesn't tell me anything. View > Select Columns > check Verified Signer > OK. Options > Verify Image Signatures. Click twice on the CPU column header to sort things by CPU usage with the big hitters at the top. Wait a full minute, then: File > Save As > Save. Note the file name. Open the file on your desktop and copy and paste the text to a reply.
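(As an aside on the registry-quoting best practice discussed earlier in the thread: a Run entry written from a command prompt can carry its own embedded quotes, so a space in the profile name cannot truncate the path. The f.lux install path below is only a guess for illustration.)

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v "f.lux" /t REG_SZ /d "\"C:\Users\David Jackson\AppData\Local\FluxSoftware\Flux\flux.exe\""

The escaped inner quotes (\") are stored as part of the REG_SZ data, which is what prevents Windows from stopping at "C:\Users\David".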
OPCFW_CODE
With the huge number of sessions at Oracle Open World, it's often hard to find the little gems of information amongst all the marketing. This is true of ADF, like all other technologies at the conference; there's simply a lot of information to digest and filter. Luckily, Oracle publishes the presentation PPTs afterwards, and it's possible to find a jewel or two in all the content with some careful searching. For the ADF developers among us, this blog post attempts to summarize some of the main ADF takeaways from Oracle Open World 2011. Please remember this is my summary, not Oracle's (I am not an Oracle employee), and Oracle publishes all of this content under the Safe Harbor statement, which means they cannot be held to anything they published. The links in this post are not guaranteed to be up forever, as Oracle may remove them in the near future. I suggest that if you're interested in reading the presentations, you download them now. Finally, I apologize for some of the clunky grammar and phrases in this post; I wrote it on the plane back to Australia with the usual jetlag that fogs the brain. Of the large announcements at Oracle Open World 2011, the soon-to-be-released (2012) Mobile edition of ADF was the most significant in the ADF space. Some key points of the new platform are that it supports both iOS and Android, runs on the device with a mini JVM, and uses PhoneGap to allow the native app to access the device's native facilities. For me the most telling part was the architecture diagram from the Develop Mobile Apps for iOS, Android, and More: Converging Web and Native Applications presentation by Oracle Corporation's Joe Huang, Denis Tyrell, and Srini India: Data Visualization Controls Katarina Obradovic-Sarkic, Dana Singleterry and Jairam Ramanathan from Oracle included screenshots of upcoming DVT components in their Building Visually Appealing Web 2.0 Data Dashboards presentation. First we see a new Network Diagrammer: As can be seen, the component demonstrates the relationships between disparate nodes. This is incredibly useful for visualizing relationships in data. Another screenshot shows a different data relationship structure: In terms of graphs, Oracle is looking at a Treemap graph: …and a Sunburst graph: ...both useful for showing hierarchical data visually. Of all the DVT controls, the Timeline graph excites me most, something I've asked for in the past: However, I must clearly stress to readers that these DVT controls are not in the current release, and under Oracle's safe harbor statement Oracle is not guaranteeing they will ever be released (but fingers crossed anyway, huh?). As the ADF EMG moderator I'm involved in a lot of discussions in the community about the IDE and the framework. One hot topic is JDeveloper's Maven support. The current 11g release introduced the first cut of Maven support for the IDE, as discussed in Oracle's Susan Duncan's Team Productivity with Maven, Hudson and Team Productivity Center. This first slide shows the current Maven support: Of more interest are the planned Maven features for 12c, which not only tell me Oracle is committed to Maven support, but also that there are definitely limitations in the current implementation: Most important here for me are the first two bullet points, which mean I won't recommend Maven to customers until Oracle makes these features available. Don't get me wrong, though: a couple of years back there was no Maven support, and it's great that Oracle is working to fill that gap completely. What can Fusion Applications teach us about ADF?
Unlike OOW10, this year at Oracle Open World there were considerably more Fusion Applications demonstrations and presentations. This has been a boon, as previously we'd seen a lot of demos of dashboard-like screens that, while pretty, don't show us where the real work occurs for users. Fatema Madraswala from PwC and Rob Watson from Oracle included screenshots of the Fusion Applications Talent Management system (the very first Fusion go-live case study): It's curious to me that while Oracle has put a lot of effort into communicating the User Experience design effort put into Fusion Applications, we then see a screen that looks Oracle Forms-like, especially with its tabbed interface. In turn, the worksheet at the bottom looks cluttered with buttons and fields. Yet, with respect, designing user interfaces for complex business systems is surely not easy. I recommend ADF developers search out as many Fusion Applications screenshots as possible, as they reveal an insight into how to build the UI and what is and isn't possible. What about E-Business Suite? EBS customers might feel the whole ADF/SOA bandwagon is passing them by, what with the focus on Fusion Applications. Yet this year saw presentations tailor-made to cover integration points with EBS. I must admit I can't really comment on the quality of the solutions, as I have no direct experience with EBS, so I'll leave experienced readers to make their own assessment. Check out the presentation entitled Extending Oracle E-Business Suite with Oracle ADF and Oracle SOA Suite from Oracle's Veshaal Singh, Mark Nelson and Tanya Williams. As an extension to the Fusion Applications demos, I'm detecting more down-and-dirty technical presentations on MetaData Services (MDS), where the framework can support personalizations and customizations. Gangadhar Konduri and a fellow Oracle colleague discussed the theory and demonstrated customizing a Fusion Applications module, with a focus on what technical people need to know. I must admit that in the past I've been a little skeptical of MDS et al., not for its implementation, but just for the lack of information around on how to maintain and work with it from a developer/administrator point of view. However, I'll need to step back and reassess that opinion. You can read more in Gangadhar's Managing Customizations and Personalization in Oracle ADF MetaData Services. For ADF Experts For the ADF experts who feel many of the presentations aren't aimed at them, it's well worth catching one of Steven Davelaar's presentations. Steven, who is the JHeadstart Product Manager at Oracle, extends and pushes the ADF framework to its limits. His presentations often include large amounts of code, where I discover new properties and techniques way beyond my current level of expertise. This year Steven presented Building Highly Reusable ADF Task Flows and Empowering Multitasking with an Oracle ADF UI Powerhouse for the ADF EMG (great title, Steven ;-). From my own perspective, one of the most important presentations I attended was Oracle's Duncan Mills' ADF - Real World Performance Tuning presentation. As I now have several clients with production-level ADF applications, my focus has moved away from the basics of creating ADF applications to architecture and performance. Duncan's presentation aggregated a wide range of tuning hints into an easily digestible guide, which is highly valuable.
In a separate presentation entitled Certified Configurations of Oracle ExaLogic, Oracle Fusion Middleware, BI and Oracle Fusion Apps by Pavana Jain and Deborah Thompson from Oracle Corp, the future roadmap for FMW releases was revealed. Readers are reminded that the safe harbor statement means Oracle doesn't have to stick to what they present, so take the slides as guidelines only. The first slide shows the approximate dates of each version: The second slide reveals which 11g FMW products will be included in each release: Some readers might find it curious that the 11g 11.1.1.X.0 series continues even though there is already a later release of JDeveloper. My understanding is that this is occurring because Fusion Apps will continue on the 11.1.1.X.0 series for some time yet, thus extending the life of that branch. Finally, the third slide shows the same for the 12c FMW products: Oh, and the ADF EMG had a great event too The ADF EMG also had a "super" Super User Group Sunday, but people are probably a little sick of me talking about it, so I'll just push you to a link instead.
OPCFW_CODE
Null FTP Server Pro Free Download Null FTP Server Pro - an enterprise-level FTP, SFTP (SSH), FTPS and HTTP/S server with remote administration. Localized graphical user interface: *English, French, Russian, Ukrainian *Easy-to-use local and remote server administration *Shell extension for setting Null FTP security permissions inside Windows Explorer *Tray icon for easy server configuration access and server status indication *x86 and x64 Users and Groups: *Use built-in user accounts and/or Windows accounts *Advanced quota management for groups and users *Per-user site commands *Per-user virtual directories *Per-user IP restrictions *Per-user settings and transfer restrictions *Anonymous accounts are treated with all the options that a normal user account has Security and Access Control Lists: *ACL-level security *Windows authentication or Null FTP built-in security *Windows authentication impersonation *Create user groups *Null FTP ACL settings inline with Windows Explorer or within the application *IP-based access *Full digital certificate management *Protection against denial-of-service attacks *Access to specific accounts for specific IP addresses *Automatic banning of an IP if a specified number of connection attempts are made within a specified time limit, with IP ban removal *Automatic account locking *IP banning after a specified number of invalid login attempts Full API provided: *COM, HTTP REST SDK *Built-in integrity check commands (XCRC, XMD5, XSHA) *Upload and download resuming *Mode Z compression and bzip compression *Runs as a Windows NT system service *Full Unicode support Null FTP Server Pro keywords: ftp, ftps, sftp, ftp server, file transfer server, web server
On June 10th, the Massachusetts Environmental Police received a strange call. “We got a call to our dispatch from someone who claimed there was a three-foot lizard in their backyard in Chicopee, Massachusetts," recalls Massachusetts Environmental Police Lt. Tara Carlow. When officers arrived on the scene, they found a disgruntled homeowner and a fully-grown Argentine tegu. Also called black-and-white tegus, these exotic lizards can reach over four feet in length and are native to rainforests and savannas across South America. Still, Carlow wasn’t surprised one turned up in Chicopee. “We get these types of calls at least once a year,” she says. In the state of Massachusetts, Argentine tegus are widely sold, and citizens do not need a permit to own one. Tegus are skilled escape artists, Carlow says, and this is hardly the first time one has gotten loose in Massachusetts. People often purchase exotic pets without understanding what they're getting themselves into, Carlow says. Tegus, pythons, parrots, sugar gliders, and many other animals sold as exotic pets can live for upward of 20 years, nearly twice as long as the average dog. Caring for a long-lived exotic pet is an expensive and, in some cases, risky endeavor—because exotic pets are largely undomesticated, their behavior can be unpredictable. In the United States, at least 300 people have been attacked by an exotic pet since 1990, according to the nonprofit Born Free USA. (Read about how young collectors in China are fueling a boom in ultra-exotic pets.) It is for these reasons, Carlow says, that escapes—and even intentional releases of exotic pets—are not uncommon. When this happens, the fallout can be catastrophic. If the animal doesn’t die as a result of predation, exposure, or starvation, it may find a mate, proliferate, and become an invasive species. The exotic pet trade now ranks among the primary causes of the spread of invasive species, according to a new academic review published last month in the journal Frontiers in Ecology and the Environment. The review finds that the exotic pet trade has led to the establishment of hundreds of invasive species and is poised to contribute to the establishment of even more. “I don’t think most of us fully grasped how expansive the trade has become,” said lead author Julie Lockwood, professor in the Department of Ecology, Evolution, and Natural Resources at Rutgers University, in a statement. “The volume of vertebrate animals that are traded worldwide is shocking, even to relatively seasoned invasion biologists.” Invasive species are the second largest driver of biodiversity loss worldwide. They’re estimated to cost the U.S. some $120 billion a year, and more than 40 percent of species listed as threatened or endangered in the U.S. reached that status because of invasive species. Invasive species alter habitats, break up food chains, eat up prey populations, and reduce predator populations. Pets to pests The exotic pet trade is a multibillion-dollar industry involving tens of millions of individual animals from thousands of species, including reptiles, amphibians, fish, birds, and mammals. It has become significantly more widespread over the past few decades, in part because of the rise of non-traditional marketplaces, such as websites, trade shows, and social media. Most research into the trade has been on its role in the spread of disease or loss of biodiversity, so not much attention has been given to its role in the proliferation of invasive species, the authors write. 
“Key to addressing the invasion threat of exotic pets is learning more about the socioeconomic forces that drive the massive growth in the exotic pet market,” the study says, as well as understanding why people release their exotic pets into the wild. How and why exotic pets are introduced into foreign environments is not well understood, says Mark Hoddle, director of the Center for Invasive Species Research at the University of California, Riverside, who was not involved in the study. “Sometimes pets escape from their enclosures. Other times people get tired of looking after them and just let them go,” he says. People also deliberately release exotic animals for religious reasons and to make their surroundings “more interesting,” he says. (Learn how a dozen Asian monkeys took over a state park in Florida.) Stopping the spread “The best way to address the spread of any animal brought in through the pet trade is through education, early detection, and rapid response,” says Christina Romagosa, an invasive species biologist with the University of Florida's Department of Wildlife Ecology and Conservation and co-author of the study. Unfortunately, in Florida, the tegu has already established itself. They regularly raid the nests of Florida’s egg-laying species, including the threatened gopher tortoise, a keystone species whose burrows provide homes for hundreds of other animals. It’s just the latest invasive species to wreak havoc on the state’s native birds and reptiles. Florida’s infamous Burmese pythons, which became fully established as an invasive species in the state around 2000, have been blamed for reducing mammalian diversity in the state. Similarly, red lionfish, highly venomous aquarium fish introduced into Florida waters in the late 1980s, have significantly diminished the abundance and diversity of marine life on the state’s coral reefs. (Read more: Burmese pythons eating through Everglades mammals at an "astonishing rate.") “Wherever they are, there are definitely less fish in that area—especially fish that are good to spear,” spearfisherman Jarrad Thomason previously told National Geographic. Romagosa emphasizes that education is particularly important. She’s found that consumers who know exactly what they’re signing up for when they purchase an exotic pet are less likely to release them. Equally important, she says, is more research. "We simply do not have a lot of information on what factors lead to a species being incorporated into the [pet] trade in the first place, or what factors lead to escape or release," Lockwood says. "Without this information, it is very difficult to pinpoint policy directives so that people can still enjoy owning and interacting with exotic pets while reducing the chances that the trade will generate more harmful invasive species." As for how—or why—that tegu in Massachusetts got out, we still don’t know. The exotic escapee is currently living in a reptile care facility while police try to find his owner.
Hello, We are working on a project which requires analysing 120 GB of data (which will grow to 6 TB in a couple of years). Our current approach is SQL database (DW) ---> QVD ---> QS, but we are not comfortable with this approach as we are doing change capture (update as insert using the exists function based on ID and date) in Qlik and storing the data in QVDs. This process takes 5-6 hrs daily (sometimes more than that).

Here are the detailed steps:
Step 1: extract.qvw --> execute stored procedure, compare with past QVDs and update QVDs with new data --> 5-6 hrs
Step 2: Transform.qvw --> load all QVDs and create a star schema data model --> 30 mins to 1 hr
Step 3: QS app --> binary load Transform.qvw into the QS app --> 10 mins (QS server has got 1 TB RAM)

Issues:
1. Process time: A daily 5-8 hr process means the business won't get data in time, and if there are changes in the business logic in the SQL stored procedure, we need a full history reload and to recreate all QVDs. This would take months if not years 😞
2. Scalability: In a couple of years it will reach around 6 TB (last 6 years' data + future data), duplicating the same amount of data in QVD format again. Infrastructure and architecture teams are not happy.
3. Performance of the final app: Loading this 6 TB into Qlik Sense memory (no idea how much memory is required), there will definitely be performance issues.

To resolve these issues, we would like to use a big data approach and QABDI:
1. Move all change capture logic and data loads to SQL Server --> Talend --> Data lake (Hadoop)
2. Use QABDI and provide different kinds of options (live, ODAG, live apps)
Is this the right approach, or are we just moving the problem from one area to another?

This does sound like a good approach. However, there are definitely some intermediate steps that you can do to try to tackle this data size. Have you implemented ODAG already? I would suggest doing that now utilizing QVDs. I personally haven't found a use case where a user needs access to that much data at once, so implementing ODAG is usually an easy change management process. If you work on creating your template script to leverage optimized QVD loads, it should still be extremely quick. Also, you can talk directly to your DB or data lake, which if they have the resources could potentially be even faster. The second thing that may be worth looking into is your extract. Do you know why it is taking 6 hours? Is there any way to speed this up? QABDI sounds like a great use case here and I think it is worth doing a POC for it. What's great is everything mentioned above will compound the performance gained through QABDI. It will also help in scenarios where QABDI may not be a great fit or where a feature is currently lacking.

We haven't implemented ODAG yet. We need to access history data (which is very huge) for trend analysis and positions and BM analysis. It is taking 6 hours as the SQL SP output is created in multiple small tables (like data_20190101, data_20190102, ...); these are compared with existing QVDs using the exists function to add new and updated rows. I have one more question related to QABDI: does it support MS Azure Data Lake Gen2?
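For reference, the incremental-load pattern this thread is describing usually looks something like the sketch below. All table, field, and file names here are illustrative assumptions, not the poster's actual model:

    // Sketch of an incremental QVD load (names are assumptions)
    Fact:
    LOAD ID, ModifiedDate, Amount;
    SQL SELECT ID, ModifiedDate, Amount
    FROM dbo.Fact
    WHERE ModifiedDate >= '$(vLastReloadTime)';  // fetch only new/changed rows

    // Append history from the previous QVD, skipping rows re-extracted above.
    // A single-field Exists() where-clause is the classic pattern for keeping
    // the QVD read fast; verify in your Qlik version that the load stays optimized.
    Concatenate (Fact)
    LOAD ID, ModifiedDate, Amount
    FROM [lib://QVDs/Fact.qvd] (qvd)
    WHERE NOT Exists(ID);

    STORE Fact INTO [lib://QVDs/Fact.qvd] (qvd);

If the extract can be narrowed to a ModifiedDate-style filter like this, most of the hours spent comparing daily tables against old QVDs disappear, because the comparison only ever touches the new rows.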
Live! 360/App DevTrends: Reza Rahman Calls on Java Enterprise Community To Come Together Reza Rahman delivered the opening keynote at App Dev Trends 2016 (part of Live 360!) Tuesday morning, giving attendees a deep, contextual history of enterprise Java and issuing something of a call to arms. "If we are going to ensure the future of enterprise Java, we must remain alert, stay engaged, and participate in the community. You have to do these things anyway, of course, but it's more important now than ever. Java EE is a maturing technology. If we don't reenergize it now, as a community, the investment in this technology we have made over the years will go away." Rahman emerged as a pivotal figure in the enterprise Java world earlier this year, starting with his departure from Oracle, where he had served as Java EE Evangelist, over concerns about what he perceived to be the company's neglect of enterprise Java. Shortly after resuming his consulting work for CapTech, a national IT management consulting firm, Rahman and a group of concerned Java community members launched the Java EE Guardians, and promptly published a petition aimed at Oracle executives. Rahman's keynote, entitled "You Are the Future of Enterprise Java!", was an apt exploration of a dynamic technology evolution presented within the context of recent developments. It focused on what's inside Java EE 8 and how it got there, and explored the critical role Java EE and APIs currently play in maintaining the health of the entire Java and IT ecosystem. But at a fundamental level, his presentation was about a community. "The process of defining the scope of Java EE 8 was the most community-opinion-driven process in the history of the platform," he told me in an earlier interview. "In fact, it was the community that helped to smooth the many bumps along the road to Java EE 8 for the entire IT industry." To a question from a skeptical audience member about how important the Java community really is to the future of Java EE, Rahman offered a hypothetical scenario with the largely Microsoft-centric conference in mind. "In a worst-case scenario, we -- the Java EE community -- could say, forget about Oracle and let's all just stand behind the MicroProfile project and move the technology ahead together," he said. "But imagine this happening in the Microsoft world. If Microsoft were ever to decide to divest itself from .NET -- and I'm not saying they ever would -- you would have no one to respond, no one who could leverage the strength of a community to get them to change their minds. "All of what has happened in the past year -- the founding of the Java EE Guardians and the MicroProfile, Oracle's response to our actions, etc. -- to me, is a validation that the Java ecosystem works. In fact, I'd say that this is the worst fire drill that we could have gone through, proving out what differentiates us from all the other technologies out there; that this is not an autocracy, but a dynamic system with multiple layers of control. Even with all our recent belly aching and strife, we are still fundamentally one of the strongest ecosystems around." AppDevTrends is part of the popular Live! 360 uberconference, underway this week at Loews Royal Pacific Resort at Universal Orlando. We're running side-by-side with Visual Studio Live!, SQL Server Live!, Office/SharePoint Live!, Modern Apps Live!, and TechMentor. Videos of some of the other keynotes from the show are available here.
Be sure to say "hi" if you see me at the show, and follow the latest updates on my twitter. Posted on December 6, 2016
One of the features I use most often on GitHub is linking to highlighted lines of code in my repository. GitHub, a Git repository web-based hosting service, is a great tool for source code management and version control. I use it for all of my web projects and even for management of files that are not necessarily website-related. From time to time I like to share links to files in my GitHub repository. But instead of just sharing a vanilla URL, I like to share a link that takes someone to a specific part of the page with a specific part of code highlighted. This is very simple to do. On any GitHub page, click on a line number to the left of the code. Notice the URL is now appended with the line number you selected (e.g. https://github.com/.../functions.php#L117). Visiting this link will take you to the exact line of highlighted code. To link to multiple lines of highlighted code, select your first line of code and then CTRL+SHIFT click (CMD+SHIFT for Mac) on the last line of code you want to highlight. Notice the URL is now appended with a range of line numbers (e.g. https://github.com/.../functions.php#L117-L148). Visiting this link will take you to the beginning of the highlighted block of code. One thing to keep in mind is that these links are not anchored to the code but to the line numbers. That means if you make a change to the file's code, the links may no longer highlight the lines of code you had originally intended to highlight. To link to the code, as opposed to line numbers, highlight the code you want to link to and then press the "y" key. Notice the URL changes again (e.g. https://github.com/.../blob/deffc216c5ac5ed6807289ca1fe8cf6f773e2447/.../functions.php#L117-L148). You are now linking to lines of code stored in a unique version of the file's history. So now, even if the file is altered in subsequent commits, your link will still point to the lines of code you originally intended to highlight. There are lots of fantastic GitHub features like this. My second favorite feature is the File Finder, which you can open by hitting the "t" key. And don't forget, you can always hit the "?" key on any GitHub page to open the Keyboard Shortcuts window. There's also Owen Ou's article "Ten Things You Didn't Know Git And GitHub Could Do" which is a good starting point for learning about other GitHub tips and tricks.
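Side by side, the three URL shapes differ only in the fragment and in what sits after blob/ (the user, repo, and path placeholders below are illustrative):

    https://github.com/<user>/<repo>/blob/<branch>/functions.php#L117           (one line)
    https://github.com/<user>/<repo>/blob/<branch>/functions.php#L117-L148      (a range of lines)
    https://github.com/<user>/<repo>/blob/<commit-sha>/functions.php#L117-L148  (permalink via "y")

The third form pins the link to a specific commit SHA rather than a moving branch, which is why it keeps highlighting the right lines even after the file changes.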
One of the key elements of gaming on an Xbox console was getting to use your Microsoft Points. They were a virtual currency, integral to the Microsoft ecosystem, and could allow gamers to unlock new levels, purchase additional items, add new games to their library, watch movies, listen to music, and more! Whether you're in the United Kingdom or anywhere else, redeeming your Microsoft Points will only take a couple of minutes. Today, we'll show you exactly how to do it. What are Microsoft Points? Before we delve deeper into redeeming Microsoft Points, let's first understand what exactly they are. Microsoft Points serve as the virtual currency of Microsoft, usable on Xbox and the Microsoft Store to purchase a wide range of items, including games, movies, apps, and more. Obtaining these points can be done in several ways, including the Xbox rewards system, or by simply purchasing them directly from Microsoft or reliable retailers. Microsoft Points were first introduced in 2005, alongside the launch of the Xbox 360. Their main goal was to give users a simple way to purchase online content without using a credit card - but don't think it was done out of altruism; Microsoft simply wanted to reduce the number of small credit card transaction fees they would have to pay. Users could buy points in retail stores or via the Xbox console itself, then spend these points on a variety of digital goods available on Xbox Live Marketplace, including full game downloads, DLCs, movies, and more. Nowadays, the Microsoft Points system looks nothing like it once did. In 2013, Microsoft Points were removed, and local cash currencies were implemented in their place. Users that still had Microsoft Points in their wallets had them converted to an equivalent amount of currency. Users can still purchase codes and gift cards for use on Xbox and the Microsoft Store - just not in MS points. How to redeem Microsoft Points? While Microsoft and Xbox gift cards no longer feature Microsoft Points, they can still be redeemed just the same way. Regardless of whether you're located in the UK or not, here are the steps: - Navigate to the Microsoft Store on your Xbox console, or go to https://redeem.microsoft.com/ on any other device. You can also redeem your code from the Xbox app on your Windows PC. - Select the "Redeem a code" option - Enter the code found on your Microsoft gift card - Follow the prompts to complete the redeeming process And there you have it! Your Microsoft Points should now be added to your account and ready for you to use on any purchase you desire. What are Microsoft Rewards? Microsoft Rewards is a free loyalty program offered by Microsoft that allows users to redeem Xbox rewards by performing various activities. The program was first launched in 2016 as a way to incentivize the use of Microsoft's services, like the Bing search engine or the Microsoft Edge web browser. Participants can earn points for making Bing searches, buying products from the Microsoft store, completing quizzes and surveys, or using Microsoft Edge for their browsing. The points you gather through the program can be redeemed for a range of rewards. The rewards include gift cards for the Microsoft Store, entries into sweepstakes for larger prizes, donations to charitable organizations, or even direct deposits into your Microsoft account balance, which can then be used to buy games, apps, and other content.
Joining Microsoft Rewards is simple and easy: - Visit the Microsoft Rewards website at https://rewards.bing.com/ - If you already have a Microsoft account, click "Sign In" at the top of the page. If you don't have an account yet, you'll need to create one. Click on "Create a Microsoft account" and follow the instructions presented on the screen. - Once you are signed in to your Microsoft account, navigate back to the Microsoft Rewards page if you're not automatically redirected. - Click on the "Join Now" button or a similar prompt to start the registration process for the Microsoft Rewards program. You might have to verify your email address or phone number as part of this process. - Once you've joined, you can start earning points right away by doing daily activities like searching on Bing, answering questions, reading articles, and more. Check out the Microsoft Rewards dashboard for all available tasks. Keep in mind that participating in Microsoft Rewards is only available in certain regions, and some features may vary depending on your region as well. Make sure to check the terms and conditions of the Microsoft Rewards program to fully understand how it works. Getting the most out of Microsoft Rewards Microsoft Rewards is an extensive platform that allows users to interact and engage with Microsoft's products and services while earning rewards. It's not just about the points, but also about discovering new features, products, and learning opportunities presented by Microsoft. Check your dashboard frequently for new daily sets and challenges, designed to boost your points while also increasing your knowledge and know-how with Microsoft's products. The rewards are integrated seamlessly with the Microsoft Edge browser, allowing users to easily earn points and browse tasks. Shoppers at the Microsoft Store - both online and physical - also benefit from points on their purchases. Every dollar spent on Microsoft products will award participants of the Rewards program with a number of points depending on their Rewards Level. The Microsoft Rewards program is tiered, meaning that users can earn higher levels depending on their level of activity. Level 1 is the default tier - users can earn Level 2 by reaching 500 Microsoft Rewards points each month, no matter how they are earned. Level 2 users benefit from more points rewarded for daily Bing searches and using Microsoft Edge, and more points per dollar spent in the Microsoft Store (double for Xbox Live Gold users). Level 2 users also get access to special member sales and discounts, along with surprise invitations to exclusive member-only events. Participants with Level 2 can also gain up to 10% off when buying products from Microsoft brands or redeeming Microsoft codes, so create your Rewards account if you haven't already and start collecting points! Troubleshooting common issues with redeeming Microsoft codes Sometimes, you might run into problems when you try to redeem your Microsoft code. Let's take a look at some of the most common issues and how to resolve them: - Invalid Code: Check that you've entered the code correctly, as they often contain complex combinations of numbers and letters. If you're sure your code is entered correctly and the error is still showing, contact the seller. - Used Code: Each redeem code Microsoft provides can only be used once. If you've purchased a new code, and it's still showing that it is used, make sure to contact the retailer.
- Regional Restrictions: Most Microsoft codes are region-specific, which means they’ll only work in a certain country or region. If you bought a code in one country but are trying to redeem it in another, it might not work.
Function for calling features as highly (or lowly) variable within a dataset or cell population. This can be thought of as a feature selection step, where the highly variable features (HVFs) can be used for diverse downstream tasks, such as clustering or visualisation. There are two approaches for identifying HVFs (or LVFs): (1) If we correct for the mean-dispersion relationship, then we work directly on the residual dispersions epsilon and define a percentile threshold delta_e. This is the preferred option, since the residual overdispersion is not confounded by mean methylation levels. (2) Work directly with the overdispersion gamma and define an overdispersion contribution threshold delta_g, above (below) which we call HVFs (LVFs).

Usage

    scmet_hvf(scmet_obj, delta_e = 0.9, delta_g = NULL, evidence_thresh = 0.8, efdr = 0.1)
    scmet_lvf(scmet_obj, delta_e = 0.1, delta_g = NULL, evidence_thresh = 0.8, efdr = 0.1)

Arguments

scmet_obj: The scMET posterior object after performing inference.
delta_e: Percentile threshold for residual overdispersion to detect variable features (between 0 and 1). Default: 0.9 for HVF and 0.1 for LVF (top 10%). NOTE: This parameter should be used when correcting for the mean-dispersion relationship.
delta_g: Overdispersion contribution threshold (between 0 and 1). Optional parameter.
evidence_thresh: Posterior evidence probability threshold.
efdr: Target for the expected false discovery rate related to HVF/LVF detection (default = 0.1).

Value

The scMET posterior object with an additional element named hvf or lvf, according to the analysis performed. This is a list object containing the following elements:
- summary: A data.frame containing HVF or LVF analysis output information per feature, including posterior medians and a column that contains the posterior tail probability of a feature being called as HVF or LVF. The logical is_variable column informs whether the feature is called as variable or not.
- evidence_thresh: The optimal evidence threshold.
- efdr: The EFDR value.
- efdr_grid: The EFDR values for the grid search.
- efnr_grid: The EFNR values for the grid search.
- evidence_thresh_grid: The grid where we searched for the optimal evidence threshold.

Examples

    # Fit scMET
    obj <- scmet(Y = scmet_dt$Y, X = scmet_dt$X, L = 4, iter = 100)
    # Run HVF analysis
    obj <- scmet_hvf(scmet_obj = obj)
    # Run LVF analysis
    obj <- scmet_lvf(scmet_obj = obj)
THE SOFTWARE, SERVICE AND WEBSITE ARE INTENDED FOR INFORMATIONAL PURPOSES ONLY, AND ARE NOT INTENDED FOR USE IN CONNECTION WITH THE DIAGNOSIS, PREVENTION, TREATMENT, OR CURE OF ANY MEDICAL CONDITION. EMOTIV DOES NOT ENDORSE ANY SUCH USAGE AND DISCLAIMS ANY AND ALL LIABILITY WITH RESPECT TO ANY SUCH USAGE. IMPORTANT NOTE REGARDING CHILDREN. USE OF THE SOFTWARE, SERVICE OR WEBSITE BY PERSONS UNDER THE AGE OF 16 IS PROHIBITED. IF YOU ARE UNDER THE AGE OF 16, DO NOT USE THE SOFTWARE, SERVICE OR WEBSITE.
- Third Party Programs. The Software and Service may facilitate Your participation (at Your option) in Experiments relating to EEG Data conducted by third parties, such as Your employer or the person or entity conducting the Experiment. If You choose to participate, EMOTIV will disclose, to the applicable third party, information collected from You during Your voluntary participation in such activities or Your use of the Service, such as Your EEG Data, demographic data, performance data, responses to experiment questions, and any data collected from related monitoring equipment, associated data such as event timing markers, mouse, touchscreen, gestural, and keyboard events, eye movements, survey responses, choices, and preferences, tactile, audio, visual, and other sensory stimuli, reaction times, self-assessment, and cognitive performance (collectively, "Experiment Data"). Experiment Data also includes information that may be inferred from the foregoing sources, either alone or in any combination. Third parties may offer You compensation in connection with Your participation. You acknowledge and agree that EMOTIV only facilitates the transfer of information to You about the third parties' Experiments, the transfer of pseudonymized Experiment Data to the third parties, and the payment of compensation between You and the third parties, and that EMOTIV does not conduct the Experiments or control the actions of the third parties, EMOTIV itself is not liable for any payments and EMOTIV has no liability to You relating to Your participation. Third party Experiments may be subject to additional terms and conditions required by the third party. EMOTIV recommends that You review and ensure You agree with any such terms and conditions before participating.
- Third Party Software. If You access any Third Party Software, You hereby authorize EMOTIV to share all or certain portions of Your pseudonymized EEG Data or pseudonymized Experiment Data with the applicable Third Party Software provider based on the terms and conditions (including the license tier and number of license seats) of EMOTIV's license agreement with that provider. In the event the provider fails to comply with any terms and conditions of the license agreement, EMOTIV has the right to cease sharing Your pseudonymized EEG Data or pseudonymized Experiment Data with that provider, which may prevent the Third Party Software from functioning properly. "Third Party Software" means software developed by a third party using EMOTIV's software development kit, which relies on Your pseudonymized EEG Data or pseudonymized Experiment Data to function, including but not limited to enterprise, consumer and free software applications.
- Our Communications with You. You consent to receive communications from EMOTIV electronically. We will communicate with You by email or by posting notices on the Service.
You agree that all agreements, notices, disclosures, and other communications that EMOTIV provides to You electronically satisfy any legal requirement that such communications be in writing. - Discussion Groups. EMOTIV may allow You and others to post messages on the Website through discussion groups and other community forums. All such messages posted on the Website, and any opinions, advice, statements, content or other information contained in such messages are the sole responsibility of the author of those messages and not EMOTIV. The fact that a particular message is posted on or transmitted using the Website does not mean that EMOTIV has endorsed, encouraged or approved that message in any way or verified the accuracy, completeness, legality or usefulness of that message. To the maximum extent permitted by law, You agree to defend, indemnify, and hold harmless EMOTIV, its affiliates, and their respective directors, officers, employees, and agents from and against any and all claims, actions, suits, or proceedings, as well as any and all losses, liabilities, damages, costs, and expenses (including attorneys’ fees) arising out of or accruing from any messages You post on the Website or which are otherwise attributable to You. Although EMOTIV does not regularly monitor such discussion groups or other community forums, EMOTIV has the right, but not the obligation, to remove or otherwise delete any message from the Website at any time without prior notice, in EMOTIV’s sole discretion. You may report any objectionable message to EMOTIV at ______________________. Last Updated Mar 7th, 2023
I am trying to put together a master collection of unicode faces and symbols. However, when I paste a character into Notepad++ it doesn't recognize them. What is the best and most efficient way to display them? This site seems to use Adobe Flash Player. Thanks.

Is the end-goal to display these chars (faces, symbols) on some web page (faces.html) or simply within your text file (faces.txt)? If you simply want to display them in a text file, you most likely need to change the encoding of your Notepad++ file. By default, new Notepad++ files are encoded in ANSI, which will garble any multibyte chars that you paste. I'm using Notepad++ v5.8.7, hopefully the menus / options are not too different on your version: switch the file to UTF-8 (the Encoding menu has an option along the lines of "Encode in UTF-8"). Then try pasting one of your multibyte faces/symbols into the file. Assuming Notepad++ has the font necessary to display this char, it should render correctly. However, if your end-goal is to display these chars on a web page, you should instead use the equivalent HTML "numeric character reference" instead of the raw char. In your html file, you refer to the char using a hex or decimal code. For example, here is a Japanese "kome" symbol: &#x203B (I removed the final semicolon to prevent HTML rendering within the SO page). A "numeric character reference" refers to a char by its Unicode code point. There are various ways to get the Unicode code point. One quick way is to copy-paste your char into the awesome search page here: http://www.fileformat.info/info/unicode/char/search.htm That will give you all the details for the char, including its HTML numeric char ref. Ensure you declare the charset of your webpage as UTF-8, for example with <meta charset="UTF-8"> in the document head. You can test the rendering of your faces/symbols by saving the file (faces.html) to your local computer, opening a new browser tab, then dragging the file onto the tab; the browser should render the HTML char entities as chars. Hope this helps.

You can use the characters as such. Whether the editor (like Notepad++) can display them is immaterial to their use in HTML documents, which will be rendered by browsers. Make sure you set the document encoding to UTF-8 and declare it. See the W3C page Character encodings. The most difficult part is to set fonts so that the characters will be displayed in different environments. There is no single font that covers all Unicode characters. And there is no font that exists in all systems. Using downloadable fonts (web fonts) you can cover most rendering situations, but not for all characters using a single font. See my Guide to using special characters in HTML. You should first consider how realistic your project is; a "master collection" is a rather ambitious goal, and such collections already exist, e.g. at FileFormat.info and at Codepoints.net – and, of course, at Unicode.org.
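A minimal, self-contained test page along the lines both answers describe (the kome mark here is just an illustrative character choice):

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="UTF-8">
      <title>Unicode test</title>
    </head>
    <body>
      <!-- The raw character and its numeric character reference render identically -->
      <p>Raw character: ※</p>
      <p>Numeric reference: &#x203B;</p>
    </body>
    </html>

Saved as faces.html and opened in a browser, both lines should display the same glyph, provided an installed font covers the code point.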
How to pass NSObject from one view controller to another

I am using the iOS 5 SDK with ARC. I want to pass an NSObject from the VC1 view controller to the VC2 view controller, do a modification, and put it back into the VC1 controller. I don't want to point to the same NSObject from both the VC1 and VC2 controllers. Whenever I pass the NSObject, it should create a copy of that NSObject and do the modification (not modify the actual NSObject). I have tried the code below but it crashes, giving the error -[ImageObject mutableCopyWithZone:]: unrecognized selector sent to instance 0x1364ec20

Code: I have an NSObject subclass as:

    @interface ImageObject : NSObject
    @property (copy,nonatomic) NSString *path;
    @property (nonatomic, readwrite) int id;
    @end

In the VC1 view controller, I am passing my object to the VC2 view controller as follows:

    VC2ViewController *vc2 = [[VC2ViewController alloc] initWithNibName:@"VC2ViewController" bundle:nil];
    vc2.imageObj = [imgObj mutableCopy];
    [self.navigationController pushViewController:vc2 animated:YES];

In the VC2 view controller, VC2ViewController.h file:

    #import "ImageObject.h"
    @interface VC2ViewController : UIViewController
    @property (retain,nonatomic) ImageObject *imageObj;
    @end

VC2ViewController.m file:

    // modifying the object as below
    -(void)modifyObject {
        UIViewController *previousViewController = [self.navigationController.viewControllers objectAtIndex:self.navigationController.viewControllers.count-2];
        if ([previousViewController isKindOfClass:[VC1ViewController class]]) {
            VC1ViewController *parent = (VC1ViewController *) previousViewController;
            if(parent != nil) {
                _imageObj.id = 2;
                [parent reloadData:_imageObj];
            }
            parent = nil;
        }
        [self.navigationController popViewControllerAnimated:YES];
    }

Any idea how to resolve this issue?

Your ImageObject class needs to conform to the NSCopying protocol. This answer here explains it better and shows you what the code looks like. I also think that you need to use [imgObj copy] instead of [imgObj mutableCopy] because according to the Apple docs: "The NSMutableCopying protocol declares a method for providing mutable copies of an object. Only classes that define an 'immutable vs. mutable' distinction should adopt this protocol. Classes that don't define such a distinction should adopt NSCopying instead."

Hi Mariam, thank you for the answer. I read about the NSCopying protocol. I also looked at the difference between copy and mutableCopy here (http://sarojsblog.blogspot.in/2011/07/difference-between-copy-and-mutablecopy.html). As I understood it, if I use copy then both arrays will point to the same address, but if I use mutableCopy instead of copy then both arrays will have different addresses. So, which protocol should I use? NSCopying or NSMutableCopying?

You should be using NSCopying unless your ImageObject has two types, immutable and mutable, which I think is not the case here. The post talks about the difference between a normal NSArray and an NSMutableArray; these are two different object classes... much like a normal string and a mutable string. From a logical point of view I see why you think it makes more sense to use mutableCopy (you can change the contents of your object), but the point here is there is NO immutable version of your class... hence, no "immutable vs. mutable distinction", therefore... "adopt NSCopying instead."
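To make the accepted answer concrete, here is a minimal sketch of what conforming ImageObject to NSCopying could look like (an illustration, not the asker's actual code):

    #import <Foundation/Foundation.h>

    @interface ImageObject : NSObject <NSCopying>
    @property (copy, nonatomic) NSString *path;
    @property (nonatomic, readwrite) int id;
    @end

    @implementation ImageObject
    - (id)copyWithZone:(NSZone *)zone {
        // Allocate a fresh instance so VC1 and VC2 never share storage
        ImageObject *theCopy = [[[self class] allocWithZone:zone] init];
        theCopy.path = self.path; // the copy attribute on the property copies the string
        theCopy.id = self.id;
        return theCopy;
    }
    @end

With this in place, vc2.imageObj = [imgObj copy]; hands VC2 an independent copy, and the unrecognized-selector crash goes away because copy (rather than mutableCopy) is the message being sent.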
|Date: ||Sun, 21 Nov 1999 18:14:20 EST| |Sender: ||"SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>| |Subject: ||Re: Limiting observations with SQL| |Content-Type: ||text/plain; charset="us-ascii"| Peter, Nancy and Melanie, Peter came up with the quick easy answer. I didn't send the rownum statement because it attaches the rownum before the order by. Realizing later that Melanie just wanted 100 obs, this is the way to go. Nancy, you were correct. I thought Oracle passed the info to SAS as it got them, not after it has them all. My suggestion would make the complete Oracle program run, then taking the first 100 observations. Definitely the long way around. Melanie, I'm sorry for giving you the incorrect answer. I hope you ran it over the weekend or realized it was taking too long and killed it. I especially didn't want to give bum info to a fellow New Yorker! In a message dated 20/11/99 6:39:04 AUS Eastern Daylight Time, << Try adding: AND ROWNUM <= 100 into your where clause. >>> <MelliJ@AOL.COM> 11/19/99 11:05AM >>> Hello all - I am trying to limit the number of observations from an SQL download via an Oracle connection. I am connecting to a very large database, using a fairly complicated query. I would like to test the code by pulling the first 100 observations or so. This way, I can verify that it works without having it run for hours. This is a sample of what my code looks like: SELECT HSP_ID as MPN, BENE_CLM_NUM as HIC, HSE_CLM_FROM_DT as from_dt, HSE_CLM_THRU_DT as thru_dt, HSE_CLM_STUS_CD as STATUS, ADDED_TO_FILE_DATE as ADDED WHERE HSE_CLM_THRU_DT >= '10/01/1998' AND HSE_INPAT_OUTPAT_IND = 'I' AND HSP_ID like '330%' AND HSE_CLM_STUS_CD not in (33,34,35,36,37,38,39) ORDER BY HSP_ID,BENE_CLM_NUM,HSE_CLM_FROM_DT,) Is there any syntax that will allow me to select how many records are pulled out of Oracle? Thanks in advance for any assistance. Sr. Data Analyst Lake Success, NY
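The subtlety behind Peter's fix is worth spelling out: Oracle assigns ROWNUM before the ORDER BY is applied. For a quick 100-row test extract that is exactly what you want, since it limits how many rows are fetched at all. If you ever need the first 100 rows of the sorted result instead, the ordering has to happen in a subquery. Table and column names below are illustrative:

    -- Quick test sample: caps the rows fetched; the sample's sort order is not guaranteed
    SELECT hsp_id, bene_clm_num, hse_clm_from_dt
    FROM claims
    WHERE hse_inpat_outpat_ind = 'I'
      AND ROWNUM <= 100;

    -- First 100 rows of the *sorted* result: order first, then cut
    SELECT *
    FROM (SELECT hsp_id, bene_clm_num, hse_clm_from_dt
          FROM claims
          WHERE hse_inpat_outpat_ind = 'I'
          ORDER BY hsp_id, bene_clm_num, hse_clm_from_dt)
    WHERE ROWNUM <= 100;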
I can't recall the name of a horror movie / TV-show I saw years ago I'm not sure if this is a nightmare I've had since I was young, because I'm pretty sure I saw this on TV. Anyway, the plot goes something like this: There's this big mansion, I think it's wine-red, out in the middle of nowhere (in a forest, to be exact). A family, or a group of people in different ages, move in. Weird things start happening. I'm not sure if the kids are possessed, or if they get possessed by the time they've settled in, but there's mostly things going on with them... I think. The adults start going loco, and some of them commit suicide. It's kinda like in the movie Shrooms where you think "the main character" is a victim like everyone else in the group, but in the end you're like 'aha, so she was the murderer', because you assume that there are strange creatures killing them or whatever. Anyway, the monsters in my (hopefully not) nightmare are like... swamp monsters? I don't even know. They look like it. One is red and the other one is green. I think the red one is in the house, while the green one is outside. They seem friendly towards the kids, as far as I know... I didn't watch much of it since I got scared. It's the one thing I keep dreaming about, which is weird, because I can't find any information about it. No one else I know has seen it either. Also, I saw the movie around 2005 and the only fitting genre I can think of is psychological horror. It basically screws you over. +1 in hopes you didn't just give a great movie plot away which came from your dreams :D When you watched it on TV in 2005, was it old or new? Was it in English? I don't know which movie this could be, but it sounds like Scary Movie 2 is a parody on this movie (amongst a bunch of others). Maybe you can find the movie by checking the 'parodies' section of the Wikipedia page: http://en.wikipedia.org/wiki/Scary_Movie_2 @poepje I think Scary Movie 2 is mostly a parody of House on Haunted Hill Could it be Rose Red? Dr. Joyce Reardon, a psychology professor, leads a team of psychics into the decrepit mansion known as Rose Red. Her efforts unleash the spirit of former owner Ellen Rimbauer and uncover the horrifying secrets of those who lived and died there. Here is the trailer, maybe parts will look familiar:
An interesting post from Steve Blank talks about how he met the guy who went on to found The Startup Genome Project and just released their first report on what makes a startup successful. Key findings are:
- Founders that learn are more successful
- Startups that pivot once or twice raise 2.5x more money
- Many investors invest 2-3x more capital than necessary in startups that haven't reached problem-solution fit yet
- Investors who provide hands-on help have little or no effect on the company's operational performance
- Solo founders take 3.6x longer to reach scale stage
- Business-heavy founding teams are 6.2x more likely to successfully scale
- Technical-heavy founding teams are 3.3x more likely to successfully scale with product-centric startups with no network effects
- Balanced teams with one technical founder and one business founder raise 30% more money
- Most successful founders are driven by impact
- Founders overestimate the value of IP before product-market fit by 255%
- Startups need 2-3 times longer to validate their market than most founders expect
- Startups that haven't raised money over-estimate their market size by 100x
- Premature scaling is the most common reason for startups to perform worse
- B2C vs. B2B is not a meaningful segmentation of Internet startups anymore because the Internet has changed the rules of business

The StartupDigest and the Founder Institute have teamed up to offer via Udemy a series of lectures aimed at helping startup founders with the initial steps in building a technology company. "Startup and Go" is online and available at Udemy's website (http://www.udemy.com/startup-and-go/). Although it is 'invitation only', there are some codes you can use to gain free access to the course. The first 100 people to use the password "startupdigest" will have full access to the online talks, per a recent email sent to members of The Funded website. I didn't know this, but it seems another code was floating around from an earlier post by ReadWriteWeb for the first 1,000 users, so you can try that one as well: "readwriteweb". The talks are a great way to gain additional insights from some experienced entrepreneurs, and Adeo's always a truly motivational guy! Worth checking out.

Eric Ries, the creator of the "Lean Startup" methodology for launching and running your startup, has now made available a totally free online course teaching you the basics of his methodology, called "How to Build a Better Startup". Beautifully created on the SocratED platform, the course is a combination of short videos and text. Definitely worth checking out, and the link has just gone to my 'favorite links' on the home page of this blog.
How to open a local file using JavaFX?

Oracle Community, 28/03/2015: JavaFX example showing how to get content from a TextArea and save it as a txt file using FileChooser. http://java-buddy.blogspot.com/2015/0......

Volkan; as Craig Dennis points out, the Image class requires a URL and does not have a direct way of taking a path as an argument. However, there is help.

How do I add/use FileChooser on or with a scene? 13/05/2012: Hi, I'm using a JavaFX fileChooser.showSaveDialog with two extension filters (*.csv and *.xml). How can I determine which filter was selected when the end-user clicks the save button? I am writing a program that allows the end-user to create an output file in CSV or XML format. If the end-user enters a file name with no extension, I need a way to determine if I should create a CSV or XML file...

The FileChooser allows users to navigate the file system and choose a file or multiple files. A similar component is DirectoryChooser, which allows users to select a folder.

Save files with JavaFX FileChooser (Genuine Coder): I am in the process of learning JavaFX, and I encountered a problem. I was trying to use FileChooser from JavaFX the way I was used to working with JFileChooser from Swing - in the main() method.

Save data as XML with JAXB. Learn how to use the JavaFX FileChooser and the JavaFX Menu.

How do I use multiple windows or show FileChooser (GitHub): The following are Java code examples showing how to use setSelectedExtensionFilter() of the javafx.stage.FileChooser class. You can vote up the examples you like.

What are Key Frames? When an animation occurs, each step is called a keyframe, which is defined by a KeyFrame object. The key frame interpolates key values defined in KeyValue objects over a period of time, which is defined in javafx.util.Duration.

- The selectedExtensionFilter property is used to pre-select the extension filter for the next displayed dialog and to read the user-selected extension filter from the dismissed dialog.
- By using the same file chooser instance to display its open and save dialogs, the program reaps the following benefits: the chooser remembers the current directory between uses, so the open and save versions automatically share the same current directory.
- For those curious like me (who has so far only been using Swing and hates the Swing file dialog; it doesn't even stretch the table to fill the whole window, seriously WTF) - this is a complete example to use the JavaFX file dialog from any kind of Java application at any time.
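Since the recurring question above is how to tell which extension filter the user picked, here is a minimal runnable sketch; the class name and filter choices are illustrative:

    import java.io.File;
    import javafx.application.Application;
    import javafx.stage.FileChooser;
    import javafx.stage.Stage;

    public class FileChooserDemo extends Application {
        @Override
        public void start(Stage stage) {
            FileChooser chooser = new FileChooser();
            chooser.setTitle("Save As");
            chooser.getExtensionFilters().addAll(
                    new FileChooser.ExtensionFilter("CSV files (*.csv)", "*.csv"),
                    new FileChooser.ExtensionFilter("XML files (*.xml)", "*.xml"));

            File file = chooser.showSaveDialog(stage);
            if (file != null) {
                // getSelectedExtensionFilter() reports the filter chosen in the
                // dismissed dialog, which answers the CSV-vs-XML question above.
                FileChooser.ExtensionFilter picked = chooser.getSelectedExtensionFilter();
                System.out.println("Chosen: " + file + " via " + picked.getDescription());
            }
        }

        public static void main(String[] args) {
            launch(args);
        }
    }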
after update to <IP_ADDRESS>: Missing Reference Mono.Posix

Hi Tolis, after the update to <IP_ADDRESS> the csScript.CS nuget needed updating; the recompile was OK, without an error. After deployment on the test server, the WorldCreator stops with a missing DLL / reference: Mono.Posix. It's a bit strange that Mono, and in this case Posix, is missing. Please look at the csScript requirements to avoid all un-needed nuget libs. Cheers, Lars

Hi there, indeed cs-script has a dependency on Mono.Posix.dll. It is required for implementing a custom system-wide Mutex, which is otherwise not available on Mono-Linux. The cs-script codebase contains Mono.Posix.dll, which is only a facade assembly. Thus the assembly can be built on Windows. The code that actually invokes Mono.Posix.dll at runtime is not executed on Windows, even if it is running on Mono. But on Linux, Mono.Posix.dll is automatically loaded from the GAC. I hope it helps.

Hi Oleg, that cs-script is referencing Mono.Posix is clear now. But why does the WorldCreator need to load it? The error occurred:

Type: InvalidOperationException
Message: Exception occurs while initializing the 'Xpand.ExpressApp.WorldCreator.WorldCreatorModule' module: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.

Here is the error stack:

at DevExpress.ExpressApp.ApplicationModulesManager.SetupModules()
at DevExpress.ExpressApp.ApplicationModulesManager.Load(ITypesInfo typesInfo, Boolean loadTypesInfo)
at DevExpress.ExpressApp.XafApplication.Setup(String applicationName, IList`1 objectSpaceProviders, ApplicationModulesManager modulesManager, ISecurityStrategyBase security)
at DevExpress.ExpressApp.XafApplication.Setup()
at x.x.x.Web.Global.Session_Start(Object sender, EventArgs e) in c:\tfsagents\agent1_work\26\s\x.x.x.Web\Global.asax.cs:line 112
at System.Web.SessionState.SessionStateModule.CompleteAcquireState()
at System.Web.SessionState.SessionStateModule.BeginAcquireState(Object source, EventArgs e, AsyncCallback cb, Object extraData)
at System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStepImpl(IExecutionStep step)
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

InnerException:
Type: ReflectionTypeLoadException
Message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
Data: 0 entries
LoaderExceptions: 1 entries
'0' 'Could not load file or assembly 'Mono.Posix, Version=<IP_ADDRESS>, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or one of its dependencies.
The system cannot find the file specified.'
Stack trace:
at System.Reflection.RuntimeModule.GetTypes(RuntimeModule module)
at System.Reflection.Assembly.GetTypes()
at System.Linq.Enumerable.<SelectManyIterator>d__172.MoveNext()
at System.Linq.Enumerable.WhereEnumerableIterator1.MoveNext()
at Xpand.ExpressApp.WorldCreator.WorldCreatorModule.RegisterDerivedTypes() in C:\expandframework\eXpand\Xpand\Xpand.ExpressApp.Modules\WorldCreator\Module.cs:line 163
at Xpand.ExpressApp.WorldCreator.WorldCreatorModule.Setup(ApplicationModulesManager moduleManager) in C:\expandframework\eXpand\Xpand\Xpand.ExpressApp.Modules\WorldCreator\Module.cs:line 133
at DevExpress.ExpressApp.ApplicationModulesManager.SetupModules()
InnerException is null

I assume that this additional loading of - from my point of view - unnecessary objects and classes slows down the app. Regards, Lars

I published nugets for v<IP_ADDRESS>; they should fix this problem. I haven't managed to load the POSIX assembly on my side. However, it looks like assembly.GetTypes, used in RegisterDerivedTypes and in a few other places, needs to be replaced with Mono calls that will not load into the domain.

Bingo! The CS-Script type Mono.Unix.FileMutex references Mono.Posix.dll. But this type is never touched (loaded) at runtime on Windows:

    internal class LinuxSystemWideLock : ISystemWideLock, IDisposable
    {
        Mono.Unix.FileMutex mutex;
        . . .
        public SystemWideLock(string file, string context)
        {
            bool isLinux = (Environment.OSVersion.Platform == PlatformID.Unix);
            if (isLinux)
            {
                mutex = new LinuxSystemWideLock(file, context);
            }
            else
            . . .

But when you call .GetTypes() you are effectively trying to load the type. In a way it is not so uncommon. I remember experiencing a very similar problem with one of the major dotnet.core assemblies, which contained types that were not loadable at runtime. I just needed to handle the exception in my case.

@oleg-shilo thanks a lot for the details. Such generic calls are not ideal when done from reflection. @larspl I modified the source code and tested it, and lazy loading seems to work fine. The cs-script dll gets loaded only when the ExpressionEvaluator is used. As I said before, I never got POSIX to load on my side so I cannot test properly. However, @oleg-shilo's statements are true, and since the framework does not call GetTypes anymore there is no need for the package, so I removed it as well. @larspl I can share the nuget packages for that build if you wish to test before the next release.

@vimarx System.Core apparently is a .NET system dll and is needed. Why it is looking for v2.0.5 I am not certain; however, all is well since Windows updates addressed your problem. From your stack it looks like you found another place that might relate to loading problems due to reflection calls. So I have replaced reflection again with Mono calls from this patch. In <IP_ADDRESS> I re-implemented a compile-on-the-fly approach with no dependencies. Let me know if I miss anything, but there shouldn't be any cost unless you actually use the EvaluateExpressionOperator.

Closing issue for age. Feel free to reopen it at any time. Thank you for your contribution.
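For anyone hitting the same ReflectionTypeLoadException, a common defensive pattern (a sketch, not the exact patch that went into Xpand) is to enumerate only the types that can actually be loaded:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    static class AssemblyExtensions
    {
        // Returns the types of an assembly, skipping any that fail to load
        // because a dependency (such as Mono.Posix.dll) is missing.
        public static IEnumerable<Type> GetLoadableTypes(this Assembly assembly)
        {
            try
            {
                return assembly.GetTypes();
            }
            catch (ReflectionTypeLoadException ex)
            {
                // Unloadable entries come back as null in ex.Types.
                return ex.Types.Where(t => t != null);
            }
        }
    }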
// MIT license. Copyright © 2017 Simon Strandgaard. All rights reserved.
import XCTest
@testable import SwiftySchwartzianTransform

enum RecordSortKey {
    case textAndId(text: String, id: Int)
    case id(id: Int)
}

extension RecordSortKey: Equatable {
    static func == (lhs: RecordSortKey, rhs: RecordSortKey) -> Bool {
        switch (lhs, rhs) {
        case let (.textAndId(text0, id0), .textAndId(text1, id1)):
            return text0 == text1 && id0 == id1
        case (.textAndId, .id):
            return false
        case (.id, .textAndId):
            return false
        case let (.id(id0), .id(id1)):
            return id0 == id1
        }
    }
}

extension RecordSortKey: Comparable {
    static func < (lhs: RecordSortKey, rhs: RecordSortKey) -> Bool {
        switch (lhs, rhs) {
        case let (.textAndId(text0, id0), .textAndId(text1, id1)):
            if text0 == text1 {
                return id0 < id1
            } else {
                return text0 < text1
            }
        case (.textAndId, .id):
            return true
        case (.id, .textAndId):
            return false
        case let (.id(id0), .id(id1)):
            return id0 < id1
        }
    }
}

extension RecordSortKey: CustomStringConvertible {
    var description: String {
        switch self {
        case let .textAndId(text, id):
            return "\(id);\(text)"
        case let .id(id):
            return "\(id);NIL"
        }
    }
}

struct Record {
    let id: Int
    let text: String?
}

extension Record {
    static func sorted(_ records: [Record], reverse: Bool) -> [Record] {
        typealias ST = SchwartzianTransform<Record, RecordSortKey>
        let st = ST(records, reverse: reverse) { (_, record) -> RecordSortKey in
            if let text = record.text?.lowercased() {
                return RecordSortKey.textAndId(text: text, id: record.id)
            } else {
                return RecordSortKey.id(id: record.id)
            }
        }
        print(st)
        return st.result
    }
}

class SchwartzianTransformTests: XCTestCase {
    func testBasic() {
        let allRecords: [Record] = [
            Record(id: 5, text: "B"),
            Record(id: 0, text: "A"),
            Record(id: 7, text: nil),
            Record(id: 1, text: "a"),
            Record(id: 6, text: nil),
            Record(id: 3, text: "B"),
            Record(id: 8, text: nil),
            Record(id: 4, text: "b"),
            Record(id: 2, text: "A"),
        ]
        do {
            let records = Record.sorted(allRecords, reverse: false)
            let ids: [Int] = records.map { $0.id }
            XCTAssertEqual(ids, [0, 1, 2, 3, 4, 5, 6, 7, 8])
            let texts: [String] = records.map { $0.text ?? "NIL" }
            XCTAssertEqual(texts, ["A", "a", "A", "B", "b", "B", "NIL", "NIL", "NIL"])
        }
        do {
            let records = Record.sorted(allRecords, reverse: true)
            let ids: [Int] = records.map { $0.id }
            XCTAssertEqual(ids, [8, 7, 6, 5, 4, 3, 2, 1, 0])
            let texts: [String] = records.map { $0.text ?? "NIL" }
            XCTAssertEqual(texts, ["NIL", "NIL", "NIL", "B", "b", "B", "A", "a", "A"])
        }
    }

    func testEmpty() {
        let allRecords = [Record]()
        do {
            let records = Record.sorted(allRecords, reverse: false)
            XCTAssertTrue(records.isEmpty)
        }
        do {
            let records = Record.sorted(allRecords, reverse: true)
            XCTAssertTrue(records.isEmpty)
        }
    }

    static var allTests = [
        ("testBasic", testBasic),
        ("testEmpty", testEmpty),
    ]
}
Wasm: memory corruption

There is something odd still, perhaps around allocating objects.

    var a = new Exception[] { new ArgumentException(), null, null };
    PrintLine(a.Length.ToString());

This prints 12 when it should be 3. This code

    var a = new Exception[] { null, null, null };
    PrintLine(a.Length.ToString());

prints 3 correctly.

There's also this intermittent crash which might be related:

    Assertion failed: ((CObjectHeader*)p)->IsFree(), at: E:/GitHub/corert/src/Native/Runtime/Portable/../../gc/gc.cpp,3806,unused_array_size
    exception thrown: RuntimeError: abort(Assertion failed: ((CObjectHeader*)p)->IsFree(), at: E:/GitHub/corert/src/Native/Runtime/Portable/../../gc/gc.cpp,3806,unused_array_size)
    at Error
    at jsStackTrace (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:2146:17)
    at stackTrace (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:2163:16)
    at abort (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:1907:44)
    at ___assert_fail (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:2637:7)
    at WKS::gc_heap::soh_try_fit(int, unsigned long, alloc_context*, unsigned int, int, int*, int*) (wasm-function[21522]:0x100a9f7)
    at WKS::gc_heap::allocate_soh(int, unsigned long, alloc_context*, unsigned int, int) (wasm-function[21524]:0x100bf22)
    at WKS::gc_heap::try_allocate_more_space(alloc_context*, unsigned long, unsigned int, int) (wasm-function[21537]:0x10125e8)
    at WKS::gc_heap::allocate_more_space(alloc_context*, unsigned long, unsigned int, int) (wasm-function[21540]:0x1012d6a)
    at WKS::GCHeap::Alloc(gc_alloc_context*, unsigned long, unsigned int) (wasm-function[21884]:0x10a84dd)
    at RhpGcAlloc (wasm-function[20535]:0xf9fd04)
    RuntimeError: abort(Assertion failed: ((CObjectHeader*)p)->IsFree(), at: E:/GitHub/corert/src/Native/Runtime/Portable/../../gc/gc.cpp,3806,unused_array_size)
    at Error
    at jsStackTrace (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:2146:17)
    at stackTrace (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:2163:16)
    at abort (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:1907:44)
    at ___assert_fail (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:2637:7)
    at WKS::gc_heap::soh_try_fit(int, unsigned long, alloc_context*, unsigned int, int, int*, int*) (wasm-function[21522]:0x100a9f7)
    at WKS::gc_heap::allocate_soh(int, unsigned long, alloc_context*, unsigned int, int) (wasm-function[21524]:0x100bf22)
    at WKS::gc_heap::try_allocate_more_space(alloc_context*, unsigned long, unsigned int, int) (wasm-function[21537]:0x10125e8)
    at WKS::gc_heap::allocate_more_space(alloc_context*, unsigned long, unsigned int, int) (wasm-function[21540]:0x1012d6a)
    at WKS::GCHeap::Alloc(gc_alloc_context*, unsigned long, unsigned int) (wasm-function[21884]:0x10a84dd)
    at RhpGcAlloc (wasm-function[20535]:0xf9fd04)
    at abort (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:1913:11)
    at ___assert_fail (E:\GitHub\corert\tests\src\Simple\HelloWasm\bin\Debug\wasm\native\HelloWasm.js:2637:7)
    at WKS::gc_heap::soh_try_fit(int, unsigned long, alloc_context*, unsigned int, int, int*, int*) (wasm-function[21522]:0x100a9f7)
    at WKS::gc_heap::allocate_soh(int, unsigned long, alloc_context*, unsigned int, int) (wasm-function[21524]:0x100bf22)
    at WKS::gc_heap::try_allocate_more_space(alloc_context*, unsigned long, unsigned int, int) (wasm-function[21537]:0x10125e8)
    at WKS::gc_heap::allocate_more_space(alloc_context*, unsigned long, unsigned int, int) (wasm-function[21540]:0x1012d6a)
    at WKS::GCHeap::Alloc(gc_alloc_context*, unsigned long, unsigned int) (wasm-function[21884]:0x10a84dd)
    at RhpGcAlloc (wasm-function[20535]:0xf9fd04)
    at RhpNewArray (wasm-function[20696]:0xfa9a36)
    at S_P_CoreLib_System_Diagnostics_StackTrace__InitializeForCurrentThread (wasm-function[842]:0x7aa15)
GITHUB_ARCHIVE
interworks.cloud Platform - General

This release includes the Storefront API, a REST API designed to display product details, register customers, add products to the basket, and single sign-on customers to the interworks.cloud Storefront. Below is the set of exposed API methods (a sketch of a generic REST call against such an API is shown at the end of these notes):

- Get Products List
- Get Products By Category Name
- Get Products By Industry Name
- Get Products Enabled for Ordering
- Get Products Enabled for Offering (Tell Me More)
- Get Trial Products
- Get Product Groups
- Order Products List
- Load Product Data
- Load Units of a Product
- Load Attributes of a Product
- Load Product Group Data
- Load the Products of a Product Group
- Add Item to Basket
- Storefront User
- Register Customer to Storefront
- Log In to Storefront
- Create Lead

interworks.cloud Platform - Storefront

Workspace quotas for CloudWorks service

In this release we limited the CloudWorks quotas displayed on the Workspace first page to the following: Number of Virtual Servers, CPU Cores, RAM, Storage, IP Addresses, and Total IOPs. The full list of quotas is displayed by selecting the link 'view all quotas'.

Third-party Integration: Citrix CloudPortal Service Manager

Adding external links in Storefront Workspace for managing CPSM services

For the service providers that use CPSM, we implemented the functionality to add, in Storefront Workspace, links that will redirect the customer administrator to CPSM pages for configuring their services. Up to now, a customer administrator could, through Workspace, create users and provision / de-provision services to them, but he couldn't complete other configuration tasks, such as creating public folders for his Exchange service. By adding in Workspace links that redirect the customer administrator to CPSM internal pages (in Single-Sign-On mode), the administrator can have Workspace as the reference point for configuring his services from end to end.

For the service provider to define the external links per CPSM service, he should go to BSS Setup > Billing Products > Product Types and locate the appropriate product type. In the product type details page, a new tab called "External Links" has been added for the service provider to define the list of the service's external links. For each link he must define a description and the URL of the link. These links will be visible in Workspace if the logged-in user has purchased the specific service. By clicking a link, the system will automatically log the user in to CPSM and load the selected page.

interworks.cloud Platform OSS

Enhancements to Hyper-V Service Manager

In this release we enhanced our Hyper-V service manager to support the following IaaS offerings:

- Hyper-V Predefined Packages. The product manager can create predefined packages of resources (computing power, memory, storage, IP addresses, IOPs) for the customers to purchase. The customer can then use their dedicated pool of resources for creating virtual machines on demand, as well as change their configuration by re-allocating or moving available resources from their pool. The management of the virtual machines is performed in Storefront's Workspace.
- Hyper-V Virtual Servers. In this case the customer does not buy a pool of resources but rather a single virtual machine that is auto-provisioned during checkout. The customer selects the VM's resources (Operating System, computing power, memory, storage, IP address, IOPs) and the system creates the virtual machine upon checkout completion. The created VM is associated with the corresponding subscription the system created for billing it. To cancel the virtual machine, the customer must cancel the related subscription. Analogously, to allocate more resources to the VM, the user must buy add-on resources for the related subscription. The VM is visible in Storefront's Workspace, and the user can perform power operations such as reboot / shut down from inside Workspace.

Support for Storage Add-on for Exchange 2013 Service

In this release we enhanced our Hosted Exchange 2013 service manager to support allocation of extra storage to existing mailboxes. Up to now, to increase the storage of a mailbox you had to allocate a different plan. Now you can create a storage add-on that, when applied to an existing mailbox, increases the mailbox size without affecting the rest of the allocated plan's parameters.
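Returning to the Storefront API above, a generic REST call would follow the usual pattern sketched below. The base URL, endpoint path, and authentication header are hypothetical placeholders, not documented interworks.cloud values; consult the official API documentation for the real ones.

import requests

# Hypothetical values -- the real base URL, endpoint paths, and
# authentication scheme come from the official Storefront API documentation.
BASE_URL = "https://storefront.example.com/api"
API_KEY = "your-api-key"

def get_products_list():
    """Call a hypothetical 'Get Products List' endpoint and return its JSON body."""
    response = requests.get(
        BASE_URL + "/products",
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

for product in get_products_list():
    print(product)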
OPCFW_CODE
Student Webspace for AppEngine

What is Student Webspace?

In a nutshell, Student Webspace is a minimalistic hosting platform running on top of Google App Engine. I created it to allow my students to have some place to upload and host their assignments. The app is designed to give you control over user registration. New users must register using a valid secret CourseID code. Once they register, they can upload any files into their account. Each file gets a unique URL such as:

How does it work?

The files are uploaded to Google's Blobstore database, and the username and file name are used to find and retrieve them. I wrote a lengthy blog post explaining the nature of the app, its specifications, and development details. Please note that you may need to enable billing to use Blobstore.

The project uses GAE Sessions instead of Google's user management in order to allow anyone to sign up for the service (not only users with Google accounts).

This app is provided as is, without any warranty. There are some serious bugs in it, and I haven't really got around to fixing them. If you manage to iron these out, please let me know.

Deploying Student Webspace

Here are the steps you should take to successfully deploy your own version of Student Webspace:

- Open appengine_config.py and generate your own security key. You can use mine, but you probably would want your own. See the comments in the file for instructions.
- Deploy your app to Google AppEngine
- Navigate your browser to http://yourapp.example.com/init to configure your first account
- Choose a password
- You will be automatically logged in as admin with that password.

Once you log in, you will notice that there is already a CourseID in the system. It will be something like XXXX, where XXXX is a randomly generated number. You can use this CourseID to register new users, but I recommend creating a new one from the admin panel when you introduce this app to the class.

Note that this procedure won't work in ver 0.2 or lower. If you are running an earlier version, please upgrade.

Promoting a user to Admin Status

All accounts start as student accounts. Running the init procedure (see above) will create your first admin account. If you need more admins, you will want to add them manually.

- Go to the AppEngine console
- Click on DataStore viewer
- Find the WebSpaceUser value for the specific user
- Manually change its admin property

There is no UI for this yet, but I expect to have one ready in version 0.4.

(c) 2011 Lukasz Grzegorz Maciak. Student Webspace is licensed under Apache License Version 2.0.
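On the retrieval side, an era-appropriate sketch of such a download handler might look like the following, assuming the Python 2 App Engine runtime with webapp2; the handler and route names are illustrative, not the app's actual code:

import urllib

import webapp2
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
    """Serve a previously uploaded file by its Blobstore key."""
    def get(self, resource):
        blob_key = str(urllib.unquote(resource))
        if not blobstore.get(blob_key):
            self.error(404)
        else:
            # Streams the blob to the client without loading it into app memory.
            self.send_blob(blob_key)

app = webapp2.WSGIApplication([(r'/serve/([^/]+)', ServeHandler)])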
OPCFW_CODE
How do I change permissions on Synology?

Right-click on one of the files or folders, or go to the Action menu. Select Properties. Go to the Owner section to select an owner for the selected files and folders from the Owner drop-down menu. Go to the Permission tab, select a user or group, and then click Edit to open the Permission Editor.

How do I give someone access to my Synology NAS?

You can specify which users or groups can access, view, or modify a shared folder and its contents. To edit the permissions of a shared folder:

- Go to Control Panel > Shared Folder.
- Select the shared folder whose permissions you wish to edit.
- Go to the Permissions tab.
- Select one of the following from the drop-down menu:

What should I do if I cannot access my Synology device?

- Check if QuickConnect is enabled.
- Obtain the IP address via DHCP.
- Set the DNS server value.
- Temporarily turn off the IPv6 setup.
- Temporarily disable the firewall.
- Disable the MTU value configuration.
- Synchronize the time with an NTP server.

How do I enable HTTPS on Synology NAS?

Connect via HTTPS:

- Sign in to DSM UC using an account belonging to the administrators group.
- Go to Control Panel > Network > Connection Settings.
- Tick the Automatically redirect HTTP connections to HTTPS checkbox. The default HTTPS port is 5001.
- Click Apply.

How do I change permissions on a NAS drive?

- Access the Properties dialog box.
- Select the Security tab.
- Click Edit.
- In the Group or user name section, select the user(s) you wish to set permissions for.
- In the Permissions section, use the checkboxes to select the appropriate permission level.
- Click Apply.
- Click OK.

What are inheritable permissions?

Inherited permissions are those that are propagated to an object from a parent object. Inherited permissions ease the task of managing permissions and ensure consistency of permissions among all objects within a given container.

How do I access my Synology NAS shared folder?

Go to Control Panel > Shared Folder. Select the shared folder that you want to access with your NFS client and click Edit. Go to NFS Permissions and click Create. Refer to this article to edit the permission settings.

What is the default admin password for Synology?

Double-click on your Synology device. Enter the system's default username, admin, and leave the password field blank.

How do I access Synology on my local network?

Check the connection of your Synology NAS to the local network:

- Web Assistant: Enter find.synology.com into the address bar of your web browser.
- Synology Assistant: This desktop utility can be found at Download Center > select your Synology NAS model > the Desktop Utilities tab.

Why is my Synology connection not secure?

A "Not Secure" warning may appear in a browser and in Synology mobile applications for the following reasons: You are connecting to your Synology device via its IP address. Your Synology device doesn't have a trusted certificate. A subdomain doesn't apply to your certificate.

What permissions are needed to modify drives?

There are six standard permission types which apply to files and folders in Windows:

- Full Control.
- Modify.
- Read & Execute.
- List Folder Contents.
- Read.
- Write.
OPCFW_CODE
Protecting Editable Fields - Best practice

At our school teachers enter a lot of data, and we want to protect ourselves from accidentally clicking or tapping and modifying a field. Here are a few thoughts on how I am thinking of making this possible. I'd love to get some feedback on which make more or less sense:

(1) Initially I thought I'd have one tab that only has function fields displaying the data from the editable fields, but on the phone tabs show up sequentially, which would work.
(2) Using a different view: one that has all the editable fields and another only displaying them via function fields.
(3) Having at the top of each tab a toggle which will switch all fields from editable input fields to equivalent function fields for view-only purposes.

Which of these or other options would you consider to be a good implementation of protecting fields? Any and all thoughts are much appreciated!

I would go for option 3, but I would make use of the field option 'writable if' in combination with a Yes/No field to protect those fields from input, instead of using an extra equivalent function field.

Oh man, of course, thank you for your suggestion Steven! Do I have to manually add my toggle reference to each field's "writable if" option, or is there a way to make it apply to multiple fields to speed up the process? Is there any way to have the edit toggle reset when I close or open a record?

The addition of using buttons on top of the toggle is a great UI addition that Leonard shared: https://ninoxdb.de/en/forum/technical-help-5ab8fe445fe2b42b7dd39ee7/button-to-show-fields-5b51c4920eb4ca611f2e2365

You can make the whole table also "writable if"....

Good idea Steven, I'm just not sure how I can toggle a condition to make records writable or not. Could you give me an example?

Let's say we make use of a separate table for settings with only one record called SETTINGS and a Yes/No field called EDITTABLE. Then you can put this in the "writable if" section of the desired fields you want to protect in the other tables. Follow this thread for an example database: https://ninoxdb.de/en/forum/use-cases-5abd0b9c4da2d77b6ebfa395/create-tables-with-autonumbering-with-ease.-5d5ee82ab6ba1f2dab0664b4

That's brilliant Steven. Could I also create a multiple choice field that refers to users and easily add or remove users from being able to edit a table?

Which version do you use? If it's the cloud (thus web) version, it's better to use the built-in user roles. Otherwise, if it's a stand-alone version (iPad, Mac), you will need a sort of login screen first, with an optional password, to know which user is using the database, I think? I'll try to post an example in the weekend... if Nick is not first, we'll see...

I'm using the cloud version only for our school.

@Steven, I missed that one... Works great to protect fields; for me it's much better to protect the Plan Dates so they don't move by mistake in the Gantt Chart.
OPCFW_CODE
Preface of Excel VBA Programming, 2nd Edition book

The goal of this book is to help you learn VBA programming with Excel. No prior programming experience is required or expected. Although you do not have to be an Excel user, you must have a good understanding of the basic tools involved in using any spreadsheet application. This includes a basic understanding of ranges and cell references, formulas, built-in functions, and charts.

I ask my students at the start of every semester if they know how to use Excel. At least 90 percent of them say they are very comfortable with the application. Within two weeks of the start of the semester it is clear that no more than 10 percent of the class can write a proper formula, one that takes advantage of absolute and relative references and built-in functions. Furthermore, fewer than 5 percent know anything about chart types and the kinds of analyses they should be used in. If you're not comfortable with spreadsheet applications, or it's been a while since you have used a spreadsheet, then I recommend you consider purchasing another introductory book on how to use the Excel application prior to learning how to program in VBA for Excel. In addition to spreadsheets, I also expect you to have a basic understanding of the Windows operating system.

What's in This Book and What Is Required?

I developed the programs in this book using Excel 2003 for Windows. Although Excel and VBA don't change much from one version to the next, I can't guarantee that the programs in this book will execute without error in earlier versions of Excel. With each new version of Excel, VBA is updated with new objects, and existing objects are expanded with new properties and methods. If I use even one new object, property, or method specific to VBA-Excel 2003 in a program, then it will generate an error if executed in a previous version of Excel; therefore, you need Excel 2003 (with VBA installed and activated) to use this book.

The chapter projects in this book feature the development of games using VBA with Excel. This is somewhat unusual in the sense that, prior to writing this book, I had never seen an Excel application that runs any kind of a game; however, it does serve to make programming more fun. After all, what's the first thing anybody does when a new computer is purchased? The answer: find the games that are installed and start playing. With this book, you get to write the program and then play the game. It actually works quite well. The games developed in this book illustrate the use of basic programming techniques and structures found in all programming languages, as well as all of the common (and some less common) components in Excel.

Download Excel VBA Programming, Second Edition, by Duane Birnbaum in free PDF format.
OPCFW_CODE
The third axiom is about events that are mutually exclusive. Two events $A$ and $B$ are mutually exclusive if at most one of them can happen; in other words, they can't both happen. For example, suppose you are selecting one student at random from a class in which 40% of the students are freshmen and 20% are sophomores. Each student is either a freshman or a sophomore or neither; but no student is both a freshman and a sophomore. So if $A$ is the event "the student selected is a freshman" and $B$ is the event "the student selected is a sophomore", then $A$ and $B$ are mutually exclusive.

What's the big deal about mutually exclusive events? To understand this, start by thinking about the event that the selected student is a freshman or a sophomore. In the language of set theory, that's the union of the two events "freshman" and "sophomore". It is a great idea to use Venn diagrams to visualize events. In the diagram below, imagine $A$ and $B$ to be two mutually exclusive events shown as blue and gold circles. Because the events are mutually exclusive, the corresponding circles don't overlap. The union is the set of all the points in the two circles.

What's the chance that the student is a freshman or a sophomore? In the population, 40% are freshmen and 20% are sophomores, so a natural answer is 60%. That's the percent of students who satisfy our criterion of "freshman or sophomore". The simple addition works because the two groups are disjoint.

Kolmogorov used this idea to formulate the third and most important axiom of probability. Formally, $A$ and $B$ are mutually exclusive events if their intersection is empty:

$$A \cap B = \phi$$

The Third Axiom: Addition Rule

In the context of finite outcome spaces, the axiom says:

- If $A$ and $B$ are mutually exclusive events, then $P(A \cup B) = P(A) + P(B)$.

You will show in an exercise that the axiom implies something more general:

- For any fixed $n$, if $A_1, A_2, \ldots, A_n$ are mutually exclusive (that is, if $A_i \cap A_j = \phi$ for all $i \ne j$), then

$$P(A_1 \cup A_2 \cup \cdots \cup A_n) = \sum_{i=1}^n P(A_i)$$

This is sometimes called the axiom of finite additivity.

This deceptively simple axiom has tremendous power, especially when it is extended to account for infinitely many mutually exclusive events. For a start, it can be used to create some handy computational tools.

Suppose that 50% of the students in a class have Data Science as one of their majors, and 40% are majoring in Data Science as well as Computer Science (CS). If you pick a student at random, what is the chance that the student is majoring in Data Science but not in CS?

The Venn diagram below shows a dark blue circle corresponding to the event $A =$ "Data Science as one of the majors", and a gold circle (not drawn to scale) corresponding to $B =$ "majoring in both Data Science and CS". The two events are nested because $B$ is a subset of $A$: everyone in $B$ has Data Science as one of their majors. So $B \subseteq A$, and those who are majoring in Data Science but not CS form the difference "$A$ and not $B$":

$$A \backslash B = A \cap B^c$$

where $B^c$ is the complement of $B$. The difference is the bright blue ring on the right.

What's the chance that the student is in the bright blue difference? If you answered, "50% - 40% = 10%", you are right, and it's great that your intuition is saying that probabilities behave just like areas. They do. In fact the calculation follows from the axiom of additivity, which we also motivated by looking at areas.

Suppose $A$ and $B$ are events such that $B \subseteq A$. Then $P(A \backslash B) = P(A) - P(B)$.

Proof.
Because $B \subseteq A$,

$$A = B \cup (A \backslash B)$$

which is a disjoint union. By the axiom of additivity,

$$P(A) = P(B) + P(A \backslash B)$$

and so

$$P(A \backslash B) = P(A) - P(B)$$

If an event has chance 40%, what's the chance that it doesn't happen? The "obvious" answer of 60% is a special case of the difference rule.

For any event $B$, $P(B^c) = 1 - P(B)$.

Proof. The Venn diagram below shows what to do. Take $A = \Omega$ in the formula for the difference, and remember the second axiom $P(\Omega) = 1$: then $P(B^c) = P(\Omega \backslash B) = P(\Omega) - P(B) = 1 - P(B)$. Alternatively, redo the argument for the difference rule in this special case.

When you see a minus sign in a calculation of probabilities, as in the Complement Rule above, you will often find that the minus sign is due to a rearrangement of terms in an application of the addition rule.

When you add or subtract probabilities, you are implicitly splitting an event into disjoint pieces. This is called partitioning the event, a fundamentally important technique to master. In the subsequent sections you will see numerous uses of partitioning.
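These rules are easy to sanity-check numerically. The few lines of Python below redo the three calculations from this section (addition, difference, and complement) with the percentages used in the examples:

# Addition rule: freshman and sophomore are mutually exclusive.
p_freshman, p_sophomore = 0.40, 0.20
assert abs((p_freshman + p_sophomore) - 0.60) < 1e-12

# Difference rule: B (Data Science and CS) is a subset of A (Data Science major).
p_A, p_B = 0.50, 0.40
p_A_not_B = p_A - p_B  # P(A \ B) = P(A) - P(B)
assert abs(p_A_not_B - 0.10) < 1e-12

# Complement rule: P(B^c) = 1 - P(B).
p_event = 0.40
assert abs((1 - p_event) - 0.60) < 1e-12
print("all three rules check out")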
OPCFW_CODE
Automatic Identification of Term Citation Object with Feature Fusion

Na Ma 1,2; Zhixiong Zhang 1,2,3,4; Pengmin Wu 5

1 National Science Library, Chinese Academy of Sciences, Beijing 100190, China
2 School of Economic and Management, University of Chinese Academy of Sciences, Beijing 100190, China
3 Wuhan Library, Chinese Academy of Sciences, Wuhan 430071, China
4 Hubei Key Laboratory of Big Data in Science and Technology, Wuhan 430071, China
5 Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

[Objective] This paper explores methods for automatically identifying term citation objects in scientific papers, with feature fusion and a pseudo-label noise reduction strategy. [Methods] First, we converted the identification of term citation objects into sequential annotation. Then, we combined linguistic and heuristic features of term citation objects in the BiLSTM-CNN-CRF input layer, which enhanced their feature representations. Finally, we designed a pseudo-label learning noise reduction mechanism and compared the performance of different models. [Results] The optimal F1 value of our method reached 0.6018, which was 8% higher than that of the BERT model. [Limitations] The experimental data was collected from computer science articles; thus, our model needs to be examined with data from other fields. [Conclusions] The proposed method can effectively identify term citation objects.

Na Ma, Zhixiong Zhang, Pengmin Wu. Automatic Identification of Term Citation Object with Feature Fusion. Data Analysis and Knowledge Discovery, 2020, 4(1): 89-98.

Sample citation contexts:

- We have adopted the Conditional Maximum Entropy (MaxEnt) modeling paradigm as outlined in REF3 and REF19
- To quickly (and approximately) evaluate this phenomenon, we trained the statistical IBM word-alignment model 4 REF7, using the GIZA++ software REF11 for the following language pairs: Chinese-English, Italian-English, and Dutch-English, using the IWSLT-2006 corpus REF23 for the first two language pairs, and the Europarl corpus REF9 for the last one.
- In computational linguistic literature, much effort has been devoted to phonetic transliteration, such as English-Arabic, English-Chinese REF5, English-Japanese REF6 and English-Korean.
- Tokenisation, species word identification and chunking were implemented in-house using the LTXML2 tools REF4, whilst abbreviation extraction used the Schwartz and Hearst abbreviation extractor REF9 and lemmatisation used morpha REF12.
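To make the sequential-annotation framing concrete, the toy snippet below shows how one of the sample sentences might be labeled with BIO tags, where B-OBJ/I-OBJ mark the span of the term citation object attached to a reference marker. The token and label assignments are illustrative only, not taken from the paper's corpus or code:

# One sample sentence, tokenized, with illustrative BIO labels marking the
# term citation object ("GIZA ++ software") cited via the marker REF11.
tokens = ["using", "the", "GIZA", "++", "software", "REF11"]
labels = ["O", "O", "B-OBJ", "I-OBJ", "I-OBJ", "O"]

# A sequence labeler such as BiLSTM-CNN-CRF is trained to predict `labels`
# from `tokens`; predicted B-OBJ/I-OBJ spans are the term citation objects.
for token, label in zip(tokens, labels):
    print(token, label)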
OPCFW_CODE
Pyro models with latent discrete variables / enumeration

Hi, is it possible to train Pyro models with latent discrete variables with scvi-tools? Here are examples and explanations of what I mean: https://pyro.ai/examples/enumeration.html

Looking at the PyroSviTrainMixin class, it looks to me that this is not implemented. If it doesn't exist, I could probably help with implementing this.

Can you give an example of what you'd like to add? The code should be flexible enough now to enable developers to add TraceEnum_ELBO etc.

Below is a minimal example of a Gaussian mixture model with 100 cells, 1 gene, and 2 components implemented in Pyro. I think there are 3 things needed to train models like this in scvi:

1.) Add @config_enumerate to the model, as in the code below (hopefully easily possible by adding @config_enumerate to the forward() method of a module).
2.) Hide discrete parameters when constructing an autoguide, as in poutine.block(model, hide=['assignment']).
3.) Use TraceEnum_ELBO.

import numpy as np
import matplotlib.pyplot as plt
import torch
import pyro
import pyro.distributions as dist
from pyro import poutine
from pyro.infer.autoguide import AutoLowRankMultivariateNormal
from pyro.infer.autoguide.initialization import init_to_value
from pyro.infer import SVI, config_enumerate, TraceEnum_ELBO, infer_discrete

# Produce some artificial (normalized) gene expression data:
n_components = 2
n_genes = 1
n_cells = 100
mus_k = torch.tensor(np.array((5., 20.)), dtype=torch.float32)
weights_k = torch.tensor(np.ones(n_components) / n_components, dtype=torch.float32)
k_c = pyro.sample('k_c', dist.Categorical(weights_k).expand([n_cells]).to_event(1))
data = pyro.sample("obs", dist.Normal(loc=mus_k[k_c], scale=1).expand([n_cells]).to_event(1))

# Define a 1D Gaussian mixture model with two components:
@config_enumerate
def model(data):
    n_cells = len(data)
    n_components = 2
    scale = 1
    # Global variables.
    weights = pyro.sample('weights', dist.Dirichlet(torch.ones(n_components)))
    locs = pyro.sample('locs', dist.Normal(10., 10.).expand([n_components]).to_event(1))
    # Local variables.
    with pyro.plate('cells', n_cells):
        assignment = pyro.sample('assignment', dist.Categorical(weights))
        pyro.sample('obs', dist.Normal(locs[assignment], scale), obs=data)

# Make an appropriate autoguide (hiding the discrete variable):
global_guide = AutoLowRankMultivariateNormal(
    poutine.block(model, hide=['assignment']),
    init_to_value(values={'locs': torch.tensor(np.array((1., 10.)), dtype=torch.float32)}))

# Train the model:
optim = pyro.optim.Adam({'lr': 0.1, 'betas': [0.8, 0.99]})
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, global_guide, optim, loss=elbo)
for i in range(200):
    loss = svi.step(data)

# Predict components:
guide_trace = poutine.trace(global_guide).get_trace(data)  # record the globals
trained_model = poutine.replay(model, trace=guide_trace)  # replay the globals

def classifier(data, temperature=0):
    inferred_model = infer_discrete(trained_model, temperature=temperature,
                                    first_available_dim=-2)  # avoid conflict with data plate
    trace = poutine.trace(inferred_model).get_trace(data)
    return trace.nodes["assignment"]["value"]

k_c_inferred = classifier(data)

# Plot ground truth and inferred components:
scatter = plt.scatter(range(len(data)), data, c=k_c_inferred)
plt.xlabel('cell')
plt.title('Simulated Data')
plt.ylabel('gene expression')
plt.legend(handles=scatter.legend_elements()[0], title="inferred component")
plt.show()

scatter = plt.scatter(range(len(data)), data, c=k_c)
plt.title('Simulated Data')
plt.xlabel('cell')
plt.ylabel('gene expression')
plt.legend(handles=scatter.legend_elements()[0], title="ground truth component")

To me, it seems like all the things you requested are currently achievable with minimal overhead:

- config_enumerate can be used internally when developers inherit PyroBaseModuleClass.
- Same for the blocking.
- Devs can use any Pyro ELBO they'd like when they create the PyroTrainingPlan.

@vitkl what do you think?

Thank you for those pointers. I will implement the minimal Gaussian mixture model with scvi-tools as a test case. I will let you know how it works out.

Thanks

I can confirm it is indeed easily possible to implement something like a Gaussian mixture model in scvi-tools. As a reminder, @config_enumerate has to be added to the forward() method of the module. I forgot that initially, and it gave nonsensical results without giving an error message...

I used the approach @AlexanderAivazidis proposed, which requires all 3 modifications; they are all easy to make.
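Putting the three modifications together, a module skeleton might look roughly like the sketch below. PyroBaseModuleClass and PyroTrainingPlan are the classes named in the replies above, but the import paths and constructor signatures shown here are assumptions that vary across scvi-tools versions; treat this as a sketch, not a working recipe:

from pyro import poutine
from pyro.infer import TraceEnum_ELBO, config_enumerate
from pyro.infer.autoguide import AutoLowRankMultivariateNormal
from scvi.module.base import PyroBaseModuleClass  # assumed path; check your version
from scvi.train import PyroTrainingPlan           # assumed path; check your version

class MixtureModule(PyroBaseModuleClass):
    def __init__(self):
        super().__init__()
        # 1) Enumerate the discrete site by decorating the model callable.
        self._model = config_enumerate(self.forward)
        # 2) Hide the discrete site from the autoguide.
        self._guide = AutoLowRankMultivariateNormal(
            poutine.block(self._model, hide=["assignment"])
        )

    def forward(self, data):
        # Model with a discrete 'assignment' site, as in the example above.
        raise NotImplementedError

# 3) Use an enumeration-aware ELBO when building the training plan
# (the argument name is an assumption):
# plan = PyroTrainingPlan(module, loss_fn=TraceEnum_ELBO(max_plate_nesting=1))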
GITHUB_ARCHIVE
- 15 March 2021 - Journal article

Subgenomic RNA identification in SARS-CoV-2 genomic sequencing data - Genome Research

We have developed periscope, a tool for the detection and quantification of subgenomic RNA (sgRNA) in SARS-CoV-2 genomic sequence data. The translation of the SARS-CoV-2 RNA genome for most open reading frames (ORFs) occurs via RNA intermediates termed "subgenomic RNAs." sgRNAs are produced through discontinuous transcription, which relies on homology between transcription regulatory sequences (TRS-B) upstream of the ORF start codons and that of the TRS-L, which is located in the 5′ UTR. TRS-L is immediately preceded by a leader sequence. This leader sequence is therefore found at the 5′ end of all sgRNA. We applied periscope to 1155 SARS-CoV-2 genomes from Sheffield, United Kingdom, and validated our findings using orthogonal data sets and in vitro cell systems. By using a simple local alignment to detect reads that contain the leader sequence, we were able to identify and quantify reads arising from canonical and noncanonical sgRNA. We were able to detect all canonical sgRNAs at the expected abundances, with the exception of ORF10. A number of recurrent noncanonical sgRNAs are detected. We show that the results are reproducible using technical replicates and determine the optimum number of reads for sgRNA analysis. In VeroE6 ACE2+/− cell lines, periscope can detect the changes in the kinetics of sgRNA in orthogonal sequencing data sets. Finally, variants found in genomic RNA are transmitted to sgRNAs with high fidelity in most cases. This tool can be applied to all sequenced COVID-19 samples worldwide to provide comprehensive analysis of SARS-CoV-2 sgRNA.

© 2021 Parker et al.; Published by Cold Spring Harbor Laboratory Press. This article, published in Genome Research, is available under a Creative Commons License (Attribution 4.0 International), as described at http://creativecommons.org/licenses/by/4.0/.

Parker, M., Lindsey, B., Leary, S., Gaudieri, S., Chopra, A., Wyles, M., Angyal, A., Green, L., Parsons, P., Tucker, R., Brown, R., Groves, D., Johnson, K., Carrilero, L., Heffer, J., Partridge, D., Evans, C., Raza, M., Keeley, A., Smith, N., Da Silva Filipe, A., Shepherd, J., Davis, C., Bennett, S., Sreenu, V., Kohl, A., Aranday-Cortes, E., Tong, L., Nichols, J., Thomson, E., Wang, D., Mallal, S. & de Silva, T. 2021, 'Subgenomic RNA identification in SARS-CoV-2 genomic sequencing data', Genome Research, 31(4), pp. 645-658. http://www.genome.org/cgi/doi/10.1101/gr.268110.120
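The core idea, flagging reads whose 5′ end locally aligns to the leader sequence, can be sketched in a few lines of Python. This is illustrative only and not periscope's implementation; the leader string and scoring parameters below are placeholders:

from Bio import pairwise2  # Biopython's simple pairwise aligner

LEADER = "ACGTACGTTAGC"  # placeholder -- substitute the real SARS-CoV-2 leader sequence
SCORE_CUTOFF = 50        # alignment-score threshold; tune on known sgRNA reads

def looks_like_sgrna(read_seq, leader=LEADER, cutoff=SCORE_CUTOFF):
    """Return True if the read's 5' end locally aligns well to the leader."""
    # match=2, mismatch=-2, gap open=-10, gap extend=-0.5 (illustrative scoring)
    score = pairwise2.align.localms(read_seq[:100], leader, 2, -2, -10, -0.5,
                                    one_alignment_only=True, score_only=True)
    return score >= cutoff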
OPCFW_CODE
I got this problem in track-3, but I haven't even had one successful submission. Why am I blocked until next week? Could you help solve this?

`Submission failed : The participant has no submission slots remaining for today. Please wait until 2024-04-05 16:00:52 UTC to make your next submission.`

I don't know if it was caused by my behavior. I noticed a blocked submission, so I closed that issue (I think doing so can kill a process). I want to know if that is the right way to kill a submission, and why there are no submission slots left for me.

Once a submission is made, its quota is used up, and you cannot cancel it to reclaim the quota. Also, we limit submissions to Track 3 to 2 per week. So you probably have to wait until next week.

However, my submissions have all failed. I think failed tasks should not be counted against the limit; this is not conducive to familiarizing ourselves with the platform and debugging.

I strongly agree with that. To be honest, my team spent most of our time dealing with environment problems, because there are always some packages that we can install but the platform cannot, telling us the version cannot be found. There was only one "successful" submission, which was killed in validation and raised AssertionError: Timeout while making a prediction. We tried to improve the quantization and submit again; however, there were no submission slots left. I think this is depressing, because our solution does well on our computer, but we spend too much time on things that don't work.

@chicken_li @kehan_yin We understand your concerns well. However, it is also a matter of fact that we have to provision GPU resources even for your failed submissions. This includes building your submission (which takes around tens of minutes), which is not a very small cost, and we cannot afford an unlimited number of failed submissions. Here are some tips that we can provide.

- Use the Dockerfile (and the docker_run.sh) provided in the starter kit to test whether the Docker image can be built locally.
- I think if your submission can be built for one track, it will also work for other tracks. We allow 4 failures for each track per week, which I think should be enough for you to find a working solution.

I attempted to execute identical code on two separate occasions, altering only the baseline model to Mistral-7B, without making any other adjustments. Both times, the process timed out without producing any errors. However, according to the overview documentation, Mistral-7B should operate smoothly on this machine.

I have tried 3 times to submit my code; another attempt encountered an environment error. Many teams face these problems. Could you please consider removing the limitation on failed submissions for participating teams until they have achieved their first successful submission on each track?

Replied in another thread. We just made a small test and found out that Mistral inference is slower than Vicuna, which may indeed lead to a timeout. The "smoothly" was considered in terms of GPU memory but not the overall time limit. We will revise that. Also, regarding the failed submissions, we are considering giving a more generous failure quota, and are also working on analyzing existing errors, which should help. In the meantime, you can consider submitting to other tracks to see whether your solution gets through the time limit.

Thanks for your reply. I just got another failure 10 minutes ago. In this submission, I submitted the baseline without any modification. I honestly hope you can give more generous failure limits to the roughly 420 teams focusing on exploring and solving the challenges themselves.
OPCFW_CODE
from testers.monad_law_tester import MonadLawTester
from testers.functor_law_tester import FunctorLawTester
from testers.monad_transform_tester import MonadTransformTester
from testers.applicative_law_tester import ApplicativeLawTester

from pymonet.maybe import Maybe
from pymonet.box import Box
from pymonet.either import Left
from pymonet.monad_try import Try
from pymonet.validation import Validation
from pymonet.utils import increase

from hypothesis import given
from hypothesis.strategies import integers

import pytest


class MaybeSpy:
    def mapper(self, value):
        return value + 1

    def binder(self, value):
        return Maybe.just(value + 1)


@pytest.fixture()
def maybe_spy(mocker):
    spy = MaybeSpy()
    mocker.spy(spy, 'mapper')
    mocker.spy(spy, 'binder')
    return spy


@given(integers())
def test_maybe_eq_operator_should_compare_values(integer):
    assert Maybe.just(integer) == Maybe.just(integer)
    assert Maybe.just(None) == Maybe.just(None)
    assert Maybe.just(integer) != Maybe.nothing()


@given(integers())
def test_maybe_map_operator_should_be_applied_only_on_just_value(integer):
    assert Maybe.just(42).map(increase) == Maybe.just(43)
    assert Maybe.nothing().map(increase) == Maybe.nothing()


def test_maybe_map_should_not_call_mapper_when_monad_has_nothing(maybe_spy):
    Maybe.nothing().map(maybe_spy.binder)
    assert maybe_spy.binder.call_count == 0


def test_maybe_bind_should_return_result_of_mapper_called_with_maybe_value(maybe_spy):
    assert Maybe.just(42).bind(increase) == 43


def test_maybe_bind_should_not_call_mapper_when_monad_has_nothing(maybe_spy):
    Maybe.nothing().bind(maybe_spy.binder)
    assert maybe_spy.binder.call_count == 0


@given(integers())
def test_maybe_get_or_else_method_should_return_maybe_value_when_monad_is_not_empty(integer):
    assert Maybe.just(integer).get_or_else(0) is integer


@given(integers())
def test_maybe_get_or_else_method_should_return_argument_when_monad_is_empty(integer):
    assert Maybe.nothing().get_or_else(integer) is integer


@given(integers())
def test_maybe_is_nothing_should_return_proper_boolean(integer):
    assert Maybe.just(integer).is_nothing is False
    assert Maybe.nothing().is_nothing is True


def test_maybe_if_filter_returns_false_method_should_return_empty_maybe():
    assert Maybe.just(41).filter(lambda value: value % 2 == 0) == Maybe.nothing()


def test_maybe_if_filter_returns_true_method_should_return_self():
    assert Maybe.just(42).filter(lambda value: value % 2 == 0) == Maybe.just(42)


@given(integers())
def test_maybe_monad_law(integer):
    MonadLawTester(
        monad=Maybe.just,
        value=integer,
        mapper1=lambda value: Maybe.just(value + 1),
        mapper2=lambda value: Maybe.just(value + 2)
    ).test()


@given(integers())
def test_maybe_functor_law(integer):
    FunctorLawTester(
        functor=Maybe.just(integer),
        mapper1=lambda value: value + 1,
        mapper2=lambda value: value + 2
    ).test()


@given(integers())
def test_maybe_transform(integer):
    MonadTransformTester(monad=Maybe.just, value=integer).test(run_to_maybe_test=False)

    assert Maybe.nothing().to_box() == Box(None)
    assert Maybe.nothing().to_either() == Left(None)
    assert Maybe.nothing().to_lazy().get() is None
    assert Maybe.nothing().to_try() == Try(None, is_success=False)
    assert Maybe.nothing().to_validation() == Validation.success(None)


@given(integers())
def test_maybe_applicative_law(integer):
    ApplicativeLawTester(
        applicative=Maybe.just,
        value=integer,
        mapper1=lambda value: value + 1,
        mapper2=lambda value: value + 2,
    ).test()


def test_maybe_ap_on_empty_maybe_should_not_be_applied():
    def lambda_fn():
        raise TypeError

    assert Maybe.nothing().ap(Maybe.just(lambda_fn)) == Maybe.nothing()
    assert Maybe.just(42).ap(Maybe.nothing()) == Maybe.nothing()
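Based on the API these tests exercise, typical application code using Maybe might look like the short sketch below (only methods shown in the tests above are used):

from pymonet.maybe import Maybe

def parse_age(raw):
    # Wrap a possibly missing value; the None-handling stays inside the monad.
    return Maybe.just(int(raw)) if raw.isdigit() else Maybe.nothing()

age = (
    parse_age("42")
    .filter(lambda value: value >= 18)  # becomes Maybe.nothing() if under 18
    .map(lambda value: value + 1)       # applied only when a value is present
    .get_or_else(0)                     # safe unwrap with a default
)
print(age)  # 43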
STACK_EDU
Horizon supports Personal Weather Stations (PWSs) streaming data to the Weather Underground. See Personal Weather Station.

Weather is a universal, ubiquitous phenomenon; weather is all around us; weather is a frontier to be explored, and a big data problem to be tamed. Weather stations have become affordable enough to put in our backyards, and sensor arrays give valuable insight into the big picture.

PWS Setup and Registration

To add a weather station to Horizon:

- Set up your PWS according to the manufacturer's directions.
- Connect the USB cable on your indoor PWS base station unit to one of your Raspberry Pi's USB ports.
- Set up, boot, and register your Raspberry Pi as you normally would, using the setup and registration instructions.

Horizon Supported Stations List

The Horizon team is continually working to add support for weather stations of various types. Currently, Horizon supports only USB-connected stations. Personal weather stations are manufactured to a variety of standards and use various methods of data streaming / publication. Horizon supports a subset of those available. Stations on the supported list have been tested on Horizon to a limited extent. The supported stations list is a subset of those supported by WeeWX, the Python-based software driver used for this application. Purchasing a PWS on the supported hardware list is recommended but not required. Feel free to consult the Horizon team for recommendations and help us experiment with stations not yet tested.

* Indicates that the device driver has been tested on Horizon, but the particular station model has not been tested. In each case, the Horizon team has tested a similar station in the same PWS driver family. Some PWS drivers span multiple manufacturers.

- Ambient Weather WS1090*
- Ambient Weather WS2080
- Ambient Weather WS2080A
- Ambient Weather WS2090*
- Ambient Weather WS2095*
- Elecsa 6975* (Rebranded Fine Offset Electronics WH-1080, WH-2080, WH-3080)
- Elecsa 6976* (Rebranded Fine Offset Electronics WH-1080, WH-2080, WH-3080)
- Fine Offset WH1080*
- Fine Offset WH1081*
- Fine Offset WH1091*
- Fine Offset WH1090*
- Fine Offset WS1080*
- Fine Offset WA2080
- Fine Offset WA2081*
- Fine Offset WH2080*
- Fine Offset WH2081*
- Maplin N96GY* (Rebranded Fine Offset Electronics WH-1080, WH-2080, WH-3080)
- Maplin N96FY* (Rebranded Fine Offset Electronics WH-1080, WH-2080, WH-3080)
- National Geographic 265* (Rebranded Fine Offset Electronics WH-1080, WH-2080, WH-3080)
- Oregon Scientific WMR88*
- Oregon Scientific WMR88A
- Oregon Scientific WMR100*
- Oregon Scientific WMR100N*
- Oregon Scientific WMR180*
- Oregon Scientific WMR180A*
- Sinometer WS1080 / WS1081*
- Sinometer WS3100 / WS3101*
- Watson W-8681* (Rebranded Fine Offset Electronics WH-1080, WH-2080, WH-3080)
- Watson WX-2008* (Rebranded Fine Offset Electronics WH-1080, WH-2080, WH-3080)

If you have any difficulties with weather station or device setup, please contact the Horizon community for help by clicking on the Forum tab at the top of this page. You can also report Horizon bugs in the forum.
OPCFW_CODE
double free or corruption (!prev) while calling __do_global_dtors_aux

I'm getting this error message after my app has done everything right:

/lib64/libc.so.6[0x3f1ee70d7f]
/lib64/libc.so.6(cfree+0x4b)[0x3f1ee711db]
/home/user/workspace/NewProject/build/bin/TestApp(_ZN9__gnu_cxx13new_allocatorIN5boost10shared_ptrINS1_5uuids4uuidEEEE10deallocateEPS5_m+0x20)[0x49c174]
/home/user/workspace/NewProject/build/bin/TestApp(_ZNSt12_Vector_baseIN5boost10shared_ptrINS0_5uuids4uuidEEESaIS4_EE13_M_deallocateEPS4_m+0x32)[0x495b84]
/home/user/workspace/NewProject/build/bin/TestApp(_ZNSt12_Vector_baseIN5boost10shared_ptrINS0_5uuids4uuidEEESaIS4_EED2Ev+0x47)[0x49598b]
/home/user/workspace/NewProject/build/bin/TestApp(_ZNSt6vectorIN5boost10shared_ptrINS0_5uuids4uuidEEESaIS4_EED1Ev+0x65)[0x48bf27]
/lib64/libc.so.6(__cxa_finalize+0x8e)[0x3f1ee337fe]
/home/user/workspace/NewProject/build/components/lib_path/libhelper-d.so[0x2aaaab052b36]

If I run the program in gdb I can get the following backtrace, but it is all I get:

#0 0x0000003f1ee30285 in raise () from /lib64/libc.so.6
#1 0x0000003f1ee31d30 in abort () from /lib64/libc.so.6
#2 0x0000003f1ee692bb in __libc_message () from /lib64/libc.so.6
#3 0x0000003f1ee70d7f in _int_free () from /lib64/libc.so.6
#4 0x0000003f1ee711db in free () from /lib64/libc.so.6
#5 0x000000000049c174 in __gnu_cxx::new_allocator<boost::shared_ptr<boost::uuids::uuid> >::deallocate (this=0x2aaaab2cea50, __p=0x1cfd8d0) at /opt/local/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.5/../../../../include/c++/4.4.5/ext/new_allocator.h:95
#6 0x0000000000495b84 in std::_Vector_base<boost::shared_ptr<boost::uuids::uuid>, std::allocator<boost::shared_ptr<boost::uuids::uuid> > >::_M_deallocate (this=0x2aaaab2cea50, __p=0x1cfd8d0, __n=8) at /opt/local/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.5/../../../../include/c++/4.4.5/bits/stl_vector.h:146
#7 0x000000000049598b in std::_Vector_base<boost::shared_ptr<boost::uuids::uuid>, std::allocator<boost::shared_ptr<boost::uuids::uuid> > >::~_Vector_base (this=0x2aaaab2cea50, __in_chrg=<value optimized out>) at /opt/local/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.5/../../../../include/c++/4.4.5/bits/stl_vector.h:132
#8 0x000000000048bf27 in std::vector<boost::shared_ptr<boost::uuids::uuid>, std::allocator<boost::shared_ptr<boost::uuids::uuid> > >::~vector (this=0x2aaaab2cea50, __in_chrg=<value optimized out>) at /opt/local/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.4.5/../../../../include/c++/4.4.5/bits/stl_vector.h:313
#9 0x0000003f1ee337fe in __cxa_finalize () from /lib64/libc.so.6
#10 0x00002aaaab052b36 in __do_global_dtors_aux () from /home/user/workspace/NewProject/build/components/lib_path/libhelper-d.so
#11 0x0000000000000000 in ?? ()

I really have no idea of how to proceed from here.

UPDATE: I forgot to mention that the only global variable of the type which appears in the error is cleared (m_uuids.size() == 0) by the time the error appears.

This is exactly why they made programs like valgrind. You're mistaken about your app doing "everything right". Your app has undefined behaviour because you coded it wrong, and you're now seeing the fallout of that error.

I really have no idea of how we can find a bug if we don't see the code.

It is difficult to tell without seeing the code, but this seems to be a problem related to some delete or free. Are you sure that the memory allocated with new is always freed with delete, and the memory allocated with malloc is freed with free? You're also probably freeing already-freed memory... or deleting an already deleted area.

Well, I didn't write the code, but I became the maintainer. All the code uses shared_ptrs, which makes the problem even stranger. And I cannot show the code, because I haven't even found the vector with the problems.

Compiling with -ggdb and using gdb's ability to list code might help. (If it really is a double-free, check whether you aren't freeing some memory in a class that a destructor handles on its own.)

I had this same problem using glog. In my case, it was this scenario: I had a shared library, call it 'common.so', that linked glog. My main executable, call it 'app', also linked glog, and linked in common.so. The problem I had was that glog was linked statically in both the .so and the executable. When I changed both #1 and #2 to link the .so instead of the .a, the problem went away. Not sure this is your problem, but it could be. Generally speaking, corruption when freeing up memory often means that you corrupted the memory pool (such as deleting the same pointer twice). I believe that by linking in the .a in both cases, I was getting cleanup behavior on the same global pointer (an std::string in my case) twice.

Update: After much investigation, this is very likely the problem. What happens is that each of the executable and the .so has a global variable of std::string type (part of glog). These std::string global variables must be constructed when the object (exe, .so) is loaded by the dynamic linker/loader. Also, a destructor for each is added for cleanup using at_exit. However, when it comes time for the at_exit functions to be called, both global references point to the same std::string. That means the std::string destructor is called twice, but on the same object. Then free is called on the same memory location twice. Global std::string objects (or globals of any class with a constructor) are a bad idea. If you choose to have a .so-based architecture (a good idea), you have to be careful with all 3rd party libraries and how they handle globals. You stay out of most danger by linking to the .so for all 3rd party libraries.

Where the error is appearing is probably a little misleading. My best guess would be that you've got a vector of shared pointers, and as it's being destroyed, one (at least) of those shared pointers is trying to delete the object that it's pointing to, only to find that it has already been deleted. Are you mixing raw pointers with shared pointers anywhere? If so, you might find a perfectly innocuous-looking delete somewhere which is pulling the rug from under the feet of your shared_ptr.
STACK_EXCHANGE
The Legend of Futian, Chapter 2149: Big Change

"Brother Duan," Ye Futian welcomed Duan Qiong. Perhaps he just wished to take a stroll outside the house.

"It's that serious already?" asked Ye Futian.

The villagers gathered around and asked, "What's it about?"

At this moment, they heard some commotion from afar. Ye Futian looked in that direction and saw Fang Gai and some others talking to someone.

Ye Futian nodded. He was a little stunned to hear that the turmoil had gone this far.

"No, we have not," Ye Futian shook his head and answered.

"Some changes in the Divine Prefecture?"

Donghuang the Great Emperor unified the Divine Prefecture and promoted cultivation. He usually stayed out of trivial matters and let people cultivate freely. However, the whole Divine Prefecture could certainly be tightly ruled by the Donghuang Imperial Palace during wartime. No one could escape the fate of fighting in the war.

An astonished expression crossed Ye Futian's face. Clearly, he knew something about it. Only equally powerful forces could stand in opposition to the Divine Prefecture. There had indeed been some conflicts when Ye Futian was still in the Original Realm.

Their destination was the principal region of the Upper Nine Heavens of the Shangqing Domain: the Shangqing Region!

Fang Gai's words prompted the crowd to look in Ye Futian's direction. Duan Qiong repeated what he had told Ye Futian just now. The villagers were astounded; nobody expected it to be something so important and urgent.

Duan Qiong specifically said the Divine Prefecture, rather than the Shangqing Domain or the other domains.

"Thank you." The envoy nodded and said, "We have delivered the message and will be leaving now."

Ye Futian and the others would be safer when traveling with people from the ancient royal family of Duan. At least the top forces of the Shangqing Domain wouldn't attack them in public.

"What a fortuitous time for our Four Corner Village to rejoin the world." Fang Gai shook his head and gave a wry smile. It was impossible to foretell the outcome of this turmoil. Cultivators living in the eighteen domains of the Divine Prefecture would probably be drafted by the Donghuang Imperial Palace if the war really broke out.

"I will go with you," said Fang Gai. Ye Futian had saved him from the ancient royal family of Duan. It was only right for him to ensure Ye Futian's safety this time.

"Since the Domain Chief's Manor delivered the message to all the forces this time, many expert cultivators and top Renhuangs will be there," Duan Qiong added.

"I know a little." Ye Futian nodded.

Duan Qiong and the others came over and looked at the cultivation site. They looked up at the amazing phenomena in the sky and the mystical ancient tree and exclaimed, "Four Corner Village is truly a wonderful place now. This can be considered a sacred site for cultivation."

"I will go with you too," Fang Huan said. He had made substantial progress and felt he had encountered a cultivation bottleneck. He needed an opportunity to make a breakthrough.

Duan Qiong had personally come all the way from Giant Gods City for something other than cultivating in the village; it had to be an important and urgent matter.

The Dark Court, the Vacant Divine Realm... Some of the most powerful forces in the world had taken part in the chaos in the Original Realm. However, Ye Futian believed that the Divine Prefecture had already controlled the situation. Had the tension and hostilities become worse now?

"Okay," Duan Qiong nodded and continued, "As you can imagine, there will be disastrous consequences if a war of this scale breaks out. It has been peaceful for almost 400 years since the Great Emperor unified the Divine Prefecture. We built our home slowly during this time. But I am afraid that all the cultivators in the eighteen domains of the Divine Prefecture may be in danger if the war breaks out."

Then, he looked at Ye Futian and said, "Futian, you can join me if you like. Who else wants to go?"

"I want to do this. However, I am coming for something else this time," Duan Qiong replied.

Curious, Ye Futian inquired, "What is it?"

"We received the news from the Domain Chief's Manor of the Upper Nine Heavens of the Shangqing Domain. It's said that some changes may occur in the Divine Prefecture soon. Most likely they will summon powerful cultivators from the eighteen domains of the Divine Prefecture. Now, the Domain Chief's Manor has issued an order to call for personnel from the various top forces to discuss what lies ahead. Has Four Corner Village received the news?" Duan Qiong asked.

He had no idea what the situation was like in the Original Realm. He had been in the Divine Prefecture for years and very much hoped to get a chance to visit back.

"Yes, we heard that it has something to do with the Original Realm. There are some conflicts between the Divine Prefecture and other forces. Maybe there is another war coming," Duan Qiong continued. "You came from the Original Realm. I presume you know something about this?"
OPCFW_CODE
The problem? A Quickbooks install that was running just fine with the company file hosted on a Linux server, until a Quickbooks client software update resulted in the dreaded H202 error. Evidently other Quickbooks Linux server users ran into the same problem, as can be seen from these posts on the Quickbooks user support forums: http://community.intuit.com/posts/qb-2011-h202-error-with-redhat-linux-database-server-manager and http://community.intuit.com/posts/i-cant-switch-to-multiuser-mode.

The solution? So simple, yet so hard to find.

Digging through the Wireshark capture of the connection attempt shows an attempt to connect to the server on port 55333, then UDP broadcasts and attempts to connect to UDP ports on the server, which fail with port unreachable. An article on the Sybase site gives a clue: the UDP packets are an attempt to discover the database server because connecting to it directly failed. Digging further into the Wireshark packets, I could see that the client was trying to connect to the server using the server's NetBIOS name, not its hostname. When the SQL Anywhere server used by Quickbooks starts, its name is set to the hostname.

It seemed that the easiest solution would be to set the SQL Anywhere database server name to match the server's NetBIOS name. Unfortunately, qbdbfilemon doesn't let you specify the database name and uses the hostname of the server, so it failed to find the database server. Since qbdbfilemon couldn't find the database server and report back the port number the client needed to make its connection, the client connection still failed.

The server had a NetBIOS name that didn't match its hostname for historical reasons, and changing it might have caused too many other side effects, so I moved the database to another server where the NetBIOS name already matched the hostname, which is the default configuration with a Samba server. The other thing I had to do was change the server's hostname so that it didn't include our internal domain name: instead of being server1.internal.company.com, it reported its hostname as simply server1. That's what it took to banish the H202 error.

Thanks! I stumbled across this post some 3 years later, and between this and somebody who linked to you, I was able to find a solution of my own. Maybe this will help somebody else. Thanks for discovering this in the first place!

Thank you so much for this. I have been fighting this issue for days and this resolved the issue for me.

Thanks! We had Quickbooks Enterprise 13.0 installed, and it was working fine with the database server on a CentOS Samba host. Upgraded to version 16.0, and could not make it work. What was ultimately required was as you documented: using the NetBIOS name, and changing the hostname of the Samba server. Once the hostname was changed, the database started as “QBDBMgrN_26 -n QB_server_26 …” and multi-user mode began to work.

That was a lifesaver, fixed a problem that is practically impossible to find. Thanks really doesn’t explain the gratitude required.
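For anyone reproducing the fix, the two knobs involved are the Samba NetBIOS name and the system hostname. A minimal sketch, assuming the server is meant to be known as server1 (the name is illustrative):

    # /etc/samba/smb.conf -- the NetBIOS name defaults to the hostname;
    # make sure nothing overrides it to a different value
    [global]
        netbios name = SERVER1

    # Set a short hostname without the internal domain suffix
    hostnamectl set-hostname server1   # on systemd systems; older distros edit /etc/sysconfig/network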
OPCFW_CODE
How do I fix this branching / story making on python? sceneDict = {} sceneDict["Beggining"] = { "Branching" : False "SceneText" : "Walking towards the enterance to the hollowed out tree you \ notice some large skulls on the sides of the dirt path along with other \ large sized bones which don't look like anything you've ever seen before.", "NextScene" : "Enterance" } sceneDict["Enterance" = { "Branching": True, "SceneText" : "You come up to the cave enterance and upon inspecting it you notice some \ liquid dripping from the roof of the enterance, you catch a small amount in your hand.. It's \ red and has the consistency of blood. All of a sudden a thundering roar is heard behind you and a massive \ troll has appeared!", "Choices" : [ {"ChoiceNumber" : "1. ", "ChoiceText" : "Run into the cave to escape the troll", "NextScene" : "Cave"}, {"ChoiceNumber" : "2. ", "ChoiceText" : "Attempt to escape between the trolls legs", "NextScene" : "Chase"}, {"ChoiceNumber" : "3. ", "ChoiceText" : "Pull out your sword in an attempt to combat the troll" "NextScene" : "Combat"} ] } sceneDict["Combat"] = { "Branching": False, "SceneText" : "As you reach back to grab your sword from your back the troll puts its arm out to the \ side and in one sweeping motion smashes you into a tree, unfortunately ending your life.", "NextScene" : "Afterlife" } sceneDict["Afterlife"] = { "Branching": True, "SceneText" : "You feel sunlight on your face and as you open your eyes there is two pathways infront of you \ as well as a sign over top of both'that's strange, I should probably be dead' however, above the arrow signs \ there is a sign that reads 'You messed up, it happens but try again' the paths both lead to the two other choices\ you had before you took the worst of the three choices.", "Choices" : [ {"ChoiceNumber" : "1. ", "ChoiceText" : "Run into the cave to escape the troll", "NextScene" : "Cave"}, {"ChoiceNumber" : "2. ", "ChoiceText" : "Attempt to escape between the trolls legs", "NextScene" : "Chase"} ] } sceneDict["Chase"] = { "Branching": False, "SceneText" : "You slide on your knees between the trolls legs as it's enormous fists come crashing down behind \ you, you get back up and start running down the hill towards your village hearing its thunderous steps close \ behind. The people of your town hear the ruckus and go towards the gate to see you being chased and they all \ start cheering you to keep going!", "NextScene" : "Decisions" } sceneDict["Decisions"] = { "Branching": True, "SceneText" : "You are about 100 meters out from your village but you remember that there is a river to the \ right of the gate you could most likely lead the troll over there and have it crash through the bridge \ into the water, or you could chance the guards speed of closing the gate to block the troll.", "Choices" : [ {"ChoiceNumber" : "1. ", "ChoiceText" : "Continue towards the village", "NextScene" : "Village"}, {"ChoiceNumber" : "2. ", "ChoiceText" : "Run towards the water", "NextScene" : "Bridge"} ] } sceneDict["Village"] = { "Branching": False, "SceneText" : "You run towards the village as fast as you possibly can the troll clearling gaining on you \ you call out for the guards to start closing the gate as you are running, the gate begins to close as you \ close in on it, you quickly slide through the last remaining opening as the gate slams shut followed by a loud \ 'BANG!' as the troll smashes his head into the gate. 
The village cheers as you made it back alive!", } sceneDict["Bridge"] = { "Branching": False, "SceneText" : "As the villagers are cheering they start questioning what are you doing?! You change your path \ and go for the bridge. You start running over the wooden bridge as it is creeking and after a few seconds \ really shaking. Suddenly you hear a snap as the bottom gives out and you grab onto the side of the bridge. \ The bridge falls away at your feet only leaving the sides to hold onto as you hear a loud splash and see \ the troll floating away into the distance.", } sceneDict["Cave"] = { "Branching": True, "SceneText" : "You run forward into the cave, and slip on some of the blood into the depths of it. Behind you \ the troll is coming, however you hear the sound of splashing further into the cave.", "Choices" : [ {"ChoiceNumber" : "1. ", "ChoiceText" : "Run deeper into the cave towards the splashing", "NextScene" : "Jump"}, {"ChoiceNumber" : "2. ", "ChoiceText" : "Stand your ground to the troll", "NextScene" : "Learn"} ] } sceneDict["Jump"] = { "Branching": False, "SceneText" : "You run towards the end of a cliff which has a small waterfall going over the edge of it \ you hear the troll gaining on you, but the drop is managable, you take a step back and leap for glory! You \ land with a splash and the troll sits at the top roaring in anger at your escape, you made it.. now to find \ your way home..", } sceneDict["Learn"] = { "Branching": False, "SceneText" : "You reach back to grab your sword from your back but in one swift movement the troll smashes you \ into the ground turning you into mince meat, very unfortunate.. You hear a shimmering sound and you appear back \ in on the path towards the cave again as if a God has given you a second chance or something.", "NextScene" : "Enterance" } currentScene = "Beggining" while currentScene != "": sceneData = sceneDict[currentScene] print(sceneData["SceneText"]) print() if sceneData["Branching']: for choice in sceneData["Choices"]: print(choice["ChoiceNumber"] + choice["ChoiceText"]) print() answer = input("> ") print() answer = int(answer) - 1 if answer <= len(sceneData["Choices"]) currentScene = sceneData["Choices"][answer]["NextScene"] else: currentScene = sceneData ["NextScene"] window.exitonclick() I have this code, and it should work it looks flawless, however I am getting a syntax error every time I run it but it does not point me to the error? Where is the error and how do I fix it if you could help that'd be great thanks! In your while loop, you have used the incorrect closing quote in your if statement: if sceneData["Branching'] You should use either: if sceneData["Branching"] or: if sceneData['Branching'] Also, in order to have text on multiple lines you should do the following: sceneDict["Beggining"] = { "Branching" : False "SceneText" : "Walking towards the enterance to the hollowed out tree you \n" "notice some large skulls on the sides of the dirt path along with other \n" "large sized bones which don't look like anything you've ever seen before.", "NextScene" : "Enterance" } Hey, thanks for responding I fixed that and also noticed earlier I had sceneDict["Enterance" = { so I fixed that to sceneDict["Enterance"] = { but after fixing those two things I still receive a syntax error and it doesn't point to it. So I indented everything that you showed above, and did the adjustments to the splitting of lines.. 
but I still get syntax errors without them being pointed to by Python, so I really have no idea where the code is at fault.

Make sure that the key-value pairs in your dictionaries are all separated by commas. The last thing I can see is that you are missing a colon after the following if statement: if answer <= len(sceneData["Choices"])

Added the commas in the right places for the choices and dictionaries, as well as the colon after the if statement, but now I am getting a syntax error near the top, in the first set of choices, on the 3rd one's { bracket
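For anyone following along: once the stray quote, the missing commas (including the one after the "ChoiceText" entry of that third choice), the missing ] on the "Enterance" key, and the missing colon are fixed, the program runs. Below is a minimal corrected sketch of the driver loop and one scene; the remaining scenes stay as in the question, with commas between all entries. Note the bounds check should use < rather than <=, some ending scenes have no "NextScene" key, and the stray window.exitonclick() line belongs to the turtle module and can simply be dropped since no window is defined here:

    sceneDict = {}

    sceneDict["Beggining"] = {
        "Branching": False,  # a comma was missing after this entry
        "SceneText": "Walking towards the enterance to the hollowed out tree you "
                     "notice some large skulls on the sides of the dirt path...",
        "NextScene": "Enterance",
    }

    # ... all other scenes as in the question, commas between every key-value pair ...

    currentScene = "Beggining"
    while currentScene != "":
        sceneData = sceneDict[currentScene]
        print(sceneData["SceneText"])
        print()
        if sceneData["Branching"]:  # matching double quotes, not ["Branching']
            for choice in sceneData["Choices"]:
                print(choice["ChoiceNumber"] + choice["ChoiceText"])
            print()
            answer = int(input("> ")) - 1
            print()
            if 0 <= answer < len(sceneData["Choices"]):  # colon added, bounds fixed
                currentScene = sceneData["Choices"][answer]["NextScene"]
        else:
            # scenes like "Village" have no "NextScene", so end the story there
            currentScene = sceneData.get("NextScene", "")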
STACK_EXCHANGE
Yichen Yan (Alibaba Cloud Senior Engineer) gave a technical speech on the Exploration and Practice of Python Startup Acceleration at PyCon China 2022. The author introduced the related CPython community work, the design and implementation of this solution, and business-level integration. The following is the content of this speech:

We start with a breakdown of the startup time of an empty Python 3 interpreter. As we can see, the main cost is related to Python package loading: package loading occupies about 30% of the CPU time, and about 37% of the time spent on disk IO is related to package loading.

Those familiar with the Python mechanism know that when loading a package, Python will first search for the corresponding pyc file, which is a serialized bytecode format. Once found, it is deserialized, and the code inside is executed. If the corresponding pyc file does not exist, the source is recompiled to obtain the bytecode, which is then serialized to a pyc file for persistent storage. The main goal of the optimization is the package loading process, hoping to avoid the overhead of search, read, and deserialization.

Let's take Python 3.10 as an example. Here is the time it takes for the Python interpreter to run an empty statement, using -X importtime to print the time consumed by loading each package. As you can see, the package load time accounts for about 30% of the total time.

We found this to be similar to the Java Virtual Machine. Java compiles Java source code into Java bytecode, which is then executed by the JVM. We know the advantages of Java do not include startup speed, and this process is one of the reasons. How does Java partially solve this problem? There is a mechanism called CDS/AppCDS in Java, which saves the overhead of disk IO and of parsing and verifying class files by persistently saving Java bytecode and some auxiliary data, and using mmap to load them during subsequent startups. If we want to use similar techniques in Python, we should target Python bytecode.

Python imports a module from the py file by default. The logic is shown on the left side of the preceding figure. The system obtains the corresponding rules based on the specified name and then tries to find the pyc file or recompile the code. Finally, exec is used to create the module from the code and an empty dict and add it to the runtime. What we do can be simplified to the logic on the right side: based on the package name, try loading from mmap; if successful, the same code object can be used for initialization.

What are the immediate obstacles? As you can see, the C data structure of code objects in Python is shown in the figure, including Python data types (such as consts, string, and bytes). We serialize and store the involved data into a memory map, using the code object in use as the root.

In this step, the most direct problem is the memory randomization mechanism. When processing the Python objects in code objects, each Python object header holds a pointer to the corresponding type information in the current process. The runtime uses this pointer to determine the type of a Python object. Let's take PyCode_Type as an example: if you do nothing, the type information (the offset in red) is lost. To solve this problem, the pointers of the involved objects are saved in the image file we create, and the related pointers are dynamically patched during loading.
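To reproduce this kind of measurement yourself, the interpreter's built-in import profiling can be used directly; the -X importtime flag (and the equivalent PYTHONPROFILEIMPORTTIME environment variable) has been available since Python 3.7:

    $ python3 -X importtime -c "pass"

The report, written to stderr, lists the self and cumulative time spent importing each module, which is how a share like the roughly 30% quoted above can be checked against total startup time.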
The following Python types are involved: for constants and literals, you can save them by copying them directly after allocating space in the memory map. For variables and containers, you need to simulate the logic of variable initialization in Python, allocate the appropriate amount of memory, and write the contents to the corresponding location. At the same time, for certain types, you need to assign an additional value to the reference count in the memory map to prevent accidental recycling by Python.

The preceding is the general content of this project. Please visit the PyCDS project home page to view the specific usage of the project.

PyCDS Homepage: https://github.com/alibaba/code-data-share-for-python
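For intuition about the cost being avoided, here is a minimal sketch (not part of PyCDS; the module name is illustrative) of the search/read/deserialize path that a normal import takes and that PyCDS replaces with a single mmap:

    import importlib.util
    import marshal

    # The "search" step: locate the cached pyc for a source file.
    pyc_path = importlib.util.cache_from_source("mymodule.py")

    # The "read" and "deserialize" steps: load the pyc and unmarshal the code object.
    with open(pyc_path, "rb") as f:
        f.read(16)              # skip the 16-byte pyc header (magic, flags, mtime/hash, size)
        code = marshal.load(f)

    # The module is then created by executing the code object against a fresh dict.
    exec(code, {"__name__": "mymodule"})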
OPCFW_CODE
Enable "read with streaming" test in test/unit/network_spec.js

The "read with streaming" test looks like this: https://github.com/mozilla/pdf.js/blob/a4cc85fc5f5cb144b6ee7a7978dc83e906677233/test/unit/network_spec.js#L67-L77

There are two issues here:

Lines 74 - 76 should use pending('Streaming not supported by user agent'); instead of an incorrect done() call.

With #8768, it seems that the test can be enabled for other browsers (Chrome). @yurydelendik Can the test be enabled for other browsers?

Can the test be enabled for other browsers?

Yes. I don't see why not.

Hi. I would like to solve this issue. Please assign it to me. Also, please guide me on how to move forward in contributing more to this repository!

@aryanshar That's fast, I was still typing the motivation for the good-beginner-bug label. Here it goes: This bug consists of two parts. The first part is fully explained in the original report. For the second part, we need to update the if condition to logic along the lines of "if (!isFirefoxWithMozChunkedEncodingSupport && !isFetchWithStreamSupport)" (choose shorter but understandable variable names if you wish). The first variable name is based on the current content of the if-statement (by the way, that has a flaw too: if Firefox 90 becomes available, the condition is incorrect). The second variable should indicate whether ReadableStream is supported (see e.g. cd95b426c730108a529fbf6ab48b3192e22922dd).

Is the issue still up? I would like to solve it if it is. PS: Noob here.

There is a PR above, but there was no response since 24 October. You're welcome to use that PR, and in particular the comments there, as inspiration to create a new PR so we can hopefully merge it soon.

Hi there! Do you need help from a "beginner"? :p

Can you explain more concretely where you need help? I see that this issue is for @kushagra189. Do you have another one where I can participate?

Under the section titled "Browser compatibility", there is a summary of native browser support across desktop and mobile platforms. What does 'streaming response body' refer to, and is it relevant to our needs? Under whatwg's streams standard, a readable stream is a class that uses chunks... is that what line 70 in the code cited above refers to? As an aside, instead of checking for browser support of fetch() by looking at the user-agent header for information on the browser version, can we do something like expect('fetch' in window).toBe(true);?

@choilmto I have updated the MDN pages to be clearer. Take another look if you want to get a better understanding. Line 70 in the above code refers to the non-standard way of obtaining a response stream in Firefox, via xhr.responseType = "moz-chunked-arraybuffer" (where xhr is a XMLHttpRequest instance) (so the comment has a typo: "array" should be "arraybuffer").

As an aside, instead of checking for browser support of fetch()

fetch support does not equate to support for streaming responses (see the note at the end of https://github.com/mozilla/pdf.js/issues/6126#issuecomment-130572300). Checking for the existence of ReadableStream would be sufficient though (this was also done in the proposed pull request at #9050). The user-agent check is still necessary; alternatively, feature detection can be used, as shown at: https://github.com/mozilla/pdf.js/blob/edaf4b3173607cf19664ab764e5be0742b51c5a4/src/display/network.js#L55-L72

Resolved!
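For readers landing on this thread later, the agreed-upon shape of the fix boils down to something like the following sketch (variable names are illustrative, not the exact ones merged into pdf.js; pending() is Jasmine's):

    // Sketch of the updated guard in the "read with streaming" test:
    const isFirefoxWithMozChunkedSupport =
      /firefox/i.test(navigator.userAgent); // the real check also inspects the version
    const isFetchWithStreamSupport = typeof ReadableStream !== 'undefined';

    if (!isFirefoxWithMozChunkedSupport && !isFetchWithStreamSupport) {
      pending('Streaming not supported by user agent');
    }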
GITHUB_ARCHIVE
SQL: Enforcing foreign keys for inherited tables

This is a tricky one. Inheritance, where one database table INHERITS the properties of another while adding some extra fields of its own, is a very useful concept in SQL, but at the same time it breaks indexes and foreign key constraints.

A serious limitation of the inheritance feature is that indexes (including unique constraints) and foreign key constraints only apply to single tables, not to their inheritance children. This is true on both the referencing and referenced sides of a foreign key constraint.

Setting up the TABLEs

In this example we have a parent table users which is used to store unprivileged users, and a child table regulars that INHERITS the columns from users while adding extra columns:

    # CREATE TABLE users (
        userid SERIAL,
        name character varying NOT NULL,
        "timestamp" timestamp with time zone DEFAULT CURRENT_TIMESTAMP NOT NULL,
        PRIMARY KEY (userid)
    );

    # CREATE TABLE regulars (
        username character varying UNIQUE,
        password character varying,
        admin_user boolean DEFAULT false NOT NULL,
        PRIMARY KEY (userid)
    ) INHERITS (users);

In this model we can INSERT new users either into the users table with just a name, or into the regulars table where they can also have login details and other information. Why would we do it this way and not just have a single table? There are a couple of good reasons:

- Storage: If the child table or tables (there can be multiple inherited tables) have a lot of extra fields, and you have a large number of unprivileged users, then large parts of the storage space will be filled up with NULL values;
- Simplicity: We can identify various subsets of users with SQL without needing a WHERE condition:
  - SELECT name FROM users; -- all users
  - SELECT name FROM regulars; -- privileged users
  - SELECT name FROM ONLY users; -- unprivileged users

What we can't do, however, is use userid as a foreign key constraint as shown here:

    # CREATE TABLE signins (
        userid integer NOT NULL REFERENCES users,
        time_in timestamp with time zone DEFAULT CURRENT_TIMESTAMP NOT NULL,
        time_out timestamp with time zone
    );

The problem with foreign keys

As explained earlier, "foreign key constraints only apply to single tables", so in the TABLE defined above you will only be able to insert references to unprivileged users (ONLY users) and not any from inherited tables. You can see this demonstrated here:

    # INSERT INTO users (name) VALUES ('Tom');
    INSERT 0 1
    # INSERT INTO users (name) VALUES ('Mary');
    INSERT 0 1
    # INSERT INTO regulars (name) VALUES ('Harry');
    INSERT 0 1
    # SELECT * FROM users;
     userid | name  |           timestamp
    --------+-------+-------------------------------
          1 | Tom   | 2023-05-08 08:06:10.217267+00
          2 | Mary  | 2023-05-08 08:06:10.217267+00
          3 | Harry | 2023-05-08 08:06:10.217267+00
    (3 rows)
    # INSERT INTO signins (userid) VALUES (1);
    INSERT 0 1
    # INSERT INTO signins (userid) VALUES (2);
    INSERT 0 1
    # INSERT INTO signins (userid) VALUES (3);
    ERROR:  insert or update on table "signins" violates foreign key constraint "signins_userid_fkey"
    DETAIL:  Key (userid)=(3) is not present in table "users".

While the SELECT works as expected - returning rows from both the parent and child tables - trying to INSERT a reference to a record in a child table results in an error. At this point you can either throw in the towel and revert to a single table with extra fields, or replace the foreign key constraint with a TRIGGER.
Workaround using a TRIGGER function

First we have to remove the (useless) foreign key constraint by re-creating our signins TABLE without it:

    # DROP TABLE signins;
    # CREATE TABLE signins (
        userid integer NOT NULL,
        time_in timestamp with time zone DEFAULT CURRENT_TIMESTAMP NOT NULL,
        time_out timestamp with time zone
    );

Implementing a TRIGGER is then a two-step process, starting with defining a TRIGGER FUNCTION:

    # CREATE FUNCTION userid_exists() RETURNS trigger
        LANGUAGE plpgsql
        AS $$
    DECLARE
        userid_exists boolean;
    BEGIN
        SELECT (userid IS NOT NULL) FROM users WHERE userid = NEW.userid
            INTO userid_exists;
        IF (userid_exists) THEN
            RETURN NEW;
        ELSE
            RAISE EXCEPTION 'Nonexistent ID --> %', NEW.userid;
            RETURN NULL;
        END IF;
    END;
    $$;

and then applying it as a TRIGGER on your table:

    # CREATE TRIGGER userid_signin_check BEFORE INSERT OR UPDATE ON signins
        FOR EACH ROW EXECUTE FUNCTION userid_exists();

If the userid exists, our trigger function returns NEW, allowing the INSERT to continue. Otherwise NULL is returned, which effectively blocks the INSERT operation, and an (optional) EXCEPTION is raised. We can now re-run our test case:

    # INSERT INTO signins (userid) VALUES (1);
    INSERT 0 1
    # INSERT INTO signins (userid) VALUES (2);
    INSERT 0 1
    # INSERT INTO signins (userid) VALUES (3);
    INSERT 0 1
    # INSERT INTO signins (userid) VALUES (4);
    ERROR:  Nonexistent ID --> 4
    CONTEXT:  PL/pgSQL function userid_exists() line 9 at RAISE

Note that without the foreign key constraint, separate triggers may be required to implement ON DELETE CASCADE and similar functionality for tables referencing users.

- Stack Overflow: Foreign keys + table inheritance in PostgreSQL?
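As a follow-up to that last note, here is a sketch of how ON DELETE CASCADE could be emulated for the signins table (function and trigger names are illustrative). Keep in mind that row-level triggers, like constraints, are per-table, so the trigger has to be attached to the child table regulars as well as to users:

    # CREATE FUNCTION delete_user_signins() RETURNS trigger
        LANGUAGE plpgsql
        AS $$
    BEGIN
        -- remove the sign-in rows that reference the user being deleted
        DELETE FROM signins WHERE userid = OLD.userid;
        RETURN OLD;
    END;
    $$;

    # CREATE TRIGGER users_delete_cascade BEFORE DELETE ON users
        FOR EACH ROW EXECUTE FUNCTION delete_user_signins();

    # CREATE TRIGGER regulars_delete_cascade BEFORE DELETE ON regulars
        FOR EACH ROW EXECUTE FUNCTION delete_user_signins();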
OPCFW_CODE
How can I search for a word in a whole project/folder recursively?

Suppose I'm searching for a class JFactory inside a folder and its sub-directories. How can I find the file which contains class JFactory? I don't want to replace that word; I just need to find the file that contains class JFactory. It's much more convenient to not have to leave your editor/IDE.

:vimgrep /JFactory/ **/*.java

You can replace the pattern /JFactory/ with /\<JFactory\>/ if you want a full word match. :vim is shorthand for :vimgrep. If JFactory or \<JFactory\> is your current search pattern (for example you have hit * on one occurrence) you can use an empty search pattern: :vimgrep // **/*.java, and it will use the last search pattern instead. Handy!

Warning: :vimgrep will trigger autocmds if enabled. This can slow down the search. If you don't want that you can do:

:noautocmd vimgrep /\<JFactory\>/ **/*.java

which will be quicker. But: it won't trigger syntax highlighting or open gz files ungzipped, etc.

Note that if you want an external program to grep your pattern you can do something like the following:

:set grepprg=ack
:grep --java JFactory

Ack is a Perl-written alternative to grep. Note that you will then have to switch to Perl regexes.

Once the command of your choice returns, you can browse the search results with the commands described in the Vim documentation at :help quickfix. Look up :cfirst, :cnext, :cprevious, :cnfile, etc.

2014 update: there are now new ways to do that with the_silver_searcher or the_platinum_searcher and either the ag.vim or unite.vim plugins.

For reference: :vim is a short name for the :vimgrep command. Also note that :lvimgrep does the same with the location window instead of the quickfix window. Handy if you're also dealing with compiler output and don't want to go back and forth with :colder, :cnewer all the time.

But I still can't get the list of filenames that contain the word "JFactory". @guru: Use :copen, :cnext, :cprev and related commands to jump through the search results. Reference is at :help quickfix.

How to jump to the next match? n or N are useless after :vimgrep; use :cw to list all the results.

Please put the update at the top of the comment, @benoit.

From the project root folder, run the following:

grep -H -r 'what_you_search' * | less

You will get a list of folders and matching lines with that string.

Can you explain what each of the operators does?

Explaining the flags and command, from man grep: -H: Always print filename headers with output lines. And -r: Recursively search subdirectories listed. The | less part will "pipe" the output into the less program, allowing you to scroll through the content using the arrows, and quit by pressing q.

The Silver Searcher (https://github.com/ggreer/the_silver_searcher) is highly recommended, and really fast!

install

sudo pacman -S the_silver_searcher    // Arch Linux
sudo apt install silversearcher-ag    // Ubuntu

usage

$ ag keywords

integrate with vim: rking/ag.vim (https://github.com/rking/ag.vim); after installing:

:Ag keywords

Apparently that repo is deprecated as of 2016.

Take a look at ctags and cscope, which let you jump to class and function definitions, and find where those functions/classes are used.

This script may help: Filesearch.

Open the command-line window: press Esc to ensure you are in Normal mode, then type q followed by : and the command-line window should open (it is like a temporary file for writing commands; you can navigate it as you would navigate normally in any vim file).
Type i to enter insert mode. This example will search for the to_srch string recursively below the current dir for all files of type '.js' and '.java', but omit all file paths containing the string node_modules:

:g/console.log/ | :vimgrep /console.log/ `find . -type f -name '*.js' -o -name '*.java' -not -path '*node_modules/*'`

Now when you :copen, you can navigate with the arrow keys through the search results ... You could also set these in .vimrc:

" how-to search recursively under the current dir for the files of type js and java but omit the
" node_modules file paths
":g/console.log/ | :vimgrep /console.log/ `find . -type f -name '*.js' -o -name '*.java' -not -path '*node_modules/*'`
" reminder: open the quick fix window with :copen 20
" reminder: close the quick fix window with :ccl

You could omit the first :g/console.log/ part; I use it to highlight the search results automatically since I have "set hlsearch" in my ~/.vimrc. Any hint on how to automatically highlight the search results from vimgrep, or in vimrc, would be highly appreciated ...
STACK_EXCHANGE
Methods for integrated analysis of multiple omics datasets

Statistical methods are developed for the integrated analysis of Human Microbiome data, metabolomics, glycomics, proteomics and genomic datasets. Research topics include joint modelling of multiple variables as an alternative to ad hoc defined combinations of multiple phenotypes, methods for secondary phenotypes that take the study design into account, network analysis for dimension reduction and visualisation, and meta-analysis of results from omics datasets with special interest in heterogeneity across platforms, populations and study designs.

The MIMOmics project has received funding from the European Union's Seventh Framework Programme (FP7-Health-F5-2012) under grant agreement n° 305280. Its main objective is statistical data integration. Simultaneous integration of omics data is essential for understanding the underlying biological system, both qualitatively and quantitatively. Within MIMOmics we are involved in four work packages (WPs) for statistical data integration:

For WP1 – Data Harmonization, we develop a data integration method: a probabilistic method for embedding a high-dimensional dataset in terms of low-dimensional 'latent' variables. This method can be applied to integrate the same omics datasets measured by different technologies, such as LC-MS and NMR, and to integrate different omics datasets, such as metabolomics and glycomics.

For WP2 – Network analysis, we combine correlated omics datasets using (Weighted) Multiplex Networks to capture the underlying patterns. We also investigate how to identify modules or sub-networks that are enriched with multi-omics information.

For WP3 – Risk prediction, we develop methods to determine the added value of an omics dataset, and we study prediction models which are biologically interpretable by using network information.

For WP4 – Meta Analysis, we develop methods for combining multilevel biomarkers across studies, called Super-Meta.

Joining forces with experts from the different scientific fields, we aim to combine profiles representing the same mechanism but based on various omics variables. In particular, these sub-projects are performed in collaboration with WUR and TU Delft.

Statistical methods for family data

This was the topic of my PhD thesis. The analysis of family data is challenging due to the correlation structure and the outcome-dependent sampling to enrich for genetic variants. A lot of my work has been focussed on testing for the presence of genetic effects in families. Over the last decade I have become more interested in modelling the relationship between genetic effects and disease and health outcomes. In addition to genetic factors, I also model the contribution of lifestyle factors, and of the interplay between genetic and lifestyle factors, to the aggregation of diseases in families. I am involved in the following family studies in Leiden:

Research profile: Health, Prevention and the Human Life cycle

Study populations: Leiden Longevity Study

While family data contains information on risk factors for disease, this information is not used for risk prediction. I develop methods for risk prediction based on family data. This topic is closely related to the STW project TOPBREED of Prof Fred van Eeuwijk (WUR), in which I am involved. Here we develop methods to improve genomic breeding values.
Statistical methods for modelling the effect of helminth infections on health

For data from a household-randomized clinical trial performed in Nangapanda, Indonesia, I develop statistical methods for the analysis of categorical count data (human microbiome) in clustered designs (longitudinal, families, households) and for the joint analysis of several (mixed) outcomes over time.
OPCFW_CODE
It’s been nearly 10 years since Affino last ran on Linux, and at the time there was simply no demand for it, surprising as that might seem. There was also still a fair amount of confusion in the market over proprietary software such as Affino running on Open Source platforms. Everything has changed in the interim, and as Affino is now primarily a SaaS platform, the equation has flipped on its head. Since the vast majority of Affino sites run off the Amazon Web Services cloud, including all the Affino-run ones, the OS question has evolved more towards auto-scaling, rapid deployment, performance and cost.

We still love Microsoft and Windows and it will remain core to our data platform moving ahead, but we see Linux playing a much greater part in the future for Affino application servers. Affino 7 is already running well on Apache and Tomcat, and these are our web and Java service technologies of choice moving ahead. They make it very easy for us to embrace Linux, and today is a red letter day as Affino is running on Linux again for the first time in the 21st century. It's early days yet, and it will probably take a week to iron out the remaining issues, but the time is right for us to embrace Linux and the incredible automation it offers. It will allow us to realise a lot of the goals we have with Affino 7 in a much faster timeframe than we had anticipated. Also, Affino on Linux is fast.

It took a bit longer in the end to make the full transition to Linux; we had to overcome dozens of bugs spread across most of the underlying platforms to do so. Fortunately the platform updates have been frequent, and in the interim all the platforms we're using have improved a great deal, to the extent that there are no longer any issues.

We've also had to deal with long-running tasks, i.e. what happens if a task runs longer than the life expectancy of a server. Once servers become disposable, as they do on Scalr, i.e. they're created whenever demand requires and removed as soon as it subsides, it's essential that long-running tasks are handled gracefully. We will be using the auto-scaling in the future to distribute tasks across multiple servers as they become available, to speed them up.

We've greatly reduced the time it takes for an Affino instance to start up, since every second counts when sites are busy. This will be something we will be focusing on with every new Affino release. We can now also be less frugal with server resources and focus more on performance and less on stability, since down-time is now virtually zero when servers fail.

We've further evolved the separation of files from the application servers. Most files now run from separate file repositories, whilst some are only kept for the life of the application servers. A lot of tuning has gone into this. We've moved our search to Solr and as a result have set up a separate Solr cloud, since the Solr indexes can't run effectively on disposable application servers. No doubt we'll extend this further as the need to scale the indexes arises. Finally, we've evolved the way we handle caching, since auto-scaling means we can now scale better to deal with demand peaks rather than simply serve cached content.

The benefits of the instant scaling and fail-over are immense and have been well worth the effort. We're busy now looking at what else we can automate. We've had a very interesting couple of months reviewing all our assumptions about how to set up and run servers and networks.
The reality is that working with dynamic systems like Scalr, where the application server is entirely disposable, means that all assumptions need to be reviewed and re-evaluated against real use-cases. Many of the methodologies we developed for keeping application servers as robust as possible in the pre-scalable-cloud era, i.e. the last 20 years, absolutely work against you. With scalable nodes, priorities shift from trying to make individual nodes as robust as possible to keeping site uptime as high as possible. The most basic lesson is that if a node shows any signs of degraded performance it needs to be killed immediately and replaced by a new one. It took us a while to learn that one. Our learnings are still ongoing, but we've re-written major aspects of our application platform over the past couple of months to further improve how it works in the post-server era.
OPCFW_CODE
WikiMSK:What WikiMSK Is Not

This article outlines how WikiMSK differs from other platforms.

Wikipedia is an encyclopedia. WikiMSK is much more content-specific. For example, Wikipedia does not allow referencing of primary resources, and it also has to cater to a broad audience. WikiMSK is targeted at a specific user group in a specific subject matter. It also allows some types of articles not normally allowed on Wikipedia, such as case histories and article reviews. Wikipedia would never allow a specific page on, say, how to perform a fluoroscopically guided transforaminal epidural corticosteroid injection. The specific focus also allows WikiMSK to structure the website to cater for the specific needs of the narrow user base.

WikiMSK complements Wikipedia, rather than competing with it. We also share many similar values. WikiMSK wants Wikipedia to be successful and have great resources on many of the topics we focus on. Wikipedia has been shown to be a more effective learning resource than textbooks and UpToDate for undergraduate medical education. WikiMSK wants to use some of the techniques that make it an effective learning resource, but laser-focus on a very specific area and tweak the formula to be more suitable for the user base. Another way we differ is that article creation and editing are restricted to a very small set of people. We also use a peer review process that differs from the consensus process of Wikipedia. However, we both share principles around open access to knowledge.

The focus here is different from Wikipedia. If a page develops here that meets all of Wikipedia's criteria, that's excellent, but it's also okay if some detailed information from Wikipedia doesn't get repeated here, especially if it's not quite our focus. For example, we're not interested in the ATC codes on the lidocaine Wikipedia page. The differences in content, focus and policies between Wikipedia and WikiMSK stem from a difference in vision. While Wikipedia's vision is all knowledge for all people, WikiMSK's is quite different: our mission is to produce an accurate, readable, reliable, accessible, and up-to-date repository of knowledge for the practice of Musculoskeletal Medicine in New Zealand.

WikiMSK also differs from Wikipedia on some technical matters. For example, some of our article conventions are different. Also, while both use MediaWiki as their wiki software, WikiMSK has several extensions enabled that Wikipedia doesn't, and vice versa. The software allows for easy linking to Wikipedia articles and files.

- Scaffidi MA, Khan R, Wang C, Keren D, Tsui C, Garg A, Brar S, Valoo K, Bonert M, de Wolff JF, Heilman J, Grover SC. Comparison of the Impact of Wikipedia, UpToDate, and a Digital Textbook on Short-Term Knowledge Acquisition Among Medical Students: Randomized Controlled Trial of Three Web-Based Resources. JMIR Med Educ. 2017 Oct 31;3(2):e20. doi: 10.2196/mededu.8188. PMID: 29089291; PMCID: PMC5686416.
OPCFW_CODE
Begin without writing code, but with a clear head and perhaps a pen and paper. This will make sure you keep your goals at the forefront of your mind, without getting lost in the technology.

We'll walk you step by step into the world of Machine Learning. With every tutorial you will build new skills and improve your understanding of this challenging but rewarding sub-field of Data Science.

It has a lot of interesting uses, going from writing DSLs to testing, which are discussed in other sections of this handbook.

Activities are supplemented with occasional educational and recreational excursions throughout the area. All youth receive a healthy after-school snack during the school year, as well as a healthy breakfast, snack and lunch throughout the full-day summer programming.

The lecture notes were clear and concise. The instructor did an incredible job of covering numerous topics in such a short course. He addresses several key concepts, including how to write a program in SAS Studio, use tasks and snippets, and call R from SAS. He also walks through importing and reporting data, and creating new variables, functions, and data tables. Note: You can visit the SAS website to get a copy of the software, and make use of the company's online data sets to do the course exercises.

Here you are writing a report, journal paper, or book. The level of formality varies depending on the audience, but you have extra concerns, such as how much code it takes to arrive at the conclusions, and how much output the code generates.

Search for a person, a friend, or a relative, or create a market research list from about 206 million people. Search for a single business or build a mailing list of companies from over 23 million business records!

Assignments usually allow a variable to hold different values at different times during its life-span and scope. However, some languages (mostly strictly functional) do not allow that kind of "destructive" reassignment, as it might imply changes of non-local state. The purpose is to enforce referential transparency, i.e. functions that do not depend on the state of some variable(s), but produce the same results for a given set of parametric inputs at any point in time.

This note briefly explains R Markdown for the uninitiated. R Markdown is a form of Markdown. Markdown is a plain text document format that has become a standard for software documentation. It is the default format for displaying text on GitHub. R Markdown allows the user to embed R code in a Markdown document.

Regular expression case values match if the toString() representation of the switch value matches the regex.

Read text from a file, normalizing whitespace and stripping HTML markup.

We've seen that functions help to make our work reusable and readable.

It was great to get a better feel for the Python language. The course gave an excellent overview of the data analysis and visualization tools.

Any statement can be associated with a label. Labels do not impact the semantics of the code and can be used to make the code easier to read, as in the following example:
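The example that sentence refers to did not survive extraction; a minimal Groovy sketch of a labeled statement (the label name is arbitrary) might look like this:

    // 'outer' labels the first loop so continue can target it from the inner loop
    outer:
    for (i in 1..3) {
        for (j in 1..3) {
            if (j == 2) continue outer
            println "$i $j"
        }
    }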
OPCFW_CODE
from fuzzywuzzy import fuzz
from fuzzywuzzy import process

print('[Ratio] {}, {}, Similarity: {}'.format('São Paulo', 'São Paul', fuzz.ratio('São Paulo', 'São Paul')))

s1 = 'Belo Horizonte'
s2 = 'B. Horizonte'
print('[Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.ratio(s1, s2)))

"""Upper and lower case letters"""
s1 = 'São Paulo'
s2 = 'são paulo'
print('[Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.ratio(s1, s2)))

"""Punctuation and other characters influence the score"""
s1 = 'São Paulo'
s2 = 'São Paulo!!'
print('[Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.ratio(s1, s2)))

"""Partial similarity
- Partial similarity looks only for the string in question and discards the rest.
- Extremely useful for working with data collected from the web, or when we want to ignore punctuation."""

# Checking the score using the ratio method
s1 = 'São Paulo'
s2 = '###$$%$!São Paulo#$#%#ˆˆˆˆˆ!!'
print('[Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.ratio(s1, s2)))

# Checking the score using the partial method
s1 = 'São Paulo'
s2 = '###$$%$!São Paulo#$#%#ˆˆˆˆˆ!!'
print('[Partial Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.partial_ratio(s1, s2)))

# Checking the score using the partial method
# with a change in the strings
s1 = 'São Paulo'
s2 = '###$$%$!São Paullo#$#%#ˆˆˆˆˆ!!'
print('[Partial Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.partial_ratio(s1, s2)))

"""Different character order?"""
# Checking the score using the partial method
# with a change in the strings
s1 = 'São Paulo'
s2 = 'Paulo São'
print('[Partial Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.partial_ratio(s1, s2)))

# - The partial_token_sort_ratio() function splits tokens on whitespace and sorts them alphabetically.
# - It lowercases the strings.
# - It considers only the strings being compared.

# Checking the score using the partial token sort method
s1 = 'São Paulo'
s2 = 'Paulo São'
print('[Partial Token Sort Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.partial_token_sort_ratio(s1, s2)))

s1 = 'São Paulo'
s2 = 'São Paullo'
print('[Partial Token Sort Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.partial_token_sort_ratio(s1, s2)))

# Checking the score using the partial token sort method with lowercase characters
s1 = 'São Paulo'
s2 = '###$$%$!são paulo#$#%#ˆˆˆˆˆ!!'
print('[Partial Token Sort Ratio] {}, {}, Similarity: {}'.format(s1, s2, fuzz.partial_token_sort_ratio(s1, s2)))

print()
print('-' * 100)
print()

"""Processing a list of strings
- Apply fuzzywuzzy to correct strings in a dataset"""

"""Create a list of strings"""
lista = ['Doença Cardiovascular.', 'doença cardiovascular!!', 'Doenca Cardiovascular', 'Doenc. Cardio']

"""Extract the similarity scores against a given string"""
print('[Partial Ratio] ', process.extract('Doença Cardiovascular', lista, scorer=fuzz.partial_ratio))

"""Limit the number of returned matches"""
print('[Partial Ratio] Top 2: ', process.extract('Doença Cardiovascular', lista, scorer=fuzz.partial_ratio, limit=2))

"""Return only one string with a score of 95 or above"""
print('[Partial Ratio] Top 1 (Score >= 95): ', process.extractOne('Doença Cardiovascular', lista, scorer=fuzz.partial_ratio, score_cutoff=95))

"""Data cleaning in a DataFrame
- Apply fuzzywuzzy to a dataset
- Measure string similarity and perform data cleaning"""

print()
print('-' * 100)
print()

import pandas as pd
from collections import OrderedDict

data = OrderedDict(
    {
        'descrição': ['São Paulo', 'SãoPaulo', 'São Pauloo', 'São Paulo,,', 'Belo Horizonte', 'B. Horizonte']
    })

"""Convert the dictionary to a pandas DataFrame"""
df = pd.DataFrame(data)

"""Fixing the DataFrame data"""
lista_cidades = ['Belo Horizonte', 'São Paulo']

for cidade in lista_cidades:
    for i in df.descrição.items():
        print('[Partial Token Sort Ratio] {}, {}, Similarity: {}'.format(cidade, i[1], fuzz.partial_token_sort_ratio(cidade, i[1])))

print()
print('-' * 100)
print()
print(df)
print()
print('-' * 100)
print()

"""Update the DataFrame rows if the similarity exceeds a given threshold"""
for cidade in lista_cidades:
    for i in df.descrição.items():
        print('[Partial Token Sort Ratio] {}, {}, Similarity: {}'.format(cidade, i[1], fuzz.partial_token_sort_ratio(cidade, i[1])))
        if fuzz.partial_token_sort_ratio(cidade, i[1]) >= 70:
            df.loc[df['descrição'] == i[1], ['descrição']] = cidade

print()
print('-' * 100)
print()
print(df)
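A practical note, not part of the original tutorial: by default fuzzywuzzy computes ratios with Python's pure-Python difflib.SequenceMatcher and prints a warning about it; installing the optional speedup extra pulls in python-Levenshtein for a much faster native implementation:

    pip install fuzzywuzzy[speedup]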
STACK_EDU
""" Acquisitions for WorldView-2 satellite. """ import rasterio from .base import Acquisition class WorldView2Acquisition(Acquisition): """A WorldView-2 acquisition.""" platform_id = "WORLDVIEW_2" def __init__( self, pathname, uri, acquisition_datetime, band_name="BAND-C", band_id="1", metadata=None, ): super().__init__( pathname, uri, acquisition_datetime, band_name=band_name, band_id=band_id, metadata=metadata, ) self.tag = "WV2" self._norad_id = 35946 self.altitude = 7000000.0 self.semi_major_axis = 7144000.0 self._international_designator = "09055A" self.inclination = 1.7174 self.omega = 0.0010451 self._classification_type = "U" self.maximum_view_angle = 20.0 def close(self): super().close() class WorldView2MultiAcquisition(WorldView2Acquisition): """A multi-band WorldView-2 acquisition.""" band_names = [ "BAND-{}".format(name) for name in ["C", "B", "G", "Y", "R", "RE", "N", "N2"] ] sensor_id = "MUL" def __init__( self, pathname, uri, acquisition_datetime, band_name="BAND-C", band_id="1", metadata=None, ): super().__init__( pathname, uri, acquisition_datetime, band_name=band_name, band_id=band_id, metadata=metadata, ) def close(self): super().close() def data(self, out=None, window=None, masked=False): """ Return `numpy.array` of the data for this acquisition. If `out` is supplied, it must be a numpy.array into which the Acquisition's data will be read. """ with rasterio.open(self.uri) as ds: data = ds.read(int(self.band_id), out=out, window=window, masked=masked) return data def radiance_data(self, window=None, out_no_data=-999, esun=None): """ Return the data as radiance in watts/(m^2*micrometre). """ data = self.data(window=window) # check for no data no_data = self.no_data if self.no_data is not None else 0 nulls = data == no_data radiance = ( self.gain * data * (self.abs_cal_factor / self.effective_bandwidth) + self.offset ) # set the out_no_data value inplace of the input no data value radiance[nulls] = out_no_data return radiance
STACK_EDU
10/28/2011, 05:59 AM The HP Pre 3 is available in a number of different models: 1) EU Pre 3 (8GB) available in QWERTY, QWERTZ and AZERTY variations 2) Verizon Pre 3 (16GB) 3) AT&T Pre 3 (16GB) The goal of this thread is to work out how to make an EU Pre 3 with 16GB of memory. Note the daughter board inside the Pre 3 has nothing to do with the cellular modem, it holds the flash memory only and nothing else. There is no way to change the cellular hardware of a Pre 3 as that circuitry is on the main board which is not removable or interchangeable. The main board in the EU Pre 3 is marked "EU", the main board in the AT&T Pre 3 is marked "NA" and we assume the main board in the Verizon Pre 3 is marked "WP" (for World Phone). The daughter board in the EU Pre 3 is marked "8G" and the daughter board in the AT&T Pre 3 is marked "16G". The goal is to swap those two boards, and then adjust the contents of the flash memory to make it work. Note that the flash memory contains not only the normal webOS operating system partitions and user data partitions, but also contains the modem-related partitions and the tokens that define the configuration and factory calibration of a particular hardware device. So the contents of those flash memory partitions need to be swapped between the two devices as well (thereby putting them back in the device from which they came). This thread will document the experiment, and if it is successful, how it's done. Step 1 is to capture the flash contents for each of the two devices. To do this, I have added a "qualcomm" target to the Meta-Doctor which captures all of the flash memory partitions to files. Step 2 is to swap the two boards. A simple matter of taking off the back cover, removing all the screws, removing the plastic inner casing, removing the daughter board screw and then gently prying the daughter board off from the main board. Reverse the process to reassemble. Step 3 is to reprogram the relevant flash memory partitions with the device's original configuration. It looks like partitions 7, 8, and 13 need to be swapped. Partitions 7 and 8 are the Modem ST1 and ST2 partitions, and partition 13 holds the tokens. The nduid and IMEI are stored somewhere in partition 7 or 8. The modem $HW VAR is read from the hardware. Step 4 is to doctor both devices with the appropriate doctor, to reinitialise the other flash partitions to the correct contents. The modem firmware is in the lower flash memory partitions, and the doctor for each device correctly flashed the modem firmware (EU Pre 3 modem firmware was upgraded from 3004 to 3500, AT&T Pre 3 modem firmware was downgraded from 3500 to 3004). After doctoring, both devices seem to be working normally. The AT&T Pre 3 now has 8GB of flash memory, and the EU Pre 3 now has 16GB of flash memory. Step 5 is to test the cellular functionality, to ensure the modem configuration has been successfully swapped back into the original device. Both devices are confirmed to have 3G data capability, on their original frequencies.
OPCFW_CODE
A few years ago I was asked to mentor a graduate trainee account man, who went by the name of Tim Boxall. I wanted our first session together to be a memorable one. So I got my PA, Cath Allen, to tell him to be in the agency at 11pm the following Monday. This alone concerned him, as he kept asking her what was going to happen and didn't she really mean 11 o'clock in the morning? Anyway, the next Monday he dutifully turned up on time and waited patiently in an empty reception area, apart from the security guard, whom I had briefed to hand him a letter at exactly 11.10pm. Inside was the message: Dear Tim, outside is a car waiting to take you to a secret destination. When you get there, ring this number (I then put my mobile number down). He went out and nervously got into the car, asking the driver where he was going. Of course the driver had also been briefed not to tell him anything, which just made Tim even more nervous. Around 11.50pm the car arrived at the north side of the Millennium Bridge, which spans the River Thames, dropped him off and drove away. He called my number...... "Hi Tim," I said, "don't move," and hung up. I remember it being the most beautiful summer's night; the stars were out and there was a tranquility in the air. The river twinkled with the reflected lights of St Paul's Cathedral. Exactly on the stroke of midnight, Big Ben struck 12. As the sound of the 'Bongs' floated down the Thames, I made my call.………Bong….“Tim”……Bong….. “start crossing the bridge”….Bong….. Unbeknown to him, I was on the opposite side of the bridge by the Tate Modern gallery, which dominated the skyline above me. As I walked across to meet him I could just make out his nervous shadowy outline. Another thing Tim didn't know was that around my right arm was a 5-foot Python. As he got nearer I put out my hand to shake his. Tim did a double-take, stopped dead in his tracks, and then very gingerly shook my hand. As he did so, the snake, which had been positioned on my arm by a snake handler (dressed in black, 10 metres behind me), proceeded to slither up his arm. When it had transferred itself from my arm to his, I pulled a poem out from my pocket called Risk and read it out loud. I still remember today the look on his face as he tried to take it all in, the Python now firmly wrapped around his arms and neck. It was at that point I walked away. Here is the poem.

by William Arthur Ward

To laugh is to risk appearing a fool,
To weep is to risk appearing sentimental.
To reach out to another is to risk involvement,
To expose feelings is to risk exposing your true self.
To place your ideas and dreams before a crowd is to risk their loss.
To love is to risk not being loved in return,
To live is to risk dying,
To hope is to risk despair,
To try is to risk failure.
But the greatest risk in life is to risk nothing.
He who risks nothing, does nothing, has nothing, is nothing.
Chained by his servitude he is a slave who has forfeited all freedom.
Only a person who risks is free.
OPCFW_CODE
Sometimes we can be fooled by error messages. For example, one sunny day you see that for some reason your web or mail server doesn't work. So you go to check the logs and find something similar to this:

2016/12/28 09:02:37 [crit] 24668#24668: *472674 open() "/var/cache/nginx/client_temp/0020878597" failed (28: No space left on device), client: 192.168.1.1, server: www.domain.com, request: "GET /cart/add/uenc/aHR0cDovL3d3dy5hYmNob21lLmNvbS9zaG9wL2xvdi1vcmdhbmljLWxvdi1pcy1iZWF1dGlmdWwtdGVh/product/19471/form_key/N8l3OyVkC1el9T8q/?product=19471&related_product=&send_to_friend=%2F%2Fwww.domain.com%2Fshop%2Fsendfriend%2Fproduct%2Fsend%2Fid%2F19471%2F&form_key=N8l3OyVkC1el9T8q&super_group%5B19425%5D=1&super_group%5B19424%5D= HTTP/1.1", host: "www.domain.com", referrer: "http://www.domain.com/shop/organic-tea"

Then when you check the free space, you see that you have more than enough, and all kinds of irrational thoughts start flowing into your mind, when really the culprit is simply inode space. Usually it is just that there are not enough free inodes left on your file system, simple as that, but it is easy to overlook, as for most people this doesn't happen often (and it shouldn't).

[root@hostname client_temp]# df -i
Filesystem              Inodes   IUsed    IFree IUse% Mounted on
/dev/mapper/os-root    1703936 1703103      833  100% /
tmpfs                  1524264       4  1524260    1% /dev/shm
/dev/sda1                51000      50    50950    1% /boot
/dev/mapper/os-tmp      131072    2155   128917    2% /tmp
/dev/mapper/data-data 19660800  578302 19082498    3% /data
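Once df -i confirms the filesystem is out of inodes, the next step is finding the directory hoarding all the small files. A couple of stock commands usually narrow it down quickly (run them on the full filesystem; -xdev keeps find from crossing into other mounts):

    # Count entries per top-level directory to spot the inode hog
    for d in /*; do echo "$d: $(find "$d" -xdev 2>/dev/null | wc -l)"; done

    # Or rank directories by how many files they directly contain
    find / -xdev -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head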
OPCFW_CODE
support for NewFB from Const Kaplinsky memory leaks squashed (localtime pseudo leak is still there :-) small improvements for OSXvnc (still not working correctly) synced with TightVNC 1.2.3 solaris compile cleanups many x11vnc improvements added backchannel, an encoding which needs special clients to pass arbitrary data to the client changes from Tim Jansen regarding multi threading and client blocking as well as C++ compliancy x11vnc can be controlled by starting again with special options if compiling with LOCAL_CONTROL defined added x11vnc, a x0rfbserver clone regard deferUpdateTime in processEvents, if usec<0 initialize deferUpdateTime (memory "leak"!) changed command line handling (arguments are parsed and then removed) added very simple example: zippy added rfbDrawLine, rfbDrawPixel inserted a deferUpdate mechanism (X11 independent). removed deletion of requestedRegion fixed font colour handling. added rfbDrawCharWithClip to allow for clipping and a background colour. fixed font colours added IO function to check password. rfbNewClient now sets the socket in the fd_set (for the select() call) when compiling the library with HAVE_PTHREADS and an application which includes "rfb.h" without, the structures got mixed up. So, the pthreads section is now always at the end, and also you get a linker error for rfbInitServer when using two different fixed two deadlocks: when setting a cursor and when using CopyRect fixed CopyRect when copying modified regions (they lost the modified WIN32 target compiles and works for example :-) fixed CopyRect (was using the wrong order of rectangles...) should also work with pthreads, because copyrects are always sent immediately (so that two consecutive copy rects changed rfbUndrawCursor(rfbClientPtr) to (rfbScreenInfoPtr), because this makes more sense! flag backgroundLoop in rfbScreenInfo (if having pthreads) CopyRect & CopyRegion were implemented. if you use a rfbDoCopyR* function, it copies the data in the framebuffer. If you prefer to do that yourself, use rfbScheduleCopyR* instead; this doesn't modify the frameBuffer. added flag to optionally not send XCursor updates, but only RichCursor, or if that is not possible, fall back to server side cursor. This is useful if your cursor has many nice colours. fixed java viewer on server side: SendCursorUpdate would send data even before the client pixel format was set, but the java applet doesn't like the server's format. fixed two pthread issues: rfbSendFramebuffer was sent by a ProcessClientMessage function (unprotected by updateMutex). cursor coordinates were set without protection by cursorMutex source is now equivalent to TridiaVNC 1.2.1 pthreads now work (use iterators!) cursors are supported (rfbSetCursor automatically undraws cursor) support for 3 bytes/pixel (slow!) server side colourmap support fixed rfbCloseClient not to close the connection (pthreads!) this is done lazily (and with proper signalling). cleaned up mac.c (from original OSXvnc); now compiles (untested!) compiles cleanly on Linux, IRIX, BSD, Apple (Darwin) rewrote API to use pseudo-methods instead of required functions. lots of clean up. Example can show symbols now.
OPCFW_CODE
- Elizabeth Swensen
- Nicole Feldl
Warmer is a web-based game that simulates global climate change. It takes place on a world like our own but with a few elements of narrative and visual fantasy to distinguish it from Earth. Each action a player takes pushes the calendar forward, advancing the climate model and displaying the impact of player actions as the world warms. Four interactive regions each have their own mini-game experiences and connections to the underlying model:
- In the City, a suite of proposals for civic action have the potential to reduce carbon emissions, but they come with various costs and levels of public support. Players vote on the initiatives and affect the radiative forcing of the climate.
- In the Arctic, sea ice is melting. Players block incoming solar radiation with a sea ice paddle and must avoid infrared radiation that melts the ice. As the world warms, the game grows more difficult: the sea ice paddle shrinks, though clouds also reflect more solar radiation to space.
- Wildfires are growing more frequent. Players fight fires using diminishing water reservoirs. A successful firefighting season leads to public goodwill, though fires produce a slight cooling effect due to interactions between smoke aerosols and clouds.
- Deforestation, industrial emissions, and carbon capture solutions all have the potential to influence carbon emissions. Players choose whether to deploy volunteers towards planting trees, protesting emissions, or investing in technology. The number of volunteers is influenced by public opinion, and different worker management strategies have different impacts on the radiative forcing of the climate.
After 30 years, the game ends, providing a narrative summary of the climate change and an encouragement to play again to attempt different outcomes.
What are the forcings and feedbacks that govern global climate change? In this game experience, players form hypotheses, perform experiments, and evaluate their results. The underlying climate model enables a hands-on approach to learning the complex interactions among components of a climate system that is changing due to human activities. By encouraging replay and demonstrating the impact of player choices, the game fosters agency and efficacy in the future of the planet.
Subject: Earth and Space Sciences
Duration: Approximately 30-45 minutes
Audience: Middle and high school students
Availability: Web-based game at https://warmergame.ucsc.edu.
The climate model underlying the game is a 1-dimensional energy balance model. It includes seasonal and latitudinal variations in climate, an idealized representation of sea-ice thickness and albedo, and an idealized representation of atmospheric energy transport as a moist diffusive process. The model code is available at https://github.com/nfeldl/EBM-icy-moist-seasonal.
Elizabeth Swensen (email@example.com) and Nicole Feldl (firstname.lastname@example.org), University of California, Santa Cruz. If you use the game in your classroom, we would love to know!
This material is based upon work supported by the National Science Foundation under award AGS-1753034. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
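The repository above contains the game's actual model. Purely to illustrate what an energy balance model is, here is a zero-dimensional toy in Python; the constants are generic textbook values, not the game's, and the game's 1-D model additionally resolves latitude, seasons, sea ice, and moist energy transport:

# Zero-dimensional energy balance: C * dT/dt = S/4 * (1 - albedo) - epsilon * sigma * T^4
S = 1361.0        # solar constant, W/m^2
albedo = 0.3      # planetary albedo
epsilon = 0.61    # effective emissivity (crude stand-in for the greenhouse effect)
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
C = 4.0e8         # heat capacity of a ~100 m ocean mixed layer, J/(m^2 K)
dt = 86400.0      # one-day time step, s

T = 255.0         # start cold and watch the model warm toward equilibrium
for day in range(365 * 50):
    net_flux = S / 4.0 * (1.0 - albedo) - epsilon * sigma * T**4
    T += net_flux * dt / C
print("Equilibrium temperature: %.1f K" % T)  # ~288 K with these numbers
# Raising epsilon (more greenhouse forcing) or lowering albedo (less ice)
# raises the equilibrium temperature: the core feedback the game explores.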
OPCFW_CODE
Creating a thread pool in C#
I have 300 threads which execute one by one. I use Join, so it's one by one. I want to execute N threads at a time. Can anyone direct me to a link on creating a thread pool in C# (with N threads)? My situation is: at any time N threads will execute, and the rest of the threads will wait. When one thread finishes execution, one waiting thread will enter into execution. Any code snippet is appreciated. Thanks a lot.
I think you are probably looking for ThreadPool.QueueUserWorkItem. Using the thread pool reduces the large overhead of thread creation and destruction. You can tune the threads (link in my comments) based on the hardware available, or allow the system to best manage the min/max simultaneous threads. If this is a web app, you can set this in your web.config to set your "N" thread number:

<system.web>
    <processModel minWorkerThreads="50"/>
</system.web>

If you are using .NET 4.0 you can also use Parallel.ForEach, which will automatically schedule each iteration of a loop onto multiple threads in the thread pool. (link in my comments)
Hey, one more thing - I just got today's MSDN Flash email and there is a free downloadable e-book on C# threading: http://www.albahari.com/threading/
Join does not dictate that the threads are run sequentially; it merely makes the current thread wait for the specified thread to finish before continuing. So if you start 300 threads and then join them all, the 300 threads will run in parallel and the joining thread will complete once the 300 threads are finished.

const int COUNT = 300;

// create and start the threads
var threads = new Thread[COUNT];
for (int index = 0; index < COUNT; index += 1)
{
    threads[index] = new Thread(...);
    threads[index].Start();
}

// now they're running, join them all
for (int index = 0; index < COUNT; index += 1)
{
    threads[index].Join();
}

// we're done

The important part is that you start them all before you start joining, otherwise you will wait for each thread to finish before starting the next, and then they really would be sequential. I guess this is what you may be doing?
Here my calling thread uses Join, so it waits until the created thread exits, then spawns a new thread.
@Jai: yes, you must start them all first, then start joining.
If I start N threads and join them, it will wait until all N threads are completed. But what I want is: if any one thread is completed, a new job (thread) has to be spawned.
@Jai: Then can't you get the threads themselves to spawn the new jobs, just before they finish? Or, better still, just do the work as part of their existing workload?
If most of your threads are waiting you should have a look at the System.Threading.ThreadPool class. It might just do exactly what you want. And it's efficient.
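For the exact behaviour the asker describes (N jobs running, the rest waiting, and a waiting job starting the moment one finishes), a throttling semaphore is a common pattern on .NET 4.0 and later. A sketch; MaxConcurrent and DoWork are placeholders for your own limit and job body:

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    const int TotalJobs = 300;
    const int MaxConcurrent = 8; // "N": jobs allowed to run at once

    static readonly SemaphoreSlim Gate = new SemaphoreSlim(MaxConcurrent);

    static void Main()
    {
        var tasks = new Task[TotalJobs];
        for (int i = 0; i < TotalJobs; i++)
        {
            int jobId = i; // capture the loop variable
            tasks[i] = Task.Factory.StartNew(() =>
            {
                Gate.Wait();                // blocks while MaxConcurrent jobs are running
                try { DoWork(jobId); }
                finally { Gate.Release(); } // frees a slot; one waiting job proceeds
            });
        }
        Task.WaitAll(tasks); // wait for all 300 jobs
    }

    static void DoWork(int id)
    {
        Console.WriteLine("job {0} on thread {1}", id, Thread.CurrentThread.ManagedThreadId);
        Thread.Sleep(100); // simulate work
    }
}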
STACK_EXCHANGE
- We have a chat app... but still serve it from the other server. Meanwhile the URL of the browser must still display the regular root domain. ([login to view URL]) I have seen it done before and it is probably very simple for someone who knows what they are doing. I am guessing PHP will be required. HTML skills also. Message me for more details. Please finish by 11pm California time on 7/22.
- PART 1: Write a PHP program to calculate the area of the following geometrical shapes. The formula for calculating area is given. (a) Square: side * side (b) Rectangle: length * width (c) Parallelogram: base * height (d) Triangle: base * height / 2 (e) Circle: Pi * radius * radius. Create three text boxes ...
- ...without any dependency and at least for Chrome, Firefox and Internet Explorer browsers. The executable file should be connected with a PHP panel. Admin Panel: coded in PHP with a login page; it will display the currently connected users from the adware, including their OS, installed date, state (online, offline), machine ID, and country.
- ...on all Windows versions and at least for Chrome, Firefox and Internet Explorer browsers. The executable file should be connected with a PHP panel. Admin Panel: coded in PHP with a login page; it will display the currently connected users from the adware, including their OS, installed date, state (online, offline), machine ID, and country.
- ...application for Windows PC users. One app will contain two forms: one for the user GUI to manage things from the user desktop, and another form that can be positioned onto a 2nd display to provide digital signage of a collection of photos and text strings. My website has this functionality already using multiple browser windows. The goal is to replicate the
- Hello! I want to be able to display the Facebook reviews of my users (their own pages) on my website, and I want them to be able to comment on them and like them. My website is built in PHP. Thank you for your time.
- I have a vacation rental project that uses a PHP template to display the units. Currently it has a large image with thumbnails stacked underneath. You have to click each thumbnail to display the image on top. We would like a slider instead, with arrows and automatic rotation of all images for the unit. The PHP file is attached.
- Hi, I need a web scraper in PHP and MySQL. Scrape the title and image from a website (scene rls net). There are many items on that site; each item looks like this: Title, Image, Text. Scrape the title and text into MySQL and the image onto my server. Each title needs to have an ID, and the image has the same file name as the ID number. For example, this is
- Need a node.js/Angular expert to fix upload issues to S3 ...algorithms - Rabin's algorithm. Skills in data security, cryptography, PHP, API, jQuery, JSON, object oriented and COBOL. The system must aggregate queries and processes and execute functions and commands with one-touch fingerprint authentication via phone touch key and web display (for users who don't have touch verification buttons on their mobile
- Good morning, I would need to modify the contents of the pages of the site [login to view URL]. The PHP script for managing the site is installed, and I would like to modify the display of the following pages because it will only deal with car and motorbike auctions published by visitors: - Home page, - Registration; - Announcement of the announcement;
- ...[login to view URL] I am a PHP programmer myself but I am looking for a partner.
- This cutestat website is crawling all the websites and building pages with lots of information about domains (DNS records, IP, keywords, Alexa data, Whois, etc). Your job would be to write a script to pull all this data for a domain and display it in a simple way (no design).
- ...Update using the POST button. NEED TO ADD IMAGE UPLOAD FEATURE. Need a freelancer to code: 1. Image upload function (jpg, png, gif only); when the image icon is clicked, select a file. 2. Display a preview of the image below the textbox, with the div automatically resizing. 3. Allow the image to be resized and the filename written to the database on POST. 4. Allow only an image, or an image with text
- ...senior PHP expert. The Wikioo server runs on Linux with Apache and MySQL. It has FTP, Webmin, phpMyAdmin, MySQL. All the SQL databases must be backed up. All the PHP must be backed up. We need a full backup to be able to reinstall the server if the disk crashes.
- PHP Code: a form-based web application that will have 2 functions: 1. Check eligibility: based upon a <10-field form, call an API, verify, and display success/fail (a number or a few lines of text). 2. Add: 1 complex form with approx. 150 different types of fields, a few hyperlinks to show data in a pop-up window, multiple lookups dynamically generated from the database.
- Hello World! I am looking for a PHP developer who can implement the following feature through the backend of my WooCommerce store: using the Advanced Custom Fields plugin, I have created 2 sections on my WooCommerce product listings (see image 1). I would like each of the options to link to an image that will display at the front end of the product
- ...app: Android app: - Autostart - GUI: 8 graphic 2-state buttons (even 9 or 10 buttons) customised for the user, clock, message display. - Basic function: send timestamp, button ID, RFID id to a web server PHP script and display the response from the server (e.g. "Welcome John Smith"...) - If no Internet: store data locally and send it to the server when back
OPCFW_CODE
Artificial Intelligence (AI) is revolutionizing the software development industry. With advancements in machine learning algorithms and deep learning techniques, AI has the potential to enhance the efficiency and quality of software development processes. Developers can now use AI-powered tools to automate tasks such as code generation, debugging, and testing, reducing the overall development time and improving the accuracy of the code.
AI is also being used to enhance user experience in software applications. Natural Language Processing (NLP) algorithms are enabling developers to create chatbots and virtual assistants that can understand and respond to user queries in real time. This has paved the way for more personalized and interactive software solutions.
Furthermore, AI is being utilized in software development for data analysis and predictive modeling. AI algorithms can analyze large volumes of data to identify patterns and trends, helping organizations make data-driven decisions. This has significant implications for various industries, including finance, healthcare, and marketing.
DevOps is a software development methodology that emphasizes collaboration, communication, and automation between software developers and IT operations teams. It aims to streamline the software development lifecycle, from planning and development to deployment and maintenance. One of the key trends in DevOps is the adoption of Continuous Integration/Continuous Delivery (CI/CD). With CI/CD, developers integrate code changes into a central repository on a regular basis, ensuring that the software is always in a releasable state. This enables faster deployment of new features and bug fixes, resulting in shorter time-to-market.
Another trend in DevOps is the use of containerization technologies like Docker and Kubernetes. Containers allow developers to package software applications along with their dependencies, ensuring consistency across different environments. This makes deployment and scaling more efficient and enhances the portability of software applications.
Low-code/no-code development platforms are gaining popularity among software developers due to their ability to accelerate the software development process. These platforms provide visual interfaces and pre-built components that allow developers to quickly build software applications without writing extensive code. Low-code/no-code platforms enable developers to drag and drop components, define business logic, and connect to various data sources, significantly reducing the time and effort required for development. This trend is particularly advantageous for organizations facing a shortage of skilled developers or those looking to rapidly prototype and test new ideas.
Moreover, low-code/no-code development platforms enable citizen developers, who may not have extensive coding knowledge, to participate in the software development process. This democratization of software development has the potential to unlock innovation and creativity across various industries.
Agile development continues to be a dominant trend in software development. The Agile methodology emphasizes iterative and incremental development, focusing on delivering value to the customer at regular intervals. It promotes collaboration, adaptability, and flexibility in project management.
One of the latest trends in Agile development is the concept of DevSecOps, which adds security practices to the traditional DevOps approach. With the increasing number of cyber threats and data breaches, integrating security into the development process has become imperative. DevSecOps promotes a proactive approach to security, ensuring that security measures are considered from the initial stages of development.
Another trend in Agile development is the rise of remote and distributed teams. With advancements in technology and communication tools, it has become easier for teams to collaborate and work remotely. This trend has opened up opportunities for global talent acquisition, enabling organizations to access a diverse pool of skills and expertise.
Cloud computing has transformed the software development landscape by offering scalable and on-demand computing resources. Developers can now leverage cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform to build and deploy their applications.
Serverless computing is a prominent trend within cloud computing. With serverless architecture, developers can write and execute code without provisioning or managing servers. This allows them to focus solely on application logic, without worrying about infrastructure management. Serverless computing offers cost savings, scalability, and faster time-to-market.
Furthermore, cloud computing enables seamless integration of various software components through Application Programming Interfaces (APIs). Developers can leverage API services to integrate third-party technologies, data sources, and functionalities into their applications, reducing development time and effort.
In conclusion, the software development industry is witnessing several exciting trends that are reshaping the way applications are built and deployed. From the advancements in AI and machine learning to the adoption of DevOps practices, these trends are enhancing productivity, collaboration, and innovation in software development. Developers need to stay updated with these trends to remain competitive in the rapidly evolving software development landscape.
OPCFW_CODE
Cut or Copy and Paste Elements
You can move or copy a single element or adjacent element cells, rows, or columns and paste them to one or more locations on the worksheet or in the workbook.
You can cut or copy:
- Cells and paste them as cells, rows, or columns.
- Rows and paste them as rows or columns.
- Columns and paste them as columns or rows.
To cut or copy and paste:
- Select the cells, rows, or columns to cut or copy.
- Use any of the following methods to cut, copy, and paste:
- From the ribbon: Click Cut or Copy from the Element group in the OfficeConnect tab.
- From the context menu: Right-click on the selection and select OfficeConnect > Cut or Copy from the context menu.
- Shortcut keys:
- Cut: Ctrl + Alt + x
- Copy: Ctrl + Alt + c
- Paste: Ctrl + Alt + v
- Navigate to the destination and paste using your preferred method.
Append an Element
You can add another element to an existing element in the report.
- Select the row, column, or cell that contains the element to which you want to append another element.
- Click the Element tab from the Reporting pane.
- Drag the element you want to append to the selected area in the grid. An alert appears prompting you to replace or append the design element, or to cancel the operation.
- Click Append. The label doesn't change yet. The element is appended.
- To show the appended element, click the Review tab from the Reporting pane.
- If necessary, click the row, column, or cell where you just appended the element.
- Click Refresh from the Design group in the OfficeConnect tab. The report data updates to reflect the new element appended to the grid. If you have an Adaptive label applied for the element, the label is updated to show the added element. See Adding an Adaptive Label for more information about labels.
Click Cancel if you don't want to replace or append the element.
Replace an Element
You can replace an element in the report with another element.
- Select the row, column, or cell that contains the element you want to replace.
- From the Elements tab in the Reporting pane, drag the replacement element to the selected area in the grid. A message appears letting you know that design elements of the same type exist in the selected area.
- Click Replace.
- Click Refresh from the Design group in the OfficeConnect tab. Report data updates to reflect the new element. If you have an Adaptive label applied to the grid, the label is also updated. See Adding an Adaptive Label to a Report for more information about labels.
Delete an Element
Note: The Excel Undo command is not supported for OfficeConnect functions and keystrokes. You can reverse native Excel functions as long as they are not associated with any Adaptive Planning data or labels.
- Use Excel functions to delete a row or column. When you delete an entire row or column, any elements applied to that row or column are also deleted. The remaining metadata shifts accordingly.
- Select the row, column, or cell that contains the element you want to delete. From the Reporting pane, click the Review tab. Right-click the element in the Review tab, and then click Delete. This action removes the element, but leaves the now-empty row or column in the grid. Refresh the report.
- Select the row, column, or cell that contains the elements you want to delete. Select multiple nonadjacent items in the grid by holding down the Ctrl key. Right-click the selection, and click Clear Design Elements on the shortcut menu.
This method is useful when you want to delete all metadata applied to multiple rows, columns, or cells in a single operation. Refresh the report. This action removes the metadata from the selected area. The rows or columns are retained in the grid.
Update Design Elements in OfficeConnect
When you start OfficeConnect, the latest set of design elements is pulled from your connected Adaptive Planning instance. If you know another design element has been added, you can refresh your design elements without having to restart OfficeConnect.
- Click Update Elements from the Elements tab in the Reporting pane.
- Review the list for any new items.
The following shows how to remove multiple elements by selecting the elements and clicking Clear Design Elements from the OfficeConnect context menu:
OPCFW_CODE
Tests for copy_relations, see #4143
Here's the pull request for the passing tests on copy_relations as suggested in Issue #4143 - for review... I've set up three environments with Django 1.6, 1.7, and 1.8. In each one the two new tests pass. Somehow, Django's built-in makemigrations does not create the correct files, while the South schemamigration command works. But... the tests work without the migration files, too...
Any idea why tests are failing?
@mkoistinen Do you mean why one commit is called "Tests for copy_relations. Two fail."? copy_relations does not work for relations between plugins. This was discussed in issue #4143. If you ask why the Travis build fails... I could have a closer look at it if you want. But when I checked out my branch, the build for the development branch failed as well.
@philippze I think here https://github.com/divio/django-cms/pull/4466/files#diff-1a778a76252565e5890a771389dff1f9R12 you should use the get_user_model function instead of importing the User model. Also: check if migrations involve User and eventually change like https://github.com/divio/django-cms/blob/develop/cms/south_migrations/0041_auto__add_usersettings.py
@philippze thanks for this contribution!
Coverage increased (+60.4%) to 89.574% when pulling ad15e1e75b93e01a348d214236674eec8bb288d0 on philippze:Issue-4143/tests into 99cf8668bc577334a98f83f35a798ad445f57f23 on divio:develop.
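For reference, the get_user_model suggestion above is Django's standard pattern for staying compatible with custom user models; roughly:

# Instead of: from django.contrib.auth.models import User
from django.contrib.auth import get_user_model

User = get_user_model()  # resolves whatever settings.AUTH_USER_MODEL points at

# e.g. in a test setup (assuming the model's manager provides create_user):
user = User.objects.create_user(username="test", password="secret")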
GITHUB_ARCHIVE
Organizations all over the world are shifting their IT resources to the cloud. For many of those organizations, choosing a cloud identity management platform like Google Cloud Identity is the first step. Google Cloud Identity offers a number of advantages as an identity provider (IdP) for Google's various services. Unfortunately, these advantages only apply to Google services. That doesn't include the ability to connect Google Cloud Identity with Macs.
Macs have become the preferred option in many modern organizations. They can offer numerous advantages, and it's not uncommon for Mac shops to leverage Google as an alternative to Microsoft solutions, specifically G Suite instead of Office 365. As a result, the question for a lot of organizations leveraging Google as their IdP is how to connect Google Cloud Identity with Macs. Before we can answer that, it's important to understand why this is an issue.
The Development of Cloud Identity Management
The current cloud identity and access management (IAM) space has a very interesting dynamic. Historically, it has been dominated by Microsoft Active Directory (AD), Windows-based systems, and IT resources on-prem. If you were to look back to the turn of the century, you would see that nearly everyone had a PC running Windows. They came to work each day, hardwired into the network, and authenticated against the on-prem AD domain controller living somewhere on-site. The result was a simple and secure IT infrastructure that was relatively easy to manage.
Then, things started to change in the mid-2000s. Application vendors started to shift their products to the cloud. Google had already gained immense traction with their search engine and seized an opportunity to provide a cloud-based computing platform to compete with Microsoft solutions. Over the years they fine-tuned their products to the point we are at today, with widespread implementations of G Suite and Google Cloud Platform.
At the same time, Apple was making massive inroads into the Microsoft-dominated PC market. The trouble was (and still is) that it was difficult to extend Active Directory to Mac systems. Microsoft made sure of that by limiting AD's capabilities for both user and device management for Mac systems compared to Windows endpoints. Nevertheless, Macs continued to gain popularity in the enterprise.
Today, Macs are a common sight in the office. Not surprisingly, Microsoft is still resisting the rise of Apple in the enterprise, making it difficult for sysadmins at each turn. That is why it is not uncommon for Mac shops to leverage Google as an alternative to Microsoft solutions like Office 365, Azure, and AD. The trouble is that Google Cloud Identity is designed to manage only Google's own services, which brings us back to the original question of how to connect Google Cloud Identity with Macs. The solution is to leverage a complementary cloud identity management platform called Directory-as-a-Service® from JumpCloud.
Directory-as-a-Service Connects Google Cloud Identity with Macs (and Much More)
Directory-as-a-Service completes the circuit between Google Cloud Identity and Macs by allowing IT admins to leverage Google cloud identities to authenticate on Mac systems. It works by first installing a lightweight agent on your Mac endpoints. Then, admins can import Google cloud identities into the JumpCloud administrative console by leveraging JumpCloud's G Suite Directory Sync feature.
The result is that imported Google Cloud Identities can then be federated to Mac system endpoints for authentication and access management with the user’s Google credentials. Cloud Identity Management Capabilities with Directory-as-a-Service The best part is that connecting Google Cloud Identity with Macs is only one aspect of the comprehensive management platform that Directory-as-a-Service has to offer. Directory-as-a-Service seamlessly integrates with Google cloud identities and federates those identities to a wide variety of IT resources including systems (Mac, Windows, Linux), cloud servers at AWS or Azure in addition to GCP, on-prem and web applications via SAML and LDAP, physical or virtual storage, and wired or WiFi networks through RADIUS. The following are a few examples of the other powerful capabilities available – all of which can be integrated in a single, unified cloud directory with Google Cloud Identity credentials: - Directory Services - User Management - Device Management - REST API User Management - Group Management - Cloud Single Sign-On - Cloud RADIUS Server - Cloud LDAP - Event Logging API - Microsoft Office 365 Integration - Multi-Factor Authentication (MFA) - Active Directory Replacement We invite you to click on any of the links above to better understand that component of JumpCloud’s platform. To learn more about how to connect a Google Cloud Identity with Macs, watch the video above or reach out to us directly. You can also sign up for a Directory-as-a-Service account and start connecting a Google Cloud Identity to all of your IT resources today. Your first ten users are free forever – so you can think of this as a test environment where you can demo our system and identity management capabilities.
OPCFW_CODE
WIP Docker GitHub actions
Summary
This PR is for adding GitHub Actions to build Axom in gcc8 and clang10 Dockerfile images. It does the following (modify list as needed):
- Modifies docker files.
- Adds GitHub Action yml file at the request of Chris White.
Design review (for API changes or additions---delete if unneeded)
On (date), we reviewed this PR. We discussed the design ideas:
1. Add GitHub Actions from Serac to Axom.
2. Get the builds working with adjustments to dockerfiles and build files.
3. Get the hostconfig file.
This PR implements 1. It leaves out 2 & 3 for the following reasons (WIP):
Clang: Currently it looks like there is an hdf5 configuration/build error:
2020-12-01T01:42:02.2593088Z #13 355.2 [ RUN ] spio_serial.basic_writeread
2020-12-01T01:42:02.2593832Z #13 355.2 HDF5-DIAG: Error detected in HDF5 (1.8.21) MPI-process 0:
2020-12-01T01:42:02.2594767Z #13 355.2 #000: H5E.c line 1591 in H5Eget_auto2(): library initialization failed
2020-12-01T01:42:02.2595344Z #13 355.2 major: Function entry/exit
2020-12-01T01:42:02.2595839Z #13 355.2 minor: Unable to initialize object
2020-12-01T01:42:02.2596446Z #13 355.2 #001: H5.c line 210 in H5_init_library(): unable to initialize dataset interface
2020-12-01T01:42:02.2597036Z #13 355.2 major: Function entry/exit
2020-12-01T01:42:02.2597509Z #13 355.2 minor: Unable to initialize object
2020-12-01T01:42:02.2598114Z #13 355.2 #002: H5Dint.c line 141 in H5D_init(): interface initialization failed
2020-12-01T01:42:02.2598677Z #13 355.2 major: Function entry/exit
2020-12-01T01:42:02.2599162Z #13 355.2 minor: Unable to initialize object
2020-12-01T01:42:02.2600181Z #13 355.2 #003: H5Dint.c line 183 in H5D__init_interface(): can't get default dataset creation property list
2020-12-01T01:42:02.2600771Z #13 355.2 major: Dataset
2020-12-01T01:42:02.2601211Z #13 355.2 minor: Inappropriate type
2020-12-01T01:42:02.2601586Z #13 355.2
2020-12-01T01:42:02.2601891Z #13 355.2 ***********************************
2020-12-01T01:42:02.2603522Z #13 355.2 [ERROR in line 2490 of file /home/axom/axom_tpls/builds/spack-stage-conduit-master-5fomtqyji7khjpzwirhwaobqy5uujn4i/spack-src/src/libs/relay/conduit_relay_io_hdf5.cpp]
2020-12-01T01:42:02.2605292Z #13 355.2 MESSAGE=HDF5 Error code-1 Failed to create H5P_FILE_CREATE property list
2020-12-01T01:42:02.2605899Z #13 355.2 ** StackTrace of 24 frames **
gcc looks like it's passing but failing some permission.. maybe:
2020-12-01T03:02:41.3638563Z #21 pushing layers
2020-12-01T03:02:41.7465550Z #21 pushing layers 0.4s done
2020-12-01T03:02:41.7467250Z #21 ERROR: server message: insufficient_scope: authorization failed
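This is not the PR's actual yml, but a container-based build job on GitHub Actions generally has this shape (the image names, host-config path, and build commands below are placeholders):

# .github/workflows/docker-build.yml (illustrative only)
name: build
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        compiler: [gcc8, clang10]
    container:
      image: example/axom-tpl:${{ matrix.compiler }}   # hypothetical image names
    steps:
      - uses: actions/checkout@v2
      - name: configure and build
        run: |
          mkdir build && cd build
          cmake -C ../host-configs/docker.cmake ..      # hypothetical host-config path
          make -j2 && ctest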
GITHUB_ARCHIVE
Create the Contact us page for GirlScript Chennai website
I would like to work on this issue. Will this use Flutter Web?
No, we are creating a separate website.
So, just HTML, CSS and JS?
Ya.. for now
Ohkay, then I would like to work upon it.
I would like to work on this issue
I would like to work on it in React. I can also help in converting it to a component for reusability.
@AjinkyaTaranekar has been assigned this for a week.. once his PR is reviewed.. @manan-bedi2908 and @sanket9918.. the mentors will raise some specific issues for the events page and you can proceed then
I'd like to work on this issue too.
@AjinkyaTaranekar the initial home page is up; please check and proceed accordingly with the contact us page
@mySTRiouscoder sure.. once the mentors raise an issue for updating the page.. you can start off
i would like to work on it
@smaranjitghose Sure, I am currently making its UI, and I want a suggestion on the feedback form: should I connect it to Google Sheets for the responses?
yes you can.
@smaranjitghose I tried to connect the form to Google Sheets, but data is not being sent into the sheets. Can you help?
yes, can I try?
@AjinkyaTaranekar What error are you facing?..
@vinay2214 try it
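On the form-to-Google-Sheets question discussed above, the usual pattern is a small Apps Script web app bound to the sheet. A sketch; the field names, form selector, and deployment URL are all placeholders:

// Apps Script side, deployed as a web app (execute as you, accessible to anyone):
function doPost(e) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  sheet.appendRow([new Date(), e.parameter.name, e.parameter.email, e.parameter.message]);
  return ContentService.createTextOutput("ok");
}

// Browser side: post the form fields as URL-encoded data to the deployed URL.
// mode: "no-cors" is commonly needed since the script endpoint doesn't send CORS headers.
var form = document.querySelector("#contact-form");
fetch("https://script.google.com/macros/s/DEPLOYMENT_ID/exec", {
  method: "POST",
  mode: "no-cors",
  body: new URLSearchParams(new FormData(form)),
});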
GITHUB_ARCHIVE
The game Wordle is in vogue right now, and we got wordle-game. There also exist crossword, sudoku and wordsearch. Are these really necessary? We don't have hangman or checkers. These game and puzzle tags read to me as meta tags. They don't add anything of value to the question. chess is a bit of a grey area to me.
Burnination is not appropriate because none of them clearly fails the four required tests. In fact, each fails only the second test:
1. Does it describe the contents of the questions to which it is applied? And is it unambiguous? Yes. Such questions can be assumed to be implementing algorithms mostly unique to those types of games.
2. Is the concept described even on-topic for the site? No. On their own, their subjects have nothing to do with programming.
3. Does the tag add any meaningful information to the post? Yes. It may be useful to understand the type of game the programmer is attempting to write when solving a problem related to it.
4. Does it mean the same thing in all common contexts? Yes. Each tag is colloquially understood as the name of each game. wordsearch could be made to be ambiguous with string searching, but there are already more appropriate tags for the latter.
Similar declined requests:
- sudoku: https://meta.stackoverflow.com/a/273081/584676
- flappy-bird-clone: https://meta.stackoverflow.com/a/273088/584676
- tic-tac-toe and tictactoe: Tag merge or burninate candidates: "tic-tac-toe" and "tictactoe"
- Note that while these were not burninated, they were synonymized, which is why that "burninate request" has the status-completed tag.
I would think there is some value in the kind of game you want to implement. A lot of game programmers start out with "more basic" games like Sudoku or Tetris or crossword puzzles (e.g. in PyGame or elsewhere), and it can be useful to filter on questions about the kind of game you are trying to create, since the shared rule set means you will usually implement your game code the same way. That being said, I think there is a distinction to be made between some of the tags you have mentioned. Sudoku, for example, is a type of game, whereas Wordle is a specific game within a type of game (word guessing/word searching). I don't think we need specific game tags (e.g. wordle-game, etc.) at all.
Should we remove wordle-game, wordsearch, sudoku and crossword? Yes, we should. And not because they succeed/fail the burnination criteria, but because they fail the meta-tag criteria. You are speaking about the overall goal of the code you are writing, not about the specific technologies that you are using within the programming context. I could have the world record in Wordle or play Sudoku professionally in championships, but I would be unable to answer the question asked if I do not know anything about the software-development context of the question, just as I wouldn't know how to answer Java questions either. Yes, knowing how those games work can help, but it's on the asker to explain what the supposed code should achieve, not the tags. I've created calculators, plugins, web apps, etc., and people knowing what the final product of all my code is rarely helped. I would say that it would be useful for code review (code is supposed to do X, review that it does so efficiently), but for that reason, we aren't in the code review business here anymore.
OPCFW_CODE
About me
VLC media player can not only convert between different video formats; it also does the same with audio formats. Supported formats: MP3, AAC, M4A, WAV, FLAC, AMR, OGG, 3G. It's integrated with a DVD media toolkit that edits, burns and converts DVD media files. It supports displaying and editing ID3 v2 data (users can attach cover art, lyrics and other data to the music file). On the Windows platform, you can also install UniConverter for Windows and convert FLAC to M4A or M4A to FLAC. However, in this guide we'll show how to convert M4A to FLAC with UniConverter.
If you compress a song and it loses data, you can't uncompress it to retrieve the data. If you convert a song from a compressed to an uncompressed format, its quality doesn't improve. The file only takes up more disk space. An example is when you convert a song in MP3 format (a compressed format) to AIFF (an uncompressed format). The song takes up much more space on your hard disk, but sounds the same as the compressed file. To benefit from uncompressed formats, it's best to import songs in those formats.
The majority of desktop and mobile devices sold nowadays come with native support for MP3 and M4A files alike. For higher quality results, I recommend you choose M4A, which can provide better sonic results at the same settings, all while still producing smaller file sizes than MP3. Alternatively, if guaranteed compatibility is what you want most, MP3 will be the wiser choice of the two.
Convert M4A to FLAC - online and free - this page also contains information on the M4A and FLAC file extensions. The above questions are only the tip of the iceberg. If you are facing the same questions or other related M4A-to-FLAC conversion issues, this passage will do you a favor. Please spare a few minutes to resolve your problems here. First, I want to say a few words about the differences between FLAC and ALAC audio files.
Yes, you can use AnyConv on any operating system that has a web browser. Our M4A to FLAC converter works online and does not require software installation. Note: each music file can be up to 300 MB. When a file is converted to an MP3, the 300 MB limit applies to the converted MP3 file.
Free Lossless Audio Codec, or FLAC for short, is an audio compression method. It is a lossless compression type, which means that the compression takes place without data being discarded. FLAC is an open source codec. FLAC is a format that's recommended to those backing up a CD collection, because the sound quality will remain high, whereas MP3 compression will result in a deterioration compared to the original.
Click on the drop-down arrow on the left-hand side of the program, next to the words "Output Format", to view the list of available audio file formats. To convert your M4A files to WAV files, simply choose the option from this list that says "WAV". All conversions performed will now be to the WAV audio file format. The only audio file converter I care about; I have no need for anything else.
On your keyboard, hold down the Option key and select File > Convert > Convert to [import preference]. AuI ConverteR is pro studio software that can be used in projects of any level.
First you need to add a file for conversion: drag and drop your M4A file or click the "Select File" button. Then click the "Convert" button. When the M4A to FLAC conversion is completed, you can download your FLAC file.
After opening the program on your Mac, go to the "File" menu and select the "Load Media Files" option. A window will pop up, and you will be able to upload the FLAC file. You can also drag and drop the files into the program. Alternatively, at the centre of the screen you will see an "Add Files" button. Click on it, and you will be able to upload the files.
Though streaming services may come and go, and even the long-term prospects of Spotify are not assured, a FLAC file is like a CD: once you buy it or rip it, it is yours forever (barring storage catastrophes). FLAC may never really supplant MP3, but if you care about sound quality, then FLAC is undoubtedly your best choice - both now and into the foreseeable future.
It supports batch conversion - you can select all the files you wish to convert and the software will automatically process them one after the other. There are also free converters that you can download from the Internet, designed to convert M4A files to MP3 or other audio file formats. Since they're free, they usually come with ads. They also allow customization of output quality settings, and some even include a built-in player that lets you listen to the audio files saved on your computer. You can hit the convert button and retrieve your converted file from the destination you created or selected. You'll see a progress bar through the conversion, and soon the FLAC files will get converted to MP3.
So why should you care? Quite simply, hi-res audio files, with all that extra audio data, should sound a lot better than compressed audio formats, which lose information in the compression process. They may take up more storage space, but we definitely think it is worth the trade-off. If you're sticking with lossy, it's worth remembering this: while more "bits" usually means better sound, it depends on the efficiency of the codec in your file. Although you might notice that much of the music in your collection is encoded at 128kbps and so should be much of a muchness, an MP3 will likely sound a good bit (see what we did there?) worse than an AAC or Ogg Vorbis file, because of the inefficiency of the MP3 codec.
Waveform audio files (also referred to as WAV files) are one of the more popular digital audio formats and a gold standard in studio recording. WAV was one of the first digital audio formats, and quickly became a staple across all platforms. Despite decades of progress, it still maintains its place as one of the world's leading pro audio formats.
For Mac users, the best FLAC to M4A converter is the Apowersoft Video Converter for Mac, which is a fantastic utility specially designed for Mac OS. We'll see a variety of options for output formats, especially for Apple's devices and applications like iPhone, iPad, iMovie, iTunes, Final Cut Pro, etc. Now check out the guide on the conversion.
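For what it's worth, none of the GUI tools above are strictly required; with ffmpeg installed, the same conversion is a single command (file names are examples):

# M4A (AAC or ALAC) to FLAC. Note that converting from lossy AAC does not
# restore any quality; it only changes the codec and container.
ffmpeg -i input.m4a -c:a flac output.flac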
OPCFW_CODE
Devart dotConnect returns an unexpectedly long decimal while using Entity Framework 6 against Oracle
I have a record which contains a number in a certain field: 25.99. Whenever I select this record using Devart dotConnect from my C# code, it returns 25.990000000000002. Therefore, my update statement prompts this exception:
Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries.
My code snippet is as follows:

var busRoutes = (
    from route in ctx.RMBUSROUTEs
    select route
).ToArray();

foreach (var busRoute in busRoutes)
{
    busRoute.LASTUPDDATE = DateTime.Now;
}

ctx.SaveChanges();

There are only ~100 records inside this table. Most of the records contain decimal numbers in that field, while only one specific record always retrieves 25.99 as 25.990000000000002. I am sure that the value stored in the DB is 25.99. How come? Thanks in advance.
What data type is it? Because if it is float or double (which I would bet on), you had better re-read the properties of floating-point numbers. The TL;DR is: never expect them to be precise, because they aren't.
@FranzGleichmann the programming type is double and the DB type is number(5, 2). This field is my primary key; if they aren't precise, then how can I update records in this table? EF can never help me match the record then...
double as a primary key is a horrible idea. You should never do that, especially since floating-point numbers aren't precise. Either use a synthetic ID (int or guid, in a pinch string), or at least change it to decimal. (Also, on the assumption that 25.99 is a bus route number or something: here, string is actually the weapon of choice. Numeric data types are for numbers - if it doesn't make sense to, for example, add two values, it's not a numeric value - even if it consists of digits.)
As @FranzGleichmann said, it is better not to use double as a primary key. The provider-specific conversions are described at https://forums.devart.com/viewtopic.php?t=16114#p175435.
Thanks @FranzGleichmann and Devart. For those who come to this question and for whom mutating your table schema is not a solution, you may try to change the mapping in the entity from float/double to decimal. You may find how decimal resolves this problem in https://www.youtube.com/watch?v=PZRI1IfStY0
@mannok maybe you could write an answer yourself with your solution, so whoever searches for the same problem in the future can find it more easily? (Plus: I also like https://floating-point-gui.de/)
Try to change the type of LASTUPDDATE from DateTime to string. I don't know why, but it worked for me.
a) This answer has very low quality overall. b) It does not address the problem described in the question. c) Answers based on "I don't know why" are never a good idea. We're not here to blindly guess, but to provide knowledge. Please read how to write a good answer.
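Following the advice above, the usual EF6 fix, when you can't change the column itself, is to map the property as decimal and declare its precision. A sketch; the entity and property names here are illustrative, and OnModelCreating belongs in your DbContext subclass:

public class RMBUSROUTE
{
    public decimal ROUTENO { get; set; }   // was double; decimal round-trips NUMBER(5,2) exactly
    public DateTime LASTUPDDATE { get; set; }
}

// Inside your DbContext subclass:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<RMBUSROUTE>()
                .Property(r => r.ROUTENO)
                .HasPrecision(5, 2);       // match the Oracle NUMBER(5,2) column
}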
STACK_EXCHANGE
Center-tapped transformer and inductance calculation for transformer driver
Using a push-pull transformer driver, for example the LT3439 like below, with a center-tapped transformer, more precisely this one here.
So there is a center-tapped transformer (with a 1:1.3 ratio between the two "full" windings, without the tap) and a half-bridge rectifier. At each half-cycle of the switching frequency, a current flows through half the primary and half the secondary at any instant. I wonder now: the linked transformer datasheet says the primary inductance (the whole primary winding, if I understand correctly) is 475uH, and the ratio is \$a = V_p/V_s = 1/1.3\$. So I wonder what the equivalent inductance at the secondary is. Is it \$L_p \cdot 0.5^2 \cdot (1/a^2) = 200uH\$, or am I wrong? Given \$|Z_L| \propto L\omega\$ and the general formula \$|Z_{sec}| = Z_{Lprim}/a^2\$ (load seen at the secondary), and the fact that \$L_{coil} \propto N^2\$ (N = # of turns), I would think it's a correct assumption, but I'm far from being sure. The 0.5^2 factor thus would be because current flows only in half the transformer coils at any instant. Can anyone confirm? The reason I'd like to know the inductance at the secondary is to calculate an RLC filter with it (to match the output filtering to my desire), taking the L of the transformer directly into my calculations. I'm also interested in the answer for the sake of theory. Thanks!
"The 0.5^2 factor thus would be because current flows only in half the transformer coils at any instant." Are you looking for one winding on the secondary? No square on the 0.5 term then. I get 140.5 uH on each secondary winding.
I'm looking for the inductance of the "half-winding" at the secondary in which current flows, since it's this inductance that would sit next to C1 in the picture, and is thus what the LC filter calculations need. Updated my image to reflect better what I'm seeking.
As Obi-Wan Kenobi said, 'this is not the inductance you are looking for'. The inductance of the winding itself does not 'appear' at the terminals; it's substantially shorted out by transformer action and the primary drive. The inductance you can 'see' at the output terminals, and which is available to build a filter with, is the leakage inductance. In a good transformer, this is an order of magnitude or two below the winding inductances, though in some transformers it's enhanced, sometimes as a circuit element, sometimes as a by-product of inter-winding isolation.
The inductance of the whole primary winding (pins 1 to 3) is given as >475uH. The inductance of a half winding is therefore (going as N^2) >119uH. As the turns ratio is 1.3:1, and again with inductance going as N^2, the inductance of half of a secondary winding is >200uH. This is the inductance of the winding, not the inductance that the transformer presents to the outside world when it's being used as a transformer. Assuming that one of the half-primaries is being driven all the time, the impedance at the primary side is very low, and this is the impedance (times the turns ratio squared, so times about 2) that the transformer secondary presents. If you want to filter the output with an RLC, you will need to provide each of the R, L, and C externally. The transformer output does look a bit inductive, but this is the leakage inductance, not the winding inductance. In a good transformer, the leakage inductance is designed to be as small as possible, and is usually an order of magnitude or two smaller than the winding inductances. It depends on the geometry of the windings and the spaces between them. You can think of it this way: anything that's good for low leakage inductance is bad for low inter-winding capacitance; you can optimise one or the other, but not both. Some transformers enhance leakage inductance deliberately as a circuit component, for instance microwave oven transformers. Some have high leakage inductance as an unwanted by-product of having high inter-winding isolation. But most general-purpose transformers will have striven for reasonably low leakage inductance.
I indeed thought I could somewhat use the transformer inductance as a filter (flyback style). Wurth says the value of 475uH is "L1" on their site. Are you implying the corresponding reactance is "Xm" (the central branch of the T model), and not L2,s as I assumed (the right reactance of the T model)? So my calculations would be right if that was L1,p, which it is not? Sorry for double confirming, but I want to be sure! And as a side question, how come flyback transformers can do LC filtering directly?
Flyback transformers have current flowing in the primary, or the secondary, but not both. This means the primary is open circuit when secondary current is flowing, so that's the impedance that appears across the secondary L. You have a 'forward' transformer, where current flows in both windings. The primary impedance is a short circuit, which 'shorts out' the secondary L.
"The reason I'd like to know the inductance at the secondary is to calculate some RLC filter with it (to match the output filtering to my desire)." You are misunderstanding how this forward converter works; it appears you might be thinking it operates like a flyback converter, which it doesn't. The inductance that can be used in the secondary as a filter is the leakage inductance, not the primary-to-secondary projected magnetization inductance. Leakage inductance is not given in the datasheet, but it can be assumed to be between 2% and 5% of the magnetization inductance as projected via the turns ratio squared.
Yes, the distinction I was looking for, and it answers most of the misunderstandings people have, is that in a forward transformer, load current flows in the primary and the secondary at the same time; flux is due to the difference, which can (should) be very small, so high ur, no need for energy storage, and the winding L is shorted out by the primary drive. In a flyback, current flows in the primary or the secondary but not both; flux is due to the full winding current, high energy storage is required, and the primary is open so Ls is not shorted. The only non-intuitive thing now is how low ur gets good energy storage.
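To make the winding arithmetic in the accepted answer explicit (treating the >475uH figure as exact, purely for illustration):

\$L \propto N^2 \Rightarrow L_{p,half} = \frac{475\,\mu H}{2^2} \approx 119\,\mu H\$

\$L_{s,half} = L_{p,half}\left(\frac{N_s}{N_p}\right)^2 = 119\,\mu H \times 1.3^2 \approx 200\,\mu H\$

Again, this is the winding inductance only; the inductance usable for output filtering is the much smaller leakage inductance.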
STACK_EXCHANGE
SDL2 fullscreen and rendering to texture
I'm using a class called UICanvas; this class has a texture that is used by child objects to render to. I use SDL_SetRenderTarget to achieve this, and it works quite well, but when I toggle to fullscreen, the whole screen uses the canvas texture as a screen texture for some reason. Is this a bug, or have I done something wrong? This is the draw function for the canvas class; m_canvas is a texture used for the child objects to draw to.

void UICanvas::draw( SDL_Renderer* onWhat, Uint32 deltaTime )
{
    /* Change rendering target to the canvas. */
    SDL_SetRenderTarget( onWhat, m_canvas );

    /* Fill with background color */
    Color old;
    SDL_GetRenderDrawColor( onWhat, &old.r, &old.g, &old.b, &old.a );
    SDL_SetRenderDrawColor( onWhat, m_color.r, m_color.g, m_color.b, m_color.a );
    SDL_RenderFillRect( onWhat, NULL );
    SDL_SetRenderDrawColor( onWhat, old.r, old.g, old.b, old.a );

    /* Draw child textures on the canvas. */
    drawChildren( onWhat, deltaTime );

    /* Reset to default rendering target */
    SDL_SetRenderTarget( onWhat, NULL );

    /* Position and render canvas */
    Point2Df abs( getAbsolutePosition() );
    Rectangle dest( static_cast<Sint32>( abs.x ), static_cast<Sint32>( abs.y ), m_width, m_height );
    SDL_RenderCopy( onWhat, m_canvas, NULL, &dest );
}

Whenever I toggle fullscreen the whole screen goes white. This is the function I use to toggle fullscreen:

void Application::toggleFullscreen()
{
    Uint32 flags( SDL_GetWindowFlags( GE_SYS.window ) );
    flags ^= SDL_WINDOW_FULLSCREEN_DESKTOP;
    SDL_SetWindowFullscreen( GE_SYS.window, flags );
    SDL_DestroyRenderer( GE_SYS.renderer );
    GE_SYS.renderer = SDL_CreateRenderer( GE_SYS.window, -1,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE );
    //SDL_SetRenderTarget( GE_SYS.renderer, NULL );
}

I've found a somewhat satisfactory solution to my problem. After having read this forum discussion (link), I forced SDL to use OpenGL via SDL_SetHint( SDL_HINT_RENDER_DRIVER, "opengl" ); before creating the window and renderer.
I'm still very new to SDL (never mind any nuances between SDL 1.2 and SDL 2), but I'm sure someone will be around to correct me (eventually) if I'm wrong. From the code you posted, it looks like you're destroying and creating renderers every time you toggle fullscreen while using "fullscreen desktop" mode. This definitely can't be right; note that SDL_DestroyRenderer also destroys every texture created with that renderer, so after the toggle your m_canvas texture is no longer valid, which is consistent with the white screen. You shouldn't be forcing a specific driver either (as per your comment that forcing the OpenGL driver seemed to fix things). Instead, it seems to me that all you need to do is update the window and/or logical rendering size.

void Application::toggleFullscreen()
{
    Uint32 flags(SDL_GetWindowFlags(GE_SYS.window));
    flags ^= SDL_WINDOW_FULLSCREEN_DESKTOP;
    SDL_SetWindowFullscreen(GE_SYS.window, flags);

    int w = 640, h = 480; // TODO: UPDATE ME! ;)
    if ((flags & SDL_WINDOW_FULLSCREEN_DESKTOP) != 0)
    {
        SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
        SDL_RenderSetLogicalSize(GE_SYS.renderer, w, h);
    }
    else
    {
        SDL_SetWindowSize(GE_SYS.window, w, h);
    }
}

This method of toggling fullscreen is working for me, and doesn't involve creating and destroying renderers all the time (when toggling). If this doesn't fully resolve the issue I apologize in advance, but if nothing else this should be a step in the right direction.
A test suite was developed manually for this transformation. However, each test required the construction of a complex data structure, which limited the sizes of the tests. The problem was to develop large data structures to test the correctness and efficiency of the transformation. To do this manually would have taken too long and been error prone. NPL has over the years developed a means to generate random self-checking compiler test programs, in particular for PASCAL (and variants), Ada and several other languages. This article briefly describes the random self-checking program generator and how the technology used to construct it was applied to generate tests for the bridging software described above.

Validation suites for compilers have been developed consisting of manually produced programs, which are used to test compilers. Unfortunately, compilers can pass these and still have serious bugs in them. The aim of the random program generators was to generate programs which should compile and, on execution, perform a self-check. The basic idea is to generate very complicated constructs by recursive descent and check that these are evaluated correctly. A very simple example might be the following:

V1 := 6;
V2 := (4 - (2 - 1)) * ((3 * 2) / (8 mod 5));
IF V1 = V2 THEN

The value for V1 is chosen at random, but such that it is correct in terms of its type. V2 is then assigned a value by means of a complex data structure, which is created by recursive descent from the value that V1 has been set to, in this case 6. The value of V1 can then be checked against that of V2. The same technique can be generalised for most of the constructs of a programming language. The main technical problem is preserving the correct semantics while doing the recursive descent. This technique has produced program generators which have found many bugs in commercial compilers, and it has been used to give assurance for compilers in a safety-critical context.

The bridging tool forms a bridge between two commercially available tools, NP-Tools (available from Prover Technology AB and distributed in the UK by the National Physical Laboratory) and FaultTree+ (available from IsographDirect). NP-Tools is a tool which can be used to model the logic of systems and prove properties about the models. FaultTree+ is a reliability analysis tool which allows you to develop fault trees and do some analysis on them. The bridge between them is the Automatic Fault Tree Generator (AFTG), see Figure 1. Fault trees consist of logical gates. To test the AFTG fully, it was required to produce fault trees of the order of 4000 gates (the maximum for FaultTree+ is 5000). To create the appropriate model in NP-Tools would be too time consuming, so it was decided to generate fault trees. The random generator technology described above was used to generate the fault trees. The fault tree generator had to enforce two semantic properties: the first being that there existed one set of values for which the fault tree evaluated to true, and the second being that there were no circularities (fault trees are allowed to refer to other parts of the tree rather than just repeat it). To enable the results of the tests to be checked, the ability of NP-Tools to prove that two logical models are equivalent was used. A fault tree was generated automatically and placed in NP-Tools (via a converter, FTtoNPT), see Figure 2. The fault tree in NP-Tools was then transformed via AFTG into FaultTree+ and then back into NP-Tools, see Figure 3. NP-Tools can be used to prove the two fault trees are logically equivalent, in which case the test has passed; if not, the test has failed. The fault tree generator was implemented in the scripting language Tcl/Tk, with parameters to select size and degree of repetition within the generated fault tree. Fault trees were generated of the order of 4000 gates. This technique was found to be very effective in testing the AFTG, as the generated fault trees are more complex than would normally be produced. The technique described above was very good at generating complex tests. It can be applied when the tests require complex structures, have relatively simple semantics, and there is a means to check the results automatically.

Graeme Parkin, National Physical Laboratory
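The article above does not include the NPL generator itself, but the recursive-descent idea is easy to illustrate. The following is a toy sketch in Python, not NPL's actual tool: it builds a random integer expression guaranteed to evaluate to a chosen target, which is exactly the invariant the self-check relies on.

import random

def gen_expr(target, depth=3):
    """Build a random integer expression that evaluates to `target`."""
    if depth == 0:
        return str(target)
    op = random.choice("+-*")
    if op == "+":
        a = random.randint(-9, 9)
        # a + (target - a) == target, whatever a we picked
        return f"({gen_expr(a, depth - 1)} + {gen_expr(target - a, depth - 1)})"
    if op == "-":
        a = random.randint(-9, 9)
        # (target + a) - a == target
        return f"({gen_expr(target + a, depth - 1)} - {gen_expr(a, depth - 1)})"
    # '*' case: (target * a) // a == target, since the division is exact
    a = random.choice([2, 3, 5])
    return f"({gen_expr(target * a, depth - 1)} // {gen_expr(a, depth - 1)})"

v1 = random.randint(-20, 20)
expr = gen_expr(v1)
print(f"V1 := {v1};  V2 := {expr};  IF V1 = V2 THEN ...")
assert eval(expr) == v1  # the generated program performs this check itself

Preserving the semantic invariant at every recursive step (here, "the subexpression evaluates to the value I was asked for") is the "main technical problem" the article mentions.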
How to pass a generic record as a parameter to the TFileStream.Read function?

I have several record types to read from a file, for example

PDescriptorBlockHeader = ^TDescriptorBlockHeader;
TDescriptorBlockHeader = packed record
  BlockType: UInt32;
  BlockAttributes: UInt32;         // +4
  OffsetToFirstEvent: UInt16;      // +8
  OsId: byte;                      // +10
  OsVersion: byte;
  DisplayableSize: UInt64;         // +12
  FormatLogicalAddress: UInt64;    // +20
  SessionId: UInt64;               // +28
  ControlBlockID: UInt32;          // +36
  StringStorage: MTF_TAPE_ADDRESS; // +40
  OsSpecificData: MTF_TAPE_ADDRESS; // +44
  StringType: byte;                // +48
  Reserved: byte;                  // +49
  HeaderChecksum: UInt16;          // +50
end;

and I want to use a common function to read from the file:

type
  TReaderHelper = class
    class procedure ReadToStruct<T: record>(stream: TFileStream; offset: Int64);
  end;

implementation

class procedure TReaderHelper.ReadToStruct<T>(stream: TFileStream; offset: Int64);
var
  rd: integer;
begin
  stream.Position := offset;
  if stream.Position <> offset then
    raise Exception.Create('Seek error');
  rd := stream.Read(T, sizeof(T));
  if rd <> sizeof(T) then
    raise Exception.Create('Read ' + IntToStr(rd) + ' instead of ' + IntToStr(sizeof(T)));
end;

The compiler gives me error E2571 Type parameter 'T' doesn't have class or interface constraint at rd := stream.Read(T, sizeof(T));. Is it possible to pass that generic record as a parameter to TFileStream.Read?

T represents a type, not a variable. You need to pass a variable to Read(). Add an output parameter to your code and read into it, e.g.:

type
  TReaderHelper = class
    class procedure ReadToStruct<T: record>(stream: TFileStream; offset: Int64; out rec: T);
  end;

implementation

class procedure TReaderHelper.ReadToStruct<T>(stream: TFileStream; offset: Int64; out rec: T);
var
  rd: integer;
begin
  stream.Position := offset;
  if stream.Position <> offset then
    raise Exception.Create('Seek error');
  rd := stream.Read(rec, sizeof(T));
  if rd <> sizeof(T) then
    raise Exception.Create('Read ' + IntToStr(rd) + ' instead of ' + IntToStr(sizeof(T)));
end;

Alternatively:

type
  TReaderHelper = class
    class function ReadToStruct<T: record>(stream: TFileStream; offset: Int64): T;
  end;

implementation

class function TReaderHelper.ReadToStruct<T>(stream: TFileStream; offset: Int64): T;
var
  rd: integer;
begin
  stream.Position := offset;
  if stream.Position <> offset then
    raise Exception.Create('Seek error');
  rd := stream.Read(Result, sizeof(T));
  if rd <> sizeof(T) then
    raise Exception.Create('Read ' + IntToStr(rd) + ' instead of ' + IntToStr(sizeof(T)));
end;

You are trying to read directly into T, which is the type. You need to provide a variable of that type into which to read.

type
  TReaderHelper = class
    class procedure ReadToStruct<T: record>(stream: TStream; offset: Int64; out Data: T);
  end;

class procedure TReaderHelper.ReadToStruct<T>(stream: TStream; offset: Int64; out Data: T);
begin
  stream.Position := offset;
  stream.ReadBuffer(Data, sizeof(T));
end;

Rather than requiring a specific stream class like TFileStream, it is more flexible to accept the general stream class. This allows you to use this method with different stream implementations. The seek exception that you raised serves no purpose, because it is possible to seek beyond the end of a file; any errors arise in subsequent read or write actions. The other exception is fine, but it is perhaps simpler to use ReadBuffer and let the stream class raise an exception in case the requested amount of data cannot be read.
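For completeness, a call site for the corrected method might look like this; the file name and offset are placeholders:

var
  fs: TFileStream;
  hdr: TDescriptorBlockHeader;
begin
  fs := TFileStream.Create('backup.bkf', fmOpenRead or fmShareDenyWrite);
  try
    // read the descriptor block assumed to sit at offset 0
    TReaderHelper.ReadToStruct<TDescriptorBlockHeader>(fs, 0, hdr);
    Writeln('BlockType: ', hdr.BlockType);
  finally
    fs.Free;
  end;
end;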
Learner’s Packs would have a broad theme/topic to them, such as “Japanese”, “Chinese”, “JavaScript”, or anything that requires multiple decks to learn. They would indicate which decks, in which order, and at which speed (lessons per day) are recommended by the Kitsun on-boarding team/experienced learners, and they would be part of the on-boarding experience. It would essentially be a more tidy version of the current “Featured Decks” section of the Community Center.

Hmm. So for example, you may have the 10k, prefectures, names, and katakana decks for Japanese, while indicating that the 10k should be the main focus, with the others being taken when one is more advanced? While I agree it would be better to have things other than a broad “featured”, it may be best to divide it up a bit more, into a “Kitsun Path” along with other popular decks. For example, with Japanese, it depends on the user’s goals. To take a couple of examples, the WK level 61-70 deck or the Genki deck wouldn’t have a place in a Kitsun-specific path, but they still have a bunch of utility for people depending on what else they are using for Japanese.

Maybe something as simple as ‘novice, intermediate, advanced’ or a difficulty rating could help a lot too. When I started, I could barely touch Kitsun… just didn’t have the time. Since finishing WK, my time allocation is entirely different now. I know, just looking through, our short-term goals seem to be quite different, but anything to help the on-boarding experience is sure to help the platform. I tend to prefer options to make my own decision on what to do, but that takes some time to determine. However, with something like WK there is no thought other than pace (you just do the levels), and they have success with that simplicity, though it’s very binding. I think displaying the variety of ways the system can be used may be its greatest asset, and now that it’s expanding to other languages, the sky is the limit for Kitsun. So, like you said, just hearing how a variety of users made use of Kitsun, someone might say ‘hey, that’s what I want to do’ given wherever they are in their language journey.

I think having general paths/packs would be a fantastic idea, but relatively difficult in execution if we want it to fit the user. Kitsun users come in with different levels of knowledge and available time, especially when you further divide it by learning aspect. For Japanese, some people come from WK and know a lot of kanji but are still shaky in vocabulary, while for others it might be reversed, or they might be complete beginners. I think the above-mentioned difficulty rating could help already, but it might need to be specific per aspect (vocabulary, grammar, listening, kanji) for languages like Japanese. Of course, there’s always the possibility of having “beginner -> master” kinda packs where we just suggest deck A, B and C and let the user decide if they wish to start with those. At some point we’d like Kitsun to be a platform that can take you from beginner to mastery in a subject, either through community decks or Kitsun official decks, but that’s obviously a very ambitious goal.

We (as a team nowadays) are very aware of the very lacking onboarding experience, so most of our recent efforts have been going into improving that. The Knowledge Base with FAQ articles will probably be going live this weekend. Onboarding tutorials (on-site widgets) have also been written and will be implemented soon if time allows (I really need to get back to the mobile apps development as well…). Next to that, we have a big bunch of starter decks coming in for ~20 languages and will be focusing on creating high quality decks for a select few languages soon. While unrelated to the topic, the known words system that will probably go live this weekend will hopefully also lessen the friction for users with existing knowledge, as they can easily hibernate duplicates or already-known words with that feature.
User Avatar Plugins

Any programming language or framework gains multiple advantages from being open source. Developers can look through the code, make changes and customizations, share the whole codebase as it is, or make slight changes to it. This is where WordPress stands firmly today. Learning here happens in two ways: either developers copy the whole code, or they look into the details, understand them in depth, and then analyse how the code actually works. WordPress development services are booming, and many businesses prefer to have their software built on the WordPress platform. Free software also gives developers the flexibility to decide that, if a plugin is no longer heading in the direction they want, they can fork the code and take its development in a new direction.

Developers also keep sharing their forks, which are accessible by others; that is what an open-source platform is all about. So, gratitude to those developers who invested their time and effort here and made some quick alternatives available. This reunited developers on a single platform where all kinds of queries are addressed. Below are the existing user forks of WordPress User Avatar:

- Orig Individual Avatar by Philip Stracker - available only on GitHub.
- Customized Individual Avatar by David Artiss - currently available on GitHub.
- One Individual Avatar by Daniel Tara - available on WordPress.org, with nine different translations.

The above forks have brought some necessary changes in terms of coding and branding. One Individual Avatar and Customized Individual Avatar also remove all advertisements from the plugins. But there are developers who do not want to go ahead with these forks and simply want the old functionality back; they can rely on the alternatives mentioned below.

1. WP Individual Avatar
WP Individual Avatar is a simple plugin by John James Jacoby. It is a straightforward plugin which gives users a form to manage their account from their own web page, like numerous other tools. It is considered unique because it works in parallel with the other plugins offered by Jacoby; all the plugins work in conjunction with each other, and users can easily select which of them they want to install.

2. Individual User Photo
Individual User Photo is brought by Cozmoslabs and fits perfectly into the mold of the emerging pattern. It also provides a block which lets the author of a page output a user's profile (name, summary, avatar, blog post links, etc.) on the front end of the website. Users cannot upload an avatar image through the plugin without the relevant upload permission, so site admins can provide the gateway by adding an approval plugin.

3. Simple Local Avatars
This plugin introduces a new field to upload a profile image in the User Profile section, and it generates the requested image size on demand. Gravatar is fully supported, and if no local avatar is set for the user, it falls back to the default avatars. It also allows you to switch off Gravatar.

4. WP Social Avatar
As the name suggests, this plugin allows users to use their social media profile image as the avatar on their WordPress blog.
It helps users quickly reset their image, and they don't need to create a separate Gravatar account to post that particular image.

5. Avatar Manager
This is considered one of the more powerful plugins for managing the different avatars on a WordPress blog. It gives the user the choice between a custom local avatar and Gravatar. New users might find Gravatar difficult to navigate, so this plugin makes it easier for all users to create a unique avatar of themselves on the site.

6. SVG Avatars Generator
SVG stands for Scalable Vector Graphics. This is considered one of the premium avatar plugins for WordPress, aimed at users who want to do more with user avatars. The plugin is retina-ready and fully responsive, so users' images will look great on any modern digital device.

7. Pixel Avatars
It is developed by Ben Gillbanks, who maintains a collection of tools for his WordPress work.

8. Authors Avatar List
The Authors Avatar List plugin is one of the best choices for multi-user sites: it displays a list of user avatars according to the role and work assigned to them on your WordPress site. One interesting and unique feature of this plugin is that anybody on the site can post along with their avatar by simply entering an email address, or individual avatars can be inserted for each blog user. The plugin also allows users to add a shortcode through its TinyMCE editor to display the list of user avatars in the sidebar of the WordPress site.

The above points explain how avatars create a semantic relationship between a website and its administrators, audience, and the other people who randomly visit the site. Customizing a website with avatar images gives it a personal touch, helps to connect with the audience or visitors, and gives them a personalized experience. Thus, overall, avatars help generate a suitable audience for a site.

Also read about: 5 Factors That Make a WordPress Website Lazy
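Under the hood, plugins like these typically work by filtering WordPress's avatar lookup. Here is a minimal sketch of that mechanism, assuming a hypothetical 'local_avatar_url' user-meta key (real plugins also handle comment objects and upload permissions):

<?php
// Serve a locally stored image instead of Gravatar whenever one exists.
// 'local_avatar_url' is an illustrative user-meta key, not a real plugin's.
add_filter( 'get_avatar', function ( $avatar, $id_or_email, $size ) {
    $user = is_numeric( $id_or_email )
        ? get_user_by( 'id', $id_or_email )
        : get_user_by( 'email', $id_or_email ); // simple cases only
    if ( $user ) {
        $url = get_user_meta( $user->ID, 'local_avatar_url', true );
        if ( $url ) {
            $avatar = sprintf(
                '<img src="%s" width="%d" height="%d" class="avatar" alt="" />',
                esc_url( $url ), $size, $size
            );
        }
    }
    return $avatar;
}, 10, 3 );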
BadImageFormatException in System.Reactive.Core.dll

I'm puzzled about an exception that I get in a fairly simple project from Rx. I googled the BadImageFormatException and found it's thrown when a platform-specific assembly is attempted to be loaded into an incompatible process. The platform settings of my project are "Any CPU" though, and System.Reactive.Core.dll itself is too, obviously. The top of the stack trace is in Rx, the sources for which read

protected override void OnNextCore(T value)
{
    _onNext(value);
}

_onNext being an Action<...>. My immediate problem is that I don't know how to figure out which assembly actually fails to load - the information isn't in the exception I'm getting, and I don't know where else to get it from either. I don't think it's actually anything about Rx, but what is it? Anyone any ideas?

EDIT 1: Here's the result of fuslogvw on a non-debugger run, set to "show all bindings", together with the stack trace I'm getting from the exception. Setting fuslogvw to show only failed bindings gives me nothing at all.

EDIT 2: I also made sure "prefer 32 bit" is set to off in all assemblies that are from me, especially the main console app.

EDIT 3: Absolutely baffling: I now broke the solution down, removed all dependencies including Rx, and copy-pasted the sources to a new solution with all projects straight out of the wizard - it's still happening. I tried it on two other machines, still happening. What the hell is this!? In my desperation, here are the sources. Perhaps someone smarter than me is curious enough: source code

I suggest you download the Windows SDK and use the fuslogvw tool to debug assembly bindings. https://msdn.microsoft.com/en-us/library/ms717422(v=vs.110).aspx https://msdn.microsoft.com/en-us/library/e74a18c4(v=vs.110).aspx

Added info about a fuslogvw run.

Usually (and I don't mean always), the BadImageFormatException gets thrown when the assembly is either corrupt or the "bitness" (i.e. 32-bit, 64-bit) is different; in other words, the executing assembly is compiled against a different "bitness" than the assembly being loaded. In Visual Studio, even though you selected the Any CPU option, check to see if Prefer 32-bit is set or not. It could be that there is still a mixture of 32-bit and 64-bit assemblies being compiled in your application. See this for more info on the Prefer 32-bit field: What is the purpose of the "Prefer 32-bit" setting in Visual Studio 2012 and how does it actually work? Another way to check your assemblies is to download and install a nice tool called Assembly Information, which will tell you more about a .NET assembly.

These were indeed enabled, but unsetting them didn't change anything. Assembly Information is a cool tool, but curiously it seems to be unable to open assemblies that were compiled with "prefer 32-bit". Still don't have a clue as to what's going on. I made sure I compiled everything from scratch.

Then the question becomes, what other platforms could it have targeted? Is it maybe compiled with a .NET that is not for Windows Desktop but for Mobile or something else perhaps? And the Reactive Core DLL?

Referenced are the files in "net45"; also, Assembly Information gives me the same dependency on mscorlib (including PublicKeyToken) that my own exe gives me.

Do you have colleagues that have the same problem as you with this exception?

It's a private project; this has never happened to me before, nor have I read about something similar. Either that or .NET. Sorry, I have no further advice to give.
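Regarding "I don't know how to figure out which assembly actually fails to load": one trick is to hook the AppDomain events before anything else runs and log the failing name yourself. A rough sketch follows (call LoadDiagnostics.Install() at the top of Main; note that FusionLog is often empty unless fusion logging is enabled):

using System;
using System.Runtime.ExceptionServices;

static class LoadDiagnostics
{
    public static void Install()
    {
        // Fires for every thrown exception before any catch block runs,
        // so the failing file name is visible even if a library swallows it.
        AppDomain.CurrentDomain.FirstChanceException += (sender, e) =>
        {
            if (e.Exception is BadImageFormatException bife)
                Console.Error.WriteLine(
                    "Bad image: " + bife.FileName + Environment.NewLine + bife.FusionLog);
        };

        // Logs the display name of any assembly the loader cannot resolve.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            Console.Error.WriteLine("Failed to resolve: " + args.Name);
            return null; // let the original failure propagate
        };
    }
}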
Hey everyone! Sorry for the delay in posting this. Literally as soon as the Closed Beta ended, my basement started flooding due to the massive amounts of rain we've been getting where I live, so I had to deal with that little mini-disaster first. Luckily all is fine now, so we can get back to what's really important: the beta recap!

We had more than 334 people participate in this first Closed Beta test, which was great. I believe we peaked at around 60 concurrent players right after the test began. Unfortunately, due to the way the reset happened this time, I don't have stats on total time played or who played the most (sorry!) We should have those next time, though.

Overall, I was really pleased with how the test went. We had solid participation from the community, and we had a smattering of players doing everything from enjoying their first visit into the world, to grinding as fast as possible to reach the current max level. There were even some attempts at the new World Boss, which (once the bugs were fixed) pretty much went how I expected. Really this test was just about getting in a lot of nice quality-of-life changes and bug fixes based on lessons learned from the Open Alpha test, and I think we accomplished that nicely. The client is still crashing more often than I'd like, but we have some things we'll be doing before the next test to hopefully address those remaining issues.

For the Next Test

The next test will begin Friday, September 15th at 12 PM (Noon) US Central Time (Countdown Clock) and run until September 19th. We have an incredibly (probably over-) ambitious internal schedule for that test, but here's what I'm willing to commit to publicly right now:

- The much-anticipated combat balance patch will be in. That doesn't mean combat will be totally finalized, but it does mean that we're going to take a pass at bringing things into the alignment we'd like to see. You can also anticipate a few new abilities/features for some classes, as well as new combat mechanics overall.
- There will be a lot of new quests to do. Some of them may have placeholder dialogue at this point, but our goal is to get as many in, mechanics-wise, as we can.
- The new Desert Dungeon will be available for play.
- In-game messaging and Fellowships will be added.
- Potion aging and the first prototype of Artificing will be available.

There may be even more than that depending on how things go, but we'll see. We'll be doing a series of preview posts (hopefully weekly, beginning next week) to preview these features as we're working on them, so stay tuned. Until next time, see you in-game!
- "When I was done, I took a deep breath and thought: cheetah. Quite possibly the most gorgeous wild cat ever to roam the savannah. [...] I was built for speed. Not endurance, maybe. But oh, yes. Definitely speed. Stunning speed. Zero to forty-five miles per hour in two point five seconds. From a point of rest. From sitting perfectly still. Do you understand that kind of acceleration? I mean, can you even really imagine it? And once the cheetah got going – top speed, between sixty and seventy miles per hour – it could cover almost one hundred feet per second." - ―Rachel upon morphing the cheetah The cheetah is the fastest land animal on Earth and is an extremely endangered species. The Gardens obtained three Southern African cheetahs as part of a population conservation effort, which were acquired by Rachel, Cassie, Marco and Ax in The Weakness. Rachel, Cassie, Marco and Ax used cheetah morphs in The Weakness in an attempt to assassinate Visser Three, whose private feeding ground Tobias had discovered. Although their speed allowed them to get a few hits in, their endurance was limited and thus were nearly killed when the visser was reinforced by the Garatron, who opted to spare the Animorphs, citing that it was Visser Three's responsibility to do so. Used by AxEdit While attacking an airport in The Unexpected to prevent the Yeerks from obtaining and destroying a seized Bug fighter, Jake ordered Ax to use his cheetah morph as a battle morph, as there was uninfested humans in the airport. Used by the YeerksEdit Shortly after gaining the Escafil Device, a few human-Controllers acquired cheetahs. One of them fought the Animorphs in The Sacrifice, although he was subdued by Timmy and Collette and killed by National Guard troops. When the Animorphs succeed in destroying the Yeerk pool, several morph-capable Controllers morph cheetahs to flee the blast radius. Used by MarcoEdit - "Marco in cheetah morph and Ax fought most logically. They each understood that a mere wound was enough, as long as live Taxxons were left undamaged to take care of the finishing off. Marco would accelerate to forty mph, dig his less-than-deadly claws into a victim, leave bloody scratches behind, and prance away far too quickly for a Taxxon to respond." Marco elects to use the cheetah morph as his battle morph in The Answer, when launching an attack against the Taxxons constructing the new Yeerk pool on Earth, as Marco chose to use logic and speed rather than the gorilla's brute strength. - The Stranger - The Threat (mentioned) - The Extreme (mentioned) - The Attack (mentioned) - The Weakness (Southern African cheetahs; acquired and morphed by Rachel, Cassie, Marco and Ax) - The Other (mentioned) - The Unexpected (morphed by Ax) - The Sacrifice (morphed by morph-capable Yeerks/human-Controllers) - The Answer (morphed by Marco)
The Fiji burrowing snake is very distinct from the more widely known Pacific Boa, Candoia bibroni.

Also known as: Fiji Burrowing Snake
Local Names: Bolo

Description
The Fiji burrowing snake is very distinct from the more widely known Pacific Boa, Candoia bibroni. It is much smaller, growing up to a maximum snout-vent length of 30 cm. It has a small head which is indistinct from the neck, a short tail, and smooth body scales. The body is a uniform dark brown or mid-brown in colour with lighter sides. The belly is generally pale brown or white, blotched with black or brown. Younger snakes can be distinguished by the yellowish mark on the back of the head (see picture). The eyes are small and dark, and do not have a vertical pupil.

This snake has only been recorded from Viti Levu, from the Wainikoroiluva Valley, the Sigatoka Valley, Naitasiri and the Monasavu area. It may be more widespread throughout the Fiji group, but because of its fossorial and elusive nature, this cannot be confirmed. Within Viti Levu, the burrowing snake has been found within the province of Namosi.

Habitat, Ecology and Behaviour
The Fiji burrowing snake is, as the name suggests, a burrowing snake that lives within loose soil, under leaf litter or beneath termite nests. They have been found as deep as 1 m under rock rubble, in lowland rainforest and inland valleys. More specifically, they are found on valley floors and on low mountain slopes. There have been reports of the snake from plantations, when farmers dig out the soil for planting. They have also been found on the ground surface after heavy rain. Male and female Fiji burrowing snakes attain sexual maturity at between 180 and 200 mm SVL. The Fiji burrowing snake is oviparous, meaning that females produce eggs. The eggs are ellipsoidal in shape. The size of hatchlings is unknown, as is everything else about the reproductive biology of Fiji's only endemic snake. They feed on worms, soft-bodied insects and other soil arthropods. The burrowing snake is presumed to be nocturnal, though this has not been confirmed. Even though it produces neurotoxic venom, it is very placid and, as reported by Ryan (2000), does not seem to mind being handled. These snakes probably only bite when handled too aggressively.

Threats
It is very probable that this small snake has been under great threat ever since the pig was introduced to Fiji and went feral in the forests. The introduced Indian Mongoose (Herpestes auropunctatus) is also a threat when the snake emerges to the surface, and rats may well predate on its eggs. All of these introduced species pose an ongoing threat to the survival of this rare endemic snake. Other threats have materialised more recently: much of the land in its known geographic range is being logged, drilled for mining prospects and converted to agricultural land. The area from which the majority of the burrowing snakes have been collected is within the Wainikoroiluva valley in the province of Namosi, which is within the prospect area for the proposed Namosi copper mine. Exploration surveys and drilling for copper deposits have been ongoing within this area since the late 1960s.

Conservation Status
Other than its listing as a 'Vulnerable' species on the IUCN list of endangered species, there are no other known efforts to conserve Fiji's only endemic snake. To survive, this snake will require a large area of mixed forest from which pigs, mongooses and other introduced predators are permanently excluded.
The impacts of the proposed copper mine surveys and drilling, as well as the potential impact of the mining venture on the native and endemic plants and animals that occur within and outside of the Namosi valley, will also need to be assessed.

Keogh et al. (1998); de Marzan (1987); Parker and Grandison (1977); McDowell (1969, 1970); Zug and Ineich (1993)

Front Page Photo: Paddy Ryan. A juvenile Fiji Burrowing snake, with the distinguishing yellow mark on the back of the head.
Java JPG codec won't work

I have a problem with my Tomcat application. After changing the server and installing the latest version of Tomcat 7, my application won't read/load jpg files. I installed ImageIO and JAI on the server and tried to change the Java version, but every time I get the same error. Anybody have an idea?

Error: One factory fails for the operation "jpeg" Occurs in: javax.media.jai.ThreadSafeOperationRegistry
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at com.sun.media.jai.opimage.StreamRIF.create(StreamRIF.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.media.jai.FactoryCache.invoke(FactoryCache.java:122)
at javax.media.jai.OperationRegistry.invokeFactory(OperationRegistry.java:1674)
at javax.media.jai.ThreadSafeOperationRegistry.invokeFactory(ThreadSafeOperationRegistry.java:473)
at javax.media.jai.registry.RIFRegistry.create(RIFRegistry.java:332)
at javax.media.jai.RenderedOp.createInstance(RenderedOp.java:819)
at javax.media.jai.RenderedOp.createRendering(RenderedOp.java:867)
at javax.media.jai.RenderedOp.getWidth(RenderedOp.java:2179)

The whole error log can be found here -> http://paste.ubuntu.com/7653452/.

Update: The problem is related to a Grails plugin called ImageTools.

Please post the whole error message.

What version of Java? What version of ImageIO?

@brett-okken: I tried OpenJDK6 and OpenJDK7, and Oracle JDK6 and Oracle JDK7. ImageIO → libgtlimageio0.8

@max_meijer: The whole log → http://paste.ubuntu.com/7653452/

Caused by: java.lang.NoClassDefFoundError: com/sun/image/codec/jpeg/ImageFormatException - you have a classloading problem. Some jar is missing or needs to be in tomcat/lib, probably.

@karl-kildén I already added the sun-jai_codec.jar to my project lib dir and to the Java lib dir :/ but I have the same error.. :/

Last time I used Tomcat, the libs should be in shared/lib; alternatively, you could place them in your JRE's lib/ext folder.

@haraldK I added them to /usr/lib/jvm/java-6-openjdk-common/jre/lib/ext/ and created links ln -s /usr/lib/jvm/java-6-openjdk-common/jre/lib/ext/*jai*.jar /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/ext/ restarted tomcat, tested jpeg import.. same error :(

Have you verified that the JAR is ok, and contains the classes in question?

Using jarsigner -verify file.jar it says "jar is unsigned. (signatures missing or not parsable)" for all jars :/

Still have the same problem.. I installed Grails on the server but I have the same problem; the problem comes from the ImageTools plugin. Anyone have an alternative to the ImageTools plugin?! The developer doesn't offer support for it any more, and I am getting problems with it since I moved to a new server and new Java version..
If you look at the code for JPEGImageDecoder you'll see it depends on com.sun.image.codec.jpeg.ImageFormatException in its imports. However, com.sun.image.codec.jpeg was removed from Java 7 onwards. So likely the problem is that JAI is simply out of date, and you would have to use a Java 6 runtime to use it.
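For anyone hitting the same wall: since com.sun.image.codec.jpeg is gone from Java 7, the usual fix is to decode and encode through the standard javax.imageio API instead. A minimal sketch (the file names are placeholders):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class JpegViaImageIO {
    public static void main(String[] args) throws IOException {
        // Decode a JPEG without the removed com.sun.image.codec.jpeg classes.
        BufferedImage img = ImageIO.read(new File("input.jpg"));

        // ... process the image here ...

        // Encode it back out as JPEG.
        ImageIO.write(img, "jpg", new File("output.jpg"));
    }
}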
[PATCH 0/6] Re: Readd idle again nicolas.s-dev at laposte.net Thu May 19 20:35:04 BST 2011 On Thu, May 19, 2011 at 03:02:25PM -0400, Ethan Glasser-Camp wrote: > Hi guys, > Please find the followign six patches to readd IDLE, which are the > same as yesterday but with minor changes and clarifications to respond > to Nicholas's criticism. I guess if I were really slick I'd only send > updated patches, but I'm not that slick :) Anyone want to provide > - remove unneeded time import (patch 3) > - warn if IDLE is not supported by server (patch 4) > - rename variables so as to clarify imaplib2's "error" callback (patch 6) > - put the comment warning that we can't do a NOOP on a bad connection > before the if checking for a bad connection for clarity (patch 6) > - document shortcomings of IDLE and mark it as EXPERIMENTAL > in offlineimap.conf (patch 1) I'm missing the interdiff. I created it this time: diff --git a/docs/MANUAL.rst b/docs/MANUAL.rst index 62b0d3b..26f31a6 100644 @@ -288,6 +288,17 @@ KNOWN BUGS last stable version and send us a report to the mailing list including the +* IDLE support is incomplete and experimental. Bugs may be encountered. + * No hook exists for "run after an IDLE response". Email will + show up, but may not be processed until the next refresh cycle. + * nametrans may not be supported correctly. + * IMAP IDLE <-> IMAP IDLE doesn't work yet. + * IDLE may only work "once" per refresh. If you encounter this bug, + please send a report to the list! diff --git a/offlineimap.conf b/offlineimap.conf index 39cded6..9329c66 100644 @@ -372,6 +372,10 @@ remoteuser = username # maxconnections - to be at least the number of folders you give # holdconnectionopen - to be true # keepalive - to be 29 minutes unless you specify otherwise +# This feature isn't complete and may well have problems. BE AWARE THIS +# IS EXPERIMENTAL STUFF. See the manual for more details. # This option should return a Python list. For example # idlefolders = ['INBOX', 'INBOX.Alerts'] diff --git a/offlineimap/imapserver.py b/offlineimap/imapserver.py index b24f355..37d80f0 100644 @@ -21,7 +21,6 @@ from offlineimap.ui import getglobalui from threading import Lock, BoundedSemaphore, Thread, Event, currentThread from thread import get_ident # python < 2.6 support @@ -408,7 +407,8 @@ class IdleThread(object): self.needsync = False self.imapaborted = False - if args is None: + result, cb_arg, exc_data = args + if exc_data is None: if not self.event.isSet(): self.needsync = True @@ -422,11 +422,14 @@ class IdleThread(object): if "IDLE" in imapobj.capabilities: + self.ui = getglobalui() + ui.warn("IMAP IDLE not supported on connection to %s; falling back to no-op" + # Can't NOOP on a bad connection. if not self.imapaborted: - # Can't NOOP on a bad connection. # We don't do event.clear() so that we'll fall out # of the loop next time around. > Once again, I hope you can apply this :) More information about the OfflineIMAP-project
The tool is under development now. You can take part in urpm-reposync testing.

usage: urpm-reposync.py [-h] [--include-media INCLUDE_MEDIA [INCLUDE_MEDIA ...]]
                        [--exclude-media EXCLUDE_MEDIA [EXCLUDE_MEDIA ...]]
                        [-v] [-q] [-a] [-p] [-d] [-r] [-c] [-k]
                        [--runselftests] [--detailed]

reposync is used to synchronize a set of packages on the local computer with the remote repository.

optional arguments:
-h, --help            show this help message and exit
--include-media INCLUDE_MEDIA [INCLUDE_MEDIA ...], --media INCLUDE_MEDIA [INCLUDE_MEDIA ...]
                      Use only selected URPM media
--exclude-media EXCLUDE_MEDIA [EXCLUDE_MEDIA ...]
                      Do not use selected URPM media
-v, --verbose         Verbose (print additional info)
-q, --quiet           Quiet operation. Senseless without --auto.
-a, --auto            Do not ask questions, just do it!
-p, --printonly       Only print the list of actions to be done and do nothing more!
-d, --download        Only download the rpm files, but install or remove nothing.
-r, --remove          Remove all the packages which are not present in the repository. By default, only some of them would be removed.
-c, --check           Download packages and check whether they can be installed to your system, but do not install them.
-k, --nokernel        Do nothing with kernels.
--runselftests        Run self-tests and exit.
--detailed            Show detailed information about the packages that are going to be removed or installed (why it has to be done)

Help Reposync Testing

You need a virtual machine with some version of Mandriva or ROSA on board. Do not forget to create a snapshot. You can download the attachment and do the following:

1) Set up the correct mirrors (you can try other repositories; it would be a great experience too): sudo urpmi.addmedia --distrib http://mirror.yandex.ru/rosa/rosa2012lts/repository/i586 And in case of a 64-bit system: sudo urpmi.addmedia --distrib http://mirror.yandex.ru/rosa/rosa2012lts/repository/x86_64 (a 64-bit system has some packages installed from i586, so you need the i586 repository too)
2) Install the package python-rpm (sudo urpmi python-rpm)
3) Now configure your media sources and enable all the media (maybe except contrib).
4) Download reposync from the link below. sudo python urpm-reposync.py -av | tee /tmp/log_something-to-something If any errors occur, send me the report (/tmp/log_something-to-something). Please report every problem and every successful run. If you have feature requests, you are welcome to send them to me directly (email@example.com)

- Trying not to remove packages, even if they are not in the repository. But if a package prevents some other package from being upgraded/downgraded, it will be removed. The option "--remove" (or "-r") will remove all the packages which are not present in the repository.
- The option "--detailed" will show some useful information: why a package should be removed (it requires something that will not be installed, it conflicts with something, or something conflicts with it) or why a package should be installed (what package requires it). For example, there can be a line like "U poppler 0.16.7-1(mdv2011.0) 0.16.7-2(rosa.lts2012.0) x86_64"
- The option "--printonly" (or "-p") will print a one-line-per-package list of actions to be done and exit. Every line starts with the letter of the action (Upgrade, Downgrade, Remove, Uppend, Save). It's useful for machine processing or to grep this list.
- The option "--download" (or "-d") will calculate all the actions to be done and download the packages, then exit.
- The option "--check" (or "-c") will download packages, generate the transaction and check it with rpm, then exit.
- The option "--nokernel" (or "-k") will prevent reposync from dealing with kernels.
- The option "--runselftests" will run the tests. There is not so much to test, so there is only a small number of tests.
- Support for providing dependencies with versions using ">" and "<". Some packages provide something like "depname[< 1.0]"
- Warning about missing media. If there are libraries in your system which are not in the repository, but the other architecture is present, reposync will warn you about it.
- A special mode to repair the system packages (try to fix missing dependencies, remove the older version of packages with multiple versions installed, and so on)
- There is a total amount of data to be downloaded in the summary, but it is incorrect (about 2-3 times greater than the real value). The size comes from the synthesis file (@filesize@ field), but it's incorrect there.

The newest version can be found here
Solving this linear system, based on the combustion of methane, which has no constants

I recently discovered that I could solve a chemical reaction using a linear system. So I thought I would try something simple like the combustion of methane, where x, y, z and w are the moles of each molecule:

x $CH_4$ + y $O_2$ = z $H_2O$ + w $CO_2$

The linear system for this would be:

x = w
2y = z + 2w
4x = 2z

I got as far as y = z and y = 2w, but without any constants, I am stumped. Can anyone help me? I was assuming that elimination and substitution would suffice, but I must be wrong.

You have three equations with four unknowns, which should lead you to expect a single undetermined parameter. In this case you can multiply all your variables by any constant and still have a fine set of equations. The easiest cure is to set one of them to $1$. Maybe you choose $x=1$. Then you should be able to solve the rest easily. This would represent burning $1$ mole of methane. If you burned $2$ moles, you would have twice as much of each other reactant.

Thanks, that makes it a lot simpler to work out.

I think the reason for this is that there is an infinite number of values for each variable. I know you want the answer in lowest terms, but the terms can always be scaled up. Assign any value to any variable, and solve from there. From the four values you get, if there are no fractions, reduce them to lowest terms. If there is a fraction, find a suitable integer to multiply each value by to make them all integers. For instance, make X = 1. So W = 1, Z = 2, and Y = 2. Suppose you instead set Z = 1. Then X = 1/2, W = 1/2, and Y = 1. To get rid of the 1/2's, multiply everything by 2.

At some point you have to introduce some kind of parameter (for instance letting $x=t$ for $t\in\mathbb{N}$) or just fix one variable (e.g. $x=1$). This is required because you have $4$ variables but only $3$ independent equations. This fact can also intuitively be made clear because you can choose to burn $2$ methane molecules or $90$ or one million. If the combustion equation is fulfilled for some $x,y,z,w$, then it will also hold for integer multiples of all values.

Each of your equations can be rearranged to give $$x-w=0$$ $$2y-z-2w=0$$ $$4x-2z=0$$ Your idea of using Gaussian elimination is then a very good one. Or you can solve directly. From the first equation, you get $$x=w.$$ From the third equation, you get $$z=2x=2w.$$ Then from the second equation you get $$y=\frac{z}{2}+w=2w.$$ In each subsequent equation, we have used previous information, and any value for $w$ will suffice ($w$ is called a free variable). Note that this makes sense, as scaling $w$ simply scales how much of everything is burned, without changing how the equation is balanced.
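If you want to check the balancing mechanically, the same "three equations, four unknowns" structure can be handed to a computer algebra system. A small sketch (requires sympy):

from sympy import Matrix, lcm

# Columns are (x, y, z, w); rows balance C, H and O respectively.
A = Matrix([
    [1, 0,  0, -1],   # C:  x - w = 0
    [4, 0, -2,  0],   # H:  4x - 2z = 0
    [0, 2, -1, -2],   # O:  2y - z - 2w = 0
])

v = A.nullspace()[0]                 # one free direction: 3 equations, 4 unknowns
coeffs = v * lcm([t.q for t in v])   # scale away any fractions
print(coeffs.T)                      # Matrix([[1, 2, 2, 1]]): CH4 + 2 O2 = 2 H2O + CO2

The one-dimensional nullspace is exactly the "free variable" discussed above: every valid balancing is a multiple of (1, 2, 2, 1).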
To cut a long story short, I'm in the process of making pre-configured files which I will be embedding into my own compiled router firmware. The biggest task is setting up a VPN server so that outside traffic will be able to join my LAN, access LAN resources such as an SMB server, and ensure an encrypted connection when using open, unsecured APs; I would therefore like to use a secure VPN tunnel to encrypt the data. I've been trying to set up OpenVPN following this guide: https://openwrt.org/docs/guide-user/services/vpn/openvpn/basic

So far I have set up a VPN interface that is set to 'tun0'. The guide says to use firewall rules, but I'd rather use a firewall zone, as it seems a better way of keeping track of which interface is connected to which firewall zone whilst using LuCI. The problem I find with the current guide is that the big blocks of commands are hard to follow; I often lose track of what command(s) I last typed in, as most of it is automated. I'd rather configure everything manually via the nano editor, or by accessing the configuration files via SFTP/SCP, modifying them and re-uploading them, so I know how everything works.

In terms of generating the CA, private and public keys, I'm leaning towards using my desktop PC for that, as it's a lot more powerful. I'm already using OpenSSL to make my own certificate authority, which I was going to use to sign the server and client certificates. All I'm thinking of doing is generating the certificates on my Linux PC using the Easy-RSA package and then tweaking the '/etc/config/openvpn' config, setting the paths to where I have uploaded the certificates onto the router. So long as I set the hostnames, common names and SANs (subject alternative names), I should be fine? I assume, if I have set up a DDNS hostname, do I set this in the SANs? Is there anything else that needs doing? Is anyone able to supply a fully configured OpenVPN config file (excluding sensitive information)?

My last question is how to set up the IP addresses for the VPN interface. In my case I would like the VPN clients to be able to access my home network as though they're on the LAN. How do I achieve this? For example, if I set the VPN interface's network ID to 10.0.0.1/24 in the OpenVPN configuration, do the connected clients get a host IP within the 10.0.0.0/24 range? If so, how do I get the outside connecting client to communicate with existing LAN clients, or is it a simple case of knowing the client IP address/hostname to connect to its resources? Any help appreciated
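Not a definitive answer, but a minimal OpenVPN server config along the lines requested might look like this; every path, port and subnet below is a placeholder to adapt, and the LAN is assumed to be 192.168.1.0/24:

# /etc/openvpn/server.conf - illustrative sketch only
port 1194
proto udp
dev tun0
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key  /etc/openvpn/server.key
dh   /etc/openvpn/dh.pem
server 10.0.0.0 255.255.255.0           # clients are handed 10.0.0.x addresses
push "route 192.168.1.0 255.255.255.0"  # let clients reach the LAN
keepalive 10 120
persist-key
persist-tun
verb 3

On the addressing question: with the 'server' directive above, connected clients do get host IPs within 10.0.0.0/24. The pushed route lets them reach the LAN, and because the router running OpenVPN is also the LAN's default gateway, return traffic finds its way back automatically; you just need to allow forwarding between the VPN interface's firewall zone and the lan zone.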
Western's Melissa Rice using Microsoft's new HoloLens headset for her research on Mars

Curiosity, the Mars rover that Western's Assistant Professor of Geology Melissa Rice helps to operate, sends her a 360-degree image of the planet's desolate landscape every morning. As if seeing brand-new images from another planet isn't amazing enough, the new Microsoft HoloLens headset that Rice recently received uses these photos to produce an augmented reality simulation of the Mars landscape. This allows Rice to explore, virtually, the Mars landscape around the rover.

"I can see everything on Mars to scale all around me in all directions. When I only use my computer screens, I do not get an intuitive sense for how big something is that I see in the images. I never lose that sense of scale with the HoloLens," Rice said.

The Curiosity rover supplies Rice with both images as well as data from its on-board labs. "Using that image, we then decide where the rover is going to drive next. We look at the rocks closest to the rover and decide which ones we want the rover to reach out and touch and take a microscopic image of, or take a chemical measurement of," Rice said. "At the end of our day on Earth, we send those commands to the rover, which then executes those commands when it is morning on Mars. This process repeats itself every day."

The HoloLens fits around the user's head and can be tightened by a knob in the back. "The HoloLens uses augmented reality, which is different than virtual reality in that it does not block off any of your normal sensory input," Rice said. "Instead of putting you on Mars, what it does is overlay the Mars landscape on top of your normal reality," Rice said. One example of augmented reality is using a smartphone to play the incredibly popular Pokémon Go! game, where "monsters" can be found near the user by looking through the phone's camera.

Microsoft sent the Jet Propulsion Laboratory 30 HoloLens headsets for team members on the rover mission. Rice said the goal is for the 400 other scientists to eventually use the HoloLens system for the rover mission. "We want to all see the same thing and be able to talk to each other about what we see," Rice said. "The intent eventually is for all 400 of us to communicate through the HoloLens."

Right now, Rice conference-calls her team on a speakerphone while using the HoloLens to describe to them what she is seeing. She can talk to them and see what other people are pointing at in the augmented reality world, as well as hear them on the phone line. The HoloLens also allows the team members to create markers that show up as virtual blue flags on the Martian landscape that can be seen by other team members, so they can show each other different rocks, features or places on the surface.

Rice first tested out the HoloLens in spring of 2014 while she was still working at the JPL. "Microsoft flew me to Redmond to test an early version of the HoloLens," Rice said. "It was closer to the size of a computer, not wireless, and they controlled the augmented reality through a desktop computer." Rice said she is impressed that Microsoft redesigned the HoloLens to be small and work entirely over wifi; all of its software and data are streamed from the cloud.

So while most people wake up in the morning and sip on a cup of hot coffee, Rice is scouring the Mars landscape, talking with the rover team and plotting where Curiosity will explore next - all through the technology of augmented reality.
Microsoft Dynamics GP reporting is an essential part of GP implementation and post-production ERP support. As with a typical mid-size ERP application, you should expect multiple reporting tools, especially if you think about report genres: financial reporting, business-process-specific reports, etc. Also, since Microsoft incorporated MS SQL Server Reporting Services (SRS) and more or less recommended it over the former tool of choice, Crystal Reports, you should do your homework on selecting the reporting tool which will do the proposed and expected job. In this article we would like to review the major reporting tools for Microsoft Great Plains and give you an orientation session:

1. Great Plains ReportWriter or RW. This tool is incorporated into the GP Dexterity environment, meaning that it works from the GP user interface and within the Great Plains security realm. RW is the tool you should look at first when you are customizing something which you call from the GP interface: the Sales Order Processing invoice form, for example. Do not expect too much from Report Writer, as it has numerous restrictions: it can work from existing GP screens only, where parameter sets are predefined by existing standard GP logic. If you would like to create custom reports in RW and call them from the GP interface with a custom parameter selection form, you will have to create a GP Dexterity customization and call your report from there.

2. Crystal Reports and SRS. These two report design tools are competing on the market and are similar in some features. When you base your GP reporting logic in SQL stored procedures, the final report printing can be done with the same success from CR and SRS, so it is probably a question of which tool is the most comfortable for you, given your skills and experience. The advantage of SRS and CR is that you can create truly cross-module reports. You should also consider the choice of a stored procedure over a SQL view: a stored procedure allows you to submit parameters and create temporary tables in your SQL script code, neither of which you can do in SQL views. However, if you are looking at the creation of pure financial reports - Profit & Loss Statement, Balance Sheet and Cash Flow - you need to keep reading.

3. FRx. FRx Report Designer's advantage over Crystal and SRS is its connection to the GP General Ledger. In FRx you can also cross company boundaries and create a consolidated report. FRx allows you to create so-called reporting trees, where you can create reports by specific GP segments across multiple companies. Plus, FRx gives you a link to an Excel worksheet, useful when you have something like small-branch accounting data which you would like to see consolidated with the corporate income statement.

4. Microsoft Excel reporting options. Imagine that you recognize the fact that most of your people in the Accounting department trust only MS Excel and do not follow corporate procedures to use more complex tools, such as MS Outlook or even Microsoft CRM. In GP you can easily export data to Excel from Smart List, or from the more advanced tool, Smart List Builder, where you can create views based on your custom SQL expressions.

5. SQL Select statement. If your reporting, data mining and discovery needs change every day, and the creation of permanent Crystal Reports is not feasible, and if you have a decent SQL programmer and developer on staff, you should consider ad-hoc SQL scripting with export to Excel to produce the required reports.
You should be aware that if your managers require you to build data warehouses on a daily basis, then you should implement SQL Server Analysis Services with Excel pivot tables, or Cognos.

6. Microsoft Access reporting option. If your IT people's expertise covers MS Access, consider using an ODBC connection to the GP database and building Access reports.

7. MS Visual Studio.Net custom programming reports. If you have strong C# and VB developers, you should consider training them in eConnect; then they should be capable of building web reporting applications. In the case that you would like to extend the eConnect logic with direct access to GP tables, please review the GP table structure: Microsoft Dynamics GP->Tools->Resource Description->Tables.

8. Archaic DB platforms. If you are still on GP 7.5 or earlier, you may be using Pervasive SQL Server 2000 or Ctree/Faircom. If this is your case, you have to use one of the following tools: MS Access, Crystal Reports (where you are restricted by ODBC driver limitations when crossing GP modules), or a GP Dexterity customization. Also, if you are on an old version, you should consider an upgrade if your business is active - the current version of GP is 10.0, and you would have to use the GP migration tool to move to the MS SQL Server version.

9. Cross-Platform Reporting. Here we recommend first reviewing the MS SQL Server linked server concept; you can also consider similar tools from Oracle EDI.

10. Generic ERP reporting. If you are on PeopleSoft, Oracle E-Business Suite, Peachtree or MYOB, then reporting tool selection requires similar homework.
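To illustrate the stored-procedure-versus-view point from item 2 above, here is a minimal T-SQL sketch; the table and column names are invented for illustration and are not actual GP schema:

-- A view cannot take parameters or stage rows; a stored procedure can do both.
CREATE PROCEDURE dbo.rpt_SalesByCustomer
    @FromDate datetime,
    @ToDate   datetime
AS
BEGIN
    -- stage the aggregation in a temporary table
    SELECT CustomerID, SUM(DocAmount) AS Total
    INTO   #totals
    FROM   dbo.SalesHistory
    WHERE  DocDate BETWEEN @FromDate AND @ToDate
    GROUP BY CustomerID;

    -- final result set consumed by Crystal Reports or SRS
    SELECT * FROM #totals ORDER BY Total DESC;
END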