New Features in SQL Server 2012 for Database Administrators

Feb 12, 2012

At SQLPASS 2011, Microsoft announced the launch of SQL Server 2012, previously known to the SQL Server community by its code name "Denali". Microsoft is expected to release SQL Server 2012 in the first half of 2012. In the meantime, SQL Server enthusiasts can download SQL Server 2012 Release Candidate 0 (RC0) to explore the new features in the product.

Quick Overview of New Features in Microsoft SQL Server 2012 for Database Administrators

Microsoft SQL Server 2012 introduces many new features for Business Intelligence developers, T-SQL developers and Database Administrators. This article gives you an overview of some of the new features in SQL Server 2012 for Database Administrators.

Contained Database in SQL Server 2012

Contained Databases are a new feature in SQL Server 2012. A contained database stores all its metadata within the database itself, rather than storing configuration information in the master database of the SQL Server instance where it was created. A contained database is therefore isolated from the other databases available on the instance. For more information read Contained Databases SQL Server 2012.

SQL Server AlwaysOn High Availability Feature

SQL Server 2012 introduces a new high availability option named SQL Server AlwaysOn. This feature is an enhancement of the Database Mirroring feature introduced back in SQL Server 2005 SP1. SQL Server AlwaysOn currently supports up to four secondary replicas of a database; the replicas can be queried and backed up, allowing a better return on hardware investments.
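As a sketch of the AlwaysOn feature described above (the server, database and endpoint names are hypothetical, and the prerequisites -- Windows failover clustering, the enabled AlwaysOn feature, and mirroring endpoints -- are assumed to already be in place), an availability group with one synchronous and one asynchronous replica might be created like this:

```sql
-- Hypothetical names: SalesAG, SalesDB, SQLNODE1/SQLNODE2.
-- Assumes failover clustering and the AlwaysOn feature are already enabled.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    'SQLNODE1' WITH (
        ENDPOINT_URL = 'TCP://SQLNODE1.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    'SQLNODE2' WITH (
        ENDPOINT_URL = 'TCP://SQLNODE2.example.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL);
```

The synchronous replica supports automatic failover, while the asynchronous replica trades a small window of potential data loss for lower commit latency.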
However, in order to configure the SQL Server AlwaysOn feature you need to enable the Windows Failover Clustering feature on all the nodes participating in the high availability environment, which means you need to install Windows Server 2008 Enterprise Edition or later. Using the AlwaysOn feature you can also build multi-subnet failover clusters, i.e. failover cluster nodes can connect to different subnets, either in the same location or geographically dispersed, thereby improving your high availability environment.

Indirect Checkpoints Feature in SQL Server 2012

Indirect checkpoints are an interesting feature in SQL Server 2012. Using this feature a database administrator can change the "target recovery time in seconds" parameter for a particular user database from its default value of zero. By default, the "recovery interval (min)" value is set to zero at the SQL Server instance level; when the target recovery time is zero, automatic checkpoints occur approximately once a minute for all active databases, and recovery of a database will typically take less than a minute. If you change the target recovery time for a particular user database from its default of zero, however, recovery of that database in the event of a system crash becomes more predictable than with automatic checkpoints and, according to Microsoft, indirect checkpoints provide potentially faster recovery. For more information read Indirect Checkpoints Feature in SQL Server 2012.

Partially Contained Database

A Partially Contained Database is an interesting concept introduced in Microsoft SQL Server 2012. A contained database incorporates all its database settings and metadata without any configuration dependencies on the instance of SQL Server where the database was initially created.
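A minimal sketch of the contained database feature described here (the database and user names are made up for illustration):

```sql
-- Enable contained database authentication at the instance level.
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO

-- Create a partially contained database and a user whose
-- credentials live inside the database itself, not in master.
CREATE DATABASE DemoContainedDB CONTAINMENT = PARTIAL;
GO
USE DemoContainedDB;
GO
CREATE USER AppUser WITH PASSWORD = 'StrongP@ssw0rd!';
```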
As a result, a user can connect to the contained database without being authenticated at the instance level, thereby isolating the database from the database engine. This helps a database administrator easily move a contained database from one instance of SQL Server to another without worrying about orphaned user issues. For more information read How to Configure and Use Contained Databases in SQL Server 2012.

User Defined Server Level Roles

At last Microsoft has answered a long-standing request from SQL Server customers by allowing database administrators to create user-defined server-level roles in SQL Server 2012. Beginning with SQL Server 2012 a database administrator can create a user-defined server role and grant server-level permissions to it. This feature will help many organizations delegate non-critical work to junior database administrators.

Support for 15,000 Partitions in SQL Server 2012

Microsoft SQL Server 2012 supports up to 15,000 partitions per table by default. In earlier versions of SQL Server this number was limited to 1,000 partitions by default, although SQL Server 2008 SP2 and SQL Server 2008 R2 SP1 also support 15,000 partitions. For more information read Partitioned Tables and Indexes.

Columnstore Index Feature in SQL Server 2012

Improving data warehouse query performance has become crucial for many organizations. To address this demand Microsoft SQL Server 2012 introduces a new in-memory columnstore index built directly into the SQL Server database engine. This feature can improve the performance of queries run on large data sets, in some cases by 10 to 100 times, and is particularly suited to queries against star schemas that retrieve data from very large fact tables.
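A minimal sketch of creating such an index (the fact table and column names are hypothetical):

```sql
-- SQL Server 2012 supports nonclustered columnstore indexes.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_Columnstore
ON dbo.FactSales (SaleDateKey, StoreKey, ProductKey, Quantity, SalesAmount);
```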
Once you create a columnstore index on a table, remember that the table becomes read-only: INSERT, UPDATE, DELETE and MERGE operations are prohibited.

Online Index Create, Rebuild, and Drop in SQL Server 2012 for VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) Columns

Microsoft SQL Server has come a long way since its first release and is widely used by organizations across the world to run mission-critical workloads. With SQL Server 2012 you can create, rebuild and drop indexes that contain VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) columns as online operations, so the business doesn't experience downtime during the routine maintenance window and SQL Server remains available for user activity while maintenance is going on.

Achieve Maximum Stability, High Availability and Scalability with Windows Server 2008 R2

SQL Server 2012 achieves maximum stability when run on Windows Server 2008 R2, which supports up to 256 logical processors and up to 2 terabytes of memory on a single instance. Running SQL Server 2012 on Windows Server 2008 R2 is recommended, as it supports large workloads, dynamic scalability, high availability and stability.

IntelliSense Enhancements in SQL Server 2012 Management Studio

In SQL Server 2012, the IntelliSense feature suggests strings matched on partial words, whereas in previous versions suggestions were typically based only on the first few characters typed by the user.

In this article you have seen some of the new features introduced in Microsoft SQL Server 2012 for Database Administrators.
#!/usr/bin/python3
# GASPE: Genetic Algorithm String Production Engine.
# Evolves a random pool of strings towards a user-supplied target string.
import random
import string
import sys
from time import sleep


def delay_print(s):
    # Print one character at a time for a typewriter effect.
    for c in s:
        sys.stdout.write(c)
        sys.stdout.flush()
        sleep(0.25)


delay_print("--== A7MD0V PRESENTS ==--\n")
delay_print("GASPE: Genetic Algorithm String Production Engine...\n")
print("Hello user...")
sleep(0.5)
print("Please wait, loading program...")
sleep(2)
print("Welcome to GASPE ...")
sleep(1)
print("From the geneset of all the characters, we will try to produce your string...")
sleep(1)
target = input("Please enter a string...\n")
# target = "Hello, World! Hey Ahmad :)"


def calc_fitness(source, target):
    # Sum of squared character-code distances; zero means an exact match.
    fitval = 0
    for i in range(len(source)):
        fitval += (ord(target[i]) - ord(source[i])) ** 2
    return fitval


def mutate(parent1, parent2):
    child_dna = parent1['dna'][:]

    # Crossover: splice a random slice of parent2's DNA into the child.
    start = random.randint(0, len(parent2['dna']) - 1)
    stop = random.randint(0, len(parent2['dna']) - 1)
    if start > stop:
        stop, start = start, stop
    child_dna[start:stop] = parent2['dna'][start:stop]

    # Point mutation: shift one character by at most one code point.
    charpos = random.randint(0, len(child_dna) - 1)
    child_dna[charpos] = chr(ord(child_dna[charpos]) + random.randint(-1, 1))

    return {'dna': child_dna, 'fitness': calc_fitness(child_dna, target)}


def random_parent(genepool):
    # Multiplying two random() values biases selection towards the
    # fitter candidates at the front of the sorted genepool.
    wRndNr = int(random.random() * random.random() * (GENSIZE - 1))
    return genepool[wRndNr]


def dump_genepool(generation, genepool):
    for candidate in genepool:
        print("%6i %6i %15s" % (
            generation,
            candidate['fitness'],
            ''.join(candidate['dna'])
        ))
        sleep(0.000001)


GENSIZE = 20
genepool = []
for i in range(GENSIZE):
    dna = [random.choice(string.printable[:-5]) for j in range(len(target))]
    genepool.append({'dna': dna, 'fitness': calc_fitness(dna, target)})

generation = 0
while True:
    generation += 1
    genepool.sort(key=lambda candidate: candidate['fitness'])
    dump_genepool(generation, genepool)
    if genepool[0]['fitness'] == 0:  # Target reached
        break
    parent1 = random_parent(genepool)
    parent2 = random_parent(genepool)
    child = mutate(parent1, parent2)
    if child['fitness'] < genepool[-1]['fitness']:
        genepool[-1] = child  # Replace the current worst candidate
Laravel blade {{ ... }} vs <?= ?>

I am moving from cakephp to laravel and I would like to understand what the advantage of {{ }} is over the short php tags <?= ?>. I would guess the php tags are faster, not needing any processing by the template framework. I understood that {{ }} will do some escaping, but when that is not necessary, why use {{ }} and not <?= ?>? Also, what is the advantage of @foreach and @if ... vs <?php foreach(): ?> ... <?php endforeach ?> and <?php if(): ?> ... <?php endif; ?>?

These templating engines were a great idea while they stuck to the SOLID principle of separating concerns. They used to say 'keep all business logic out of your views, only loops and echoes should be in a view'. Now they have custom syntax for logic, meaning it's not actually any better than just a plain php script; in fact, now you need to learn a second syntax! I used to use twig in some projects, but now I like Plates PHP, a template engine that just uses standard php. Just my opinion however, not everyone will agree with me about this.

The truth of the matter is markup files need to be as clean as possible because you may hand them over to a web designer or any other non-programming person to style them for you. As such, having <?php ?> (or worse, <?= htmlentities($value) ?>) all over the place can be unnecessary noise for them.

From the Laravel docs: Blade {{ }} statements are automatically sent through PHP's htmlspecialchars function to prevent XSS attacks. If you do not want your data to be escaped, you may use the following syntax: Hello, {!! $name !!}. Be very careful when echoing content that is supplied by users of your application. Always use the escaped, double curly brace syntax to prevent XSS attacks when displaying user supplied data.

Additionally, if you are using the blade template engine, it's the convention to use {{ }}. https://laravel.com/docs/5.7/blade#displaying-data

{{ ... }} works only with blade files while <?= ... ?> works with every php file.
This is the only difference I know.

The advantage of {{ }} is that it looks pretty easy to understand and also escapes special characters in the string. Using @foreach we can minimize code and also make it simpler to understand.

Thank you all for your answers, but I still did not get the answer I was looking for - meaning a clarification on what path I should go with Laravel (as I come from years of cakephp and things were simpler there in terms of templating). I understood and mentioned that {{ }} is escaping strings for XSS, clear until now... but my problems are as follows:

I am migrating some templates from cakephp and <?= ?> is already there... now I have to move everything to {{ }} and @foreach and @if.

Aren't these {{ }} and @foreach tags adding overhead and extending the execution time? I mean, the template engine has to convert all these tags to php tags in the end, right?

I did not find a GOOD editor (or IDE) to nicely display these tags, to show me the beginning and end of repetitive or decision structures, to color and highlight the syntax inside the tags... so it's kind of annoying... at least using <?php ?> I see VERY clearly in all editors if I closed the brackets or arrays, or if I am missing a ' or "... etc. Any suggestion of a good editor that recognizes the blade syntax?

I understand the benefit of {{ }} (escaping xss), but what is the benefit of @foreach vs <?php foreach(): ?> or @if vs <?php if(): ?>?

As I don't know laravel yet (and all its power), is there a hidden feature that I am not aware of that will make it hard to work (migrate, extrapolate, color, add smell :))) joking)... in the future if I use php tags instead of blade tags in my files? For example, if I use blade tags will it be easier in the future to migrate all my files to a different templating engine, and will it not work if I use php tags?

Thank you again for sharing your thoughts
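As an illustrative sketch (the variable names are made up), the escaping and loop directives discussed in this thread compare to plain PHP like this. On the overhead concern: Blade compiles these directives down to plain PHP and caches the compiled file, so the cost is a one-time compilation step rather than a per-request penalty:

```php
{{-- Blade: escaped output, compiled to a htmlspecialchars() call --}}
<p>{{ $userComment }}</p>

{{-- Blade: raw, unescaped output; only for trusted data --}}
<p>{!! $trustedHtml !!}</p>

@foreach ($users as $user)
    <li>{{ $user->name }}</li>
@endforeach

<?php // Plain PHP equivalent of the escaped form: ?>
<p><?= htmlspecialchars($userComment, ENT_QUOTES, 'UTF-8') ?></p>
```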
SQL aggregation question

I have three tables:

unmatched_purchases table:
    unmatched_purchases_id --primary key
    purchases_id --foreign key to events table
    location_id --which store
    purchase_date
    item_id --item purchased

purchases table:
    purchases_id --primary key
    location_id --which store
    customer_id

credit_card_transactions:
    transaction_id --primary key
    trans_timestamp --timestamp of when the transaction occurred
    item_id --item purchased
    customer_id
    location_id

All three tables are very large. The purchases table has 590,130,404 records (yes, half a billion). Unmatched_purchases has 192,827,577 records. Credit_card_transactions has 79,965,740 records.

I need to find out how many purchases in the unmatched_purchases table match up with entries in the credit_card_transactions table. I need to do this for one location at a time (i.e. run the query for location_id = 123, then run it for location_id = 456). "Match up" is defined as:

1) same customer_id
2) same item_id
3) the trans_timestamp is within a certain window of the purchase_date (e.g. if the purchase_date is Jan 3, 2005 and the trans_timestamp is 11:14 PM Jan 2, 2005, that's close enough)

I need the following aggregated:

1) How many unmatched purchases are there for that location
2) How many of those unmatched purchases could have been matched with credit_card_transactions for that location

So, what is a query (or queries) to get this information that won't take forever to run?

Note: all three tables are indexed on location_id.

EDIT: as it turns out, the credit_card_purchases table has been partitioned based on location_id, so that will help speed this up for me. I'm asking our DBA if the others could be partitioned as well, but the decision is out of my hands.

CLARIFICATION: I only need to run this on a few of our many locations, not all of them separately - 3 locations. We have 155 location_ids in our system, but some of them are not used in this part of our system.
I'd recommend you pull the specific location ids you want into temporary tables, then write a run-of-the-mill join query.

hmm. How is this better? (Beyond that, I'm not sure we have the disk space to do this, but if it is better, I can look into our available space.)

Try this (I have no idea how fast it will be - that depends on your indices):

Select Count(*) TotalPurchases,
    Sum(Case When c.transaction_id Is Not Null Then 1 Else 0 End) MatchablePurchases
From unmatched_purchases u
Join purchases p
    On p.purchases_id = u.unmatched_purchases_id
Left Join credit_card_transactions c
    On c.customer_id = p.customer_id
    And c.item_id = u.item_id
    And c.trans_timestamp - u.purchase_date < @DelayThreshold
Where u.location_id = @Location

I'm not entirely sure how to ask what I'm thinking, but I'll try: will this build up the entire result table (i.e. after all the joins) and then do the aggregation, or will it look at each unmatched purchase once, update the count and sum values, and move on? (Does my question make sense? I'm concerned about how much memory this will take up while running.)

This depends on the query optimizer/processor and, to some degree, on the data and the server's current statistics about the data. These factors help the optimizer decide what type of joins to use and, in doing so, whether to create "aggregation buckets" to dump data into as it processes the joins, or to defer aggregation until after the intermediate unaggregated result set has been constructed. Look at whatever Oracle tool functions as a query plan display tool (the Oracle equivalent of SQL Server's ShowPlan) to see what it is actually doing.

@David, did some additional research (asked the Oracle guy who sits near me) and he said Oracle and/or Toad has an "Explain Plan" tool that does this... You should check that out.

Note for posterity: make sure to add u.location_id = p.location_id and c.location_id = u.location_id to the "on" clauses of the joins.

At least, you'll need more indexes.
I propose at least the following: an index on unmatched_purchases.purchases_id, one on purchases.location_id and another index on credit_card_transactions (location_id, customer_id, item_id, trans_timestamp). Without those indexes, there is little hope IMO.

Will that last index help even though I'm looking for a range of values? I.e. it'll be something like "trans_timestamp between date - fuzz and date + fuzz".

Yes, exactly; if you have exact values for the other three columns (through the join), the index will help you with the range selection.

It might. This may sound insulting, and I really don't mean it to be, but doesn't a business that has processed half a billion purchases and 80 million credit card transactions have enough money to employ an Oracle DBA who knows how to index tables appropriately?!

Yes we do, and I am not that person. I am working with our DBA in setting this up, but this query is a one-off, and there are concerns more important than this one query that dictate our partitioning and indexing.

I suggest you query ALL locations at once. It will cost you 3 full scans (each table once) + sorting. I bet this will be faster than querying locations one by one. But if you don't want to guess, you at least need to examine the EXPLAIN PLAN and a 10046 trace of your query...

I'll check into that, but we have a lot of locations, and location_id is indexed on all three tables. I thought that meant that doing something like "where location_id = 123" didn't require a truly full scan.

That's true, but it also implies accessing the tables using single block reads, with a very high number of logical reads compared to using full table scans.
Reading 1,000 rows from a very large table using an index might involve several thousand logical reads, and many of those could easily be physical reads because of the number of blocks across which the rows are distributed (depending on the clustering_factor). So reading an entire table through an index is much less efficient than reading it as a full table scan.

Ah, I see your confusion. I wasn't clear: I don't need to run this for all locations. I need to run it on 3 of our 70 or so locations.

David Oneill, in this case my suggestion is not suitable, of course. I thought you needed them all.

The query ought to be straightforward, but the tricky part is to get it to perform. I'd question why you need to run it once for each location when it would probably be more efficient to run it for every location in a single query. The join would be a big challenge, but the aggregation ought to be straightforward. I would guess that your best hope performance-wise for the join would be a hash join on the customer and item columns, with a subsequent filter operation on the date range. You might have to fiddle with putting the customer and item join in an inline view and then try to stop the date predicate from being pushed into the inline view. The hash join would be much more efficient if the tables being equi-joined both had the same hash partitioning key on all join columns, if that can be arranged.

Whether the location index is worth using or not depends on the clustering factor for that index, which you can read from the user_indexes table. Can you post the clustering factor along with the number of blocks that the table contains? That will give a measure of the way that values for each location are distributed throughout the table. You could also extract the execution plan for a query such as:

select some_other_column from my_table where location_id in (value 1, value 2, value 3) ...
and see if Oracle thinks the index is useful.

Ah: I was unclear. I only need to run it for 2 or 3 of my locations, not all of them.
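The composite index proposed earlier in this thread might be declared as follows (the index names are made up for illustration):

```sql
-- Names are illustrative; the leading location_id column lets the
-- per-location predicate narrow the index before the equi-join columns.
CREATE INDEX idx_cct_loc_cust_item_ts
    ON credit_card_transactions (location_id, customer_id, item_id, trans_timestamp);

CREATE INDEX idx_up_purchases_id
    ON unmatched_purchases (purchases_id);
```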
Does a diatomic molecule falling into a black hole dissociate? I've just answered Dipping a Dyson Ring below the event horizon, and while I'm confident my answer is correct I'm less certain about the exact consequences. To simplify the situation consider a diatomic molecule falling into a Schwarzschild black hole with its long axis in a radial direction. The inner atom cannot interact with the outer atom because no radially outwards motion is possible, not even for electromagnetic fields travelling at the speed of light. However I find I'm uncertain exactly what the outer atom would experience. We know that the outer atom feels the gravitational force of the black hole even though gravitational waves cannot propagate outwards from the singularity. That's because it's experiencing the curvature left behind as the black hole collapsed. Would the same be true for the interaction of the outer atom with the inner atom? Would it still feel an electromagnetic interaction with the inner atom because it's interacting with the field (or virtual photons if that's a better description) left behind by the inner atom? Or would the inner atom effectively have disappeared? If the latter, presumably the fanciful accounts of observers falling into the black hole (large enough to avoid tidal destruction) are indeed fancy since it's hard to see how any large scale organisation could persist inside the event horizon. Later: I realise I didn't ask what I originally intended to. In the above question my molecule is freely falling and the question arose from a situation where the object within the event horizon is attempting to resist the inwards motion. 
I'll have to go away and re-think my question, but thanks to Dmitry for answering what I asked even if it wasn't what I meant :-)

I might be mistaken, but: even though the molecule is inside the event horizon (relative to a distant observer), from the point of view of the molecule the event horizon is still ahead of it, and has not yet been reached. The inner atom would still be able to communicate with the outer atom well after they've both crossed the event horizon from our point of view. It's only when the molecule is much closer to the singularity (i.e. about to be spaghettified) that the inner atom will disappear into the event horizon relative to the outer atom; any bond between them will be broken, and any charge of the inner atom will be added to the charge of the black hole.

Thanks Dmitry. Since posting the question I've realised that I didn't ask quite what I intended to. While the molecule is straddling the event horizon, couldn't this lead to information escaping the black hole? I.e. if the inner atom is electronically excited, that excitation would oscillate to the outer atom, leading to transfer across the event horizon. And it doesn't have to rely on microscopic quantum mechanics - it could be any two communicating objects infalling together.
We learn things by practice and repetition. Very true. This is simple behaviourism: the law of effect. A response to a stimulus which is rewarded tends to be repeated. (behaviourism and the inner environment)

Stephen Downes says he is not a behaviourist, but when he talks about learning he sounds behaviourist to me:

When you learn, you are trying to create patterns of connectivity in your brain. You are trying to connect neurons together, and to strengthen that connection. This is accomplished by repeating sets of behaviours or experiences. Learning is a matter of practice and repetition.

I agree with that, as far as it goes:

Thus, when learning anything - from '2+2=4' to the principles of quantum mechanics - you need to repeat it over and over, in order to grow this neural connection. Sometimes people learn by repeating the words aloud - this form of rote learning was popular not so long ago. Taking notes when someone talks is also good, because you hear it once, and then repeat it when you write it down.

Learning can be viewed as self design. There doesn't appear to be a more powerful way to think about design than thinking of it as an evolution wrought by generate and test!

Arti links to Stephen's article and then goes on to suggest that:
- behaviourism and the inner environment

Enhancing student "engagement", "authenticity of task", and/or "belonging" does not cause improved student learning outcomes any more than "just washed hair" prevents you kissing someone you desire - "I'd love to kiss you, but I just washed my hair..."

My position: behaviourism works. We are all behaviourists, even though it has been unfashionable to admit it because Skinner took it too far and annoyed everyone by making us feel non-creative. But that is only the beginning of the story. It is way premature to take "engagement", "authenticity of task", and/or "belonging" out of the learning loop just because their connection to learning is not as obvious as the simple behaviourist case.
I believe that problems arise when we begin to tease apart the inputs that arti rejects. "Engagement" and "belonging" could be described as being more in the "emotional" category, and "authenticity of task" might be more in the "cognitive" category. If you understand a topic more deeply then you can teach the important things first - or set up a task which assists this. I would argue that those with the superior concept map of a problem domain are capable of teaching that domain better - provided they know how to teach, of course; how to take a learner from point A to point B. They would be capable of developing more authentic tasks because they understand the more important (and less important) parts of the problem domain. When I studied quadratics deeply - by writing a logo program to teach quadratics behaviouristically - I was able to teach them more effectively. See quadratics and behaviourism.

I still like Papert's idea of "objects to think with" (isdp) and that some materials (such as the logo programming language) are better with regard to the following criteria:
- appropriability (some things lend themselves better than others to being made one's own)
- evocativeness (some materials are more apt than others to precipitate personal thought)
- integration (some materials are better carriers of multiple meaning and multiple concepts)

Skinnerian behaviourism does not explain complicated human learning. It only explains simple animal learning, including some human learning.
I discussed this in Dennett's creatures:
- Darwinian creatures - random mutation and selection by environment
- Skinnerian creatures - favourable actions are reinforced and then tend to be repeated
- Popperian creatures - have an inner environment that can preview and select amongst possible actions
- Gregorian creatures - import mind-tools (words) from the outer cultural environment to create an inner environment which improves both the generators and testers
- Scientific creatures - an organised process of making and learning from mistakes in public, of getting others to assist in the recognition and correction of mistakes

This is my context for introducing Marvin Minsky's book, The Emotion Machine. A draft is also available online. Not because Minsky is correct, but because he looks at the harder questions. It's preferable to be wrong about the harder questions than to oversimplify human learning (which I think perhaps arti and stephen are doing).

I've read parts of Minsky's book and would recommend this summary from the amazon readers' reviews, which I reproduce in full:

1. We don't recognize a problem as hard until we've spent some time on it without making any significant progress. For if you can diagnose the particular type of problem you face, then you can use that knowledge to switch to a more appropriate way to think.

2. Critic-selector model of thinking: each critic can recognize a certain species of problem type. When a critic sees enough evidence, the critic will activate a "selector", which tries to start up a set of resources that it has learned is likely to act as a way to think that may help in this situation.

3. If a problem seems familiar, use reasoning by analogy. If it seems unfamiliar, change the way you're describing it. If it seems too difficult, divide it into several parts. If it still seems difficult, replace it by a simpler problem. If none of these work, ask someone for help.

4.
If too many critics are aroused, then describe the problem in more detail. If too few critics are aroused, then make the description more abstract. If important resources conflict then you should try to discover a cause. If there has been a series of failures, then switch to a different set of critics.

5. Emotional reactions: cautious vs. reckless, unfriendly vs. amicable, visionary vs. practical, inattentive vs. vigilant, reclusive vs. sociable, and courageous vs. cowardly; each such emotional way to think can lead to different ways to deal with things - either by making you see things from new points of view or by increasing your courage or doggedness. If too many critics are active then your emotions would keep changing too quickly, and if those critics stopped working at all, then you'd get stuck in just one of those states.

6. The best way to solve a problem is to already know a way to solve it. Searching extensively: when one has no better alternative, one could try to search through all possible chains of actions, but that method is not often practical because such searches grow exponentially.

7. Reasoning by analogy: when a problem reminds you of one that you solved in the past, you may be able to adapt that solution to the present situation.

8. Divide and conquer: if you can't solve a problem all at once, then break it down into smaller parts.

9. Reformulation: find a different representation that highlights more relevant information. Understand it in a different way.

10. Planning: consider the set of subgoals and examine how they affect each other.

11. Techniques for problem solving: simplifying, elevating, and changing the subject.

12. More reflective ways to think: wishful thinking, self-reflection, impersonation.

13. Other modes of thinking: 1) Logical contradiction: try to prove that your problem cannot be solved, and then look for a flaw in that argument. 2) Logical reasoning: we often try to make chains of deduction. 3) External representation.
drawing suitable diagrams; 4) imagination: explore what would happen by simulating possible actions inside the mental models that one has built.

14. Creating higher-level selectors and critics helps to reduce the sizes of the searches we make.

15. Modes of thought: preparation, incubation, revelation, and evaluation.

16. Creative ideas must be combined with the knowledge and skills one already possesses - so they must not be too different from ideas with which we're already familiar.

17. If too many critics are active then you notice flaws to correct, spend much time repairing them, never get to the important things, and people perceive you as depressed. If too many critics are turned off then you ignore the alarms and concerns that would help you concentrate, letting errors and flaws slip through. The fewer the critics active, the fewer the goals pursued, making one intellectually dull.
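The critic-selector loop of points 2-4 is concrete enough to sketch in code. This is only a toy illustration under my own encoding of a "problem" as a dict; none of the names come from Minsky:

```python
def critic_familiar(problem):
    # Recognises a familiar problem; selects reasoning by analogy (point 3).
    return "reason_by_analogy" if problem.get("familiar") else None

def critic_unfamiliar(problem):
    # Recognises an explicitly unfamiliar problem; selects reformulation.
    return "reformulate" if problem.get("familiar") is False else None

def critic_too_big(problem):
    # Recognises a problem too large to attack at once; selects splitting it.
    return "divide_and_conquer" if problem.get("size", 0) > 10 else None

CRITICS = [critic_familiar, critic_unfamiliar, critic_too_big]

def select_way_to_think(problem):
    # Each critic watches for one species of difficulty; the first one that
    # sees enough evidence activates its selector (a "way to think").
    # Point 3's final fallback: if no critic fires, ask someone for help.
    for critic in CRITICS:
        way = critic(problem)
        if way is not None:
            return way
    return "ask_for_help"

print(select_way_to_think({"familiar": True}))   # reason_by_analogy
print(select_way_to_think({"size": 50}))         # divide_and_conquer
print(select_way_to_think({}))                   # ask_for_help
```

Point 4's meta-rules (too many critics aroused, too few, a series of failures) would then operate on the CRITICS list itself rather than on the problem.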
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches

from pattern_search import calculate_pattern_heatmap, parse_arguments, get_argmin_position


def visualize_pattern_search(image, query, with_padding):
    scores = calculate_pattern_heatmap(image, query, with_padding)
    answer = get_argmin_position(scores)
    if with_padding:
        answer[0] -= query.shape[0] - 1
        answer[1] -= query.shape[1] - 1
    if not with_padding:
        # Pad the (smaller) valid-mode score map so it lines up with the image.
        scores = np.pad(scores,
                        [(query.shape[0] // 2, 0), (query.shape[1] // 2, 0)],
                        constant_values=scores.max())
        rect = patches.Rectangle((answer[1], answer[0]),
                                 query.shape[1], query.shape[0],
                                 linewidth=1, edgecolor='r', facecolor='none')
    else:
        # Pad the image so the heatmap of the padded search lines up with it.
        image = np.pad(image,
                       [(query.shape[0] // 2, query.shape[0] // 2),
                        (query.shape[1] // 2, query.shape[1] // 2),
                        (0, 0)],
                       constant_values=255)
        rect = patches.Rectangle((answer[1] + query.shape[1] // 2,
                                  answer[0] + query.shape[0] // 2),
                                 query.shape[1], query.shape[0],
                                 linewidth=1, edgecolor='r', facecolor='none')

    fig, axs = plt.subplots(1, 1)
    axs.set_title("heatmap scores")
    axs.imshow(image)
    cs = axs.contour(scores,
                     np.percentile(scores.ravel(), [5, 10, 20, 30, 40, 50]),
                     linewidths=2, cmap=plt.cm.Greens, alpha=0.75)
    plt.colorbar(cs, extend='both')
    axs.add_patch(rect)
    plt.axis("off")

    plt.figure(2)
    plt.title("query")
    plt.imshow(query)
    plt.axis("off")
    plt.show()


if __name__ == "__main__":
    arguments = parse_arguments()
    image = np.array(Image.open(arguments.image))
    kernel = np.array(Image.open(arguments.query))
    visualize_pattern_search(image, kernel, arguments.pad)
Something strange about TEXTMETRIC

I am reading Programming Windows, 5th Edition by Charles Petzold. While working through Figure 4-5, SYSMETS1.C, I came across the following code:

cxCaps = (tm.tmPitchAndFamily & 1 ? 3 : 2) * cxChar / 2 ;

and the book explains it as follows: SYSMETS1 also saves an average width of uppercase letters in the static variable cxCaps. For a fixed-pitch font, cxCaps would equal cxChar. For a variable-width font, cxCaps is set to 150 percent of cxChar. The low bit of the tmPitchAndFamily field in the TEXTMETRIC structure is 1 for a variable-width font and 0 for a fixed-pitch font. SYSMETS1 uses this bit to calculate cxCaps from cxChar.

Since cxCaps is set to 150 percent of cxChar, I think it should be

cxCaps = (tm.tmPitchAndFamily & 1 ? 3.0 : 2.0) * cxChar / 2 ;

Can you explain it for me? Thanks!

It is trying to calculate an integer result, expressed in pixels. The effective calculation is (3 * cxChar) / 2, not 3 * (cxChar / 2). Presumably the latter is what is giving you pause. The result gets rounded down for a proportional font but will never be off by more than a single pixel. That doesn't matter, since the value is only a guess anyway; the actual width is different for every glyph in a proportional font. No points scored for readability; you could perhaps rewrite it like this:

bool proportional = tm.tmPitchAndFamily & TMPF_FIXED_PITCH;
if (proportional) cxCaps = (3 * cxChar) / 2;
else cxCaps = cxChar;

Note how the flag has the wrong name, perhaps the reason Petzold wrote it the way he did. I like how the MSDN documentation says "Note very carefully that those meanings are the opposite of what the constant name implies." Microsoft often does such funny things.

What are the types of cxChar and cxCaps? If cxChar is float or double, then there is no problem, because the result of the multiplication will be converted to that type before dividing by 2, and the effective factor would be 1.0 or 1.5.
But cxCaps should also be of a floating point type, so it can hold the floating point value.

EDIT: I checked the code of the book, and found that they are both int. I also found that there is no need for floating point variables. For example, assume that cxChar is 20. If tm.tmPitchAndFamily & 1 results in 1, then the expression becomes cxCaps = 3 * cxChar / 2; and cxCaps will be 30. If the result is 0, cxCaps will be 20. So everything is fine. If cxChar is an odd number, then the lost value will be 0.5, which can easily be neglected.

Frankly speaking, the reason for asking this question is just that I didn't know the types of cxChar and cxCaps. I tried to find out; maybe the type is BYTE, namely unsigned char.
Two tabs open, log out of one, the other does not time out gracefully

If you force a "timeout" by having two tabs open and logging out of one, the other tab does not fail gracefully. Instead of the timeout error you see "something went wrong" and get a refresh. This is especially problematic if you fail on the Submissions summary page, which will sometimes show multiple errors before you escape a refresh loop.

Why has this appeared? Session timeout used to be triggered by any failed query on Elasticsearch or Fedora create, but it was recently changed to only check the whoami service. This is fine if you let the browser sit for an hour and the login times out properly. Unfortunately the logout button does not log the user out in a way that makes the whoami service unavailable, and you can log back in without entering a user name. If there were a way to fix the logout to require re-login, this would fix the problem, but until then we need to incorporate some other way to detect a timeout. The easiest way would probably be to return to pinging Elasticsearch and assume that failure to retrieve a predictable value is a timeout (similar to how it used to work in the adapter).

@karenhanson can you elaborate on this a little bit? In particular:

What is the functional difference between the whoami endpoint and any other shib-protected resource as far as checking the status of a shib session is concerned? That is to say, what precisely about the interaction between Ember and the http resource makes pass-user-service/whoami different from /es/?

What is the expected behaviour of the other tab? Is the issue that the message says "something went wrong" rather than "your session has ended", or something along those lines?

To the first point: I'm not sure - that needs to be investigated. It probably has to do with how the URL is pinged by the code, actually.
Here are the bits of code that used to catch a timeout somewhat reliably, but did so by catching all errors on queries to elasticsearch: https://github.com/OA-PASS/ember-fedora-adapter/commit/fa418b5cf0e5982e316d47ac5850ed87b3598fca Here is the code that replaced it and currently pings whoami: https://github.com/OA-PASS/pass-ember/blob/develop/app/components/workflow-component.js This code also works for detecting the difference between logged in and not (ugly as it may be): https://github.com/OA-PASS/pass-ui-static/blob/master/assets/logged-in.js I suspect it's something to do with how js handles redirects in those different scenarios. So, where I said the easiest solution would be to ping ES, it would be to do so through the adapter where the error is known to show up reliably. Ideally we would understand why these pings behave differently and use the one that works, it's just difficult to test outside of demo. To the second point: It should say "your session timed out, the page will now reload" and it reloads. If new login is required the reload shows the login screen. Where the person logged out in another tab, it should reload the ember app and with the session refreshed those redirects that are causing the issue are gone. @karenhanson I see. So for either request (es, whoami) the same http pattern plays out: The browser does an XHR POST or GET to a resource (https://pass...) The shib proxy returns with a 302 to the idp (https://incommon.johnshopkins.edu...) The browser automatically follows the redirect to the idp The fetch API seems to be the only API that can be used to reason about whether there was a redirect or not, all other APIs (e.g. XmlHttpRequest, JQuery), make this transparent. Because the request is an XHR, and https://incommon.johnshopkins.edu... 
is in a different domain than https://pass..., the browser needs to use CORS to see if the cross-domain request is allowed. The browser sends a CORS preflight request to the redirected location (in this case, the IdP). The IdP rejects the CORS request, returning a 403. The calling code goes into its onError handler or otherwise throws an exception.

So the problem is that the whoami code does not catch errors (except by sending them to the generic error handler), and the exception thrown by the browser in response to a failed CORS request is not being dealt with specifically (instead, you get a general "something went wrong"). I have no idea what sort of exception is thrown in that case, unfortunately, or whether it is the same between browsers.

The problem with pass-docker is that the idp (step 2) is on the same domain as pass (e.g. pass.local), so the browser doesn't do a CORS preflight. The 302 successfully redirects to the idp, and Ember gets the IdP's response back. This could be replicated in pass-docker if we used a different domain for the idp, like idp.pass.local. That would be another PR, and developers would need to add an entry for idp.pass.local to the hosts file. Worth a try?

@birkland Thanks for that breakdown, that will be helpful for finally fixing it. In order to replicate it, we would need the domain change you described, but also the ability to hit the logout button, and for the logout button to behave in a similar way to production (i.e. it doesn't really log out, but the redirect appears). I'm not sure the effort for one without the other would be worth it.

I see. There is a logout functionality for the current IdP; I think it is doing a GET to a Logout.sso endpoint (which I don't think is enabled by the proxy that sits in front of Shibboleth at the moment). I think that shouldn't be too difficult to change. It could be put at something like https://idp.pass.local/Logout.sso. Would that work?

@birkland Sounds intriguing!
I'm all for bringing pass-docker closer to production as we keep finding issues that are unique to that environment, but it's @htpvu's call on scheduling that, of course!

Meanwhile I tested further based on what you said, and I see you are right about the whoami failure not being caught in production, resulting in the default exception being used instead of the customized timeout message! I think something like this might work, so I'm leaving it here for when we work on it again: https://github.com/karenhanson/pass-ember/tree/another-timeout-fix

Because of the CORS preflight thing, none of these checks are particularly precise though, and they are prone to catch errors that are not timeouts. Not sure if there is a way around that.
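The fetch-based detection described above can be sketched as follows. This is not the current pass-ember code; the function name and the injected fetchFn are mine (a real caller would pass the global fetch and, say, the whoami URL), but it shows the two signals a timeout produces: an opaque redirect, or a rejected promise from the blocked CORS preflight.

```javascript
// Sketch only: classify the result of pinging a shib-protected resource.
async function classifySessionPing(fetchFn, url) {
  try {
    // redirect: 'manual' stops fetch from silently following the 302 to
    // the IdP; a cross-origin redirect then shows up as 'opaqueredirect'.
    const resp = await fetchFn(url, { redirect: 'manual', credentials: 'include' });
    if (resp.type === 'opaqueredirect' || resp.redirected) return 'timeout';
    return resp.ok ? 'ok' : 'error';
  } catch (e) {
    // A CORS preflight rejected by the IdP rejects the promise (a TypeError),
    // which is the case the generic "something went wrong" handler eats today.
    return 'timeout';
  }
}
```

A caller would then show the "your session timed out, the page will now reload" message on 'timeout' instead of routing the exception to the generic error handler.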
Let's say you've bought and built a new 3D printer, and started printing stuff like crazy. Yet you now find it very painful to have to remove the SD card each time, insert it in the SD card reader, plug it into your computer, open Cura or whatever other slicer you might be using, slice and save on the SD card, put the SD card back in the printer, and then select the file to print, each time you want to start a new print. This is where Octoprint comes in.

Octoprint is an HTTP server that is expected to run on a small computer. Many will install Octoprint on a Raspberry Pi (a Raspberry Pi 3 at minimum is required here), either manually or by downloading a prebuilt image (called OctoPi). However, this means you'll need to source a Raspberry Pi, a casing, and likely the Raspicam as well, and this is not cheap. All in all, it'll cost around 50€ for the board + casing and almost the same again for the Raspicam. Also, the Raspberry Pi 3 is a bit outdated now. Instead, I'm explaining here how to get the same features with an Orange Pi Zero 2, which is smaller, faster and cheaper.

You'll need a SBC (single board computer) and, currently, the best deal is the Orange Pi Zero 2. If I were you, I'd choose the 1GB version (even if it's a bit more expensive than the 512MB version). Don't be misled by all the other boards from Orange Pi (like the Zero, the Zero+, the Zero+2). This one is using an Allwinner H616 CPU, which is a 64-bit ARM CPU. It costs less than 30€. Right now, it's running Armbian with Debian Buster, but a port to the latest version is coming (not ready yet). There's also a nice transparent case for it.

You'll also need a µSD card (8GB or 16GB is enough), but you can reuse any card you already have. If you intend to monitor your 3D printer, you might be interested in buying a webcam too (~20€). You'll need a USB-C cable to power it (you can reuse any cable here; only the 5V and ground pins are used, so you don't need a more expensive data cable).
Installation is quite straightforward, provided you already know how to use Linux. If you don't, don't worry, I'll (try to) explain here what we are doing.

Unlike your computer, this small board does not have any non-volatile memory to boot from. This means that the operating system has to be written to a µSD card, and the board will boot and run from this card. You should use an 8GB or bigger card here. The larger the card, the less likely it is to run out of space with software updates, or to suffer damage due to wear. A 16GB card is probably 1€ more expensive, but if it saves you 4h of reinstalling everything, it's 1€ well spent.

Once you have the SD card ready, you'll need to download the Armbian image for this board from here. Then you'll need to uncompress this file with 7-Zip (on Windows), 7z (on Linux) or The Unarchiver (on MacOS). Once you've uncompressed the archive, you'll get a .img file that you'll need to burn to the SD card. Although it can be done with a few low-level commands, I recommend using the BalenaEtcher software.

The first boot is a bit long. You'll need to connect an Ethernet cable from your router to the Opiz2 port, even if you intend to use Wifi later on. Plug the Ethernet, plug the USB-C cable and wait until the Ethernet LED is blinking green. Once this is done, you'll have to find what IP address was leased to the Opiz2. There are multiple ways to do that (and if you don't know what I'm talking about, try each method below in order until one works):

- Ping the hostname orangepizero2. This will give you the IP address (a sequence of 4 numbers separated by dots, like 192.168.1.32) of the board.
- Use nmap (or an IP scanner tool). Scan your own network for new devices with a ping scan. Typically, the nmap command is nmap -sP 192.168.0.0/24 or nmap -sP 192.168.1.0/24. You'd get an answer like this:
  Nmap scan report for orangepizero2 (192.168.0.7)
  Host is up (0.0015s latency).
- Find your own IP address (with ipconfig) and start playing trial and error.
For example, if you have an address of 192.168.1.2, try pinging 192.168.1.3 and so on until you get an answer. This will return false positives, so you'll need to check whether the answers come in sync with the green LED blinking on the Opiz2.

Ok, now that you have the IP address of the board, you'll have to connect to it. You'll need a SSH client for this. On both Linux and MacOS, it's as simple as typing ssh root@<the board's IP address> in a terminal. Under Windows, you'll need to download Putty and follow the GUI. The initial root password is the Armbian default.

The first commands to run fetch the latest version of the operating system. Don't type the leading # in the commands below: $ means that the command must be issued by an unprivileged user, and # means that the command should be run by the root user.

# apt-get update
(blah blah, very long...)
# apt-get upgrade

Say yes by typing 'y' and wait. You should run this command pair from time to time to keep your system fresh.

You don't want to leave a default user and password on a device; you need to customize it. So we'll remove the orangepi user, create your own, and give you permission to use all the peripherals on the system. You must not remove the root user (but you can change its password). Replace yourname below with your own name:

# adduser yourname
# usermod -a -G sudo yourname
# deluser orangepi
# usermod -a -G video,plugdev,games,users,systemd-journal,netdev,input,ssh,dialout,audio,tty,disk yourname
# passwd
(type your new root password here twice)

To enable WIFI, you'll need to perform two operations: join a WIFI network, and set a static IP address so your board is always reachable at the same address (no more "guess my IP" game):

- Use the orangepi-config command to set the board's hostname and timezone, and to scan for and join a WIFI network.
If this fails, exit the software, type sync, reboot the board, and try again once it has booted.

- Use the nmtui command as root to change the IP address configuration of your WIFI link from dynamic to static. If you don't know what address to pick: many routers don't reserve the whole address pool for dynamic DHCP addresses, and typically use the low end (final numbers 1 to 100) for them, so it should be safe to use a 200-ish address for your board. Remember this IP address, as it'll be the address to use from now on.

Then ssh to this new address from your computer to check it works. Reboot (sync && reboot) and test again. If it works, you can unplug the Ethernet cable and move your board to its final location: close to your printer.

To install Octoprint, we'll first create a low-privileged user and give it the minimum rights required to run Octoprint and connect to the devices it needs (printer/camera/etc.). From now on, you don't need to connect to your board via the root user. So, let's do that:

$ sudo adduser --home /home/pi --disabled-password --disabled-login pi
$ sudo usermod -a -G tty pi
$ sudo usermod -a -G dialout pi
$ sudo usermod -a -G video pi
$ sudo apt install python3-pip python3-dev python3-setuptools python3-venv git libyaml-dev build-essential ffmpeg
$ sudo -u pi bash

Now you are in a sub-shell as the pi user. We'll install Octoprint for it:

$ mkdir OctoPrint && cd OctoPrint
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install pip --upgrade
$ pip install octoprint
$ ~/OctoPrint/venv/bin/octoprint serve

If everything went well, you can point your computer's browser at http://your.board.ip.address:5000 and follow the setup instructions on that page. Once it's done, stop Octoprint in the console by hitting Ctrl + C, and then Ctrl + D to get back to your own user shell.
We now add a service that will start Octoprint when the board boots:

$ wget https://github.com/OctoPrint/OctoPrint/raw/master/scripts/octoprint.service && sudo mv octoprint.service /etc/systemd/system/octoprint.service
$ sudo systemctl enable octoprint
$ sudo systemctl start octoprint

Make sure you can connect via your browser again, and if it's working, you're done!

To help me finance this blog's infrastructure, I list affiliate links for the hardware discussed above at the end of my posts.
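For reference, the unit file fetched above boils down to roughly the following (a sketch assuming the /home/pi layout created earlier; trust the file you actually downloaded over this):

```ini
[Unit]
Description=OctoPrint 3D printer server
After=network-online.target

[Service]
Type=simple
User=pi
ExecStart=/home/pi/OctoPrint/venv/bin/octoprint serve

[Install]
WantedBy=multi-user.target
```

Running as the unprivileged pi user is the point of the user setup above: the service only gets the tty/dialout/video rights we granted.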
CSS is a sticky subject at the best of times, and to make it stickier I thought I'd run down the techniques needed to make a sticky footer that works in all modern browsers. This is unlike most examples on the web, which break in Opera, IE8, IE7, or indeed all three. Try any of those footers from the Google search above in IE8 or Opera (some don't work in IE7 either): load the page, grab the bottom of the window (not the side or corner) and drag it up or down, and you will see that the footer usually sticks in the wrong place, messing up the display. Now try it on my old original sticky footer version (circa 2003, which pre-dates all those above) and you will see that my version works in all browsers including IE8. Before we get into details I should first explain what a sticky footer is.

What is a Sticky Footer

A sticky footer is one that sits at the bottom of the viewport when there is not enough content in the page to push the footer down. If there is a lot of content, then the footer sits at the bottom of the document and will be below the fold as usual. This is desirable because on short content pages you won't have a footer near the top of the screen looking very strange indeed, as shown in Figure 1 below.

Figure 1 - normal footer close to content.
Figure 2 - Sticky footer at bottom of viewport.

Note that a "fixed positioned" footer is not the same thing as a sticky footer: a fixed positioned footer sits at the bottom of the viewport at all times and never moves. Don't get the two confused.

Before we get into the nitty gritty detail I will briefly explain the concept behind getting a sticky footer to work. The first thing we need to do is create a 100% high container, which is achieved by setting the html and body elements to 100% height and then creating a container that is a minimum of 100% high.
The footer is then placed after this container which means it will be invisible as it will be below the fold of the page but by the magic of negative margins we can bring it back into view at the bottom of the viewport. Of course this means that the sticky footer must be a fixed height (pixels or ems will do) so that we know how to accommodate it with the exact negative margins that bring it into view. This also means that our footer is now overlapping content on the page so we will also need to protect this content with either padding on an inner element, or some other similar approach as you will see when we get into specifics later. That’s basically all there is to it except that we have to squash a few bugs on the way to make it work everywhere.
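Put together, the skeleton of the technique looks roughly like this (the class names and the 60px footer height are mine, and this omits the browser bug fixes discussed above):

```css
html, body { height: 100%; margin: 0; }

/* outer container: at least as tall as the viewport */
#wrapper { min-height: 100%; }

/* inner padding protects the content the footer will overlap */
#content { padding-bottom: 60px; }

/* fixed-height footer, pulled back into view by a negative margin */
#footer { height: 60px; margin-top: -60px; }
```

In the markup, the footer sits after the wrapper: <div id="wrapper"><div id="content">...</div></div><div id="footer">...</div>. The negative top margin must match the footer height exactly, which is why the footer has to be a fixed height.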
Yesterday CS3216 had a seminar about the latest-and-greatest Facebook/iPad applications. Let's review the application that I was asked to review: Pekay's Little Author.

What I got from the presentation

Pekay's Little Author makes it very easy for children and adults to create their own storybooks. It's available as a desktop, Facebook, and iPad app. As the features suggest, the target market of the app is primarily children, and the app serves it well. The interface is intuitive and fun enough for children to use, and the base characters and sprite libraries are cute. (Perhaps not so much the advanced control buttons - the play, next button, etc. look complicated and rather out of place - but I guess those controls are made for the older children and/or the parents.) There seems to be feature parity between the Facebook and the iPad app, the Facebook app being free and having more social elements, and the iPad app being costly (10 USD!) and having no social elements. It would be good if they tried to close this gap between the different versions of the app. The presentation did not cover the original desktop app, though.

Another important note mentioned in the Q&A session is the importance of platform analysis before jumping into development. The reason why the Facebook app has not been so popular is supposedly that the targeted children do not actually have a Facebook account yet, or do not yet see a benefit in socializing on Facebook (I presume that their social circle is still very small and geographically centred within a few km of where they live). The iPad is different, however - the big screen and touch interface are very natural. Users do not even need to be able to read to use the app on an iPad!

So... what is this app actually about? Targeting children actually puts the app in a unique position, because children do not really have money - their parents do.
Thankfully these days, children creating storybooks is seen as a good thing, so parents won't hesitate to spend a few dollars on such a desirable activity. Plus, as Prof Ben has said, having a distraction for your children is priceless :) To further strengthen the image of "storywriting is a nice, hippy, and smart activity for your children", they even held workshops and associated themselves with museums. Museums! Who would think that those boring repositories of physical stuff could become an ingenious marketing campaign?

Furthermore, being made in Japan, the app tells us that people outside the U.S. can also make a difference. You don't have to have boatloads of connections, a money shower from VCs, or an English-speaking country. Of course the author of PeKay's Little Author definitely knows interesting people, has money, and Japan is not exactly anti-English. But on these frontiers, Silicon Valley is an order of magnitude more welcoming and friendly an environment.

Sidetrack: how presentations work in the School of Computing

Coming from a science/medicine kind of background, I found the overall presentation session very interesting. The presentation style is definitely not of the kind I am used to.

- People don't seem to care where numbers and statements come from; they just take them at face value. An example is the statement "the iPad version is less popular than the iPhone version". Who said that? What is the evidence? Do you consider the confounding factors, e.g. the fact that there are more iPhones than iPads in the world? If it is indeed less popular, does it matter (i.e. does the app have to be popular, or are other indicators more important)?
- People actually throw out new ideas during the Q&A session! :)
- People don't seem to really care about dress code! :)
- Some of us seriously need to learn to imagine being in someone else's shoes. Empathize with customers (end users).
Customers don't care whether you develop the app with HTML5 -- they care whether the app is available in more devices, crashes less, and becomes more awesome. Customers don't care whether setup is easy for you -- they care whether it is easy for them! (Of course being in a CS course, we can be technical, but we must not forget the end users as well.)
Vanity URLs in WebCenter Sites 11g Release 1

The latest release of WebCenter Sites, 11g Release 1, now includes vanity URL support. If you've ever used GSF's vanity URLs, this will look very familiar to you. WebCenter Sites's implementation, however, does contain some slight differences. More significantly, there are some nice enhancements compared to the GSF vanity URLs. The purpose of this post isn't to bore you with all the technical details of how vanity URLs are set up. For all those details, check out the Oracle WebCenter Sites documentation (http://www.oracle.com/technetwork/middleware/webcenter/sites/documentati...) - in particular the Administrator, Developer, and User guides. Here's an overview of some of the great vanity URL features.

Webroots

The Webroot asset is an important part of setting up vanity URLs. This is where you configure your root URL (similar to GSF's master webroot) and any virtual root URLs. Unlike the GSF, WebCenter Sites lets you manage all your virtual root URLs for different environments in a single Webroot asset. In addition, you can choose to use absolute or relative vanity URLs.

Device-specific URLs and Redirects

While GSF only allowed one vanity URL per asset, WebCenter Sites allows an asset to have multiple vanity URLs. Some good reasons for allowing this are:

- Device-specific URLs: since vanity URLs map not only to a specific asset but also to the template that's used to render it, having separate device-specific URLs makes it easier to manage device-specific content rendering.
- Redirects: do you need to edit a vanity URL or redirect visitors to another page temporarily? Now you can manage 301 and 302 HTTP Status redirects right from the Contributor interface.

Auto-generated Vanity URLs

GSF required content contributors to manually enter vanity URLs for each asset, but WebCenter Sites can auto-generate vanity URLs for you (don't worry, you can still manually create vanity URLs).
This is configured by viewing the asset type in the Admin UI and selecting "URL Pattern" from the "more..." dropdown. URL Patterns are created for each asset type (or even subtype) and can use any combination of an asset's attributes to auto-generate a vanity URL. You can even format attribute values to be more URL-friendly by converting a value to all lowercase, converting spaces to underscores, and formatting dates.

Blob Vanity URLs

Need I say any more?

Vanity URL Troubleshooting

The System Tools section of the Admin UI contains a helpful tool for viewing all vanity URLs or searching for specific vanity URLs (i.e. in case you have vanity URL conflicts that need to be resolved).

Vanity URLs on a JumpStart Kit

With the GSF, using vanity URLs on a JSK required you to either connect the JSK to a webserver where you could use mod_rewrite, or figure out how to use Tuckey (http://tuckey.org/urlrewrite/). Now it's much easier to use vanity URLs on a JSK. For more info, check out the Administrator's Guide, section 24.4, Resolving Vanity URLs Using a Rewriter Filter.
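To make the attribute-formatting idea concrete: this is not WebCenter Sites code, just a small sketch (all names and the example URL are mine) of the kind of transformation a URL Pattern applies when auto-generating a vanity URL from an asset's attributes:

```python
import datetime

def to_url_fragment(value):
    """Format one attribute value to be URL-friendly."""
    if isinstance(value, datetime.date):
        return value.strftime("%Y/%m/%d")           # date formatting
    return str(value).lower().replace(" ", "_")     # lowercase + underscores

def vanity_url(webroot, *attributes):
    """Join a webroot and formatted attribute values into a vanity URL."""
    return webroot.rstrip("/") + "/" + "/".join(to_url_fragment(a) for a in attributes)

print(vanity_url("http://www.example.com/news/",
                 datetime.date(2012, 7, 4), "Product Launch"))
# -> http://www.example.com/news/2012/07/04/product_launch
```

In Sites itself you pick the attributes and formatting options in the URL Pattern dialog rather than writing code, but the result is the same shape of URL.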
I am trying to understand when to define __getattribute__. The Python documentation mentions that __getattribute__ applies to new-style classes. What are new-style classes?

A key difference between __getattr__ and __getattribute__ is that __getattr__ is only invoked if the attribute wasn't found the usual ways. It's good for implementing a fallback for missing attributes, and is probably the one of the two you want. __getattribute__ is invoked before looking at the actual attributes on the object, and so can be tricky to implement correctly. You can end up in infinite recursions very easily.

New-style classes derive from object; old-style classes are those in Python 2.x with no explicit base class. But the distinction between old-style and new-style classes is not the important one when choosing between the two methods. You almost certainly want __getattr__.

Let's see some simple examples of both magic methods. Python will call the __getattr__ method whenever you request an attribute that hasn't already been defined. In the following example my class Count has no __getattr__ method. In main, when I try to access the obj1.mymin and obj1.mymax attributes, everything works fine. But when I try to access the obj1.mycurrent attribute, Python gives me AttributeError: 'Count' object has no attribute 'mycurrent'

class Count():
    def __init__(self, mymin, mymax):
        self.mymin = mymin
        self.mymax = mymax

obj1 = Count(1, 10)
print(obj1.mymin)
print(obj1.mymax)
print(obj1.mycurrent)  # --> AttributeError: 'Count' object has no attribute 'mycurrent'

Now my class Count has a __getattr__ method. When I try to access the obj1.mycurrent attribute, Python returns whatever I have implemented in my __getattr__ method. In my example, whenever I try to access an attribute which doesn't exist, Python creates that attribute and sets it to the integer value 0.
class Count:
    def __init__(self, mymin, mymax):
        self.mymin = mymin
        self.mymax = mymax

    def __getattr__(self, item):
        self.__dict__[item] = 0
        return 0

obj1 = Count(1, 10)
print(obj1.mymin)
print(obj1.mymax)
print(obj1.mycurrent1)

Now let's see the __getattribute__ method. If you have a __getattribute__ method in your class, Python invokes this method for every attribute, regardless of whether it exists or not. So why do we need the __getattribute__ method? One good reason is that you can prevent access to attributes and make them more secure, as shown in the following example. Whenever someone tries to access any of my attributes that start with the substring 'cur', Python raises an AttributeError exception. Otherwise it returns that attribute.

class Count:
    def __init__(self, mymin, mymax):
        self.mymin = mymin
        self.mymax = mymax
        self.current = None

    def __getattribute__(self, item):
        if item.startswith('cur'):
            raise AttributeError
        return object.__getattribute__(self, item)
        # or you can use: return super().__getattribute__(item)

obj1 = Count(1, 10)
print(obj1.mymin)
print(obj1.mymax)
print(obj1.current)

Important: in order to avoid infinite recursion in the __getattribute__ method, its implementation should always call the base class method with the same name to access any attributes it needs, for example object.__getattribute__(self, name) or super().__getattribute__(item), and not plain attribute access on self, which would invoke __getattribute__ again.

If your class contains both the __getattr__ and __getattribute__ magic methods, then __getattribute__ is called first. But if __getattribute__ raises an AttributeError exception, then the exception will be ignored and the __getattr__ method will be invoked.
See the following example:

class Count(object):  # note this class subclasses object
    def __init__(self, mymin, mymax):
        self.mymin = mymin
        self.mymax = mymax
        self.current = None

    def __getattr__(self, item):
        self.__dict__[item] = 0
        return 0

    def __getattribute__(self, item):
        if item.startswith('cur'):
            raise AttributeError
        return object.__getattribute__(self, item)
        # or you can use: return super().__getattribute__(item)

obj1 = Count(1, 10)
print(obj1.mymin)
print(obj1.mymax)
print(obj1.current)

This is just an example based on Ned Batchelder's explanation:

class Foo(object):
    def __getattr__(self, attr):
        print("looking up", attr)
        value = 42
        self.__dict__[attr] = value
        return value

f = Foo()
print(f.x)
# output:
# looking up x
# 42

f.x = 3
print(f.x)
# output:
# 3

In other words, __getattr__ sets a default value if an attribute is undefined; it lets you define how to handle attributes that are not found. And if the same example is used with __getattribute__, you would get RuntimeError: maximum recursion depth exceeded while calling a Python object.

New-style classes inherit from object, or from another new-style class:

class SomeObject(object):
    pass

class SubObject(SomeObject):
    pass

Old-style classes don't:

class SomeObject:
    pass

This only applies to Python 2; in Python 3 all of the above will create new-style classes. See 9. Classes (Python tutorial), NewClassVsClassicClass and What is the difference between old style and new style classes in Python? for details.

New-style classes are ones that subclass "object" (directly or indirectly). They have a __new__ class method in addition to __init__ and have somewhat more rational low-level behavior.

Usually, you'll want to override __getattr__ (if you're overriding either), otherwise you'll have a hard time supporting "self.foo" syntax within your methods.

I find that no one mentions this difference: __getattribute__ has a default implementation, but __getattr__ does not.
class A:
    pass

a = A()
a.__getattr__       # error
a.__getattribute__  # returns a method-wrapper

This has a clear meaning: since __getattribute__ has a default implementation while __getattr__ does not, Python clearly encourages users to implement __getattr__.

- __getattribute__: Used to retrieve an attribute from an instance. It captures every attempt to access an instance attribute, whether by dot notation or by the getattr() built-in function.
- __getattr__: Executed as the last resort when an attribute is not found in an object. You can choose to return a default value or to raise AttributeError.

Going back to the __getattribute__ function; if the default implementation has not been overridden, the following checks are done when executing the method:

- Check if there is a descriptor with the same name (attribute name) defined in any class in the MRO chain (method resolution order).
- Then look into the instance's namespace.
- Then look into the class namespace.
- Then into each base's namespace, and so on.
- Finally, if not found, the default implementation calls the fallback __getattr__() method of the instance, and raises an AttributeError exception as the default behavior.

This is the C-API documentation of the function behind object.__getattribute__:

.. c:function:: PyObject* PyObject_GenericGetAttr(PyObject *o, PyObject *name)

Generic attribute getter function that is meant to be put into a type object's tp_getattro slot. It looks for a descriptor in the dictionary of classes in the object's MRO as well as an attribute in the object's :attr:`~object.__dict__` (if present). As outlined in :ref:`descriptors`, data descriptors take preference over instance attributes, while non-data descriptors don't. Otherwise, an :exc:`AttributeError` is raised.

In reading through Beazley & Jones' Python Cookbook (PCB), I have stumbled on an explicit and practical use-case for __getattr__ that helps answer the "when" part of the OP's question. From the book: "__getattr__() method is kind of like a catch-all for attribute lookup.
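The descriptor-first lookup order described above can be demonstrated in a few lines (the class names are illustrative):

```python
# A data descriptor (defines both __get__ and __set__) on the class wins over
# the instance __dict__; a plain class attribute loses to the instance __dict__.
class DataDescriptor:
    def __get__(self, obj, objtype=None):
        return "from data descriptor"

    def __set__(self, obj, value):
        pass  # having __set__ is what makes this a *data* descriptor

class C:
    d = DataDescriptor()
    plain = "from class"

c = C()
c.__dict__['d'] = "from instance"      # shadowing fails for data descriptors
c.__dict__['plain'] = "from instance"  # shadowing works for plain attributes

print(c.d)      # from data descriptor
print(c.plain)  # from instance
```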
It's a method that gets called if code tries to access an attribute that doesn't exist." We know this from the above answers, but in PCB recipe 8.15, this functionality is used to implement the delegation design pattern. If object A has an attribute, object B, that implements many methods that object A wants to delegate to, then rather than redefining all of object B's methods in object A just to call object B's methods, define a __getattr__() method as follows:

def __getattr__(self, name):
    return getattr(self._b, name)

where _b is the name of object A's attribute that is an object B. When a method defined on object B is called on object A, the __getattr__ method will be invoked at the end of the lookup chain. This makes the code cleaner as well, since you do not have a list of methods defined just for delegating to another object.
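A fuller sketch of that delegation pattern (the classes A and B here are illustrative, not taken verbatim from the book):

```python
class B:
    def spam(self):
        return "B.spam"

    def grok(self):
        return "B.grok"

class A:
    def __init__(self):
        self._b = B()       # the object we delegate to

    def foo(self):
        return "A.foo"      # A's own methods take precedence

    def __getattr__(self, name):
        # Called only when normal lookup on A fails,
        # so A's own attributes are never shadowed.
        return getattr(self._b, name)

a = A()
print(a.foo())   # A.foo  -- found on A directly
print(a.spam())  # B.spam -- delegated to the inner B instance
```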
FBLPromises fails to load building macos flutter app dyld[71212]: Library not loaded: @rpath/FBLPromises.framework/Versions/A/FBLPromises Referenced from: /Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/MacOS/Dashboard Reason: tried: '/usr/lib/swift/FBLPromises.framework/Versions/A/FBLPromises' (no such file), '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/MacOS/../Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (code signature in <4EFC1C03-34BA-302C-8153-DF1B0507C339> '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' not valid for use in process: mapped file has no Team ID and is not a platform binary (signed with custom identity or adhoc?)), '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/MacOS/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (no such file), '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/macosx/FBLPromises.framework/Versions/A/FBLPromises' (no such file), '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/MacOS/../Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (code signature in <4EFC1C03-34BA-302C-8153-DF1B0507C339> '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' not valid for use in process: mapped file has no Team ID and is not a platform binary (signed with custom identity or adhoc?)), '/usr/lib/swift/FBLPromises.framework/Versions/A/FBLPromises' (no such file), '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/MacOS/../Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (code signature in 
<4EFC1C03-34BA-302C-8153-DF1B0507C339> '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' not valid for use in process: mapped file has no Team ID and is not a platform binary (signed with custom identity or adhoc?)), '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/MacOS/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (no such file), '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/macosx/FBLPromises.framework/Versions/A/FBLPromises' (no such file), '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/MacOS/../Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (code signature in <4EFC1C03-34BA-302C-8153-DF1B0507C339> '/Users/steve/Documents/GitHub/dfc/dashboard/build/macos/Build/Products/Debug/Dashboard.app/Contents/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' not valid for use in process: mapped file has no Team ID and is not a platform binary (signed with custom identity or adhoc?)), '/Library/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (no such file), '/System/Library/Frameworks/FBLPromises.framework/Versions/A/FBLPromises' (no such file) I think I fixed it. It had to do with signing. I had to set the right organization and I set sign for development. not sure if that's 100% correct, but it's running now.
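A way to inspect the framework's signature (and, if needed, re-sign it) from the terminal — a macOS-only sketch; the app path comes from the log above and the identity string is a placeholder you would replace with your own:

```shell
# Path of the built app bundle (from the dyld log above).
APP=build/macos/Build/Products/Debug/Dashboard.app

# 1. Inspect the embedded framework's signature to see the missing Team ID:
codesign -dv --verbose=4 "$APP/Contents/Frameworks/FBLPromises.framework" 2>&1 | head -n 5 || true

# 2. Re-sign the framework with a development identity (uncomment to run;
#    "Apple Development: Your Name (TEAMID)" is a placeholder):
# codesign --force --deep --sign "Apple Development: Your Name (TEAMID)" \
#   "$APP/Contents/Frameworks/FBLPromises.framework"
```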
fix json metadata parsing for our specific examples hi marek. we installed the record3d app, thank you for your work, it's awesome and works very nicely on our hardware, very much worth the money :) when trying to preview the videos using the code in this repository, we got a json parsing error though: Uncaught SyntaxError: JSON.parse: unexpected non-whitespace character after JSON data at line 1 column 107 of the JSON data something goes wrong with the line of parsing code i changed, seems there is no newline after the json (anymore?) making the meta string include binary data in the json parse step. i am not sure if my "fix" will work for all examples, it does for all videos we tested it with, regardless if they were taken with the lidar functionality, or taken without, and if i understand the video data + metadata of the intrinsicMatrix correctly it should work in all cases. i also have some questions and feature suggestions for record3d, which repository do you want me to file those as issues in? Hi Jascha, thank you for spotting this! I have just tried to replay an exported RGBD Video via http://record3d.xyz and I was unable to reproduce the bug you experienced. May I ask which Record3D version and iOS/iPadOS version are you using? I have tried exporting into an RGBD mp4 video on both iOS and iPadOS 14.5.1 without any problems related to this demo. My guess is that this change might be due to a different iOS version. My ugly workaround for parsing the metadata out of the mp4 file is definitely not clean in any way, so I will have to find a better way to do this in the future. In the meantime (while continuing with my ugly parsing workaround), wouldn't it be better to search for the rightmost } instead of the first ]}? Although your fix works for current videos, I might extend the metadata JSON in the future and ]} could pop up in more places (whereas the JSON should always end with a trailing }). 
I am glad you created this pull request so that other people with similar problem can see this!

hi again :) ios version: 14.4.2, ipad pro (11 inch, 2nd generation). the record 3d app should be up to date, installed a few days ago, i could not find a version number for it, adding the version number in settings -> about for future references might be a good addition for the next release :) (not an ios user, and have no idea how to check the version of an installed app, but we installed a few days ago so we should have 1.5.6 installed) we are updating ios now and i will amend this comment once we know if the newer version works.

if i am thinking correctly, the problem with finding the last } would be that js would parse the full meta string twice, which includes all of the video data after the intrinsicMatrix json. see the code block at the end of this comment for a way to avoid parsing the video data at all.

1. loop, does not work

my first approach using

function findFirstNonAsciiIndex(str) {
  for (let i = 0; i < str.length; i++) {
    if (str.charCodeAt(i) > 255) {
      return i
    }
  }
  return str.length - 1
}

does not find a match and loops over the full binary data, freezing the browser in the process, so that seems to not work. (according to https://www.ascii-code.com/ 255 is the last valid ascii character code)

2. breaking change and extra data in json

maybe adding a breaking change, falling back to ]} if the condition does not trigger, could work:

{ "intrinsicMatrix": [], "eof": 1}

then using string.search would be more robust, performant and take less memory, albeit the solution being inelegant too and using regex, not sure if i like that too much. the "eof" part could be added once you want to add other data to the metadata, with the fallback triggering all the time until that change lands in record3d, making all old videos work forever.
let meta1 = '{"intrinsicMatrix": [],"eof":1}'
let meta2 = '{"intrinsicMatrix": [], "eof": 1}'

const index1 = meta1.search(/,\s?"eof/)
const index2 = meta2.search(/,\s?"eof/)

const result1 = meta1.substr(0, index1) + '}'
const result2 = meta2.substr(0, index2) + '}'

console.log(result1 === result2)

3. performance nitpick

another small nitpick, would be relevant for huge videos, but compared to the rest of the script this will still only be a minor speedup:

console.time('lastindexof')
let meta = fileContents.substr(fileContents.lastIndexOf('{"intrinsic'));
console.timeEnd('lastindexof')

console.time('indexof')
let meta2 = fileContents.substr(fileContents.indexOf('{"intrinsic'));
console.timeEnd('indexof')

console.log(meta === meta2) // is true, we just have to assume that intrinsicMatrix only appears once
// lastindexof: 53ms
// indexof: 1ms

once we have a plan how to move forward i can append the pull request with the changes you decided upon :)

Thank you for the thorough overview. The app version should be visible on Record3D's App Store page, but I can confirm that 1.5.6 is currently indeed the latest version. I am going to display the version number directly inside the app (in the Settings tab) beginning with the next update. I assume that updating to the latest iOS version did not fix the issue for you, since you did not update your comment. If possible, would you consider sending me sample .mp4 and .r3d files at<EMAIL_ADDRESS>please? I would ideally want to figure out what is causing this bug instead of just fixing the consequences. Thank you :)!

hi :) we just got back from our excursion today and i got to test another bunch of videos. so far it seems that after the update ios writes the newline into the video correctly; i still get an error with the old video on record3d.xyz, the new videos load flawlessly.

Hi, it seems like the bug is indeed related to a change in iOS.
Can you please try to re-export an old 3D Video (which you shot on iOS 14.4.2) into .mp4 on the latest version of iOS and test if the new .mp4 is playable? (It is needed to delete the already exported old .mp4 to be able to export into .mp4 again.) Could you please also share either an example .mp4 video that shows this bug, or just the part of the mp4 file that begins with {"intrinsic till the end of the file? I would like to test different approaches to parsing the JSON without bothering you with my requests to try them out :). Thank you!
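For reference, a brace-balancing variant of the parsing discussed in this thread could look like the following. This is only a sketch (extractMeta and the sample string are hypothetical, not Record3D code): it walks forward from the start of the embedded JSON and stops when the braces balance, so neither trailing binary data nor future `]}` occurrences inside the metadata would confuse it:

```javascript
function extractMeta(fileContents) {
  const start = fileContents.indexOf('{"intrinsic');
  if (start === -1) return null;
  let depth = 0, inString = false, escaped = false;
  for (let i = start; i < fileContents.length; i++) {
    const ch = fileContents[i];
    if (escaped) { escaped = false; continue; }
    if (ch === '\\') { escaped = true; continue; }
    if (ch === '"') inString = !inString;
    if (inString) continue;          // braces inside JSON strings don't count
    if (ch === '{') depth++;
    if (ch === '}') {
      depth--;
      if (depth === 0) return fileContents.slice(start, i + 1);
    }
  }
  return null;
}

// Binary garbage after the JSON no longer breaks JSON.parse:
const sample = 'mp4 bytes...{"intrinsicMatrix": [1, 0, 0], "eof": 1}\u0000\u00ff binary';
console.log(JSON.parse(extractMeta(sample)).eof); // 1
```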
Xenonauts attempts to remain faithful to the original 1994 X-COM, while updating and altering the game to enhance the experience and rework disliked or broken features from the original. This page will detail the main differences between the two games, from both the Geoscape and Battlescape perspectives. (This page is a work in progress)

- Xenonauts takes place near the end of 1979, and uses period weaponry and equipment, ranging from NATO to Soviet equipment. X-COM takes place in the (at the time) near future, using this to have futuristic equipment and weapons.
- Unlike in X-COM, Xenonaut soldiers are unable to use psionic powers.
- Xenonauts replaces the 16 funding nations with 10 funding blocs. While they essentially fulfill the same role, the funding blocs occupy entire regions and continents of the world, instead of individual nations. This means alien activity in any part of the world will negatively affect a funding region.
- Building a base costs the same amount of money no matter where it is placed, at $500,000, unlike X-COM where different areas of the world have different base construction costs.
- Inventory management is streamlined: instead of having to purchase or manufacture ballistic weapons, ammunition, grenades and miscellaneous equipment, it is provided in unlimited quantities. The player still has to manufacture more advanced weapons and armour, however.
- In addition, there is a loadout system allowing the player to save preferred loadouts, not only to distinguish a soldier's role on the battlescape, but to let the player quickly set up equipment for soldiers instead of equipping them manually.
- The player has greater control over hiring new soldiers: instead of new soldiers arriving with random stats, the player is presented with a list of potential recruits and can examine them to select soldiers with preferred, and high, stats.
- Weapons research and upgrading are handled differently: in X-COM you could research laser weapons relatively early on, whereas in Xenonauts lasers come in later, in the late early game.
- In addition, unlike X-COM, you cannot research and use alien plasma weapons. Instead you have to research and construct your own weapons. Also, new weapon types (Laser, Plasma or MAG) only see an increase in damage, unlike in X-COM where different weapon types had unique differences to them.
- UFOs spawn in waves and in groups, meaning you'll encounter multiple UFOs during a wave and not a single UFO.
- Air combat in Xenonauts is massively overhauled in comparison to X-COM. Unlike X-COM, air combat takes place in real time, and the player directly controls their interceptors in an RTS mini-game.
- The maximum Xenonauts attack force is limited to 3 craft (unlike the original's 4), while an alien squadron can consist of 3 ships (instead of 1).
- The auto-resolve option is not present in the original X-COM.
- In X-COM it is possible to fully destroy an enemy ship if an extremely heavy weapon is used against the smallest vessels.
- Xenonauts pilots should always stay alert and be aware that they can be attacked by enemy interceptors (unlike X-COM). That includes troop carriers.
- Despite the circumstances mentioned above, Xenonauts transport ships cannot be armed.
- Xenonauts features a fully fleshed-out cover system, where Xenonaut soldiers and aliens can take cover, fire over it, and have the cover absorb ballistic and explosive damage.
- Xenonauts cannot use infantry weapons to break through alien exterior walls.
- In X-COM a minimap is present, allowing the commander to better plan his/her actions.
- X-COM troopers cannot perform precision shots or full-auto bursts (more than 3 bullets). They are capable of throwing items, though.
- Xenonauts can jump through small terrain obstacles (e.g. fences).
- All civilians in the X-COM series are unarmed and present in terror missions only.
I make music! I can make anything electronic, from Dubstep to DnB to Psytrance to House to Electronica to Electro Jazz to Electro Swing to Trance to Future Bass to Halftime to Ambient to Chill to... okay, I can do a lot. I also have experience making games, and I am actually making the soundtrack for a game I'm programming myself currently, so I definitely have experience with video game soundtracks specifically. I probably will not be too useful as a programmer since I only know Python, but I can definitely get your music done, no matter what style, no matter how light/heavy. Here's my latest (unfinished) song. It definitely isn't suitable for a video game, but it's to show you the abilities that I have. I'll repeat that it is unfinished. https://drive.google.com/file/d/12f7c0Gxb8_aIo8U3APyMQqy6klSIkRvz/view?usp=shari... I only compose with free software, but I do all of it myself, meaning I make all my sounds, mix it myself, etc. So, if you want a sound, I can make it. So, I can do your music and sound effects, and I can put whatever sounds you want in the song as well. =) I work for free. You can contact me at firstname.lastname@example.org. Your Discord link is broken, by the way.

Hey there! Well, I suppose I won't be of much help but I am very much interested to be involved as an intern type of thing(?!?!). Let's say I'm young, short on time, and just started learning how to code on my own by picking up GameMaker. But I'm very much interested in learning more about how things are done. Discord link is dead btw :p

Hi! I am currently working with the Unity platform developing small-but-funny games (i.e. typical arcade games such as plane fighters), but I have also worked on other, bigger projects (i.e. engine development). Of course, I do this for fun and as a hobby; I cannot work all day (I am working full-time), but I like to spend my spare time on this and I tend to work a lot on my hobbies, haha. I hope you can tell me more about what ideas you have in mind.
I'm a writer, though new to writing for gaming. It's something I've wanted to do for ages but have yet to do so. I'm an author, written screenplays, teleplays, plays and more. I'd be glad to join a team as a writer. And I'm good with comedy, in terms of these being funny games. PS -- the invite for Discord link has expired.
DINOv2 is a computer vision model from Meta AI that claims to finally provide a foundational model in computer vision, closing some of the gap with natural language processing, where foundational models have been common for a while now. In this post, we'll explain what it means to be a foundational model in computer vision and why DINOv2 can count as one. DINOv2 is a huge model (relative to computer vision) with one billion parameters, which brings hard challenges both for training the model and for using it. We'll review the challenges and what the researchers at Meta AI have done to overcome them using self-supervision and distillation. Don't worry if you are not familiar with these terms; we'll explain them when we get there. Let's start by first understanding what DINOv2 provides that makes it a foundational model in computer vision. If you prefer a video format, then a lot of what we cover here is also covered in this video:

What is a Foundational Model?

Before foundational models, one would need to find or create a dataset, choose some architecture for a model, and train the model on that dataset. The model you need may be complex and may require long or hard training. Here comes DINOv2, a pretrained huge vision transformer (ViT) model, a known architecture in the field of computer vision, which suggests that you may not need a robust, complex, dedicated model. Say for example that we have a cat image (the one on the left in the picture below). We can provide this image as input to DINOv2. DINOv2 will yield a vector of numbers, often called embeddings or visual features. These embeddings contain a deep understanding of the input cat image, and once we have them, we can use them in smaller, simpler models that handle specific tasks. For example, we can have one model that handles semantic segmentation, which means categorizing related parts in the image, and one model to estimate the depth of the objects in the picture.
The output examples here are taken from the Meta AI demo for using DINOv2. Another very important attribute of DINOv2 here is that while training these task-specific models, DINOv2 can stay frozen; in other words, no finetuning is needed, which further simplifies the training of the simpler models and their usage, since DINOv2 can be executed on an image once and the output can be used by multiple models. If it were finetuned, we would need to run the finetuned DINOv2 version for every task-specific model we have. Also, finetuning such a huge model is not trivial and requires proper hardware that is not accessible to everyone.

How to use DINOv2?

We do not dive deep into code here, but if you want to use DINOv2 you can simply load it using PyTorch, as in the following code taken from the DINOv2 GitHub page. We see that there are a few possible versions of different model sizes to load, so you can decide which version to use based on your needs and resources. The accuracy does not drop significantly when using a smaller version, which is cool, especially when using one of the middle-size versions.

dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
dinov2_vitb14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
dinov2_vitl14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
dinov2_vitg14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14')

This brings us to how they generated the different model versions, and the answer is distillation. Distillation means transferring knowledge from a large trained model into a new, smaller model. An interesting note is that while doing so with DINOv2, the researchers got better results than when training smaller models directly. The way it is done is to take the large pretrained DINOv2 model and use it to teach new, smaller models. So, for example, given a cat image (see picture below), DINOv2 yields some embeddings.
Then, we have a smaller model which we also feed with the same cat image, and it also yields embeddings. The distillation process tries to minimize the difference between the embeddings coming from the new model and those coming from DINOv2. Remember, we keep DINOv2 frozen here, so only the smaller model on the right side changes. This method is often called teacher-student distillation, since the left side here acts as a teacher while the right one acts as a student. In practice, to get better results from the distillation process, we do not use just one student but rather multiple ones, each of which simultaneously gets the same inputs and yields results. During training, an average of all of the student models is created, which ends up being the final distilled model. With DINOv2, the model size was increased dramatically from the previous DINO version, which raised the need for more training data. This brings us to self-supervised learning with large curated data.

Self Supervised Learning with Large Curated Data

First, what is self-supervised learning? In short, it means our training data has no labels and the model learns solely from the images. The first DINO version also used self-supervised learning techniques. Ok, so without data labelling it should be easier to increase training data size, right? However, previous attempts to increase uncurated data size with self-supervised learning have caused a drop in quality. With DINOv2, the researchers built an automated pipeline to create a curated dataset, which helped them reach state-of-the-art results compared to other self-supervised learning models. They started with 25 sources of data that combined had 1.2 billion images (!) and extracted 142 million images out of it. So, this pipeline has multiple filtering steps. For example, in the original uncurated dataset we'll likely find a lot of cat images, and also some other images.
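The embedding-matching idea behind distillation can be sketched in a few lines of plain Python. This is an illustrative toy, not the actual DINOv2 distillation objective: the frozen teacher's embedding is fixed, and the "student" (here, just a raw vector) is nudged toward it by gradient descent on the mean squared error:

```python
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

teacher_emb = [0.8, -0.2, 0.5]   # frozen teacher output for an image
student_emb = [0.0, 0.0, 0.0]    # student output, to be trained

lr = 0.5
for _ in range(50):
    # gradient of MSE w.r.t. the student embedding: 2 * (s - t) / n
    grad = [2 * (s - t) / len(student_emb) for s, t in zip(student_emb, teacher_emb)]
    student_emb = [s - lr * g for s, g in zip(student_emb, grad)]

# After training, the student embedding has converged to the teacher's.
print(round(mse(teacher_emb, student_emb), 6))  # ~0.0
```

In the real setup the student is a smaller network producing the embedding from the image, and the teacher's weights never receive gradients.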
Training on such data as-is may lead to a model that is very good at understanding cats but may not generalize well to other domains. So one of the steps in this pipeline was to use clustering, which basically means grouping images based on similarities. Then, they could sample a similar number of images from each group and were able to create a smaller but more diverse dataset.

Better Pixel Level Understanding

Another benefit of using self-supervised learning is better pixel-level understanding. A common approach in computer vision nowadays is using text-guided pretraining. For example, the following cat image would come with a description text that may be similar to "a white kitten in a field of grass". Both the image and the text are provided as input to such models. However, the description text may miss information, such as that the cat is walking, or the small white flowers, which may limit the learning capability. With DINOv2 and self-supervised learning, the model has an amazing capability to learn pixel-level information. As an example, in the picture below we can see multiple horse images, and when visualizing the result of DINOv2 on them, we can see that horses in different pictures get similar colors for the same body parts, even if there are multiple horses in a picture, and even if they are super tiny. Very impressive.

- Paper – https://arxiv.org/abs/2304.07193
- Code – https://github.com/facebookresearch/dinov2
- Video – https://youtu.be/csEgtSh7jV4
- Demo – https://dinov2.metademolab.com/

A more recent computer vision progress by Meta AI is the human-like I-JEPA model, which we covered here.
Comments on: Robust estimation of multivariate location and scatter in the presence of cellwise and casewise contamination The authors are to be commended for bringing the critical problem of cellwise outliers to the attention of a broader community and providing some important new estimation methods and related theory. High dimensional data analysis has become a critical area for research in statistical theory and practice and, in such situations, removing (or severely downweighting) an entire observation for a single cellwise outlier can eliminate most of the data. In practice, it can be difficult to verify the assumptions needed for the theorems that are proved or used in this paper. For example, how can I know that the fraction of outliers tends to zero as the number of observations tends to infinity? Even after transformation, the standard normal may not be appropriate and the tails may not be heavier than the actual distribution. And we have to live with the fact that modern robust algorithms suffer from computational uncertainty (local optima) as well as the usual statistical uncertainty associated with a sample. I tell my students to start an analysis with a robust procedure as part of data exploration to see if any downweighted observations or observations with large residuals should be examined for errors, etc. If things look reasonable, then I often say they should try a classical procedure and compare the robust and classical estimates. If these two estimates are similar, then using a classical procedure might be appropriate. Therefore, a robust procedure is being used to diagnose data problems as well as a possible final estimator. I often first look for a diagnostic approach to data analysis (data diagnostics). With potential casewise outliers, we can compare our results with and without the case (the whole case is NA) as in leave-one-case-out covariance and regression diagnostics. 
This is a rather brutal approach; alternatively, each element of a case could be replaced by the median of the variable associated with each cell in that case and then the analysis redone. This, of course, increases the computational cost. These methods are, naturally, affected by swamping and masking. The approach taken in this paper is to treat a potential cellwise outlier as NA and proceed from there. We could use one of the number of missing data algorithms to fill in the NAs and, therefore, reduce the need for specialized algorithms to deal with incomplete multivariate data. However, the authors note that missing data fill-in does not address casewise outliers and is not consistent. We could address casewise diagnostics with the leave-one-case-out diagnostics discussed above, and I am not sure just how much I should worry about asymptotic consistency for diagnostic purposes. An outlier, leverage point, or influential cell is not missing, but it is interesting. To avoid making it NA, I often replace it with the median of all the observations for that variable, as mentioned earlier in the casewise situation. Of course, I do not know which cells are outliers, leverage points, or influential points, so I would have to do this for every cell in a variable and over all variables (n x p). With each one of the n x p replacements, I recompute the covariance matrix estimate and then compare the covariance matrix with the cell unchanged to the covariance matrix with the cell replaced by the variable median, using whatever measure of distance is appropriate (e.g., Kullback-Leibler or LRT, condition number, etc.). I then look at the distribution of these differences to get a rough idea of cells that are having an unusual impact on the estimated covariance matrix and examine them for errors, etc. This method is not optimal in any particular sense and requires some computation, but it is easy to explain to students and consulting clients.
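As a rough sketch of that n x p replacement diagnostic (illustrative code on simulated data, using the Frobenius norm of the covariance difference as the distance measure; any of the distances mentioned above could be substituted):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
X[5, 1] = 10.0                      # plant one cellwise outlier

base_cov = np.cov(X, rowvar=False)
medians = np.median(X, axis=0)

# For each cell, replace it with its variable's median, recompute the
# covariance matrix, and record how much the estimate changes.
impact = np.zeros(X.shape)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        Xr = X.copy()
        Xr[i, j] = medians[j]
        impact[i, j] = np.linalg.norm(np.cov(Xr, rowvar=False) - base_cov)

# The planted outlier should have by far the largest impact.
print(np.unravel_index(impact.argmax(), impact.shape))
```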
Over the years, I have found communication to be as important as many of the theorems of asymptotic statistics. There are now a number of ways to find lower rank matrix approximations for sparse data. The NA methods suggested in the paper would, in some cases, lead to at least moderately sparse matrices and bringing together sparsity ideas and robustness might provide another approach to the problems considered here. This paper is an excellent start and opens the door to some exciting new research.
The Simplest Guide to Support Vector Machines. Support Vector Machines (SVMs) are one of the most frequently used machine learning models, and definitely essential for any ML developer's toolkit. The goal of this article isn't just to teach you what SVMs are, but also how to build one with Python. What is an SVM? An SVM is a supervised learning model used mainly for classification problems. We are going to use this model to classify 5 different species of flowers. SVMs incorporate similar ideas to linear regression, so if you aren't already familiar with that, I recommend checking out the linked article. The concept of an SVM is that it takes in a series of classified data points and tries to find a line that separates the data points in the best way possible. For example, let's say this is the data you are given. Now the SVM model creates a line that separates the data; it would look something like this. This is obvious to us, but for a computer, this is phenomenal work. The SVM model has to create a line that not only fits the classified data but is also likely to hold up on new data. For example, the line could instead look like this. Is this accurate? Well, yes, but the first line fits the general data better, so that if there are new points it would be more likely to classify them correctly. That's what makes SVMs so powerful. How does an SVM work? To separate the two classes of data points, there are many possible hyperplanes that could be chosen. Our objective is to find a plane that has the maximum margin, i.e., the maximum distance between data points of both classes. Maximizing the margin distance provides some reinforcement so that future data points can be classified with more confidence. The green line is in the dead center between the closest pink and closest blue data points. That leads to the best possible margin because it gives the most leeway for new data points to be classified correctly. The proper term for the green line is a hyperplane.
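To make the margin idea concrete, here is a tiny sketch (toy data of my own, not from the article) that fits a linear SVM with scikit-learn and reads off the separating hyperplane:

```python
# Toy illustration: fit a linear SVM on two separable clusters and
# inspect the resulting hyperplane.
import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)),   # class 0 cluster
               rng.normal(+2.0, 0.5, (20, 2))])  # class 1 cluster
y = np.array([0] * 20 + [1] * 20)

clf = svm.SVC(kernel="linear")
clf.fit(X, y)

# The hyperplane is w.x + b = 0; only the support vectors determine it,
# which is why maximizing the margin makes the fit robust to new points.
w, b = clf.coef_[0], clf.intercept_[0]
print("w:", w, "b:", b, "support vectors per class:", clf.n_support_)
```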
The dimension of the hyperplane depends upon the number of features. If the number of input features is 2, then the hyperplane is just a line. If the number of input features is 3, then the hyperplane becomes a two-dimensional plane. Essentially, if the data is above or below the hyperplane, it will be classified as such. The entire coordinate plane is then squashed down to just 0 to 1 using the sigmoid function, which makes the computations easier. Data that isn't linearly separable However, what is important to note is an error that could occur within an SVM problem. What if the data is not linearly separable? In the previous chart, there was a clear indication of possible lines for separation. Now take a look at this chart. In this chart, there is no possible straight line that separates the data; you would need a circle-like structure, and that isn't possible with a plain linear SVM. So how do we combat this? We add a dimension (a new feature). In the graph in Fig 5, there are x and y coordinates, so let's add a z coordinate defined as Z = X² + Y². This turns Fig 5 into a 3D graph, and then a 2D hyperplane can cut the data. This is called the kernel method. Now that we have a solid understanding of SVMs, let's move on to the implementation. As stated before, we are going to be classifying different types of flowers by looking at images of flowers. In order to do this project, you need to understand how computers actually view images. Unlike computers, humans have evolved over millions of years to be able to view an image without thinking much about it. Computers, on the other hand, see an array of values that represent colored pixels on a screen. With that said, we can actually start working on the code. Steps to Complete a Machine Learning Project To implement an SVM in Python we just need to follow the general process of machine learning.
If you have never done an ML project before, I recommend reading this article; it goes over the essential steps in making a machine learning project. In summary, the steps required are as follows:
- Identify the Problem
- Data Collection
- Data Cleaning
- Building a Model

We have already identified the problem we are going to solve (classifying images of flowers), so let's move on to the next step. We will be using the following dataset from Kaggle. This folder, called flowers, has five subfolders that are categorized by the classification of flowers (these will be the labels in our code). Sometimes in other datasets, the data needs to be cleaned to remove nulls, outlying data, and dummy rows. We don't need to do any of that because we are dealing with images that have already been properly classified. We have to import six libraries just to prep the data in the code; they are as follows.
- Pandas: a tool for handling, analyzing, and processing data.
- Os: a tool for looking into the computer's operating system.
- Skimage.io: a tool to read and process images.
- Skimage.transform: a tool to resize the images for our benefit.
- Numpy: a tool for handling the mathematics of data.
- Matplotlib: a tool for plotting visuals (images and data).

Setting up inputs and outputs We now set up variables in our code for both inputs and outputs. We do this by listing our labels as a list, creating two empty arrays, and defining the path to our data. Then we loop through each of the categories and pull all the images from the path. From there we loop through all the images within the selected path, tailor the size of each image, and add it to the previously empty arrays. After the nested loop is completed, the whole thing can be joined into simple x and y variables. Building a Model Now we actually have to construct our model; for this part we will use sklearn, a collection of algorithms and models that makes building models much easier.
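Pulling the steps above together, a minimal end-to-end sketch might look like the following. The folder layout matches the Kaggle flowers dataset described above, but the image size, kernel, and train/test split are my own choices, not necessarily what the article's source code uses:

```python
import os
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def load_flower_data(data_dir, categories, size=(64, 64)):
    """One subfolder per category; resize each image and flatten it to a row."""
    images, labels = [], []
    for label, category in enumerate(categories):
        folder = os.path.join(data_dir, category)
        for filename in sorted(os.listdir(folder)):
            img = resize(imread(os.path.join(folder, filename)), size + (3,))
            images.append(img.flatten())     # SVMs expect flat feature vectors
            labels.append(label)
    return np.array(images), np.array(labels)

def train_and_score(x, y):
    """Split, fit an SVC, and report accuracy on the held-out test data."""
    x_train, x_test, y_train, y_test = train_test_split(
        x, y, test_size=0.2, random_state=0)
    model = svm.SVC(kernel="rbf")            # RBF kernel handles non-linear data
    model.fit(x_train, y_train)
    return model, accuracy_score(y_test, model.predict(x_test))

# x, y = load_flower_data("flowers",
#     ["daisy", "dandelion", "rose", "sunflower", "tulip"])
# model, accuracy = train_and_score(x, y)
```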
We will use this library to build our SVM. We now have to train our model, but first we have to split the data into training and testing sets. If you didn't read the ML article: training is basically fitting the model so that it comes up with a series of calculations and algorithms specific to your model and data; in this case, it is coming up with the optimal hyperplane. Now it's time to fit the model to our data, and then to score the accuracy of our model by taking the testing data and making predictions on it. You can't test and train on the same data, because the result would always be 100% accurate: the model has been fitted to that specific data. The testing set instead measures how the model performs on data it has never seen. The accuracy is calculated by counting how many classifications were correct and incorrect and dividing by the total. You've done the work, you've learned the concepts, you've collected the data, built a model, trained it, tested it, and now it's time to use it. The following code will classify any image you give it; the whole thing is pretty self-explanatory since you've made it this far! In conclusion, we have broken down the theory of SVMs and built a flower classification model. Source Code → https://github.com/RyanRana/SVM-Flower-Classification Good luck with all your coding endeavors!

References:
"1.4. Support Vector Machines." Scikit-learn, scikit-learn.org/stable/modules/svm.html.
"1.4. Support Vector Machines: Classification." Scikit-learn, scikit-learn.org/stable/modules/svm.html#classification.
MLMath.io. "Math behind Support Vector Machine (SVM)." Medium, 16 Feb. 2019, ankitnitjsr13.medium.com/math-behind-support-vector-machine-svm-5e7376d0ee4d.
Shanmukh, Vegi. "Image Classification Using Machine Learning-Support Vector Machine (SVM)." Medium, Analytics Vidhya, 5 Mar. 2021, medium.com/analytics-vidhya/image-classification-using-machine-learning-support-vector-machine-svm-dc7a0ec92e01.
"SVM: Support Vector Machine Algorithm in Machine Learning." Analytics Vidhya, 23 July 2021, www.analyticsvidhya.com/blog/2017/09/understaing-support-vector-machine-example-code/.
"Support Vector Machines: A Simple Explanation." KDnuggets, www.kdnuggets.com/2016/07/support-vector-machines-simple-explanation.html.
SUMO Development: 2012.3 and 2012.4 Update

I procrastinated and forgot to post an update for 2012.3, and we are done with 2012.4 too now.

2012.3:
- Closed Stories: 26
- Closed Points: 37 (3 aren't used in the velocity calculation as they were fixed by James and Kadir - Thanks!)
- Developer Days: 28
- Velocity: 1.21 pts/day

The 2012.3 sprint went very well. We accomplished most of the goals we set out to do. We rolled out Elastic Search to 50% of our users and had it going for several days. We fixed some of the blocker bugs and came up with a plan for reindexing without downtime. Everything was great until we decided to add some timers to the search view in order to compare the times of the Elastic Search and Sphinx code paths. As soon as we saw some data, we decided to shut down Elastic Search: the ES path was taking about 4x more time than the Sphinx path. Yikes! We got on that right away and started looking for improvements. On the KPI Dashboard side, we landed 4 new charts as well as some other enhancements. The new charts show metrics for:
- Search click-through rate
- Number of active contributors to the English KB
- Number of active contributors to the non-English KB
- Number of active forum contributors

We did miss the goal of adding a chart for active Army of Awesome contributors, as it turned out to be more complicated than we initially thought. So that slipped to 2012.4.

2012.4:
- Closed Stories: 20
- Closed Points: 24
- Developer Days: 19
- Velocity: 1.26 pts/day

The 2012.4 sprint was sad. It was the first sprint without ErikRose :-(. We initially planned to have TimW help us part time, but he ended up getting too busy with his other projects. We did miss some of our initial goals, but we did as well as we could. The good news is that we improved the search performance with ES a bunch. It still isn't on par with Sphinx, but it is good enough that we went back to using it for 50% of the users.
We have plans to make it faster, but for now it looks like the click-through rates on results are already higher than what we get with Sphinx. That makes us very happy :-D. We added two new KPI dashboard charts: daily unique visitors and active Army of Awesome contributors. We also landed new themes for the new Aurora community discussion forums. This week we started working on the 2012.5 sprint. Our goals are:
- Elastic Search: refactor the search view to make it easier to do ES-specific changes.
- Elastic Search: improve search view performance (get us closer to Sphinx).
- Hide unanswered questions that are over 3 months old. They don't add any value, so there is no reason to show them to anybody or have them indexed by Google and other search engines.
- Branding and styling updates for Marketplace pages.
- KPI Dashboard: l10n chart.
- KPI Dashboard: combine the solved and responded charts.

We are really hoping to be ready to start dialing up the Elastic Search flag to 100% by the time we are done with this sprint.
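As an aside, the velocity figures quoted in these sprint summaries are just closed points per developer day, with any excluded points subtracted first (a trivial sketch; the function is mine, not SUMO code):

```python
def velocity(closed_points, excluded_points, developer_days):
    """Sprint velocity in points per developer day, rounded as in the post."""
    return round((closed_points - excluded_points) / developer_days, 2)

print(velocity(37, 3, 28))   # 2012.3 -> 1.21
print(velocity(24, 0, 19))   # 2012.4 -> 1.26
```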
How to deal with a fraternal twins "competition" issue?

We have 4-year-old fraternal twins and the following issue: ever since they were born, one was the "bigger" one; simply put, he's (currently) better at almost everything than his brother (be it reading, writing, sports, etc.). I don't think the difference itself is really an issue, life is long after all; however, my concern is that the other one is starting to have issues with it. He becomes mad every time his brother does something better or beats him at bicycle races or whatever (he wants to enter the race himself but abandons it as soon as he sees his brother being faster). We try to encourage him and congratulate him, but it's hard not to congratulate his brother as well (like: "But I can count to 100..."). It's also sometimes hard to get his attention, as he often likes to play "dog" or "cat" instead of focusing on what you just happen to tell him... Any idea how to deal with all that?

It is in our nature to be jealous. I noticed similar problems with my kiddos. However, I then noticed how I "congratulated" them and such, and realized I was creating the jealousy. I've since switched to a "right on, the more you practice the better you both will get" kind of accolade for almost all situations. Even when it is just one kid doing something while the other is on the other side of the house, I will tell him "good job, keep practicing and you will get even better!" It took a fair amount of that, but we did eventually see their behavior change. There will still be little bits here and there, but Wifey and I are pretty happy with the results. Also keep in mind that your kiddos are different people. They will respond differently to situations. EDIT: I want to emphasize the word practice. In everything we do, we could consider it practicing. Every day we go to work, we "practice" and get better at our job. Eventually we will get good enough for a raise. When we exercise, we "practice" the exercises, and we get better over time.
When your kids ride their bikes, they are practicing, and no matter what their skill levels are, they will each get better at their own rate.

Exactly this. Praise effort, not success. One thing I've also added is that when I see one of my twins winning/succeeding where they previously were frustrated, I will attribute the win to practice and/or persistence. "You've won, good job, see that you can do it if you just keep trying?" This can be hard when one seems to be "just better" at everything, but being consistent with this pays off... eventually.

There is no hard-and-fast rule, as each kid is different, but I try really hard to avoid the words "won", "winning", etc. Those words are bound to cause jealousy in any situation. Heck, I've had a friend's kid doing something with my kid turn around and say "I won!" in a snooty way, and I got pissed for my kid :)

LMAO, my mother actually encouraged my twin brother and me to compete against each other. It started as toddlers with the bathtub and a stopwatch. Who could finish first (no shortcuts)? It turned it into a game. As we've aged, the competition has continued, but always in good humor. My favorite is the hot spice (salsa, Tabasco-style sauces) and who can last the longest. Twins begin to understand that they develop different interests that they can excel at on their own. But for now, while they are young, praise them for their individual achievements and turn the competitions into games where they play off each other. One might be faster on a bike, the other might score points for the best skid, or for wearing his helmet without being reminded. In the end, they'll be fine.
An Open Source RDL engine for rendering reports to WinForms or ASP.NET

The GenericRender class represents a simple box model which can be easily translated to one of the final rendering classes; that simplicity is evidenced by the RenderToText class amounting to very little code. The WinForms viewer works directly with the GenericRender class. When the View Report button is pressed, the ViewReport event is raised, where the report is rendered and put into the ReportViewer control. In this example, the InitializeDataSet event is registered, which may be used if the programmer wants to substitute a custom data table for the one defined in the RDL report. Paginated output, as it sounds, breaks the report up into multiple pages based on the page size specified in the ViewReport method.

For the table headers, first we create the customerHeader cell that represents a cell in the table header, format it for each of the five columns, and add it to the table's TableCells collection. Next, repeat the formatting for each of the remaining columns, changing the Name and Value properties accordingly.

Questions and answers from readers:

- Q: I have a report on the report server; through the code-behind I have to pass in a parameter, render the report without saving it anywhere, and just open it for the user as a PDF. A: If the report is on the report server, probably the easiest way is just to send the user to a URL that will generate the report as needed (see ".net - Render SSRS .RDL to PDF and just open it" on Stack Overflow). That is how SSRS does it, and it is one way to do it without the requirement for full SSRS and its security hoops. Alternatively, since you are using C#, you could pull the data into a grid and export it to PDF using a third-party NuGet package.
- Q: When I render the report I get the error "The report definition is not valid." A: Remember that the RDL file is not the report output but a mere layout description; one reader hit this error after manually renaming the .rdl file, and another saw it occur in ParseAttribute (XmlNode attr).
- Q: In the browser I only get the header of the report and a loading spinner; is there any way to reduce the loading time of an RDL report?
- One commenter notes that for hidden parameters, the Hidden property should be used inside the ReportParameter class instead of Visible.
- One reader improved this example to work with external images and to detect required report parameters; another had to add several more classes for deserialization.
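The "send the user to a URL" answer above refers to SSRS URL access, where the report server renders the report directly in the requested format. A small sketch of building such a link follows (the server name and report path are placeholders; shown in Python for brevity, though the article's own code is C#):

```python
from urllib.parse import quote

def ssrs_render_url(server, report_path, fmt="PDF", **params):
    """Build an SSRS URL-access link that renders report_path in the given format."""
    url = "http://{}/ReportServer?{}&rs:Command=Render&rs:Format={}".format(
        server, quote(report_path, safe="/"), fmt)
    for name, value in params.items():   # report parameters go on the query string
        url += "&{}={}".format(name, quote(str(value)))
    return url

# Redirecting the browser to a link like this opens the rendered PDF:
# ssrs_render_url("myserver", "/Sales/MonthlyReport", fmt="PDF", Region="West")
```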
VIM-like editor bindings

Is your feature request related to a problem? Please describe. VIM is my primary code editor, but I like to rely on DBeaver for writing/editing SQL. The difference in keybindings often takes me out of my flow, as some funny things happen when I start VIMing in DBeaver. Describe the solution you'd like This request is probably pie-in-the-sky, but VIM(ish) key bindings in the script editor would be lovely. Describe alternatives you've considered I can use vim plugins to get SQL completion, auto-complete table names, etc. in vim, but DBeaver's GUI is much better for exploring data, seeing query plans, etc.

thanks for suggestion

Today I already use this with DBeaver. What you need to do is install the Eclipse Marketplace: Help -> Install New Software -> search for "marketplace". Then in the Marketplace menu, search for the Vrapper plugin.

Awesome! I first had to install Eclipse Marketplace by going to "Help" > "Install New Software", then searched for the eclipse repo and the marketplace client. After that I was able to add the Vrapper plugin. Thanks so much!

This doesn't seem to work for me; I get Cannot complete the install because of a conflicting dependency.
Software being installed: Marketplace Client 1.8.2.v20200309-0038 (org.eclipse.epp.mpc.feature.group 1.8.2.v20200309-0038) Software currently installed: DBeaver <IP_ADDRESS>102281937 (org.jkiss.dbeaver.core.product <IP_ADDRESS>102281937) Only one of the following can be installed at once: Equinox Provisioning Discovery UI 1.1.400.v20191213-1911 (org.eclipse.equinox.p2.ui.discovery 1.1.400.v20191213-1911) Equinox Provisioning Discovery UI 1.2.0.v20200916-1234 (org.eclipse.equinox.p2.ui.discovery 1.2.0.v20200916-1234) Cannot satisfy dependency: From: Marketplace Client 1.8.2.v20200309-0038 (org.eclipse.epp.mpc.feature.group 1.8.2.v20200309-0038) To: org.eclipse.equinox.p2.iu; org.eclipse.epp.mpc.ui [1.8.2.v20200309-0038,1.8.2.v20200309-0038] Cannot satisfy dependency: From: Marketplace Client 1.8.2.v20200309-0038 (org.eclipse.epp.mpc.ui 1.8.2.v20200309-0038) To: osgi.bundle; org.eclipse.equinox.p2.ui.discovery [1.0.0,1.2.0) Cannot satisfy dependency: From: Equinox p2, Discovery UI support 1.2.800.v20200916-1234 (org.eclipse.equinox.p2.discovery.feature.feature.group 1.2.800.v20200916-1234) To: org.eclipse.equinox.p2.iu; org.eclipse.equinox.p2.ui.discovery [1.2.0.v20200916-1234,1.2.0.v20200916-1234] Cannot satisfy dependency: From: DBeaver Community Edition <IP_ADDRESS>102281937 (org.jkiss.dbeaver.ce.feature.feature.group <IP_ADDRESS>102281937) To: org.eclipse.equinox.p2.iu; org.jkiss.dbeaver.standalone.feature.feature.group [<IP_ADDRESS>102281937,<IP_ADDRESS>102281937] Cannot satisfy dependency: From: DBeaver <IP_ADDRESS>102281937 (org.jkiss.dbeaver.core.product <IP_ADDRESS>102281937) To: org.eclipse.equinox.p2.iu; org.jkiss.dbeaver.ce.feature.feature.group [<IP_ADDRESS>102281937,<IP_ADDRESS>102281937] Cannot satisfy dependency: From: DBeaver Standalone <IP_ADDRESS>102281937 (org.jkiss.dbeaver.standalone.feature.feature.group <IP_ADDRESS>102281937) To: org.eclipse.equinox.p2.iu; org.eclipse.equinox.p2.discovery.feature.feature.group 
[1.2.800.v20200916-1234,1.2.800.v20200916-1234]

See #10987. After version 7.3.1 the marketplace became incompatible. However, the workaround is to add http://download.eclipse.org/mpc/releases/1.9.0 as a repository and install the marketplace from there. Then all works again.

vrapper not working on dbeaver version <IP_ADDRESS>104181339 Installed but after reboot no new binding is available.

I just updated to DBeaver version <IP_ADDRESS>105021514 and was able to install the eclipse marketplace and vrapper again as noted above

Installed Vrapper and it asked to restart.... it keeps restarting once and again automatically until I rebooted the computer. Dbeaver was rendered unusable. Had to remove the Vrapper files manually

> Installed Vrapper and it asked to restart.... it keeps restarting once and again automatically until I rebooted the computer. Dbeaver was rendered unusable. Had to remove the Vrapper files manually

Try the unstable version of Vrapper

Vrapper works like a charm, but when I update the dbeaver version I always lose the plugin and need to install it again. Does someone have this too?

> Vrapper works like a charm, but when I update the dbeaver version I always lose the plugin and need to install it again. Does someone have this too?

Hi, Yes! And I also opened an issue, which was found to be a duplicate... this issue is already planned: #5317 (https://github.com/dbeaver/dbeaver/issues/5317)

Great!! Thanks! :-)

> Awesome! I first had to install Eclipse Marketplace by going to "Help" > "Install New Software", then searched for the eclipse repo and the marketplace client. After that I was able to add the Vrapper plugin. Thanks so much!

Thanks a lot, that's what I needed!

How do you find the Eclipse Marketplace in newer versions?
I've installed it but I can't figure out how to interact with it. (It is not showing up under the Help menu.)

Hey, hope this post helps: https://shehuawwal.com/installing-vim-extension-module-for-dbeaver-with-vrapper/

This is the quickest solution, but the website does not exist anymore, so I used the Wayback Machine to read it, and I'll make it available here for the next time I upgrade DBeaver. LOL

How To Install Vrapper On DBeaver: Click on Help from the menu > Install New Software. In "Work with", paste the following URL: http://vrapper.sourceforge.net/update-site/stable and select the full Vrapper option. Then just wait, click Next, and follow the on-screen help. That's all. If you want to check the website for yourself: http://web.archive.org/web/20240314094746/https://shehuawwal.com/installing-vim-extension-module-for-dbeaver-with-vrapper/

Looks like the plugin only provides bindings in SQL script windows, but not for the rest of editing (like stored procedures etc.)
Nokogiri v1.12.5 Release Notes

Release Date: 2021-09-27

🔒 [JRuby] Address CVE-2021-41098 (GHSA-2rr5-8q37-2w7h). In Nokogiri v1.12.4 and earlier, on JRuby only, the SAX parsers resolve external entities (XXE) by default. This fix turns off entity resolution by default in the JRuby SAX parsers to match the CRuby SAX parsers' behavior. CRuby users are not affected by this CVE.

💎 [CRuby] Document#to_xhtml properly serializes self-closing tags in libxml > 2.9.10. A behavior change introduced in libxml 2.9.11 resulted in emitting start and end tags (e.g., <br></br>) instead of a self-closing tag (e.g., <br/>) in previous Nokogiri versions. [#2324]

Previous changes from v1.12.4

Notable fix: Namespace inheritance

Namespace behavior when reparenting nodes has historically been poorly specified, and the behavior diverged between CRuby and JRuby. As a result, making this behavior consistent in v1.12.0 introduced a breaking change. This patch release reverts the Builder behavior present in v1.12.0..v1.12.3 but keeps the Document behavior. This release also introduces a Document attribute to allow affected users to easily change this behavior for their legacy code without invasive changes.

Compensating feature in XML::Document

This release of Nokogiri introduces a new attribute, namespace_inheritance, which controls whether children should inherit a namespace when they are reparented. Nokogiri::XML::Document defaults this attribute to false, meaning "do not inherit," thereby making explicit the behavior change introduced in v1.12.0. CRuby users who desire the pre-v1.12.0 behavior may set document.namespace_inheritance = true before reparenting nodes. See https://nokogiri.org/rdoc/Nokogiri/XML/Document.html#namespace_inheritance-instance_method for example usage.
🛠 Fix for XML::Builder

However, recognizing that we want Builder-created children to inherit namespaces, Builder now sets namespace_inheritance = true on the underlying document for both JRuby and CRuby. This means that, on CRuby, the pre-v1.12.0 behavior is restored. Users who want to turn this behavior off may pass the namespace_inheritance keyword argument to the Builder constructor; see https://nokogiri.org/rdoc/Nokogiri/XML/Builder.html#label-Namespace+inheritance for example usage.

Downstream gem maintainers

Note that any downstream gems may want to specifically omit Nokogiri v1.12.0--v1.12.3 from their dependency specification if they rely on child namespace inheritance:

    Gem::Specification.new do |gem|
      # ...
      gem.add_runtime_dependency 'nokogiri', '!=1.12.3', '!=1.12.2', '!=1.12.1', '!=1.12.0'
      # ...
    end
Date: Wed, 29 May 1996 14:59:58 -0700
From: nn@lanta (Neal Nuckolls)

> The "unique" tcp/ip implementation is a liability to linux.

It could also be one of its greatest assets, and I think this will turn out to be the case.

> Is anyone working to replace the standard linux stack with a port derived from the 4.4BSD code?

I will for now only briefly mention why I think this is not a good idea. A couple of weeks ago, Larry was babbling to me: "oh the stack is sloowww, I can't push nearly as much over 100mb/s ether as freebsd can, etc." I said "that's peculiar", so I did some investigation and told Linus about it. It turned out to be a driver bug, and after that was fixed the over-the-wire numbers are unparalleled. The Berkeley stack is dead, but it has one redeeming quality which Linux's stack does desperately need: it has a well-defined architecture. I will agree with lm when he mentions that it is a jungle of code to sift through in certain respects; it needs a mallet to smooth certain aspects and interfaces. In the end I think it is best to work on hacking the existing (and upcoming) Linux networking code to have these qualities instead of stuffing the bsd stack into linux (this has been done before, a long long time ago btw, before linux had any networking; a man by the name of Charles Hedrick back at Rutgers did it in a few nights). I think the feeling that the linux stack is "hard to follow" or "has very little architecture" has a lot to do with the fact that we don't have 20 books analyzing the code c-statement by c-statement like the bsd stuff does. If we had that, I think this desire to use the berkeley stack would not be as strong. I dislike the berkeley stack, but I am biased in my opinion. I am biased because of the attitude expressed by the people actively working on that code set in the free software world these days, and I am also biased because I tend to hack Linux almost exclusively.
But even barring that, I believe that some elements of the bsd stack would end up being completely flawed when plugged into linux; obvious things like mbufs come to mind right now. It would require a bit of engineering and would greatly upset a large community who has put their entire heart and soul into the Linux networking code. I believe at the very least that the Linux networking stack is superior performance-wise without any question, and as everyone knows I have numbers to prove it ;-)

David S. Miller
import unittest

from programy.clients.events.console.config import ConsoleConfiguration
from programy.clients.events.tcpsocket.config import SocketConfiguration
from programy.config.file.yaml_file import YamlConfigurationFile


class SocketConfigurationTests(unittest.TestCase):

    def test_init(self):
        yaml = YamlConfigurationFile()
        self.assertIsNotNone(yaml)
        yaml.load_from_text("""
            socket:
                host: 127.0.0.1
                port: 9999
                queue: 5
                max_buffer: 1024
                debug: true
        """, ConsoleConfiguration(), ".")

        socket_config = SocketConfiguration()
        socket_config.load_configuration(yaml, ".")

        self.assertEqual("127.0.0.1", socket_config.host)
        self.assertEqual(9999, socket_config.port)
        self.assertEqual(5, socket_config.queue)
        self.assertEqual(1024, socket_config.max_buffer)
        self.assertEqual(True, socket_config.debug)

    def test_to_yaml_with_defaults(self):
        config = SocketConfiguration()

        data = {}
        config.to_yaml(data, True)

        self.assertEqual("0.0.0.0", data['host'])
        self.assertEqual(80, data['port'])
        self.assertEqual(5, data['queue'])
        self.assertEqual(1024, data['max_buffer'])
        self.assertEqual(False, data['debug'])
        self.assertEqual(data['bot_selector'], "programy.clients.botfactory.DefaultBotSelector")
        self.assertEqual(data['renderer'], "programy.clients.render.text.TextRenderer")

        self.assertTrue('bots' in data)
        self.assertTrue('bot' in data['bots'])
        self.assertTrue('brains' in data['bots']['bot'])
        self.assertTrue('brain' in data['bots']['bot']['brains'])

    def test_to_yaml_without_defaults(self):
        yaml = YamlConfigurationFile()
        self.assertIsNotNone(yaml)
        yaml.load_from_text("""
            socket:
                host: 127.0.0.1
                port: 9999
                queue: 5
                max_buffer: 1024
                debug: true
                default_userid: console
                prompt: $
                bot_selector: programy.clients.botfactory.DefaultBotSelector
                renderer: programy.clients.render.text.TextRenderer
        """, ConsoleConfiguration(), ".")

        config = SocketConfiguration()
        config.load_configuration(yaml, ".")

        data = {}
        config.to_yaml(data, False)

        self.assertEqual("127.0.0.1", data['host'])
        self.assertEqual(9999, data['port'])
        self.assertEqual(5, data['queue'])
        self.assertEqual(1024, data['max_buffer'])
        self.assertEqual(True, data['debug'])
        self.assertEqual(data['bot_selector'], "programy.clients.botfactory.DefaultBotSelector")
        self.assertEqual(data['renderer'], "programy.clients.render.text.TextRenderer")

        self.assertTrue('bots' in data)
        self.assertTrue('bot' in data['bots'])
        self.assertTrue('brains' in data['bots']['bot'])
        self.assertTrue('brain' in data['bots']['bot']['brains'])

    def test_to_yaml_no_data(self):
        yaml = YamlConfigurationFile()
        self.assertIsNotNone(yaml)
        yaml.load_from_text("""
            other:
        """, ConsoleConfiguration(), ".")

        config = SocketConfiguration()
        config.load_configuration(yaml, ".")

        data = {}
        config.to_yaml(data, False)

        self.assertEqual("0.0.0.0", data['host'])
        self.assertEqual(80, data['port'])
        self.assertEqual(5, data['queue'])
        self.assertEqual(1024, data['max_buffer'])
        self.assertEqual(False, data['debug'])
        self.assertEqual(data['bot_selector'], "programy.clients.botfactory.DefaultBotSelector")
        self.assertEqual(data['renderer'], "programy.clients.render.text.TextRenderer")

        self.assertTrue('bots' in data)
        self.assertTrue('bot' in data['bots'])
        self.assertTrue('brains' in data['bots']['bot'])
        self.assertTrue('brain' in data['bots']['bot']['brains'])
How to Become a Data Analyst What Does a Data Analyst Do? Data analysts collect, organize, and interpret data and information to create actionable insights for companies. To accomplish this, Data Analysts must collect large amounts of data, sift through it, and assemble key sets of data based on the organization’s desired metrics or goals. Analysts then often transform those key datasets into dashboards for different departments within the organization, presenting their insights in ways that can be used to inform activities and decision-making. Data Analysts work in everything from political campaign management and finance to mining and epidemiology. But to give an example: imagine a corporate website that uses content marketing for lead generation. Tracking the conversion rates of visitors into customers yields data that lets a Digital Marketer follow a potential customer from their arrival at a blog post or other landing page all the way through to their signing up for a newsletter or even purchasing a product. Seeing what happens at each step helps the Marketer understand what content is working, why it’s working, and hopefully expand on that success. What Does a Data Analyst Actually Do? Data Analysts’ specific tasks vary wildly from industry to industry, company to company. Generally speaking, though, as a Data Analyst, you can expect to perform some or all of the following tasks and responsibilities: Researching your company and your industry to identify opportunities for growth, vulnerabilities, and areas for improved efficiency and productivity. Data requirement gathering, beginning with determining what you hope to accomplish, and arriving at a clear sense of what information you need and how to measure it. Data collection, either from existing sources, or by developing new channels for obtaining the information you need—while making sure the data is in a usable form. 
Data cleaning, including reformatting data for consistency, removing duplicate entries and null sets, and so on. In very large datasets, this task is too onerous to complete by hand, and requires the use of purpose-built tools and software. Creating and applying algorithms to run automation tools, the better to understand, interpret, and reach solid conclusions about what the data shows. Modeling and analyzing data to identify significant patterns and trends and interpret their meaning. Presenting your findings to other members of the organization, digested and packaged in a way they can easily grasp. This can include creating visualizations or dashboards for other members of the organization to refer to. This diverse range of actions can be grouped into four fundamental categories: understanding the data, analyzing the data, building and managing databases, and communicating the data to others. In the most recent BrainStation Digital Skills Survey, most Data Analyst respondents said they spend the largest amount of time wrangling raw data and cleaning it up. The primary use for this data? Optimizing existing platforms and products, as well as the development of new ideas, products, and services. When BrainStation further correlated these responses to major job titles, an interesting discrepancy between Data Analysts and Data Scientists emerged: the majority of Business Analyst and Data Analyst respondents indicated that they tend to focus more on the former (optimizing existing platforms and products). Data Scientists, on the other hand, hew primarily toward the development of new ideas, products, and services, where strategic planning comes to the fore—possibly a result of differences in experience, knowledge levels, or degree of specialization. Kick-Start Your Data Analyst Career We offer a wide variety of programs and courses built on adaptive curriculum and led by leading industry experts.
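The cleaning steps described above — deduplication, dropping null records, and reformatting for consistency — can be sketched in plain Python. The field names and rules here are invented purely for illustration, not taken from any real dataset or tool:

```python
def clean_records(records):
    """Deduplicate, drop incomplete rows, and normalize formats.

    `records` is a list of dicts with hypothetical 'email' and
    'signup_date' fields (illustrative field names only).
    """
    cleaned = []
    seen = set()
    for row in records:
        # Drop rows with missing (null) required fields.
        if not row.get("email") or not row.get("signup_date"):
            continue
        # Reformat for consistency: lowercase emails, dash-separated dates.
        email = row["email"].strip().lower()
        date = row["signup_date"].replace("/", "-")
        # Remove duplicate entries (same email counts as a duplicate).
        if email in seen:
            continue
        seen.add(email)
        cleaned.append({"email": email, "signup_date": date})
    return cleaned


raw = [
    {"email": "Ada@Example.com", "signup_date": "2021/03/01"},
    {"email": "ada@example.com", "signup_date": "2021-03-01"},  # duplicate
    {"email": None, "signup_date": "2021-04-02"},               # null field
]
print(clean_records(raw))
```

On real datasets this logic would live in purpose-built tooling, as the text notes, but the shape of the work — filter, normalize, dedupe — is the same.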
- Work on projects in a collaborative setting - Take advantage of our flexible plans and scholarships - Get access to VIP events and workshops Recommended Courses for Data Analyst The Data Science Full-Time program is an intensive course designed to launch students' careers in data. The part-time Data Analytics course was designed to introduce students to the fundamentals of data analysis. Taught by data professionals working in the industry, the part-time Data Science course is built on a project-based learning model, which allows students to use data analysis, modeling, Python programming, and more to solve real analytical problems. The part-time Machine Learning course was designed to provide you with the machine learning frameworks to make data-driven decisions.
import pytest
from unittest import mock

from ..__model.accelerometer import Accelerometer
from ..__model import constants as CONSTANTS


class TestAccelerometer(object):
    def setup_method(self):
        self.accelerometer = Accelerometer()

    @pytest.mark.parametrize(
        "accel",
        [
            CONSTANTS.MIN_ACCELERATION,
            CONSTANTS.MIN_ACCELERATION + 1,
            100,
            CONSTANTS.MAX_ACCELERATION - 1,
            CONSTANTS.MAX_ACCELERATION,
        ],
    )
    def test_x_y_z(self, accel):
        self.accelerometer._Accelerometer__set_accel("x", accel)
        assert accel == self.accelerometer.get_x()
        self.accelerometer._Accelerometer__set_accel("y", accel)
        assert accel == self.accelerometer.get_y()
        self.accelerometer._Accelerometer__set_accel("z", accel)
        assert accel == self.accelerometer.get_z()

    @pytest.mark.parametrize("axis", ["x", "y", "z"])
    def test_x_y_z_invalid_accel(self, axis):
        with pytest.raises(ValueError):
            self.accelerometer._Accelerometer__set_accel(
                axis, CONSTANTS.MAX_ACCELERATION + 1
            )
        with pytest.raises(ValueError):
            self.accelerometer._Accelerometer__set_accel(
                axis, CONSTANTS.MIN_ACCELERATION - 1
            )

    @pytest.mark.parametrize(
        "accels",
        [
            (23, 25, 26),
            (204, 234, -534),
            (CONSTANTS.MIN_ACCELERATION + 10, 234, CONSTANTS.MAX_ACCELERATION),
        ],
    )
    def test_get_values(self, accels):
        self.accelerometer._Accelerometer__set_accel("x", accels[0])
        self.accelerometer._Accelerometer__set_accel("y", accels[1])
        self.accelerometer._Accelerometer__set_accel("z", accels[2])
        assert accels == self.accelerometer.get_values()

    @pytest.mark.parametrize("gesture", ["up", "face down", "freefall", "8g"])
    def test_current_gesture(self, gesture):
        self.accelerometer._Accelerometer__set_gesture(gesture)
        assert gesture == self.accelerometer.current_gesture()

    @pytest.mark.parametrize("gesture", ["up", "face down", "freefall", "8g"])
    def test_is_gesture(self, gesture):
        self.accelerometer._Accelerometer__set_gesture(gesture)
        assert self.accelerometer.is_gesture(gesture)
        for g in CONSTANTS.GESTURES:
            if g != gesture:
                assert not self.accelerometer.is_gesture(g)

    def test_is_gesture_error(self):
        with pytest.raises(ValueError):
            self.accelerometer.is_gesture("sideways")

    def test_was_gesture(self):
        mock_gesture_up = "up"
        mock_gesture_down = "down"
        assert not self.accelerometer.was_gesture(mock_gesture_up)
        self.accelerometer._Accelerometer__set_gesture(mock_gesture_up)
        # Call is needed for gesture detection so it can be added to the lists.
        self.accelerometer.current_gesture()
        self.accelerometer._Accelerometer__set_gesture("")
        assert self.accelerometer.was_gesture(mock_gesture_up)
        assert not self.accelerometer.was_gesture(mock_gesture_up)

    def test_was_gesture_error(self):
        with pytest.raises(ValueError):
            self.accelerometer.was_gesture("sideways")

    def test_get_gestures(self):
        mock_gesture_up = "up"
        mock_gesture_down = "down"
        mock_gesture_freefall = "freefall"
        self.accelerometer._Accelerometer__set_gesture(mock_gesture_up)
        # Call is needed for gesture detection so it can be added to the lists.
        self.accelerometer.current_gesture()
        self.accelerometer._Accelerometer__set_gesture(mock_gesture_down)
        self.accelerometer.current_gesture()
        self.accelerometer._Accelerometer__set_gesture(mock_gesture_freefall)
        self.accelerometer.current_gesture()
        self.accelerometer._Accelerometer__set_gesture("")
        assert (
            mock_gesture_up,
            mock_gesture_down,
            mock_gesture_freefall,
        ) == self.accelerometer.get_gestures()
        assert () == self.accelerometer.get_gestures()
I'm a psychological researcher and am interested in developing a bare-bones program for an experiment. The basic premise is I need small groups of people to be members of a town, and to choose between daily activities that have various risk/reward ratios to generate resources for the town. e.g., hunting for rabbits has high potential gains, but is unlikely to succeed, while picking berries produces a small amount of food reliably. The best way I can think of doing this is with a flash game. Is this feasible? Does anyone have any experience doing something like this? How much would something like this cost? I'm not at all familiar with flash so I'd recommend some sort of Java applet. You can embed it right into the web browser so that users can play it easily (assuming it is signed). What's more, it can submit all the user choices to a server that will collect all the data. You could even have multiple users grouped together in joined sessions as part of the same town. That sounds perfect. Much better than what I had planned. Thank you. As long as it can support a feeling of a town (think town of salem), it should be fine. How much do java applets typically cost to develop? I do not think this project is all that complex. Unfortunately, I can't help you in predicting cost. I have never commissioned / been commissioned for programming on a per-project basis. You are correct though in saying that the project isn't very complex. It will require server communication and some decent UI, but it isn't the kind of thing that would take more than a couple of weeks at most. I have literally no experience with python, so I don't know. The only reason I was thinking of flash is because I know flash games exist that do things much more complicated than what I want. Ergo, using flash should be (1) possible, and (2) comparatively cheap. But again, I'm out of my field here.
My technical expertise is limited roughly to building my own desktop and occasionally googling error codes when they come up. Cost depends greatly on how much you want the game to do -- be more specific. Does it need graphics? Good graphics? Are the participants in the same town in a realtime networked simulation? Do you need fancy automatic reports with charts from the data, or can you interpret raw numbers yourself? Do you need someone to take care of hosting the game someplace? For how long? With how much traffic? I can't speak for the average rate or for Indians or for normal people, but as a poor desperate student that is qualified and reasonably experienced, I'll do it for $20~$75 depending on the details. Yes it would need graphics. Nothing fancy, static sprites are perfectly fine. It would be turn based. Every turn they get to select from a variety of options. No, I need the raw data. I'll be doing all the stats myself. Most likely, the uni would be able to handle the hosting, but this is in proposal stages atm, so things could change dramatically. I do know that if it was in flash, I could upload it to kongregate: But I would then need it to have private rooms I could get participants to join. The study could be run for a few months, depending on how fast the data comes in. With the effect sizes we discovered in exploratory studies, I'm assuming we would need at least 50 trials of around 6 people each. 100 would be optimal. This would be part of a multi-study project that we are trying to get approved, and is the final stage. Data collection of this phase might not take place until a year from now. However, if you are interested, you are welcome to leave your email with me and we can contact you once the study is approved. I can probably host that much traffic on an existing server personally. Syncing the game between users in the same group is going to be the main non-trivial part assuming they need to be able to play at the same time. Graphics take time. 
Overall the game is probably several hours of work, worth about $50 to me. I think people who do contract work as a job (and maybe me in a year) would need more. Keep me posted: gentoo at member dot fsf dot org
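The risk/reward trade-off the researcher describes — high-variance hunting versus reliably small foraging — is at bottom an expected-value comparison, which could inform how the game's payoffs are tuned. A minimal Python sketch; the probabilities and payoffs are made up for illustration, not from the study design:

```python
def expected_value(success_prob, reward, failure_reward=0):
    """Expected payoff of one turn of an activity."""
    return success_prob * reward + (1 - success_prob) * failure_reward


# Hypothetical numbers -- the real ratios would come from the experiment.
activities = {
    "hunt rabbits": expected_value(0.20, 10),  # rarely succeeds, big payoff
    "pick berries": expected_value(0.95, 2),   # reliably small payoff
}
for name, ev in activities.items():
    print(f"{name}: expected food per turn = {ev:.2f}")
```

Tuning the two activities to have nearly equal expected values (as in the numbers above) makes the choice purely one of risk preference, which is presumably the variable of interest.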
Before starting any activity, candidates must attend training sessions. Requirements for entry into our network: applications are reviewed carefully and a background investigation of the candidacy is conducted. Candidates must pass a selection interview. We help people who wish to own successful businesses become commercial cleaning and maintenance professionals. By joining our network of companies, you will gain a competitive advantage as well as the support of a proven business model, ensuring that you work with an industry leader with experience and stability.

NE Scala is dedicated to providing a harassment-free experience for everyone, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, race, or religion (or lack thereof). Sexual language, innuendo, and imagery are not appropriate for any symposium venue, including talks. All communication should be appropriate for a technical audience, including people of many different backgrounds. We do not tolerate harassment of participants in any form.

Read the main Heroku Blog to keep in touch with everything that's happening at Heroku, including product updates, technical posts, and more.

Please remember that you can only enter a wrong user PIN a maximum of three times. After three times you will need to edit the YubiKey (with gpg --card-edit), become admin, and use the unblock PIN option. If you enter the wrong admin PIN three times, you will have to follow a quite complicated procedure (explained at this address: //developers. Html) and your YubiKey will be reset to factory settings, deleting your PGP keys from it.

Use HTTP Caching in Heroku with Cloudflare. July 14, 2013. heroku, Cache. This website is built by Qihuan Piao with Rails 5.rc2, hosted on Heroku.

In this step you will install the Heroku Command Line Interface (CLI), formerly known as the Heroku Toolbelt. You will use the CLI to manage and scale your applications, to provision add-ons, to view the logs of your application as it runs on Heroku, and to help run your application locally. If you ever break out of the PHP world of things and find yourself using more advanced languages and frameworks, you find that you need much more to host them.

If you now access /db you will see Nameless in the output, as there is no table in the database. Assuming that you have Postgres installed locally, use the heroku pg:psql command to connect to the database you provisioned earlier, then create a table and insert a row. How to handle the authentication and permissions; I will cover deployment and hosting. Add an Image field and save images to S3. Hosting the API on Heroku.

You now have a functioning git repository that contains a simple application as well as a composer.json file. The composer.json file indicates to Heroku that your application is written in PHP. Heroku uses Composer for dependency management in PHP projects. Make sure you've installed Composer.
/*
 * File:   Connector.cpp
 * Author: Marc Schaefer <marc-schaefer.dev@highdynamics.org>
 *
 * Created on 23. Januar 2018, 18:59
 */

#include "Connector.h"

Connector::Connector() {
}

Connector::Connector(const Connector& orig) {
}

Connector::~Connector() {
}

int Connector::startCommunication() {
    Serial.print("Connecting to access point ");
    Serial.print(tSSID.c_str());
    ArduinoJson::StaticJsonBuffer<200> jsonBuffer;
    unsigned long ttl = 0;
    WiFi.begin(tSSID.c_str(), tKey.c_str());
    while (WiFi.status() != WL_CONNECTED) {
        if (ttl > timeout_connectNetwork_ms) {
            Serial.println("timeout! Powering down WiFi.");
            Serial.print("Error: ");
            Serial.println(WiFi.status());
            WiFi.disconnect(true);
            return -1;
        }
        delay(500);
        ttl = ttl + 500;
        Serial.print(".");
    }
    Serial.println("connection established!");
    Serial.print("Assigned IPv4: ");
    Serial.println(WiFi.localIP());
    //setServer(WiFi.gatewayIP(), 8080);
    Serial.print("Connecting to Server ");
    Serial.print(tIP.toString());
    Serial.print(":");
    Serial.println(tPort);
    if (!client.connect(tIP, tPort)) {
        Serial.println("...failed!");
        return -2;
    } else {
        Serial.println("...connected!");
    }
    return 1;
}

int Connector::endCommunication() {
    Serial.print("Disconnecting from server and network");
    client.stop();
    WiFi.disconnect(true);
    unsigned long ttl = 0;
    while (client.connected() || WiFi.status() == WL_CONNECTED) {
        if (ttl > timeout_disconnect_ms) {
            if (client.connected()) {
                Serial.println("failed! Still connected to server and network.");
                return -2;
            } else {
                Serial.println("failed! Still connected to the network.");
                return -1;
            }
        }
        Serial.print(".");
        ttl = ttl + 500;
        delay(500);
    }
    Serial.println("disconnected!");
    return 1;
}

std::string Connector::send(std::string _Data) {
    Serial.print("Sending...");
    //Serial.println(_Data.c_str());
    client.print(_Data.c_str());
    unsigned long ttl = millis();
    while (client.available() == 0) {
        if (millis() - ttl > timeout_requestServer_ms) {
            Serial.println("timeout! Try again.");
            // Returning NULL from a std::string function is undefined
            // behaviour; return an empty string instead.
            return "";
        }
    }
    String reply = "";
    while (client.available()) {
        reply = reply + client.readString();
    }
    return std::string(reply.c_str());
}

int Connector::setNetwork(String _SSID, String _Key) {
    Serial.print("Set network/key to ");
    Serial.println(_SSID.c_str());
    tSSID = _SSID;
    tKey = _Key;
    return 1;
}

int Connector::setServer(IPAddress _IP, uint16_t _Port) {
    Serial.print("Set server/port to ");
    Serial.print(_IP.toString());
    Serial.print(":");
    Serial.println(_Port);
    tIP = _IP;
    tPort = _Port;
    return 1;
}
Centos 7 Failure talking to yum, could not resolve host, no more mirrors to try Issue Type Bug Report / Support Request Your Environment Vagrant 2.2.3 vboxmanage: 5.2.26r128414 ansible 2.5.0 Your OS macOS 10.14.3 Full console output Console output: https://gist.github.com/jndevbc/765561d6a197d15133d5272b0b75cb12#file-02_vagrant_up Error text: https://gist.github.com/jndevbc/765561d6a197d15133d5272b0b75cb12#file-03_errors default.config.yml: https://gist.github.com/jndevbc/765561d6a197d15133d5272b0b75cb12#file-01 Summary Downloaded drupalvm master branch changed default.config.yml to use vagrant_box: geerlingguy/centos7 composer install vagrant up Gets to the TASK [geerlingguy.git : Ensure git is installed (RedHat).] task, the first part of the error: (see link to full error text above) Failure talking to yum: failure: repodata/repomd.xml from epel: [Errno 256] No more mirrors to try.\nhttps://mirror.steadfastnet.com/epel/7/x86_64/repodata/repomd.xm Additional Things Tried/Looked At Opened another terminal and vagrant ssh into this same box while it is going through all those mirrors ping <IP_ADDRESS> gets a reply curl example-domain.com does not resolve look at /etc/resolv.conf file [vagrant@drupalvm-master-centos ~]$ cat /etc/resolv.conf # Generated by NetworkManager search test nameserver <IP_ADDRESS> options single-request-reopen did try installing default drupalvm (ubuntu) and it worked fine What am I missing, is there a network setting I need to configure? I've tried this multiple times and it won't resolve. TIA Continuing from above... vagrant ssh into the box and php -v and git --version -- they aren't installed. 
sudo yum install git results in: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=x86_64 error was 14: curl#6 - "Could not resolve host: mirrors.fedoraproject.org; Unknown error" So then, do this: sudo vi /etc/resolv.conf The file is managed, so it will get overwritten ... but for now, add your own nameserver(s). Now, you can sudo yum install git and install everything else you need for your server. continuing… I ran into other issues, mysql wasn't installed, etc. So remove/clean it out: removed all boxes vagrant box list and vagrant box remove <boxname> looked at virtualbox GUI and removed anything left there (delete all files) did a complete uninstall of vagrant and virtual box for virtualbox, used the tool that comes in the *.dmg file, it runs a script and removes a lot of hidden files/folders then I went and looked through /Library and ~/Library and deleted a few other things Fresh Install: Fresh install of VirtualBox 6 (caveat… see below) Fresh install of vagrant Fresh install of centos7 vm from geerlingguy Again: vagrant up works right up to the same spot where it "Ensures git is installed" and then the long error message of "Could not resolve host" for all the different mirrors that it tries Then: vagrant ssh into the box, sudo vi /etc/resolv.conf and added my nameservers vagrant up --provision And this time it went through the whole thing And it installed mysql and everything else that I needed Caveat: VirtualBox 6 seems to load my sites really really slowly. Much slower than when I had all my sites working and on 5.x previously. Very strange behavior indeed! So at this point is it installing everything correctly? If I need to do a vagrant reload --provision, then it will still have issues finding mirrors. I tried this a week ago, and had to go and edit the /etc/resolv.conf file again. On a fresh install, what are the contents of your /etc/resolv.conf?
I don't think I've ever had to tweak mine inside VirtualBox (though I have had to mess with DNS all the time in Docker environments, go figure). In a box I just spun up I'm seeing: $ cat /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver <IP_ADDRESS> I believe that is a part of the bridge network VirtualBox creates for you. Is it possible you have some proxy, virus scanner, corporate network tool installed by IT, or otherwise something that could be intercepting traffic and/or causing DNS failures? On the fresh install, I commented above, it got stuck at the same spot, and when I looked at /etc/resolv.conf (after it was stuck), the file looked like this: [vagrant@drupalvm-master-centos ~]$ cat /etc/resolv.conf # Generated by NetworkManager search test nameserver <IP_ADDRESS> options single-request-reopen Your default drupalvm which uses Ubuntu worked/loaded fine. But something about the Centos version… I also see that your resolv.conf file looks different from mine. Hmm, yes, very strange. Changed DNS on my own machine from corporate's to another one. vagrant reload --provision now works without getting stuck. If anyone gets "could not resolve host" and "no more mirrors to try", the first thing you can try is change your computer's DNS to something like OpenDNS. (Instructions) Glad you could get it resolved! Thanks for posting back to here, and I'll go ahead and close out the issue. I think the artifactory at apache.jfrog.io had a minor issue. I verified as well deb and rpm packages could be installed. Everything seems to be back up now.
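A quick way to confirm from inside the guest that DNS (rather than routing) is the culprit — as the ping-works-but-curl-fails symptom above suggests — is to attempt a hostname resolution directly. A minimal Python sketch; the function name is my own:

```python
import socket


def can_resolve(hostname):
    """Return True if `hostname` resolves via the system resolver."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # getaddrinfo failure: DNS could not resolve the name.
        return False


# In the failing box above, a public mirror host would return False while
# pinging a raw IP still works -- pointing at DNS rather than routing.
print(can_resolve("mirrors.fedoraproject.org"))
```

This checks the same resolver path that yum and curl use, so it isolates the failure without the noise of repository metadata errors.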
Intelligent PC's and differing realms of knowledge I'm GMing for a group. One of my PCs is highly scholarly (scholarship 4), and the character is a doctor - all aspects point to this. I would assume the doctor's knowledge is more focused on medical things. How should I handle a PC with this specialisation in the greater scheme of things? For instance, what if they want to use the same skill to interpret satellite imagery? Hi @graeme, I've attempted to edit your question to make what I think you're asking about clearer. Please make sure it's still accurate and asking what you're trying to ask! The best way to show specialty is with stunts: keep in mind that stunts (Mortal Stunts, page 146) are the primary means that the game uses to add more trappings to skills. (YS318) So this doctor should have stunts which make him good at doctory stuff. A +2 when using Scholarship to give medical attention (the trapping on YS141) would be simple and effective. Instead of nerfing his ability to do other Scholarship things, just make him better at medical applications of the skill. Here are some other methods the book suggests: If a character has an aspect justifying specialized knowledge, often the GM will not call for a roll: Often, there will be no need to roll, especially if the subject is within your specialty (as indicated by your background and aspects). (YS140) Or the GM will increase the difficulty of a roll if the character's aspect implies that they would be less likely to have studied that particular area: You could even have the player make his initial attempt at a difficulty of +2, reflecting the fact that the character isn’t accustomed to using his skill in such an unusual way. (YS318) Scholarship is a broad skill because DFRPG doesn't consider it a major focus. You could easily split Scholarship into multiple skills like Medicine, Technology, and History. 
But then you'd have to spend three times as many skill points to get the same ability, and that would put a much higher value on the effects of the Scholarship skill. If you find the difference between kinds of scholarship is becoming more important in your game than stunts and the above guidelines can contain, then you should reflect this importance in the mechanics by splitting the skill up. +1 for stunts; a stunt could potentially come with a downside to offset a greater effect, as well. For example, Dresden's Listen stunt gives +4 instead of +2, but his Awareness is Terrible while doing it. I could see a stunt giving +4 to medical-related Scholarship rolls, but reducing the skill level to Fair or Mediocre for non-medical rolls. If your Doctor is using their Scholarship skill for something outside their expertise, they could still do that. They might not do as well, but your Doctor is pretty scholarly, and that comes with knowing a great number of things. It's not surprising for them to know stuff about satellite imagery - they probably read about it in a book somewhere, or picked it up in a youtube video they watched at one point. As for the aspects: look to your Doctor's aspects, and what the group has understood they imply. Your player can invoke those aspects that are relevant to the task he's undertaking, even if it's not strictly related to medicine. For instance: Is your Doctor a Surgeon, or does he have an aspect describing his Keen Eyes? He's probably well trained to spot fine details, and this will benefit him when he tries to notice fine details in satellite imagery. Would you say your Doctor Knows a library like the back of his hand? Your Doctor probably knows exactly the right books or resources to help guide him in studying satellite imagery. Did your Doctor graduate from a prestigious University, or is he associated with a prestigious organisation full of experts on a variety of topics? Is he a member of MENSA?
He might happen to have direct connections to an expert on satellite imagery who can help him out, or do the work for him. This answer will be less Dresden Files and more general, but this is a problem that comes up a lot in other games as well. tl;dr - Doctors have to go to college; even if they gun right for it (which many doctors don't), they've done a lot of schoolin' before they specialize. In the US anyway, many (if not most) doctors did a bachelor's degree - sometimes it's even pre-med - before going to med school. Some doctors go even farther, doing, for example, a master's degree in public health policy or whathaveyou, before they go to med school. Stuff like this, in every system I've ever encountered, becomes a 'GM call.' For satellite imagery? I'd say it depends on the undergrad major - if they did a lot of GIS work (ecology, or another spatially-oriented major) as an undergrad, I'd say go for it. If not, less so. Non-system-specific answer: I've always struggled with the rules for knowledge checks. This is mostly because the knowledge categories are ambiguous compared to the definite list of active skills. To compensate for this, I use active skills as a modifier to knowledge rolls. So while the player hasn't specifically stated they have additional knowledge in, say, Medicine, I assume it from the skills they use daily. The character is likely to spend downtime researching his favourite topics. Using Savage Worlds rules, where every skill is a die type (d6 = average human), I use Smarts plus half the skill die. For example: Smarts d4, Shooting d10 -> Knowledge (guns) d4+5 Smarts d6, Healing d6 -> Knowledge (medicine) d6+3 Smarts d12, Climbing d4-2 -> Knowledge (mountaineering) d12+0 On top of this, I give each knowledge check an appropriate target. In Savage Worlds: 4: General knowledge: Typical gun capabilities (single-shot, automatic, etc.) 6: Typical subject matter: Make and model of a gun.
8: Specific subject matter: Full capabilities and typical flaws of a gun. 10: Expert subject matter: Identifying a gun and ammunition type from a bullet wound. In your case, yes, I would assume a scholarly character with medical training has more knowledge of medical practices and affairs. Being scholarly, he is likely to have great attention to detail, so I'd give him the full benefits of being scholarly when studying satellite imagery, but nothing more unless he learns the relevant skills. Only a small portion of this is really relevant to Dresden Files. Fair enough. I'm not familiar with the system. I've seen (and had) similar questions about a number of systems, so I thought I'd share my resolutions. @Hand-E-Food It's a good idea, and there's a FAE hack which mirrors it with approaches rather than skills, but while your concept isn't married to a particular system, it does assume a divorce between knowledge skills and action skills which DFRPG does not have. There isn't a "smarts skill" and a "healing skill"--the Scholarship skill is the medical attention skill.
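The Savage Worlds house rule in the last answer — Smarts die plus half the related skill's die type as a flat bonus — can be sketched in a few lines. The function name and signature are my own, not from any rulebook:

```python
def knowledge_bonus(skill_die, skill_mod=0):
    """Flat bonus added to the Smarts die for a related knowledge roll.

    Half the skill's die type, plus any modifier the skill already
    carries (e.g. Climbing d4-2 contributes 4 // 2 - 2 = +0).
    """
    return skill_die // 2 + skill_mod


# The three worked examples from the answer above:
print(knowledge_bonus(10))     # Shooting d10  -> +5
print(knowledge_bonus(6))      # Healing d6    -> +3
print(knowledge_bonus(4, -2))  # Climbing d4-2 -> +0
```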
Crowd sourced TESTNET evaluation for STEEM

High Level structure of the document:
- what
- why
- how
- who

gandalf 1:54 PM: Try to stay up to date with the code on the github. Testing is not trivial. And it's needed not only when hardforks come. (https://steemit.com/@gtg)

TODO: Clear problem definition

Unlike other public blockchains, the STEEM blockchain is the first blockchain to interact with end users who are not necessarily tech savvy or traders. This blockchain creates value by incentivizing content creators. To keep the platform running, frequent changes to the code base are required to meet ever-changing demands in terms of bug fixes, preventing ways of abusing the platform, and adding new features. While the core blockchain — consisting of the consensus (DPoS), storing/capturing the state, etc. — is mature, the plugins which add functionality for end users to interact with the blockchain are evolving at an unprecedented pace. The plugins which are changing include rc_plugin (Resource Credits + Turing Complete), reputation, SMT (new tokens, smart contracts) support, etc. Changes of this magnitude are perhaps unprecedented in the history of blockchains, and thus end-to-end testing is needed to ensure maximum code coverage and crush the obvious bugs and gremlins.
This section will define the solution:
- Define KPIs for each minor and major release (build time, TPS, replay time, space/time)
- Automatic static analysis
- Code review (already done internally)
- Continuous integration
- Continuous deployment when soft/hard forks approach
- Alert witnesses about the changes required (in config files, replay needed, gcc/clang, boost etc.)
- Mobilize the community to interact with the TESTNET
- Generate transactions using Tinman
- Testing of Condenser, the account creation faucet, SSO, steem-js, steem-python, steem-ruby
- Co-ordinate with SIAB / beem / busy / steempeak / steemjs-tools / conductor / st
- Create a status page, telegram groups etc. for the TESTNET + MAINNET
- CHANGELOG, impact etc. must be published adhering to a schedule
- The community will give feedback to developers to support the existing development process first, and then, if needed, to improve it.
- The community should not get in the way of development, but MUST act like a "gate keeper"
- Existing tools used by the developers should be used to maximize the impact, as opposed to adding new tools
- Security issues or high-severity bugs should be communicated only to designated developers via secure means

*TODO:* Add more points from witness testaments (refer @[timcliff](https://steemit.com/@timcliff)'s [post](https://steemit.com/witness-category/@timcliff/the-reports-from-the-witnesses-2018-09-23))

*TODO:* Add more points from community feedback
1. suggested using conductor to connect to the testnet for bringing end users to test the new fork
2. 05.10.2018 Set up https://steemtest.com/ especially for testing new hardforks
3. https://steemtest.com/ will look at the steemd of @bobinson's server with the new versions or hardforks of Steem

@quochuy - has a script (?)
to take the state snapshot.

It's useful for big infrastructure. For example, if I have 5 edge nodes on prod running `v0.20.4` and I want to switch them ASAP to a replayed version of `v0.20.5`, then I shut down one of them, let's say `edge5`, and start with `--replay` (because I don't need `--resync`, since I already had a valid `block_log`). After catching up with the HEAD block (replay to the last block in the log, then sync to the current HEAD), I shut down `edge5` and make a copy of the state, that is: `block_log`, `block_log.index` and `shared_memory.bin`. I start `edge5` back up, shut down `edge4`, copy the state files there and start `edge4` back up with the new version and new state.

# Transaction Generation on the STEEM TESTNET

**"A blockchain is a distributed open ledger of records, contracts & transactions between multiple parties."**

An ideal TESTNET must sync records, contracts & transactions in real time, thus replicating the state of the MAINNET. This can be summarized as state transfer from one distributed state machine to another, resulting in the invocation of various automatic transactions and virtual operations. In a real-world scenario this will not be possible, and we will have to devise elaborate mechanisms to mimic real-world interactions. List down the transaction types that can be invoked with a tool similar to Tinman. For these cases, a periodic sync of state from the MAINNET to the TESTNET can be done with Tinman. Further, mimicking the user interaction, reaching the TPS that we can expect on the MAINNET, etc. are going to be the challenges. If we are syncing from the MAINNET, it will not be possible to sync the current Resource Credits (or earlier bandwidth data). We will be able to get the posts. For the interactions on the posts we will need to devise a mechanism and invite the community.

## What is achieved so far

1. mirror from github to gitlab
2. CI on Kubernetes is configured
3. Deploy to a testnet with 3 nodes is done (with stale production)
4.
Replayed blockchain-related files (v0.20.5) are copied (via EBS snapshots)
5. Tinman is auto-deployed and triggered from gitlab, but not tested against the TESTNET
6. Condenser set up on https://steemtest.com/

^^^ That's a really bad idea, to run a testing environment without clearly warning users about it. Let's change the Steemit logo to something that clearly indicates a test environment. Stake holders will be here. This will be generally "we"

# Workarounds

Since there are changes to bandwidth, here is a trick to test the replay
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class PlayerController : MonoBehaviour
{
    private Rigidbody rb;
    public float speed;
    private int count;
    public Text counttext;
    public Text wintext;
    string rev;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
        count = 0;
        setCountText();
        wintext.text = "";
    }

    void FixedUpdate()
    {
        // Read the input axes and push the player's rigidbody each physics step.
        float moveHorizontal = Input.GetAxis("Horizontal");
        float moveVertical = Input.GetAxis("Vertical");
        Vector3 movement = new Vector3(moveHorizontal, 0.0f, moveVertical);
        rb.AddForce(movement * speed);
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.CompareTag("Pick Up"))
        {
            // Only collect cubes whose label is a palindrome.
            string cubestring = other.GetComponent<Checker>().nameLable.text;
            if (palindromecheck(cubestring))
            {
                gameObject.GetComponent<AudioSource>().Play();
                other.gameObject.SetActive(false);
                count++;
                setCountText();
            }
            else
            {
                Debug.Log(palindromecheck(cubestring));
            }
        }
        else if (other.gameObject.CompareTag("secret"))
        {
            // The secret pickup reveals the remaining palindromes by tinting them red.
            other.gameObject.SetActive(false);
            GameObject[] allcubes = GameObject.FindGameObjectsWithTag("Pick Up");
            for (int i = 0; i < allcubes.Length; i++)
            {
                if (palindromecheck(allcubes[i].GetComponent<Checker>().nameLable.text))
                {
                    var cubeRenderer = allcubes[i].GetComponent<Renderer>();
                    cubeRenderer.material.SetColor("_Color", Color.red);
                }
            }
        }
    }

    void setCountText()
    {
        counttext.text = "Count: " + count;
        if (count == Spawner.noofpalindrome)
        {
            wintext.text = Spawner.noofpalindrome + " palindromes captured";
        }
    }

    // Case-insensitive palindrome test: reverse the characters and compare.
    bool palindromecheck(string s)
    {
        char[] ch = s.ToCharArray();
        Array.Reverse(ch);
        rev = new string(ch);
        return s.Equals(rev, StringComparison.OrdinalIgnoreCase);
    }
}
A few days ago, Mark Liberman posted a link to this Global Security web site with information on the proper pronunciation and meaning of the name of the prison which has been so much in the news: The preferred NIMA [National Imagery and Mapping Agency] transliteration is "Abu Ghurayb" (pronounced ah-boo GRAYB) … The prefix "abu" means "the father of" while the word "Ghurayb" means "raven" — so "father of the raven" is the literal meaning of this place name. There are actually four discrete locations associated with this name in Iraq, as well as a number of other facilities that use this name, some of which are at locations with other placenames. Then, yesterday, Geoffrey Pullum noted all the variant (mis)pronunciations of both the Abu Ghraib Prison and General Taguba by members of the U.S. Senate's Armed Services Committee. He wonders why they can't be bothered to get it right: It really does seem as if American political figures actually try to avoid being good at pronouncing foreign words. These are men and women who spend their lives speaking in public on important topics. They have highly educated staff members to do research for them. Why on earth couldn't they get a bit better at pronouncing simple place names and names of U.S. generals? His theory is that proper pronunciation would work against them, presumably making them look elitist and overly intellectual. Not an unreasonable hypothesis when one considers how Bush v. Gore was played out in the media. Bush seems like a man-of-the-people because he sounds dumb. All this discussion was interesting enough, but then Mark Liberman remembered hearing Judith Irvine present a paper at a conference (emphasis added): …upwardly-mobile men among the Wolof nobility cultivate inarticulateness as a sign of status.
They make morphological errors — for example, simplifying the Wolof system of noun-class indicators by moving nouns into the default category, as a child or a beginning adult learner might do — and they may even develop a speech impediment. He believes that this may not be a behavior restricted to Wolof society: I think that something a bit more general may be going on. After all, male members of the British aristocracy are also stereotypically disfluent, at least according to P.G. Wodehouse and Monty Python. UPDATE: Mark Liberman provides a link to this more accurate article on how to pronounce Abu Ghraib. Also, a friend says this about the meaning: I'm also not so sure about the meaning; gharaab is raven, although it is possible that ghraib is a local plural, so the meaning would be, literally, "father of ravens". Ghraib might also mean absence, oddness, or, more appropriately, westerners.
using Microsoft.Extensions.Configuration;
using System;
using System.Data;
using System.Data.Common;
using System.Data.Odbc;
using System.IO;
using System.Text;

namespace Citect.AlarmDriver
{
    /// <summary>
    /// Citect alarm database connection using the ODBC Citect Alarm Driver
    /// </summary>
    public class AlarmDbConnection : DbConnection
    {
        #region DbConnection

        public override string ConnectionString
        {
            get => connection.ConnectionString;
            set => connection.ConnectionString = value;
        }

        public override string Database => connection.Database;
        public override string DataSource => connection.DataSource;
        public override string ServerVersion => connection.ServerVersion;
        public override ConnectionState State => connection.State;

        protected override DbTransaction BeginDbTransaction(IsolationLevel isolationLevel) => connection.BeginTransaction(isolationLevel);
        public override void ChangeDatabase(string databaseName) => connection.ChangeDatabase(databaseName);
        public override void Close() => connection.Close();
        protected override DbCommand CreateDbCommand() => connection.CreateCommand();
        public override void Open() => connection.Open();

        #endregion

        /// <summary>
        /// Database connection
        /// </summary>
        private readonly DbConnection connection = new OdbcConnection();

        /// <summary>
        /// Create a new Citect alarm database connection
        /// </summary>
        public AlarmDbConnection()
        {
        }

        /// <summary>
        /// Create a new Citect alarm database connection
        /// </summary>
        public AlarmDbConnection(string server, string ip, int port = 5482)
        {
            SetConnectionString(server, ip, port);
        }

        /// <summary>
        /// Create a new Citect alarm database connection
        /// </summary>
        public AlarmDbConnection(string server, string systemsXml)
        {
            SetConnectionString(server, systemsXml);
        }

        /// <summary>
        /// Create a new Citect alarm database connection
        /// </summary>
        public AlarmDbConnection(IConfiguration config)
        {
            if (!int.TryParse(config["Citect:AlarmDbConnection:Port"], out var port))
                port = 5482;

            SetConnectionString(
                server: config["Citect:AlarmDbConnection:Server"],
                ip: config["Citect:AlarmDbConnection:Ip"],
                port: port);
        }

        /// <summary>
        /// Sets the connection string of the <see cref="DbConnection"/>
        /// </summary>
        /// <param name="server"></param>
        /// <param name="ip"></param>
        /// <param name="port"></param>
        public void SetConnectionString(string server, string ip, int port = 5482)
        {
            // Write an auto-generated systems XML file describing the Citect server.
            var xmlFile = new FileInfo($@"{Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)}\CitectAlarmDriver\{server}.auto.xml");
            var xmlContents = $@"<?xml version=""1.0"" encoding=""UTF-16""?>
<Systems>
  <System name=""{server}"" type=""SCX"" enabled=""true"" visibleInViewX=""true"" clientLicensing=""false"" defaultSystemPriority=""10"">
    <Server name=""{ip}"" cost=""1"" port=""{port}"" compress=""true"" connectTimeout=""30000"" requestTimeout=""120000"" disconnectTimeout=""30000"" disconnectFailedTimeout=""500"" pollInterval=""10"" pollTimeout=""15000""/>
  </System>
</Systems>";
            xmlFile.Directory.Create();
            File.WriteAllText(xmlFile.FullName, xmlContents, Encoding.Unicode);
            connection.ConnectionString = $"DRIVER={{Citect Alarm Driver}};Server={server};SystemsXml={xmlFile.FullName};";
        }

        /// <summary>
        /// Sets the connection string of the <see cref="DbConnection"/>
        /// </summary>
        /// <param name="server"></param>
        /// <param name="systemsXml"></param>
        public void SetConnectionString(string server, string systemsXml)
        {
            connection.ConnectionString = $"DRIVER={{Citect Alarm Driver}};Server={server};SystemsXml={systemsXml};";
        }
    }
}
Colleges of video game design

- University of Southern California video game design
- College of video game design
- College of DuPage video game design
- University of Utah video game design
- University of Houston video game design
- University of Washington video game design

· Bradley University is home to nearly six thousand students and offers 130 programs across a graduate school and five schools, including one of the top video game design colleges. The Department of Interactive Media is responsible for establishing several Minors and …

· The 50 Best Video Game Design Colleges 2021 1. University of Southern California. School of Cinematic Arts, Viterbi School of Engineering. Los Angeles, California. We ranked the best schools for game design in 2021. Check out the Top 50 colleges and the Top 25 grad schools for video game design. Find the best game design …

Students at this top game design college in Florida will take classes in: discrete mathematics; design and development analysis; game design production; game balancing; prototyping; game systems integration. If video game design is your passion, and you desire to attend one of the top game design colleges in the US, Full Sail University is an excellent choice!

Games developers design, create and produce computer or video games. They work in games development teams with artists, programmers, producers and marketing staff. Games developers usually specialise in a particular game platform (PlayStation, Xbox or Nintendo, for example) and a particular aspect of game development, such as programming artificial intelligence or gameplay. The VFS Game Design campus looks and feels just like the professional game developer studios you'll find throughout downtown Vancouver.
Surrounded by some of the city's best cafes, restaurants, and cultural attractions, you'll have all the resources you need at your fingertips: a massive game library, arcades, screening theatres, and your own dedicated workspace. There's no shortage of educational opportunities for gamers at this small private college with a range of gaming-related majors, including Game Art and Animation, Game Design and Game Programming. Kiersten Murphy of Murphy College Consultants in Issaquah, Washington, is a fan of the school's "upside-down curriculum that allows freshmen to begin their major their first semester." · As one of the few colleges that offer a concentration in Game Design, Bramson ORT College is perfect if you'd like to receive training on both the creative and technical sides of game creation. This includes courses in 3D modeling, character development, level design, interactive storytelling, digital animation, and more. Laguna College of Art and Design offers 3 Game Design degree programs. It's a very small, private not-for-profit, four-year university in an outlying rural area. In 2019, 28 Game Design students graduated, with students earning 19 Bachelor's degrees, 8 Master's degrees, and 1 Certificate.
Swift is used to create applications for Mac and iOS, acting as a replacement for Objective-C. The latter, though still actively used, is outdated and has no future. Having studied Swift, you'll be able to create applications for both platforms at once, and make good money doing it.

Swift is suitable for quick development

When the Apple team developed the Objective-C replacement, they had two basic requirements:
- It should be easy to learn.
- It should help accelerate the development cycle of applications.

As a result, Swift has all the attributes of a modern programming language and definitely outperforms Objective-C on all fronts. Key features:
- There are no undefined or uninitialized variables.
- There are no errors with the dimensions of arrays.
- There are no overflow errors.
- Explicit handling of nil (null) values.

You'll spend more time implementing ideas and less time worrying about possible errors, crashes and conflicts in your code. In addition, the language overcame the syntactic verbosity of Objective-C, which simplified both writing and reading. The result, time-wise: it takes less time to write similar code in Swift.

Swift is performant

Despite the fact that Swift is a high-level language designed to be easy to learn, it is really fast. According to Apple, Swift is 2.6 times faster than Objective-C and almost 8.4 times faster than Python 2.7. The ultimate goal is to make the language faster than C++. It is important that Swift is not just quick but also filled with modern language features that allow you to write truly functional code. Among them:
- multiple return values;
- built-in templates (generics).

And many other things. The introduction of many of these features, as well as the improvement of the syntax, makes Swift safer than Objective-C. For example, improved memory handling means fewer opportunities for unauthorized access to data. Jumps to the wrong parts of memory and erroneous data changes are also made more difficult.
Another example: more efficient error handling significantly reduces the number of failures and critical scenarios. Unpredictable behavior is minimized.

Free and open

A year after Swift appeared, Apple made it an open-source language. Although this is not a unique phenomenon in the modern world, for the "apple" company such generosity is a rarity. Typically, Apple pushes proprietary technology to highlight its own uniqueness. But the step with Swift has proven justified and fruitful. As with any other open-source language, Swift is entirely in the hands of the community. Users can suggest ways to fix bugs, improve functions, and help port applications beyond Mac and iOS. In the end, users are the main driving force of the language.

Fast growth and strong demand

According to the GitHub Octoverse 2017 report, Swift is the 13th most popular language in open-source projects. The TNW resource reported in 2016 that demand for Swift developers had increased by 600 percent. By the end of the year, Upwork reported that Swift was the second fastest-growing skill on the freelance labor market. In the 2017 Stack Overflow survey, Swift was the fourth most loved language among active developers. Glassdoor reports an average base salary for an iOS developer of 107 thousand dollars. Application development today is one of the "hottest" professions on the job market. By choosing Swift as the foundation of a career, you definitely will not regret it.

Apple's future is in Swift

Apple has no reason to replace Swift with another language in the next decade. Add to this four years of progress both in terms of development and popularity, the ever-increasing sales of "apple" devices and the expansion of the product line. Based on these facts, we can say with confidence that the need for Swift developers will grow. If you want to work with Apple and want to be part of their crazy financial reports, then you need Swift. It's time to start training.
WordPress Theme Press

We've purchased your 22 theme packages and are rolling them out on a number of websites every month. From time to time we ask for assistance with changes to your style sheet, and your staff is quick and efficient. Extremely fashionable and user-friendly, but above all they have a great support staff for a novice like me. AccessPress is recommended for anyone who wants to create a great website in no time. A simple, elegant and easy-to-use theme. There's also great tech support. I like the look of the theme; it's very well adapted and, thanks to the videos, really nice and easy to set up. The staff also supported me; my store has been online for two months thanks to this theme.

Extremely adaptable WordPress themes

WordPress helps small businesses and blogs get more traffic with our WordPress products. It is our business to ensure that your WordPress sites look and feel professional and always work flawlessly on all your devices. To reach this target, we are continually enhancing our WordPress themes and plug-ins on the basis of your great feedback. Read this short tutorial if you are unfamiliar with WordPress or don't know where to begin.

About the Customizr WordPress theme

Customizr is a free theme that you can use with WordPress to build any kind of website: commercial pages, blogs, portfolios, landing pages, forums, business sites, and more. Customizr does not require sophisticated engineering skills to draft and add your website content, with a range of real-time previews. The theme is responsive: it fits well on any kind of device (desktops, tablets, smartphones) and is fully interoperable with all of today's web browsers. New to WordPress? The Customizr design allows your website to load quickly and adapt well to any device: smartphone, tablet, laptop or desktop. Customizr enthusiasts cannot be mistaken.

Why Customizr?
WordPress is probably one of the best tools to publish and organise content (a CMS) on the whole web, but a website is not only about content; it is also about design. This is where Customizr comes in. The Customizr theme is a powerful way to bring your designs into WordPress. During the development of the Customizr theme, the first task was to analyse the most common customer queries and gain an understanding of what makes a page interesting for its visitors. The Customizr theme has been designed as a general-purpose website theme that works on most websites. Under the bonnet, the Customizr theme has been coded to make sure that your website: loads quickly (read more about performance settings), adjusts to all types of device (responsive): desktops, tablets, smartphones, and works with all popular web browsers. Since there is no flawless programming, we've looked for unknown or upcoming errors. When you have a Customizr question, it is possible that it has already been asked and answered in the more than 10,000 threads in the Customizr theme forums. The Customizr theme is hosted in the WordPress.org theme repository. To be accepted into this repository, a theme must meet specific requirements and policies to provide maximum user safety and be consistent with WordPress's best coding practices. Every new release submission goes through a review cycle with thousands of automated checks, a final manual test, and approval from a Theme Review admin. The Customizr theme has been built since its foundation to be easy to customize and extend. You can access the first customization layer in the WordPress administration area under Appearance > Customize. Featuring 135+ different options to select from, the theme provides countless combinations of web designs. Go to the tutorial to explore all the customization options in the theme.
Customizr provides a live custom CSS box for designers to test and apply basic style modifications. For more sophisticated changes to styles, the Customizr theme is very accommodating, and there are no restrictions on overriding and extending the theme's standard CSS stylesheets. For more experienced programmers, the design leverages the benefits of the fantastic filter and action hook API included in WordPress. The Customizr theme is entirely based on an extensible filter and action hook API that makes customization an effortless and pleasant experience without ever having to change the core files. Whether you are a programmer, blogger, or user of the theme, there are many ways to contribute: contributing to the theme's source tree (the Customizr theme is open source and all the code is available on GitHub); adding to the snippets section, a collection of insertable snippets to enhance the theme's core features; translating the theme, as it is always necessary to update our translations and add new languages. To contribute to the theme and submit your work, please read these instructions. If you have written an article on the theme, or would like to add to the documentation pages, instructions and tutorials, please don't hesitate to get in touch with us via this online support request page. Customizr is licensed under the GNU GPL v2 or higher. First, there would never have been Customizr if WordPress hadn't been developed in 2003 by the very inspirational Matt Mullenweg. All of the following persons are either volunteers or programmers who share their stunning code with the rest of the community. There is a team of WordPress professionals who review every theme that has been submitted to the WordPress.org website. Many thanks to them, and especially to the folks who reviewed and accepted Customizr. Thanks to Dave Bardell for his great work and his help in preparing and updating this document.
Make new ChakraHost a generic extension point for platform hosting

A separate issue to follow on my comment on #5880. How about making this useful for other non-Chakra scenarios? You could make your actual Chakra host export ts.sys directly instead of ChakraHost. Then have this code here at line 66 instead of the whole Chakra handling: if (ts.sys) return ts.sys; The benefit is that a whole host of other platform-hosting scenarios become easily possible: emulated environment in browser, Rhino, Nashorn, nginxScript, ActionScript/Flash, JScript.NET, Jint, IronJS, Jurassic. Also note that @DanielRosenwasser commented on the original PR: Should this be called NativeHost? It's not unreasonable to expect another host that exposes the same functionality. And @yuit replied: @DanielRosenwasser is that what interface System is for? So that is exactly my suggestion: use the System interface and the ts.sys variable as an extension point for interested parties to implement a host for the whole tsc.js compiler. Currently, to run tsc.js on an 'exotic' host you need to emulate WScript or node, or now also ChakraHost with this latest feature. But only node emulation gives a fully functional host, with the main downside being the lack of watchFile/watchDirectory in both WScript and ChakraHost. And node emulation is actually a bit of a pain, particularly the need to implement Buffer at least partially, as getNodeSystem in fact relies on it for BOM detection and whatnot. When I need to run tsc from within an embedded/emulated/hosted environment, I am in for pain dealing with the peculiarities of WScript or node semantics — even though I might be using neither. TypeScript has a lot of potential in the enterprise, as a loose analog of VBA for user-authored scripting. And that often means compiling on-demand in embedded scenarios. If you make ts.sys and the System interface a proper extension point, it would help a lot.
Just to make it clear how the code would look, /src/compiler/sys.ts at line 65:

export var sys: System = (function () {
    if (ts.sys) return ts.sys; // use host-provided implementation if available

So the host can do something like:

// pseudo-code
ExpandoObject tsNamespace = new ExpandoObject();
tsNamespace.setProperty('sys', myNativePlatformAbstractionImplementation);
jsContext.GlobalObject.setProperty('ts', tsNamespace);

Whatever the way to inject objects into the global scope is, the host will inject ts, add sys onto it and let tsc get on with it.

TypeScript currently already exposes a few extensibility points: CompilerHost and LanguageServiceHost, for anyone who wants to embed TypeScript in their applications and customize interactions with the host environment. sys is an implementation detail of the default version of CompilerHost that is used by the batch compiler. We've already cleaned up most of the source code from direct usage of sys in favor of the recommended *Host interfaces, and I think that introducing another extensibility point without a compelling reason will not yield anything but confusion.

@mihailik are you working on supporting any of these engines at the moment? i would not be opposed to adding a new name to check when initializing System. i would not call it ts.sys; NativeHost or something more unique, TypeScriptNativeHost maybe.

@mhegazy yep, I have been hosting typescript in low-end browsers almost since it was made public: Debugging TypeScript in browser - see IE6, Blackberry screenshot from November 2012. That was toy research which has now turned into a project I am planning to ship properly next year: a portable environment for running code across a wide variety of platforms. TypeScript helps a lot, although not without its minor pains. I've experimented with other exotic hosts, some of them having serious problems (like keyword collisions) and others just needing a bit of an emulation layer.
I have been picking ES3 incompatibilities in tsc/typescriptServices and submitting PRs, you might have seen those. I indeed care about hosting.

> I have been picking ES3 incompatibilities in tsc/typescriptServices and submitting PRs, you might have seen those. I indeed care about hosting.

I definitely have seen these. thanks for the contributions

> I've experimented with other exotic hosts, some of them having serious problems (like keyword collisions) and others just need a bit of emulation layer.

Do you run the command line compiler directly, or are there other uses? can you use the typescript API and provide an implementation for the compilerHost instead?

Previously, like on that 2012 screenshot, I did run tsc.js as-is, or recompiled with sourcemaps (emulating WScript or node at some points). Currently I am running typescriptServices.js and thus my extension point is ILanguageServiceHost. Its IDE services are brilliant, but for a simple build function it takes a bit of extra effort, in both passing the inputs as well as deciphering the outputs.

The assumption is that tsc.js/tsserver.js will only run on node and tsc.exe (chakrahost). these are the two scenarios we test. other ones can get broken easily, you probably know this first hand with the ES3 fixes. for that i would recommend not relying on System, but using the CompilerHost/LanguageServiceHost, which has better testing support in other tools already. If you choose to use tsc.js, implementing ChakraHost should do the trick. we can rename it if that makes it more engine-agnostic, but that is just aesthetics.

I've seen several teams (some of them my former colleagues), and heard of others, who want to express business logic in JavaScript. They say JavaScript, but what they really want is better, safer JavaScript.
We all know what that means ;-) Such use cases need an easy embedding API, not one full of compilation/IDE 'hairy' details: resolution, parsing/preparsing/SourceFile business, extracting various types of errors through different calls. Here's a specific and very real use case. You have a complex financial instrument as an object graph in a no-SQL db, and its custom business logic stored next to it as *.ts. When such an 'instrument' needs to be activated and used, you load the *.ts, build it in a simple predictable way and marry it to the data from the graph. System is ideal; ChakraHost is almost as good, except for:

- missing watchFile/watchDirectory
- the CRLF assumption, potentially screwing up debugging/sourcemaps on Unix

The ChakraHost name is not a concern, but accommodating these functional features would be brilliant! Huge thanks for looking into this, by the way. I immensely appreciate your openness and good will!

> System is ideal; ChakraHost is almost as good, except for:

if needed, these two issues should be fixed with a one line change each, and should not cause any breaks in other places of the system.

> The ChakraHost name is not a concern, but accommodating these functional features would be brilliant!

I think you can get these scenarios working with small changes; we would love to accept PRs to clean up any of these.
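To make the wiring discussed in this thread concrete, here is a minimal, self-contained TypeScript sketch of the injection pattern. Note the hedges: the `System` interface below is a deliberately tiny stand-in (TypeScript's real `System` has many more members), and `hostSys` and `getSystem` are hypothetical names used for illustration, not the compiler's actual code. The idea is simply that the host publishes its implementation on a global `ts` object before the compiler's initializer runs, and the initializer prefers it over environment detection.

```typescript
// A tiny stand-in for the compiler's System interface (illustrative only;
// the real interface also has watchFile/watchDirectory, fileExists, etc.).
interface System {
  newLine: string;
  write(s: string): void;
  readFile(path: string): string | undefined;
}

// What an embedding host would inject (hypothetical implementation):
const hostSys: System = {
  newLine: "\n",
  write: s => { /* forward to the embedder's console */ },
  readFile: _path => undefined, // the embedder would resolve files here
};

// The host creates the `ts` namespace object and attaches `sys` to it
// before the compiler script is loaded.
const ts: { sys?: System } = { sys: hostSys };

// Corresponds to the proposed one-line check at the top of the initializer:
function getSystem(): System | undefined {
  if (ts.sys) return ts.sys; // use host-provided implementation if available
  // ...otherwise fall back to Node / WScript / ChakraHost detection
  return undefined;
}
```

Under this scheme, any embedder (Rhino, Jint, a browser emulation layer, and so on) only needs to populate `ts.sys` before loading tsc.js; the existing Node/WScript/Chakra probing becomes a fallback rather than a requirement.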
[dbjapan] [Deadline Extended] CFP: Workshop for Ubiquitous Networking and Enablers to Context-Aware Services 2008

- To: dbjapan [at] gms.dbsj.org
- Subject: [dbjapan] [Deadline Extended] CFP: Workshop for Ubiquitous Networking and Enablers to Context-Aware Services 2008
- From: Akimitsu Kanzaki <kanzaki [at] ist.osaka-u.ac.jp>
- Date: Sun, 02 Mar 2008 21:34:51 +0900

Dear all,

This is Kanzaki from Osaka University. The submission deadline for the ubiquitous-networking international workshop we announced the other day has been extended. We are resending the CFP below; please consider submitting.

http://www.icta.ufl.edu/saint08/workshops/CFPaper/ws-cfp-7.html

Workshop Paper Submission: March 12, 2008
Workshop Paper Notification: March 30, 2008
Workshop Final Manuscript: May 1, 2008

-----

The Fifth Workshop for Ubiquitous Networking and Enablers to Context-Aware Services
in Conjunction with SAINT2008
Turku, FINLAND
July 28 -- Aug. 1, 2008

Call for Papers

Theme of the Workshop

"Ubiquitous networking" is a federated network technology which supports various enablers such as 3G mobiles, RFID tags, sensors, actuators, etc. In ubiquitous networking environments, information is used explosively for various kinds of purposes (information explosion: http://www.infoplosion.nii.ac.jp/info-plosion/ctr.php/m/IndexEng/a/Index/). The ubiquitous network has enough capability to deal with the huge number of IP packets generated by enablers. At the same time, a lot of broadband content must be delivered with fully controlled QoS. Efficient and scalable routing and transport mechanisms supporting such varied traffic are a fundamental requirement of the network.

From a service perspective, a number of context- or ambient-aware services are envisaged for "ubiquitous networking" in the "information-explosion era." A service platform will manage and create services based on context. There should be discussions on how to collect and generate user context, how to create or synthesize services efficiently, and how to develop such systems using emerging software and hardware technologies.
There could also be discussions on how to control network performance based on user policy or service level agreements. Providing robust security over all ubiquitous networks in a simple fashion is an important issue associated with service provisioning to users. Keeping privacy in ubiquitous networks is also a big issue. The other important aspect is the enablers, or ubiquitous objects themselves, which users commonly face. What is a suitable design and implementation of such objects? How can they be connected to a ubiquitous network to provide context, and how can they communicate with each other?

This workshop is one of the best opportunities to address this theme in sufficient depth and breadth, and is intended to share knowledge and exchange ideas, thereby promoting new studies and research topics in this area. We invite not only academic and industrial researchers but also business people, all who are interested in "ubiquitous networking" and "information explosion." Advancing technologies, reports on research, and reports or suggestions on business models are all welcome.

Important Dates for Authors

Workshop Paper Submission: March 12, 2008
Workshop Paper Notification: March 30, 2008
Workshop Final Manuscript: May 1, 2008
Author Registration Due: TBD
Workshop: July 28 -- Aug. 1, 2008 (exact date is TBD)

Registration

Workshop registration will be handled by the SAINT 2008 organization along with the main conference registration. It is IEEE policy that accepted papers can be published only when IEEE has confirmed that at least one author has registered for presentation. So, authors will be requested to register along with the final manuscript.

Paper Submission

Papers should be sent to the Program Committee no later than March 12, 2008 via e-mail. The e-mail address is "saint2008-ws-ubiq-nw[AT]lab.ntt.co.jp".
After a review process by the Organizer and Program Committee of the Workshop, authors of accepted papers will be requested to send the final manuscript to the IEEE-CS press no later than May 1, 2008. Authors are therefore kindly requested to submit papers as early as possible to facilitate the review process.

Papers and Author's Kit

Workshop papers should be within 4 pages; no extra pages are allowed. The Proceedings of the Symposium and the Workshops will be published, in separate volumes, by the IEEE Computer Society Press. Please follow the instructions on the web page below, where authors can find the page format and appropriate pointers for LaTeX macros.

Note

This workshop is partially supported by Japan's MEXT grant-in-aid for Priority Area Research called "Cyber Infrastructure for the Information-explosion Era." (Principal Researcher: Prof. M. Kitsuregawa) Please note that, according to the SAINT 2008 Organizing Committee, the workshop is subject to cancellation if the number of paper presentations is not sufficient.

Organizer

- Shinji SHIMOJO, Osaka University/NICT, Japan

Program Committee (Tentative):

- Yuuichi TERANISHI, Osaka University, Japan
- Takahiro HARA, Osaka University, Japan
- Kaname HARUMOTO, Osaka University, Japan
- Junzo KAMAHARA, Kobe University, Japan
- Takeshi OKUDA, Nara Institute of Science and Technology, Japan
- Michiharu TAKEMOTO, NTT, Japan

*: You can contact the program committee at "saint2008-ws-ubiq-nw[AT]lab.ntt.co.jp".
added network instance type to default

Hi, instance type config is required when the DUT is assumed to have no network instance configured prior to running the test. If the instance type is not configured prior to the script run, a config failure will be seen, as the instance type is mandatory. If the DUT already has an instance type configured, it will also be of type DEFAULT_INSTANCE. The script also does a replace with the same type, hence replacing the instance type is not a problem on such DUT devices.

Ref PR #: https://github.com/openconfig/featureprofiles/pull/592

Thanks, Prabha

Pull Request Test Coverage Report for Build<PHONE_NUMBER>

- 0 of 0 changed or added relevant lines in 0 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage remained the same at 72.664%

Totals
Change from base Build<PHONE_NUMBER>: 0.0%
Covered Lines: 1462
Relevant Lines: 2012

💛 - Coveralls

I believe we are expecting the default network instance to have the type set correctly, so this shouldn't be necessary. If this is required, this sounds like an implementation bug. In that case, I think we could add it wrapped with a deviation flag. I don't think it is reasonable to require every test to set this value.

OC currently does not define a default for a network instance type. Therefore I believe we should explicitly set the network-instance type. Otherwise (as pointed out earlier) it is ambiguous what type will be used, as the NOS could use any type it wants and still be compliant. I do believe the model is weak on this point and have filed an issue at https://github.com/openconfig/public/issues/726.

Reference: https://github.com/openconfig/public/blob/f3f05d5f6e38ca7c46671ea5a4c11b62d264c10b/release/models/network-instance/openconfig-network-instance.yang#L1193-L1208

@bstoll, could you please review and provide approval for this pull request. Thanks, Prabha

I generally disagree with this approach. We may be able to define this as a deviation and then mark the use cases of it with a deviation flag, but I don't think it is good that every time an operator wants to work on the "default" network instance they have to update (or verify) a leaf value. I can see that the "type" could be marked as required in the OC YANG, but regardless of that outcome the device doesn't seem like it is behaving correctly here. Either the "default" network-instance exists and the required "type" value is set on it, or the "default" network-instance does not exist and we need to create it (with a type value). How did the device end up in the state where the "default" network-instance exists, but does not have a type value set? I don't think this is a case of defining a default value of "type" for all network-instances. Didn't some configuration (likely not even OC) result in the creation of a "default" network instance?

I can see two possible resolutions:

1. Explicitly create the default network instance and set its type in this test.
2. Update the OpenConfig model: make the type a required value and require that the default network instance be used by default when no network instance is specified, or define a default type.

I think option 1 is the clearest and should not require a deviation. Maybe there is another better option?

After thinking about this some more I am coming around to being OK with the original ask. I was approaching it from the perspective that the default network instance must exist as a result of interface membership, and therefore the network-instance configuration must become populated by something valid. I still think this is confusing/bad behavior. Being explicit in network-instance configuration does sound reasonable too. Does this mean we should also be explicitly replacing the "enabled" and the "enabled-address-families" leafs to IPV4/IPV6 as well?

> I generally disagree with this approach. We may be able to define this as a deviation and then mark the use cases of it with a deviation flag, but I don't think it is good that every time an operator wants to work on the "default" network instance they have to update (or verify) a leaf value.

They wouldn't; what is seen is just a result of individual test executions assuming a prior environment exists.

> Does this mean we should also be explicitly replacing the "enabled" and the "enabled-address-families" leafs to IPV4/IPV6 as well?

My opinion is that enabled-address-families be removed from the model set. See https://github.com/openconfig/public/pull/738. At the same time, I will propose removal of the enabled node under /network-instances/network-instance/config. I would like to see a pointer to > 0 implementations that can support this, not to mention not being able to do this on the DEFAULT_INSTANCE whatsoever and what that would mean.

I agree w/ @dplore option 1 above and provided a comment on https://github.com/openconfig/public/issues/726 as well. We don't need a default value here, as that is subjective. There can only be a single DEFAULT_INSTANCE, so it would need to be another type. An operator should be explicit in setting this value; it is just that the value must be set, since a dangling instance without a type is not viable.

My opinion of enabled-address-families was similar to yours: it's not clear in the documentation what impact the knob is supposed to have, and it is obviously a big deal. I think there are some practical use cases for "disable", but it's certainly not common or something we are doing in any tests here. We are discussing bringing some of this base interface/network instance configuration into a helper, and I hope we can pull this change out (and all the other instances of it) so every test does not need to repeat this configuration.

Thanks for the comments/thoughts.

/gcbrun
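Option 1 amounts to sending an explicit replace of the default network instance with its type leaf set. As a minimal sketch only (the builder function, the "DEFAULT" instance name, and the JSON shape are illustrative assumptions, not the featureprofiles helper itself; the paths and the DEFAULT_INSTANCE identity come from the OC model):

```typescript
// Sketch of the OpenConfig network-instance config that option 1 would
// replace onto the DUT. This is NOT code from openconfig/featureprofiles;
// the helper and the instance name "DEFAULT" are assumptions.
type NetworkInstance = {
  name: string;
  config: { name: string; type: string };
};

function defaultNetworkInstance(): NetworkInstance {
  return {
    name: "DEFAULT",
    config: {
      name: "DEFAULT",
      // Set explicitly, because OC defines no default for this leaf and
      // a NOS could otherwise pick any type and still be compliant.
      type: "openconfig-network-instance-types:DEFAULT_INSTANCE",
    },
  };
}

const ni = defaultNetworkInstance();
const payload = JSON.stringify({
  "network-instances": { "network-instance": [ni] },
});
```

Because the script replaces with the same type the DUT already carries, this is idempotent on devices that are correctly configured.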
DirectoryInfo.Delete() raises an exception when the directory is not empty, but I do not want this behavior. When it is not empty, it should simply not be deleted, without raising an exception. Is there any direct way? Thanks in advance.

You can check whether the directory is empty before calling Delete(). If a directory is empty, then Directory.GetFiles("DirPath").Length will return zero; note that this only counts files, so to be safe also check Directory.GetDirectories("DirPath").Length (or test Directory.EnumerateFileSystemEntries("DirPath") for any entries) before deleting.
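The same guard applies in any runtime. As an illustration outside .NET, here is a hedged Node.js/TypeScript sketch of the delete-only-if-empty logic; the function name is made up for this example:

```typescript
import {
  mkdtempSync, mkdirSync, rmdirSync, readdirSync, writeFileSync, existsSync,
} from "fs";
import { tmpdir } from "os";
import { join } from "path";

// Delete a directory only if it is empty; report whether it was deleted.
// Counting *all* entries (files and subdirectories) is the important part:
// checking files alone would wrongly treat a dir of empty subdirs as empty.
function deleteIfEmpty(dir: string): boolean {
  if (readdirSync(dir).length > 0) return false; // not empty: leave it alone
  rmdirSync(dir); // rmdirSync only succeeds on empty directories anyway
  return true;
}

// Demonstration in a temp sandbox.
const root = mkdtempSync(join(tmpdir(), "demo-"));
const full = join(root, "full");
mkdirSync(full);
writeFileSync(join(full, "a.txt"), "x");
const deletedFull = deleteIfEmpty(full);   // contains a file, so not deleted

const empty = join(root, "empty");
mkdirSync(empty);
const deletedEmpty = deleteIfEmpty(empty); // empty, so removed
```

The check-then-delete pair is not atomic; if another process can write into the directory concurrently, catch the not-empty error from the delete call instead.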
Incrediblenovel – Chapter 192 – Attack!

Novel: Complete Martial Arts Attributes, Chapter 192 – Attack!

But the scale armor of the huge python was worth some money. He peeled it off and stuffed it into the rucksack he was carrying.

"Anyway, which school did that person come from? He was intimidating!"

He passed over the stalks of Illusion Grass, and his mission was deemed complete. Wang Teng's first mission ended with a perfect finish. Well, although there was a small accident, it didn't matter.

"Let's go. I don't want to stay here a moment longer. I don't mind getting my school credits deducted. I will take it as a lesson learned. I just became a martial warrior, so I got a little proud. In the future, I must keep a low profile when I appear," the young man said with lingering fear.

"Who is it?" Wang Teng shouted coldly. He was surprised.

It was already 8 pm, but the building was still brightly lit. Many students were entering and leaving the building. It looked no different compared to the daytime.

"Misunderstanding?" Wang Teng looked at them with an unclear smile. He continued, "Somehow, I remember that this wasn't what you said just now."

"Don't look any further. You don't live in the same world as him." Bai Xiaocao's father sighed when he saw the absentminded Bai Xiaocao beside him.

He didn't expect that when he came back, instead of feeling happy, Li Xiumei would be worried that he had caused trouble at school.

He rushed back and finally managed to enter the school gates right on the dot at 8 pm.

"Don't you want to fight for the chance?" Wang Teng asked.
"Is this really for us?" Bai Xiaocao hesitated and asked. She still couldn't believe it.

The other party attacked exceptionally quickly. He couldn't dodge in time and could only use his fist to return the attack.

Wang Teng: (▼ヘ▼#)

On the other side, Wang Teng finished settling his business and returned to the village. He bade farewell to Bai Xiaocao and her father and went straight back to Donghai in a taxi.

"You greedy little cat!"

The corners of their mouths twitched a little. They exchanged glances with one another and laughed awkwardly. "You must be kidding. We still have matters to attend to. We will leave first!"

…Wang Teng was speechless. Indeed, coming home was a wrong choice.

However, they were obviously thinking too much. As long as they didn't attack him, Wang Teng was too lazy to care about them.

Where did this person come from?

"Eat more. Look at you. You left for a few days, but you've become so skinny," Li Xiumei said.

At the logistics building.

"This little brat!" The guard watched his receding figure as he walked away. He smiled and shook his head.
"Mom, do you have so little trust in me? Can't you think of something good?" Wang Teng said angrily.

This was definitely an entrapment!

Then, they carefully retreated, afraid that Wang Teng would suddenly attack them and force them to stay behind.

"Let's go, let's go…"

When they reached the bottom of the hill, the trio looked behind them. They immediately heaved a sigh of relief when they saw that Wang Teng didn't chase after them.

"If you don't want it, you can leave it here for the wild beasts," Wang Teng said nonchalantly.
<?php

namespace jdf221\AchieveCraft;

class Database
{
    private $Mongo;
    private $Database;
    private $Groups;
    private $Icons;

    public function __construct()
    {
        // Legacy MongoDB driver; connects to localhost by default.
        $this->Mongo = new \MongoClient();
        $this->Database = $this->Mongo->AchieveCraftLive;
        $this->Groups = $this->Database->Groups;
        $this->Icons = $this->Database->Icons;
    }

    /**
     * Generate a short id from a timestamp fragment plus a slice of the
     * seed's md5. $check($id) should return a truthy value when $id is
     * already taken, in which case we retry with a fresh id.
     */
    private function getId($seed, $check = false)
    {
        if (!is_string($seed)) {
            $seed = strval(rand(111111, 666666));
        }
        if (!$check) {
            // Default: treat every id as free. (The original returned true
            // here, which made the no-check path recurse forever.)
            $check = function () {
                return false;
            };
        }
        $id = "n" . substr(shell_exec('date +%s%N'), -4, -1)
            . substr(md5($seed), rand(1, 16), 2);
        if (!$check($id)) {
            return $id;
        }
        // The original dropped this return value, so collisions yielded null.
        return $this->getId($seed, $check);
    }

    private function cleanMongoArray($array)
    {
        if (!is_array($array)) {
            return $array; // findOne() returns null when nothing matches
        }
        unset($array['_id']);
        return $array;
    }

    private function iteratorToArray($array)
    {
        $return = array();
        foreach ($array as $element) {
            $return[] = $this->cleanMongoArray($element);
        }
        return $return;
    }

    public function getPublicGroups()
    {
        return $this->iteratorToArray($this->Groups->find(array("public" => true)));
    }

    public function getGroup($id)
    {
        return $this->cleanMongoArray($this->Groups->findOne(array("id" => $id)));
    }

    public function getGroupIcons($id)
    {
        return $this->iteratorToArray($this->Icons->find(array("groupId" => $id)));
    }

    public function getIcon($id)
    {
        return $this->cleanMongoArray($this->Icons->findOne(array("id" => $id)));
    }

    public function newGroup($name)
    {
        $id = $this->getId($name, function ($possibleId) {
            return $this->getGroup($possibleId); // truthy = id taken
        });
        $insert = $this->Groups->insert(array("public" => false, "name" => $name, "id" => $id));
        if (is_array($insert) && $insert['ok'] == 1) {
            return $id;
        }
        return false;
    }

    public function newIcon($base64, $groupId = false)
    {
        $id = $this->getId($base64, function ($possibleId) {
            return $this->getIcon($possibleId); // truthy = id taken
        });
        $insert = $this->Icons->insert(array("id" => $id, "groupId" => $groupId, "base64" => $base64));
        if (is_array($insert) && $insert['ok'] == 1) {
            return $id;
        }
        return false;
    }
}
Fix Roblox Error 103

Usually related to parental controls!

If you're reading this guide, chances are, you're experiencing error 103 on Roblox and need a fix. If so, then we've got some good news and some bad news.

The bad news: you've got a Roblox error. Using time you could be spending playing Roblox to fix a problem is never good!

The good news: you've got Roblox error 103. It's usually super easy to fix! (And you're reading the best guide on the internet to do that, of course!)

Below we have listed the most common causes of this error and how to fix them. Go through the list from first to last, as they are ordered from the most common cause/fix to the least likely (so starting at the bottom or mid-way through would probably take longer).

Date of Birth

If the Roblox account you have linked to your Xbox One has a date of birth resulting in an age younger than 13, then this is likely the cause of your problem. If your date of birth is for an age younger than 13, you will need to create a new Roblox account with the correct date of birth. Due to COPPA regulations (US law), Roblox cannot change the age on an account that is younger than 13, so your only option is to create a new account.

You can check the date of birth for your Roblox account within your account settings (requires a phone, tablet, laptop or computer). It will be displayed under Birthday. For help accessing account settings see below:

- To open account settings on the Roblox website (browser or computer), log in to your account on roblox.com, click the cog icon in the top right of the screen and then click Settings.
- To open account settings on the Roblox app (iPhone, Android or tablet), click the ellipsis icon in the bottom right and then click Settings.

If you are using a child account on Xbox to play Roblox, it is likely that the Xbox (not Roblox) parental or privacy controls are preventing connection to the game.
You can check and resolve this problem by changing the See content other people make setting to Allow. To do this:

- Sign into the parent account on your Xbox One.
- Open My games & apps from the dashboard.
- Scroll to and select the child account that you are trying to play Roblox on (and having problems with).
- Change See content other people make to Allow.

After saving these changes, error 103 should be fixed when your child's account tries to play a Roblox game. If not, continue on to the next step.

Hard Reboot Xbox

If you're getting this error on an Xbox One, you might be able to fix it by performing a "hard reboot". This is a slightly more technical way to say "try turning it off and on again". To perform a hard reboot:

- Hold the console power button for 8-12 seconds until the device powers off.
- Turn it back on.

After doing this, try to join the Roblox game again. If it doesn't work, continue on to the next step.

Game Not Supported

If you have checked your date of birth, the parental controls for the account, and rebooted your Xbox, then it is most likely that the game you are trying to play is not supported on Xbox. Unfortunately there is no fix that works for everyone at the moment, but here are some recommendations:

- Check to see if the game you're trying to play has a console version. Phantom Forces, for example, has a specific console edition that is separate from the main Phantom Forces game.
- If this isn't the case, until the game adds support for consoles, you will need to play the game on your phone, tablet, computer, or laptop instead.
NLP stands for Neuro-Linguistic Programming, and it deals with three basic elements: neurology, language, and programming. Derived from the typeless language BCPL, it developed a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today. But as companies implement these concepts and job titles again, they are a bit unsure as to where they fit in and their relationship to other Information Technology functions. The developer is an expert programmer who understands there's far more to creating software than the lines of code it consists of. He possesses a much better understanding of software design concepts and principles than the programmer and thinks about a problem in its entirety. We will talk more about the APL programming language in other article posts; for now let's focus on A+. Tony's programs have some value, but do not solve every problem, nor are many problems (or people) addressed. Many students of Tony Robbins and Ericksonian hypnosis try to use the newfound skill to influence people around them to their own benefit without ever informing the subject they have been induced to take a certain action. If you are a beginner trying to learn what good Python code looks like, then you should take the time to read through the 'Hitchhiker's Guide to Python'. This guide will teach you how to recognise the most 'Pythonic' way to write code. When we create a class in object-oriented programming, we are writing code to do a specific job. Python is a high-level & general-purpose programming language. … At the moment, no one can imagine a life without digital gadgets. I've tested, for myself, whether voice mail works when my own PC is off and when the MagicJack device is not plugged in.
I called myself, left a message, and didn't plug the MagicJack back into the computer until I turned it back on and checked my email. Even without the device plugged in, the attached .wav file was there when I got to my Inbox. Spy gadgets incorporate a recording player and video camera. Software gadgets include iTunes, Microsoft Office and other computer applications that customise our experience with programmable gadgets. People living in urban environments with decent earnings tend to be addicted to gadgets like cellphones and computers. Again, because I've often looked for answers to people's questions by using the MagicJack "knowledgebase", I've gotten to know the site much better than I had previously (or than many casual users might). Speaking of cellphones, I'm so busy taking photos as you suggested, looking for and installing new apps, checking my email, and text messaging, I don't have any time for phone calls any more. You can also monitor who has been in your home, turn the lights on when entering the room, and shut the doors and windows while leaving the house, all with a gadget which may cost around $500. Pen scanners are a quick way to lift text from a page, which can be saved in services like Dropbox for use across all your devices, or shared instantly on social media. And all in all gadgets make life easier: with a microphone, lecturers don't have to shout in bigger lecture rooms or take an attendance roster … If you are new to this discipline, you may be asking what to look for in a MetaTrader program. It wasn't for me. All the meditations were geared toward getting overly-suggestible people into a trance and then neuro-linguistically programming them to join the $10K mastery university.
In the good old days of the Spectrum you could buy magazines which featured pre-written video games in BASIC code that you could type into your own Spectrum and hope the program ran okay. We already know that PHP is many times more commonly used in server-side web development than any other language. In this lesson, we'll learn about another important class, called JOptionPane, from the javax.swing library. Languages of choice: generally prefers managed code over unmanaged code, as he realises the productivity boost that comes with it. Uncheck the "Create Main Class" checkbox; this is because Java NetBeans will automatically create a main class when we create a new form. Programming for beginners requires you to start learning the basic concepts and ideas. With this type of file editing, it is important not to overwrite the original file, in case something unexpected happens and you lose all the data. If you are programming on a Windows machine then an 'int' variable type gets 4 bytes of data, not 2. If you are programming for another type of processor that only has a 16-bit integer (word size) then that is fine, but you need to declare this as it might confuse others. Analysts possess good communication skills (verbal and written) to effectively work with both the end-users and the programming staff. The C programming language was devised in the early 1970s as a system implementation language for the nascent UNIX operating system. … The BlackBerry 8310 is one of the best models from the house of BlackBerry. Despite that (really) minor imperfection perceived only by the occasional caller, I have to say I sort of love MagicJack. One of the smallest, coolest gadgets on the market right now is the Flip Video Camcorder.
It is probably worth mentioning that if the MagicJack softphone is not already on the screen (in other words, if you have to open it), there is whatever time is involved in first getting the MJ screen open. When homo sapiens sapiens (modern human, the species we all belong to) first appeared, tools like knives, spears, clothing and the use of fire to cook food already existed; technology is the distinctive adaptation of humans, and prehistoric men (of the modern human species) were nothing without their tools. This pen hooks up via Bluetooth with Windows PCs, Macs, Android and iOS devices. This gadget displays your Windows Live "what's new" feed on your desktop with real-time updates. I've had my MagicJack since June and it's my main phone line, which I use daily, and it is beyond incredible. Camera gadgets, such as wireless web cameras, surveillance cameras, sunglasses cameras, and so forth: these cool gadgets make for awesome spy tools, should the need arise. At the moment my only issue with the MagicJack is that it stopped working through my telephone. This is made easy by Microsoft's gallery of gadgets that helps users discover millions of gadgets with various different options. All kinds of gadgets are available online at very low prices. You can read fanfic just by browsing to the websites on these gadgets. Our MagicJack doesn't work from China to the USA on anything except cell phones. … It is tax time – are you ready? Most importantly, you will need a good computer with a high-speed internet connection and both transcription and word processing software installed on it, because that old typewriter is not going to do the trick anymore. My Info Protected – works again on the Windows OS. It can hold all the personal files, including financial details, passwords and other sorts of data, safely.
A firmware update screen appears after the TV recognizes the connected USB flash drive with the firmware update on it, and asks if you wish to save changes to your TV. Highlight "Network" with your cursor and then press "Enter." An Accessing screen appears while the Sharp Blu-ray player quickly checks for firmware updates. Once extracted using Windows Explorer, go into the "SlimBrowser" directory, click on the "sbframe" file with the right mouse button and select Copy from the drop-down menu. Currently, the firmware update is being installed on the TV. A prompt appears beneath the initial message after the firmware update is successfully installed, notifying you when it is safe to disconnect your USB flash drive from the TV. Do this by right-clicking the Safely Remove Hardware icon in your computer's Notification Area (positioned by default in the lower-right corner by your clock), clicking "Safely Remove Hardware", selecting the name of your USB flash drive in the dialog box that appears, and then clicking "Stop." You can safely disconnect your USB flash drive after a message appears stating that it is now safe for you to disconnect the device from the computer. Computer viruses are small software programs that are designed to spread from one computer …
It’s cooled off due to other distractions, but the past few weeks I’ve been typing in the editor from the Software Tools in Pascal book to try and get it to work with Turbo Pascal 3.01 on a Z80-CP/M simulator. The editor in that book is very similar in flavor to the Unix ed editor, which is a line editor. I’m doing this because I want a better editor for CP/M than the ones I’ve tried. Out of the box ED is just absolutely terrible. Command line character editors are awful, at least for me, especially today. Line editors are much better (ed is a line editor). Turbo Pascal’s editor is nice – for Turbo Pascal. It’s quite clunky and fiddly to use just as an editor. You have to jump through a bunch of commands and screens just to get in and out if you’re not using the compiler. Then there’s VEDIT, and it’s a TECO clone. Easy enough to arrow around and add characters and whatnot, but as soon as you want to go beyond that, you’re dumped head first into their TECOish macro language. And I, honestly, don’t wish that on anybody. Back to command line character editing: I’d rather retype a line than move a blind pointer 10 characters over to make a change. Lots of folks use WordStar, but that’s pretty darn heavy for just a text editor. I keyed most of the files in using ed on Unix so as to get used to the commands and flow. Using ed to enter the source code hasn’t been painless, but it’s not bad either. And since I’m writing my own editor, I can make tweaks if I see fit (which I will soon anyway, since the pattern syntax in this editor isn’t quite the same as regex). It’s quite the little project. The book has all the code organized into several small files. And even the way it’s structured as a Pascal program is interesting. In Pascal, you can nest procedures. Historically, myself, I’ve rarely done that. Typically it was done for little helper functions, like for recursive routines, things like that.
But the editor is, when all is said and done, essentially one very large procedure with several nested utility routines and very few global variables. This is all well and good, especially when you leverage #include files to manage the individual routines. But while TP does support including files, it does not support nesting them. So I can’t trivially convert the #include statements from the raw source into the equivalent TP directive. So, after I copied all of the files to a “diskette” for CP/M, I wrote my own mini-preprocessor to handle the #include myself. It’s straightforward, but since includes nest, it’s also recursive (at least it’s more easily done with recursive calls). Once you run the little pre-processor you end up with a file that’s too big for TP to load. It seems to make a valiant try to compile, but as soon as you get an error (and TP stops on the first error), you get a line number into a file that the editor can’t read. So you don’t know what it is. Which makes the turnaround kind of a pain. But while writing the pre-processor, here’s where I ran into an interesting limitation with TP. First, by default, TP does not generate code that can be used recursively. I have not disassembled any of it, but I imagine it’s using a lot of static areas for local variables and whatnot rather than stack frames. That’s ok, because there are directives to selectively enable and disable support for recursive code. However, one caveat is that when you do use recursive calls, you cannot use the var clause in the routine parameters.

procedure thing(a : integer); begin ... end;
procedure thing(var a : integer); begin ... end;

In the first instance, the a parameter is passed by value. In the second, the var keyword tells it to pass by reference, and that means that the value of a can be changed within the routine and it will change the underlying variable.
Version 1 passes the value of a; version 2 passes a pointer to it. So, for some reason, because of how it manages memory, you cannot pass pointers to local variables to recursive routines. Fine. Next, Pascal has the TEXT type, which is, essentially, a FILE of CHAR. TEXT is a file reference. In C, it would be akin to FILE *aFile. Turbo specifically disallows passing a TEXT (or any kind of FILE variable) to a routine by value. You have to use the var construct. (There are lots of sensible reasons for this.) To wit, perhaps, you can see my conundrum. Initially, I had something like:

procedure process_include(var in_file : text; var out_file : text);

You can perhaps visualize that this reads in_file, scans each line, and writes it to out_file. If it finds a #include line, it simply opens up the file named on the line, and calls itself recursively:

procedure process_include(var in_file : text; var out_file : text);
var work_file : text;
...
if starts_with(line, '#include') then begin ... end;

And… that can’t work. You can’t pass the work_file without a var, and you can’t pass a var to a recursive routine. So, I had to do my own stack to handle the files. In hindsight I might have been able to make it work with direct use of pointers and dynamic memory. But, no matter. It works. I also wrote a simple more utility. CP/M was designed back in a day when you could page through files using ^S/^Q to stop and restart the screen, and ^C to stop, because the slow terminals were, well, slow. So it didn’t really need a more utility. But on modern hardware, it’s kind of necessary. By this time, though, I’ve now run into another thing. By default, the CP/M diskettes I’m using are limited to 64 files in the directory. And I’ve been bouncing off that limit.
Very exciting when you bump into that trying to save work from TP – you’re essentially doomed at that point, because I can’t swap out a diskette at this point using the simulator, and even on a real machine, you can’t swap out the diskette on the fly because CP/M requires a warm restart every time you swap a floppy – something that you can’t do from within TP. So, when that happens, you lose work. Thankfully, I was using a modern host and modern terminal, so I just selected the text page by page that I wanted to keep and copy/pasted it to a safe place while I quit TP. (Oh, and pasting it back into TP? Not recommended. Not pretty.) This sent me down the rabbit hole of making “diskette management” easier for the z80pack simulator I’m using, because it’s, honestly, a bit of a pain the way they do it now. If this were a “real” computer, then, yeah, I’d just be swapping floppies, formatting new ones, PIPing back and forth. But with the tools I’m using from the command line, it’s awkward and a bit painful. So, I need something a little bit higher level to manage those. Now, z80pack out of the box comes with 4 floppies and a hard drive. I could just use the hard drive, but out of the box CP/M is pretty awful with a hard drive. No directories, and the USER spaces are kind of terrible. It’s, at least with 2.2, really more of a diskette OS, so I’m trying to stay with the diskette idiom. Creating work floppies, selectively putting utilities on them, etc. And as currently set up, it’s really a bit of a headache keeping it all straight. So, I’m working on the meta-problem of operating my CP/M “computer” a bit more easily.
Bitmap Extension for NetLogo

This package contains the NetLogo bitmap extension. It allows you to perform manipulations like scaling, converting to grayscale, and grabbing a single channel on images, and to import them into the patches or drawing.

Use the netlogo.jar.url environment variable to tell sbt which NetLogo.jar to compile against (defaults to NetLogo 5.3). For example:

    sbt -Dnetlogo.jar.url=file:///path/to/NetLogo/target/NetLogo.jar package

If compilation succeeds, bitmap.jar will be created.

The bitmap extension is pre-installed in NetLogo. For instructions on using it, or for more information about NetLogo extensions, see the NetLogo User Manual.

What does the Bitmap Extension do?

The Bitmap Extension allows you to manipulate and import images into the drawing and patches. It offers features not provided by the NetLogo core primitives, such as: scaling, manipulation of different color channels, and width and height reporters. To import and manipulate images you will need to include the bitmap extension in your NetLogo model:

    extensions [ bitmap ]

The image file formats supported are determined by your Java virtual machine's imageio library. Typically this is PNG, JPG, GIF, and BMP. PNG is a good, standard choice that is likely to work everywhere. If the image format supports transparency (alpha), that information will be imported as well.

bitmap:average-color image
Reports a 3-element list describing the amount of R, G, and B in image, by summing across all pixels, and normalizing each component by the number of pixels in the image, so each component ranges from 0 to 255.

bitmap:channel image channel
Extracts either the alpha, red, green, or blue channel from an image. The input channel should be an integer 0-3 indicating the channel to extract (alpha=0, red=1, green=2, blue=3). The resulting image is a grayscale image representing the specified channel.

bitmap:copy-to-drawing image x y
Imports the given image into the drawing, without scaling, at the given pixel coordinates.
bitmap:copy-to-pcolors image boolean
Imports the given image into the pcolors, scaled to fit the world. The second input indicates whether the colors should be interpreted as NetLogo colors or left as RGB colors; false means RGB.

bitmap:difference-rgb image1 image2
Reports an image that is the absolute value of the pixel-wise RGB difference between two images. Note that image1 and image2 MUST be the same width and height as each other, or errors will ensue.

bitmap:export image filename
Writes image to filename.

bitmap:from-view
Reports an image of the current view.

bitmap:grayscale image
Converts the given image to grayscale.

bitmap:height image
Reports the height of the given image.

bitmap:import filename
Reports a LogoBitmap containing the image at filename.

bitmap:scaled image width height
Reports a new image that is image scaled to the given width and height.

bitmap:width image
Reports the width of the given image.

The NetLogo bitmap extension is in the public domain. To the extent possible under law, Uri Wilensky has waived all copyright and related or neighboring rights.
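Putting a few of these primitives together, a minimal setup procedure might look like the following (the file name "map.png" is a placeholder for an image in the model's folder):

```netlogo
extensions [ bitmap ]

to setup
  clear-all
  ;; "map.png" is a placeholder image file next to the model
  let img bitmap:import "map.png"
  show (word "size: " bitmap:width img " x " bitmap:height img)
  ;; paint the patches; false = keep RGB colors rather than
  ;; approximating to the NetLogo color palette
  bitmap:copy-to-pcolors img false
end
```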
package com.media.server.controllers;

import com.media.server.helpers.*;
import com.media.server.models.helperModels.SongPOJO;
import com.media.server.models.ExpiryPeriod;
import com.media.server.persistance.repositories.ExpiryPeriodRepository;
import com.media.server.persistance.specifications.ExpirySpecification;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

/**
 * This controller is responsible for assigning media to users for a certain period of time.
 * To use any of its endpoints, the user has to provide a request header "Access-Token" with a valid,
 * non-expired value along with every request. To obtain an access token, please refer to UserController.
 */
@RestController
@RequestMapping("/publishing")
public class MediaPublishingController {

    @Autowired
    private ExpiryPeriodRepository expiryPeriodRepository;

    /**
     * Assign a group of songs to a group of users.
     *
     * @method POST
     * @param songPOJO JSON string containing an array of user ids and an array of song ids
     * @return 400 in case of empty arrays, or 200 otherwise
     */
    @RequestMapping(value = "/assign", method = RequestMethod.POST)
    public ResponseEntity assignSongToUser(@RequestBody SongPOJO songPOJO) {
        List<ExpiryPeriod> expiryPeriodList = MediaPublishingHelper.prepareExpiryPeriods(songPOJO);
        if (expiryPeriodList.isEmpty()) {
            return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(new MessageWrapper(Resources.INVALID_INPUT));
        }
        expiryPeriodList.forEach(expiryPeriod -> expiryPeriodRepository.save(expiryPeriod));
        return ResponseEntity.status(HttpStatus.OK).body(new MessageWrapper(Resources.SUCCESS));
    }

    /**
     * Returns all assigned media which have expired.
     */
    @RequestMapping("/expired")
    public ResponseEntity getExpiredConnections() {
        List<ExpiryPeriod> expiryPeriods = expiryPeriodRepository.findAll(ExpirySpecification.customerHasBirthday());
        return ResponseEntity.status(HttpStatus.OK).body(expiryPeriods);
    }
}
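A client calling the /assign endpoint has to supply the Access-Token header along with a SongPOJO body. As a sketch, here is a small Python helper that builds such a request; the JSON field names 'userIds' and 'songIds' are assumptions about SongPOJO's serialized shape, not confirmed by the source:

```python
import json

def build_assign_request(base_url, user_ids, song_ids, access_token):
    """Build the URL, headers and JSON body for POST /publishing/assign.
    The 'userIds'/'songIds' keys are hypothetical; check SongPOJO's actual
    JSON mapping before using this against a real server."""
    url = base_url.rstrip('/') + '/publishing/assign'
    headers = {
        'Content-Type': 'application/json',
        # required by the controller for every request
        'Access-Token': access_token,
    }
    body = json.dumps({'userIds': user_ids, 'songIds': song_ids})
    return url, headers, body
```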
Most organizations know that innovation is about capturing ideas from everyone, regardless of their position. In large organizations, this process is typically run in a controlled format, where public “business challenges” are open for all to submit their ideas to. Typically, these challenges also have a start and finish date, ensuring that ideas are submitted when needed by the business. In Project Online you can obviously create projects using a stage/gate approach, thereby capturing ideas early in the life cycle. On the other hand, if you want a quick approach to submitting an idea (imagine receiving hundreds every month), then you probably don’t want all ideas to hit your PPM system. Some years ago, Microsoft had an Innovation Hub, which was a SharePoint solution that used Yammer, SharePoint 2013 and Project Online. I blogged about and reviewed this solution, but soon after, the solution was withdrawn without any explanation. Since then, many have been using SharePoint lists, within the Project Online site collection, to capture ideas. Out of the box, portfolio managers could quickly select ideas (list items) and convert them into projects in Project Online. Today we have Microsoft SharePoint in the “modern” edition, which looks much cooler, allows for Microsoft Teams integration, and offers a better mobile experience. In the following post I will describe how to build a basic “modern” SharePoint site and connect it to Project Online using Flow. This could serve as a new way for organizations to build simple Idea Hubs/Portals that look awesome and connect to a Project Online PPM solution.
Step 1: Launch SharePoint “Modern”
If your Office 365 administrator has allowed SharePoint “modern”, simply navigate to your launch pane and select “SharePoint”.
Step 2: Create a new SharePoint “modern” site
Select a style such as “communication site” and give it a name and description.
Step 3: Create a list to capture ideas
From your newly created SharePoint site, simply click the “new” icon and select “List”. We will use this list as a container for the ideas users will submit. Give the list a name and a description.
Step 4: Idea list configuration
Once the list is created, you should now add the columns that you will need for capturing ideas. Think of each column as a question to the user, such as “Business Unit”, “Benefit Description”, “Proposed Start Date” and so on. This would also be where some organizations would connect to another list holding the current “Business Challenges”, allowing for better grouping of the ideas. Below are some examples of how to create columns: creating a “multi line” text field, and creating a “Priority” column that will hold the value which will later be used for project creation in Project Online.
Step 5: Copy the “new idea” link
I will now pretend to create a new idea from our list, simply to capture the direct link for creating new ideas. I do this so I can make a “create idea” button directly from the front page of our new Idea Hub. Simply select “copy link” from the buttons at the top of the form as shown below.
Step 6: Edit the front page
Navigate back to the home page of our Idea site. Click on the “edit” button in the top right corner, select one of the image boxes and rename it, e.g. to “Create new idea”. Now choose what should happen when users click on the image.
Select the “from a link” option, and paste in the link from Step 5 in this post. Once done, users can simply click on the image box from the front page and arrive directly at the idea submission form.
Step 7: Building the integration to Project Online
We will now build an integration allowing ideas to automatically be created as projects in Project Online. To achieve this, we will use Microsoft Flow, which can be launched directly from the list we created. The first step is to click on “create a flow”.
Step 8: Select a Flow template
In Microsoft Flow you can find tons of predefined templates. For this kind of connection to Project Online, I recommend using either the “When a new item is added in SharePoint, complete a custom action” or the “When an existing list item is modified, complete a custom action” template. Obviously the main difference is what triggers the Flow to run. For this blog post, we will use the “new item” option, which means that every time a new idea arrives in the list and a certain criterion is met, a project will be created in Project Online.
Step 9: Configure the Flow template
Validate that you are connected and click “continue”. You now arrive in the Flow configuration area. Typically the list references are already in place. If they are not, simply add the “Idea” site address and select the “Idea” list. Now click on “new step” and select the “add a condition” option. Find the column from the “idea” list that defines whether something should trigger a project creation in Project Online. In my case I use our “Priority” column. Now define the “Condition”. In my case, any priority number above 1 will start the creation of a project in Project Online. Imagine that the priority number could be set by another process based on management input or user evaluations; for this post, the number is set manually. In the “If Yes” pane, you now have to choose an action.
Write “Project Online” and find the “Creates new project” template. Add the URL to your Project Online/PWA site, and match your idea column fields to the “Project Name” and “Project Description” settings. This way, new projects in Project Online will get the name and description automatically from the idea. As this is a very basic Flow, we are now done with the configuration. Give the Flow a name and save it, then click “done”. You will now arrive at the Flow landing page, where you should see the two working connections to SharePoint and Project Online. From here, you can also see the history of the Flow once it starts to run. Remember to use a service account for the connection unless you want all projects to be created with yourself as the project owner.
Step 10: Try the Flow
Navigate back to your Idea Hub and create a new item/idea. In this case, we will give the idea a priority value of “3”, as this will trigger the project creation (anything above the value 1). Once the idea is created you should be able to see it in the idea list. Now try to click on “Flow” and go to the “See Flows” area. From the “Flow” overview you can now see a “run history” with one successful event. Opening up the event in the history log shows exactly which project was created. This view will also show if something fails, and exactly what step in the Flow was the cause (and why).
Step 11: Find the project in Project Online
Within 10-20 seconds of submitting the idea, you should be able to find the project in Project Online. If your Project Online configuration triggers a workflow (stage-gate approach), this will now be active, as well as the Project Site and Project Schedule Template. Notice that the “project description” field also has values transferred from the idea list. With a few steps, and no code, you can easily set up an Idea Hub using the modern SharePoint site template. From here, your ideas can be rated, approved and later transferred to Project Online using Flow.
You should obviously configure your Idea Hub much more, and perhaps also add direct links to various areas of Project Online such as Project Center. Using Power BI you can connect all items from the SharePoint list to the running projects in Project Online. Using this approach, a full overview of unapproved ideas, approved ideas, running projects and closed projects can be shown in one great report or dashboard. To improve the user experience for those submitting ideas, I would recommend using Microsoft PowerApps to ensure a full mobile experience for the end users. Another great trick to improve the ability to innovate is to allow for voice and camera from the idea submission form – this is possible out of the box using PowerApps. Have fun trying it out yourself and feel free to reach out if you run into issues.

That is amazing, Peter. Love the usage of Flow and see it as the future of workflows. I remember the Innovation Hub presented at the 2014 conference, but this is way easier! Thanks for sharing and the neat explanation! Do I need Project if I just want to capture and discuss/rate ideas? And what about the rating? You talk about PowerApps – is that the way to go? Where will feedback from PowerApps be stored?

Thanks for your feedback! I would always go with PowerApps as this ensures a great mobile experience for those with the good ideas. Data will always be stored in the SharePoint list, as PowerApps in this case is simply a frontend. You don’t need Project to create an idea, unless you are also the person who needs to later run the project in Project Online.

This is a great walkthrough and perfect for what my team and I are looking for. Quick question in regards to site permission setup: does the “visitor” who wants to submit their idea need Edit access to the SharePoint site? I’d like to limit the Edit and Full Control access to only a few folks, whereas the idea submission should be open to everyone.
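For organizations that prefer scripting over Flow, projects can also be created through the Project Online (PWA) REST interface. The sketch below only builds the request pieces; the endpoint path and header set are my reading of the ProjectServer REST surface and should be verified against Microsoft's documentation, and authentication (e.g. an OAuth bearer token) is left to the caller:

```python
import json

def build_create_project_request(pwa_url, name, description):
    """Build the URL, headers and JSON body for creating a project via
    the Project Online REST API.  The /_api/ProjectServer/Projects/Add
    path and the odata headers are assumptions to verify; an auth header
    must be added before sending."""
    url = pwa_url.rstrip('/') + '/_api/ProjectServer/Projects/Add'
    headers = {
        'Accept': 'application/json;odata=verbose',
        'Content-Type': 'application/json;odata=verbose',
    }
    body = json.dumps({'parameters': {'Name': name,
                                      'Description': description}})
    return url, headers, body
```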
Using python for a CAD program
dacut at kanga.org
Mon May 22 09:15:54 CEST 2006

> No, concurrent access is highly relevant; for example, on a team of
> about 50 architects working on design and production drawings for a new
> hospital, each floor was one 'drawing' (dwg file), and thus stored on
> disk as a separate entity from the other floors.
> Now, only one architect could work on one floor at any time! And as info
> from the project goes (sums and statistics of data, for example areas),
> the only reliable (and efficient) way to gather this was to hack an
> in-house application that collected the desired info from the various
> binary dwgs stored on disk, and saved this data to an RDBMS, for
> further report processing.

You have two orthogonal thoughts going here. It makes sense to collect statistics from a given drawing into an RDBMS in such a way that, whenever a design is checked in, the statistics are synched with the RDBMS. Further, such a system could (and should) make sure that no previous checkin has been clobbered. This is a classic source control system. I'd be surprised if there aren't plenty of add-ons which do this. At Cadence, we provided hooks for such systems to automatically collect necessary information. This does not mean the design itself should be stored in an RDBMS. As I've stated previously, CAD data (both electrical and, it appears, mechanical) does not lend itself to RDBMS relationship modeling. What you want is an easy way to manage the information, not to dictate the storage format of the information.

> And just to work on one small area of a huge floor, the architect has to
> load the whole freaking floor...

I don't have any architectural experience, so I can't say whether this makes sense or not. However, I think of this as being akin to complaining, "Just to edit a single function, I have to load the entire source file/check out the entire module/rebuild and rerun all the tests!" Not that this is necessarily an invalid idea.
However, my experience with Cadence's tools makes me believe the existing behavior might just be the lesser of two evils. CDBA has a strict library/cell/view hierarchy; the checkout granularity here is (usually) at the view level (that is, when you lock a design, you're locking the view). Two designers could edit two different cells or views in the same library. This leads to all kinds of data inconsistency problems -- cell borders aren't lining up in a layout, for example, or the cached data isn't valid, etc. It's a nightmare. The newer OpenAccess system has an option to manage this at the library level. Unfortunately, this is rarely enabled due to backwards compatibility.
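The check-in workflow described above (statistics synched to an RDBMS at check-in, with no clobbering of earlier check-ins) can be sketched like this; table and column names are illustrative, not from any real CAD tool:

```python
import sqlite3

def record_checkin(conn, drawing, version, stats):
    """Store per-drawing statistics at check-in time, refusing to
    overwrite an existing (drawing, version) row -- the 'no clobbered
    checkin' rule from the discussion."""
    conn.execute("""CREATE TABLE IF NOT EXISTS drawing_stats (
                        drawing TEXT, version INTEGER, area REAL,
                        PRIMARY KEY (drawing, version))""")
    cur = conn.execute(
        "SELECT 1 FROM drawing_stats WHERE drawing=? AND version=?",
        (drawing, version))
    if cur.fetchone():
        raise ValueError("check-in would clobber an existing version")
    conn.execute("INSERT INTO drawing_stats VALUES (?, ?, ?)",
                 (drawing, version, stats['area']))
    conn.commit()
```

Note the point of the post stands: only the derived statistics live in the RDBMS; the drawing itself stays in its native binary format.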
# -*- coding: utf-8 -*-
"""
Custom Django test runner that runs the tests using the XMLTestRunner class.

This script shows how to use the XMLTestRunner in a Django project. To learn
how to configure a custom TestRunner in a Django project, please read the
Django docs website.
"""

import os

import xmlrunner
from django.conf import settings
from django.test.runner import DiscoverRunner


class XMLTestRunner(DiscoverRunner):
    test_runner = xmlrunner.XMLTestRunner

    def get_resultclass(self):
        # Django provides `DebugSQLTextTestResult` if `debug_sql` argument is True.
        # To use `xmlrunner.result._XMLTestResult` we suppress the default behavior.
        return None

    def get_test_runner_kwargs(self):
        # We use a separate verbosity setting for our runner
        verbosity = getattr(settings, 'TEST_OUTPUT_VERBOSE', 1)
        if isinstance(verbosity, bool):
            verbosity = (1, 2)[verbosity]
        verbosity = verbosity  # not self.verbosity

        output_dir = getattr(settings, 'TEST_OUTPUT_DIR', '.')
        single_file = getattr(settings, 'TEST_OUTPUT_FILE_NAME', None)

        # For the single-file case we are able to create the file here,
        # but for the multiple-files case files are created inside runner/results
        if single_file is None:
            # output will be a path (folder)
            output = output_dir
        else:
            # output will be a stream
            if not os.path.exists(output_dir):
                os.makedirs(output_dir)
            file_path = os.path.join(output_dir, single_file)
            output = open(file_path, 'wb')

        return dict(
            verbosity=verbosity,
            descriptions=getattr(settings, 'TEST_OUTPUT_DESCRIPTIONS', False),
            failfast=self.failfast,
            resultclass=self.get_resultclass(),
            output=output,
        )

    def run_suite(self, suite, **kwargs):
        runner_kwargs = self.get_test_runner_kwargs()
        runner = self.test_runner(**runner_kwargs)
        results = runner.run(suite)
        if hasattr(runner_kwargs['output'], 'close'):
            runner_kwargs['output'].close()
        return results
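A minimal settings fragment wiring this runner into a project might look like this (the module path 'myproject.runner' is illustrative):

```python
# settings.py (fragment); 'myproject.runner' is a hypothetical module path
TEST_RUNNER = 'myproject.runner.XMLTestRunner'

# Optional knobs read by get_test_runner_kwargs() above
TEST_OUTPUT_VERBOSE = 2
TEST_OUTPUT_DESCRIPTIONS = True
TEST_OUTPUT_DIR = 'test-reports'
TEST_OUTPUT_FILE_NAME = 'results.xml'  # omit to get one file per test class
```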
More than 40,000 handwritten texts from Ferdinand de Saussure were left unpublished, waiting for scientists to understand them. The chaotic aspect of these texts led to a primary need for classification and for addressing dating issues. This case study presents a way to extract scientific content, historical and bibliographic context and terminology context from such manuscripts. A knowledge graph constituted by entity classes and relationships has been used in order to propose an implementation on the Semantic Web.

Digitization, Digital Humanities, Historical ontologies, Semantic Web Interface

This presentation is based on documents [1, 2, 3, 4, 5] published by the knowledge engineering research group KE@CUI as output of the SNF project Knowledge engineering models and tools for the digital scholarly publishing of manuscript (grant CR21I2_159747). The field of digital humanities is a collaborative and transdisciplinary area between digital technologies and the discipline of humanities. Ferdinand de Saussure was one of the fathers of modern linguistics (general linguistics, comparative grammar, social sciences). He made, however, few publications, and he never published in general linguistics, having only given a course between 1906 and 1911. The famous Cours de Linguistique Générale was published posthumously (by Bally and Sechehaye), based on the notes taken by his students. As a consequence, more than 40,000 pages of handwritten texts remain, for the most part, unpublished and unexploited. So, what are the needs for taking advantage of these manuscripts? First of all, to retrieve and access the data, that is, to visualize the manuscripts and their transcription and to gain access through thematic classification plans and people, place, event and concept indexes. Secondly, the understanding process is very important, which means that the exact meaning of each term has to be determined.
Moreover, the dating is essential so that the manuscripts can be placed in chronological order. Lastly, the disclosure of the author’s work, in different forms, is needed. However, these needs face some typical problems, like difficulties in the thematic classification, as there are multiple themes in each manuscript, as well as dating and chronological issues. Another limitation is the text-order problem and, therefore, there is a need to rebuild the intended text in order for it to be understandable. Thus, in order to satisfy these needs and gain a holistic knowledge, a multidimensional approach needs to be constructed, including scientific, historical, bibliographical and terminological context. These evolving interconnected knowledge resources can constitute an advanced model to help humanists deal with the knowledge-intensive tasks they must perform when studying historical scientific manuscripts.

Requirements and Implementation

In order to take advantage of the state of the current knowledge about the manuscripts, the modeling of each context mentioned above is mandatory. Concerning the historical context, there is a need to correlate the time-varying entities and relationships. By entity classes, we refer to people, events, places etc. These classes can be divided into rigid, e.g. person, and non-rigid, e.g. student. On the other hand, there are the relationships, as shown in Figure 2, which can be separated into fluent ones, like lives_in, and non-fluent ones, like place_of_birth. Based on these terms, historical reasoning rules are applied to infer temporal relations.
As an example of the latter, we can present the following rules:
- If text X refers to event E, then writing-time(X) > time(E)
- If A sends a letter to B at time T, then A knows B at time T (and thereafter)

Regarding the terminological context, a lot of work has already been done by scientists who have invented new concepts, redefined terms and worked on unstable terminologies, showing that terminology evolves over time. Cosenza et al., as an example, identify 14 terminologies in Saussure’s work. As a consequence, multiple terminologies have been incorporated into the same knowledge graph, created by different researchers and expressed in different formalisms, like TBX terminologies, SKOS schemes, OWL ontologies, tables, texts, etc. There is also a conceptual evolution over time, which means that different definitions can be used for the same term. This can, in general, produce global inconsistency. Thus, the challenge for the terminological context modeling is to understand the text and then to date it. The understanding process includes the determination of the meaning of the terms in a text and then the determination of the terminology that has been used by the author. Afterward, in order to estimate the date, the determination of the terminology used in the text is needed, as well as the indication of a possible writing period. Implementation of all the aforementioned has already been done with the help of Semantic Web technologies. There have been techniques for temporal modeling and reasoning as well as techniques for dynamic terminology representation and processing. Moreover, manuscripts and the knowledge graph have finally been represented and stored. In Fig. 3 there is an example of such a manuscript. In Fig. 4 the architecture of the described system is presented. In the second row of this cyclical procedure, we meet the “client” at the first box, the Web Server/Front-end at the second, and the Back-end as well as the storage control at the third one.
The storage control includes the updates and the authentication. To sum up, the steps of the knowledge graph services are the following:
- Adding manuscripts and transcriptions
- Importing knowledge resources (historical context and terminologies)
- Temporal reasoning
- Semantic indexing of texts with multiple ontologies/terminologies
- Terminology finding
- Terminology generation (by correlation detection)
- Finding relationships (similarity, (dis)agreement)

Terminologies and Semantic Indexing

The terminological aspect is perhaps the most important in this attempt at deciphering old manuscripts. The general purpose is to map all the terminological models to a common generic model. In order to avoid any logical inconsistencies, we first of all have to consider each terminology in isolation. Moreover, we need to connect the equivalent concepts (semantic alignment). Before arriving at the terminology identification, semantic indexing is needed. This means, first of all, that text elements ((multi-)words) are associated with the concepts in the terminologies. The most common problem, though, is polysemy, where one word can correspond to several possible meanings. To solve this kind of problem, the current approach is based on a similarity score that depends on the term's context in the transcription and the term references in the terminology. This constitutes a distributional semantics approach and can lead to terminology identification by computing similarity scores for all the terminologies. Precisely, score of a terminology = f(score of each term).

Conclusion and Future Work

To conclude, this work has presented a model illustrating how semantic technologies can be applied to historical datasets. It has built the infrastructure for the storage, the semantic enrichment, the visualization and the publication of a corpus of scientific manuscripts.
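The similarity-based terminology identification described earlier (score of a terminology = f(score of each term)) can be sketched as follows. This is only an illustration of the idea: the vocabulary, the context data, the use of Jaccard overlap as the term score, and the choice of the mean as f are all invented here, not the paper's actual implementation.

```javascript
// Sketch of similarity-based terminology identification.
// A terminology maps each term to a set of reference words; a term's
// score is the Jaccard overlap between those references and the term's
// observed context in the transcription; the terminology score is the
// mean of its term scores. All data below are invented.

function jaccard(a, b) {
  const setA = new Set(a), setB = new Set(b);
  const inter = [...setA].filter((w) => setB.has(w)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : inter / union;
}

// score of a terminology = f(score of each term), here f = mean
function terminologyScore(terminology, contexts) {
  const scores = Object.entries(terminology).map(([term, refs]) =>
    jaccard(refs, contexts[term] || [])
  );
  return scores.reduce((s, x) => s + x, 0) / scores.length;
}

// Two hypothetical terminologies for the polysemous term "sign"
const early = { sign: ["signal", "mark", "object"] };
const late = { sign: ["signifier", "signified", "arbitrary"] };

// Context words observed around "sign" in a transcription
const contexts = { sign: ["signifier", "signified", "concept"] };

console.log(terminologyScore(early, contexts)); // low overlap
console.log(terminologyScore(late, contexts)); // higher overlap
```

Dating then follows from picking the terminology with the highest aggregate score and its associated usage period.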
The implementation of these steps is achieved through Semantic Web techniques. Although this infrastructure has been applied to Ferdinand de Saussure's work, it is ready to be used on any other corpus of manuscripts, as it proposes a multi-knowledge-resource structure to represent the evolving nature of an author's terminology. As an evolution of this work, new steps are being developed. Firstly, crowdsourcing for the transcription of the manuscripts has already been taken into consideration. Moreover, new ways to test the temporal (contextual) inferences and new tools for the extraction of scientific contents are under investigation. The ultimate purpose, however, is to create a unified interface for digital humanists.
- Aljalbout, S., Cosenza, G., Falquet, G., Nerima, L. (2016) A Semantic Infrastructure for Scientific Manuscripts. In Federico Boschetti (Ed.), Proc. International Conference 2016: "Digital Editions: Representation, Interoperability, Text Analysis", Venice.
- Aljalbout, S., Falquet, G. (2017) A Semantic Model for Historical Manuscripts. In Proc. Third International Workshop on Semantic Web for Scientific Heritage at ESWC'17, Portorož, Slovenia, May 2017.
- Aljalbout, S., Falquet, G. (2017) Un modèle pour la représentation des connaissances temporelles dans les documents historiques : applications sur les manuscrits de F. de Saussure. In Proc. 28es Journées francophones d'Ingénierie des Connaissances (IC 2017), Caen, July 2017.
- Cosenza, G. (2017) Les projets de Digital Humanities relatifs à l'oeuvre de Ferdinand de Saussure. Cahiers Ferdinand de Saussure, no. 70. Droz, Genève.
- Cosenza, G. (2016) Entre terminologie et lexique : les chemins de la pensée de F. de Saussure. Cahiers Ferdinand de Saussure, no. 69. Droz, Genève.
- Reichling, A. (1949) What is general linguistics? Lingua, 1:8-24. Elsevier.
- Cosenza, G. (2016) Dalle parole ai termini: i percorsi di pensiero di F. de Saussure. Edizioni dell'Orso.
- Meroño-Peñuela, A., Ashkpour, A., van Erp, M., Mandemakers, K., Breure, L., Scharnhorst, A., Schlobach, S., van Harmelen, F. (2015) Semantic Technologies for Historical Research: A Survey. Semantic Web Journal, 6(6): 539-564.

University of Geneva - Thematic Area 6
Fixed Wing HiL controls broken in 1.6.0rc1

On 1.6.0rc1 the actuator controls are all zeros coming out of the pixhawk. Controls are zeroed when commands originate from the pixhawk (in stabilized/hold mode) and under manual control (using a receiver on futaba SBUS). Versions up to 1.5.5 work well. A diff of the two branches indicates that a bunch of UART RC input code was added to simulator_mavlink.cpp, so that was my first thought. However, removing all code inside the ENABLE_UART_RC_INPUT ifdef block did not fix the problem. Never mind, it turns out this is another manifestation of these two: https://github.com/PX4/Firmware/issues/5680 https://github.com/PX4/Firmware/issues/6374 Closing this issue in deference to the already open ones.

Which version of QGC?

The problem is in the px4 firmware and is unrelated to qgc. This should be fixed in the current master.

What is the reason for the problem? Will the Pixhawk firmware not support FlightGear HIL? The current master does not have the flightgear HIL airframe, only xplane. How can I do HIL with flightgear?

I'm not sure what you mean. The px4 firmware doesn't have specific configurations for different HiL simulations. This issue was around a flag that was not being set properly for HiL simulation of fixed-wing aircraft. This issue was present for all simulation environments, and has been fixed. Separately, and likely the source of your issue, is that the interface between qgc and flightgear is broken right now after the change to actuator_controls.
Most people doing hil are using xplane, so no one has submitted an actuator_controls patch for the flightgear interface yet.

In the firmware init.d files there is the file "1004_rc_fw_Rascal110.hil", so doesn't px4 support the flightgear simulation?
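The "all zeros" symptom reported above is easy to spot on the wire. The sketch below packs and unpacks only the payload portion of a MAVLink HIL_ACTUATOR_CONTROLS message and checks for the symptom. The field offsets here are an assumption for illustration (MAVLink reorders fields largest-first); this is not a wire-accurate codec and omits framing and CRC.

```javascript
// Sketch: detecting zeroed HIL actuator controls in a message payload.
// Assumed layout: uint64 time_usec, uint64 flags, float[16] controls,
// uint8 mode (largest-first ordering; hypothetical offsets).

function packHilActuatorControls(timeUsec, flags, controls, mode) {
  const buf = Buffer.alloc(8 + 8 + 16 * 4 + 1);
  buf.writeBigUInt64LE(BigInt(timeUsec), 0);
  buf.writeBigUInt64LE(BigInt(flags), 8);
  controls.forEach((c, i) => buf.writeFloatLE(c, 16 + 4 * i));
  buf.writeUInt8(mode, 80);
  return buf;
}

function unpackControls(buf) {
  const controls = [];
  for (let i = 0; i < 16; i++) controls.push(buf.readFloatLE(16 + 4 * i));
  return controls;
}

// The symptom described in the issue: every control output is zero.
function isAllZeros(controls) {
  return controls.every((c) => c === 0);
}

const broken = packHilActuatorControls(1000, 1, new Array(16).fill(0), 1);
const healthy = packHilActuatorControls(
  1000, 1, [0.2, -0.1, 0.5, 0.8, ...new Array(12).fill(0)], 1
);

console.log(isAllZeros(unpackControls(broken)));  // true
console.log(isAllZeros(unpackControls(healthy))); // false
```

A check like this in a simulator bridge would have surfaced the zeroed stream immediately, rather than showing up as an unresponsive aircraft.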
M: I'm running an experiment with Facebook Ads for my Tumblog - ivankirigin http://giantrobotlasers.com/post/178007237/im-running-an-ad-on-facebook-to-learn-more-about

R: EGF Hi Ivan, It would be interesting to see what would happen if you turned on comments for your blog. You also mention creating a "proper blog", which I think would bring something new to the mix as you build your brand. Right now your Tumblr theme limits me to whatever you want me to focus on at that moment, with little ability to get to your archives or see what you are all about (tumblr view), whereas a blog or RSS readership will know who/what they are subscribing to. I think there is rationale to keep both, but you should try to build an RSS audience at the blog vs. followers and tumbularity on Tumblr.

R: ivankirigin Tumblr is social. I don't really care if it stays really small. Opinions are more valuable the more people read them, so I'd like a "proper blog" to grow. The reason to keep both is to keep things properly categorized. Pictures of cats -> Tumblr. Thoughts on a market -> blog.

R: ivankirigin If anyone here has any thoughts on something else to tune or play with, let me know. Overall, the experience of buying an ad on Facebook was far better than my very limited experience with Adsense.

R: yeabuddy I've been playing around with FB Ads doing some affiliate marketing stuff the last few months. 1) I think your target demographic is too narrow, judging by the # of impressions. Try to go a little bit broader than just "tumblog". Try "tumblr", "blogging", "rss", etc. to reach a bigger audience. 2) Your CPC is way too high. With FB it's best to go below their suggested bid, and gradually up your CPC amount until you start getting the desired impressions. 3) I'd gather a lot more data before making any kind of analysis on the data you're gathering. 4) Your CTR looks to be on the right track, although there's still probably not enough data to tell if it's stable yet.
0.1% is alright for a display ad on a social networking site. I'd still try split testing a couple of different versions of your ad to see if you can up that. You always want to be split testing, constantly cutting the weaker performing ads and making variations of the better performing ads.

R: ivankirigin I started at $0.10 CPC, but bumped it up to see more traffic faster.

R: sfphotoarts I don't think the content on your tumblr really has enough meat to it to get you startup advisor roles or public speaking engagements. Start another company that has a successful exit and then go the route of EIR, and you'll end up in advisor roles, board seats and speaking at SXSW...

R: sopu Yeah, I think I agree. Investing / advising looks fun and interesting, so it is a long term goal of mine. The funny pics I sometimes post on Tumblr are largely unrelated. I need to build a blog. The comment there is really about others I've seen who, imho, aren't that talented, but get attention before they are good bloggers.

R: ivankirigin Lol, I commented from my sock puppet account.

R: aw3c2 Hey, your site would be much better to read if you made the text non-bold.

R: ivankirigin I don't like the theme. It was built in. I'd like to work with abby to make things better. Feature request #1 is bigger media. The column is too thin.

R: gojomo I had no idea Tumblr even had its own 'follow'; I would be much more likely to subscribe via RSS.
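The split-testing advice in this thread boils down to simple arithmetic: compute CTR per variant, and don't declare a winner until the gap is larger than sampling noise. A minimal sketch, with invented click and impression numbers, using a two-proportion z-test as the rough significance check:

```javascript
// Sketch: comparing two ad variants by CTR with a rough significance
// check (two-proportion z-test). The numbers are made up for illustration.

function ctr(clicks, impressions) {
  return clicks / impressions;
}

// Two-proportion z-score: how confidently can we say the CTRs differ?
function zScore(a, b) {
  const p = (a.clicks + b.clicks) / (a.impressions + b.impressions);
  const se = Math.sqrt(p * (1 - p) * (1 / a.impressions + 1 / b.impressions));
  return (ctr(a.clicks, a.impressions) - ctr(b.clicks, b.impressions)) / se;
}

const variantA = { clicks: 12, impressions: 10000 }; // CTR 0.12%
const variantB = { clicks: 30, impressions: 10000 }; // CTR 0.30%

console.log(ctr(variantA.clicks, variantA.impressions)); // 0.0012

// |z| > 1.96 corresponds to roughly 95% confidence the CTRs differ
const z = zScore(variantB, variantA);
console.log(Math.abs(z) > 1.96);
```

With only a few dozen clicks per variant the z-score hovers near the threshold, which is exactly why the commenters above say to gather more data before cutting an ad.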
Software engineering basically deals with the development of software. It also includes software operation and maintenance. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Another term, also proposed by Naur, is data science; this is now used for a distinct field of data analysis, including statistics and databases.

Freebies. Before you pay, you will know the benefits you are entitled to with our services, such as free revisions, free bibliography, plagiarism scan and order tracking. Writing a Computer Science assignment is no longer a challenge. Our tutors ensure that you get the best help with computer engineering coursework. We have the best experts around the world to help you with the following areas:

Top Quality Computer Science Assignment Help

Computer science is an enormous subject with several sub-disciplines, which can make it challenging and burdensome for students to understand it and write assignments on it. We give you only original and authentic, error-free content in your assignment. There is no scope for plagiarism in any assignment. Some students simply neglect topics they can't learn, but that always brings unpleasant consequences and makes it hard to keep up with the curriculum later on. That is why many students are looking for someone who will 'do my c++ homework' and help them learn the subject.
[7] "A crucial step was the adoption of a punched card system derived from the Jacquard loom," making it infinitely programmable.[Note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[9] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[10]

Herself being a straight-A student, she is now helping our clients achieve the same level of grades that she always got in her university days. "Within over 70 chapters, each one new or significantly revised, one can find any kind of information and references about computer science one can imagine." In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines. Our aim is to help students clarify concepts and advise them on their assignments. Our solutions should only be used as a tutoring guide and not as their own work.
Programming topics we cover include computer architecture, computer graphics, SPSS, animation, big data, CATIA, R programming, RStudio, Python, Java, SQL, Stata, information systems, data flow diagrams, data analysis, computer networks, C programming, operating systems and ArchiCAD, as well as medical, childcare, health science, nursing (including nursing case studies and reflective nursing assignments) and biotechnology assignment help. 'There was this one time when I badly needed help with three consecutive Computer Science assignments. I'd missed my lectures in class and didn't know the concepts well enough to cope with the assignments.'
Japan Tour

My Japan tour is starting to come to an end. During the past week I have been traveling in Japan, given two general sessions at the BEA Tech Day 2005 conferences (Tokyo and Osaka), plus a press conference and a bunch of BEA internal training sessions. I had never been to Japan before, but I am very interested in their culture and I love Japanese food, so this trip has in many ways been a very interesting and fun experience. I had some fears that my visit would turn out similar to the movie 'Lost in Translation' starring Bill Murray (for those who have seen it), but my fears were unfounded. Most people spoke good English and I only met very generous people with a lot of heart.

Excitement around AOP

There is a lot of excitement and interest around AOP in Japan, among developers, management and the press. They are especially excited about AspectWerkz (it is actually more popular than AspectJ) and the JRockit support for AOP. It was fun to learn that they had translated every single article I have written into Japanese. On the other hand, the need to translate almost every article and book is one of the biggest hurdles on the way to mass adoption of AOP.

Press conference

The press conference I did was an interesting experience; it was my first real press conference, with interviewers lined up, pounding me with questions. It was a lot of fun. The topic for the interview was JRockit and its new AOP support, which they seemed very impressed by. They have already published two articles based on the press conference: http://www.atmarkit.co.jp/news/200508/31/bea.html http://itpro.nikkeibp.co.jp/free/NC/NEWS/20050830/167117/ and, from what I understand, two more articles will be published.

BEA Tech Day 2005 in Tokyo

The general session I did had around 350 attendees and was simultaneously translated. Things worked out pretty well. People seemed to like the talk and came up with a bunch of really good questions at the end.
What was even more fun was that there were actually other speakers who talked about AspectWerkz/AspectJ in different contexts. One guy, for example, talked about how to implement the transaction and dependency injection parts of the EJB 3 specification using AspectWerkz. In general, one of the main themes of the conference was AOP. Afterwards, during the conference party, I was treated almost like a celebrity, with "fans" coming up wanting to shake hands and have a picture taken with the mysterious AOP guru from the north pole (Sweden). :-)

Seasar framework - a Japanese Spring killer?

During lunch at the conference I had the pleasure of an interesting discussion with the founder of the dependency injection and component framework Seasar. For those who have never heard of it, Seasar is a project very similar to Spring, and it is more popular than Spring here in Japan. It uses plain bytecode manipulation to do the AOP part (not proxies), and is based on the AOP Alliance interfaces. One cool thing I found out about the project is that they are not only using but are very excited about Alex's and my project backport175 (so much so that they actually asked to contribute some code back).

BEA Tech Day 2005 in Osaka - Kyoto sightseeing

Today I am off to Osaka for my last talks at the BEA Tech Day 2005 in Osaka tomorrow. The trip will conclude with one day off, which I will spend sightseeing in Kyoto. Kyoto is the historical and cultural center of Japan, with many buildings and temples being more than 1000 years old (compared to Tokyo, which is only a couple of hundred years old). So that is going to be interesting.
November 28, 2020 at 4:42 pm #127272

Product name: Generic baclofen
Active component: Baclofen
Analogs of baclofen: Bio-baclofen, Clofen, Colmifen, Diafen, Espast, Flexibac, Gabalon, Kemstro, Lebic, Liofen, Lioresal intratecal, Lioresyl
Availability: In stock
Payment method: Visa / MasterCard / AmEx
Price (per pill): from €0.73 to €1.63
Medical form: pill
Prescription required: no prescription required

Info: Baclofen is used for treating spasm of skeletal muscles, muscle clonus, cramping of muscles, rigidity, spinal cord injury and pain caused by disorders such as multiple sclerosis. Discuss the risks and benefits with your doctor before breast-feeding. A very bad skin reaction (Stevens-Johnson syndrome/toxic epidermal necrolysis) may happen.
Implementing Enterprise DevOps Solutions Our animal health pharmaceutical client needed help streamlining their enterprise DevOps process to shorten timelines, reduce costs, and improve supportability. We built and deployed standardized development environments that reduced setup time from 30 days to 30 minutes. What We Did Our animal health pharmaceutical client provides innovative products and services to help raise and care for animals. They use a variety of technologies to support their customers, ranging from static product information sites to complex web applications. The manual process our client used to set up and approve new environments took an average of 30 days. This led to several problems: - long wait times to begin new projects - delayed overall delivery times - higher development costs - inconsistent development environments - inefficient use of IT time Our client needed help streamlining their enterprise DevOps process to shorten timelines, reduce costs, and improve supportability. We had recently built enterprise digital marketing solutions for this client. During those engagements, we demonstrated our expertise in modern DevOps and Agile Software Development practices, including Infrastructure as Code (IaC), Continuous Integration (CI), and Continuous Delivery (CD). As a result, our client knew we could help with their enterprise DevOps project. Our work involved two phases. Phase 1: Developing an API Solution While our client had a lot to accomplish, they first engaged us for a smaller portion of the project. During this phase, we learned more about the root issues our client needed to address. We collaborated with our client’s architects for context on their infrastructure. We built an API solution using an Azure Function App to do validation for ServiceNow. Then we used Azure DevOps pipelines, Terraform, and Ansible jobs to create infrastructure in our client’s new cloud provider of choice, Google Cloud Platform (GCP). 
Phase 2: Standardizing Development Patterns After we successfully delivered the first engagement result, our client extended our scope to include bigger pieces of the project. We went on to build and deploy several standardized development platform patterns. By packaging practices such as security by design, automated audit, and documentation-as-code with cost-effective architectures, our client could offer ready-built platforms for solution teams to start from. Our team built two initial patterns that paved the way for the project: - Static web app pattern – We built and deployed a static web app pattern from top to bottom. We learned the steps involved in their environment spin-up process, then automated those steps. With automation in place, we tested and confirmed the new standard development practices were working as expected for this pattern. - Web app pattern – We used Terraform to deploy a web app pattern across multiple cloud providers: GCP for core web services and Azure Active Directory (AAD) for single sign-on (SSO). We collaborated with other developers and architects, both from the client and from other vendors, on the patterns they built, including: - VM pattern — We recommended increasing VM configurability, which eliminated the need for users to write code and saved them time. We also built an integration between GitHub Actions Pipeline and Ansible Tower to automate VM hardening. - Data pattern — We reviewed and tested the data pattern to ensure compliance and consistency with other patterns. - Complex web pattern using microservices — We collaborated to deploy an example web application to Cloud Run for Anthos. Our client shared that this project is the most significant step yet for their IT organization, enabling greater business agility and speed-to-market. The project was reviewed by all of their key partners, including Google Cloud, GitHub, and HashiCorp Terraform, and was identified as a best-in-practice implementation of their capabilities. 
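Standardized patterns like these are typically distributed as Infrastructure as Code templates. Since Terraform natively accepts JSON configuration (`*.tf.json`), a pattern can even be emitted as plain data. The sketch below is a hypothetical illustration of the static web app pattern, not the client's actual template; the project id, bucket name and region are placeholders.

```javascript
// Sketch: emitting a minimal Terraform JSON config for a static web app
// pattern on GCP. Resource names and values are hypothetical placeholders.

function staticWebAppPattern(projectId, bucketName) {
  return {
    terraform: {
      required_providers: {
        google: { source: "hashicorp/google" },
      },
    },
    provider: { google: { project: projectId, region: "us-central1" } },
    resource: {
      google_storage_bucket: {
        site: {
          name: bucketName,
          location: "US",
          website: { main_page_suffix: "index.html" },
        },
      },
    },
  };
}

const config = staticWebAppPattern("example-project", "example-static-site");
// Write this out as main.tf.json, then run `terraform init && terraform apply`
console.log(JSON.stringify(config, null, 2));
```

Generating the config programmatically is one way to bake security and audit defaults into every pattern instance, which is the point of the standardization described above.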
30 days to 30 minutes to set up development environments and infrastructure

The new automation, built with Infrastructure as Code tools including Terraform and Ansible, reduced development environment setup from 30 days to 30 minutes. By standardizing the development environments, we cut out manual approval checkpoints and created consistency that minimized the approvals needed.

Reduced cloud spend by moving to GCP

As the client made changes to standardize their development environments, it was an opportune time to move to their new default cloud platform, GCP. We used Terraform to build most cloud infrastructure in GCP. The move to GCP enabled cost savings of $1 million each year.
Support importing files from outside of the root folder

Is this a bug report? No

The current stance of CRA to disallow importing files from outside the root folder of an app forces the most involved users, those with multiple projects, to jump through hoops that should be unnecessary in order to share code, both components and utils. Current options are:
- Symlink folders. This apparently is flaky (https://github.com/facebook/create-react-app/issues/3547) and can potentially cause issues running on Windows.
- Copy the source files into multiple locations every time they are modified, and check in many copies of hundreds of files. This is our current approach at my company as it works well in CI, but it results in human error when newbs edit the copied file rather than the original, and is messy with hundreds of unnecessary copies existing in the codebase.

If we can allow a CRA project to just act as another non-special folder in a larger mono-repo, and import files from outside its root folder, it would greatly improve the experience of working with many CRA projects while keeping all the benefits of not ejecting.

My best experience is w/ using yarn workspaces.

It was made intentionally to prevent users from importing something wrong. You can either use workspaces or eject and remove ModuleScopePlugin from the resolve.plugins option in both dev/prod configurations.

I personally am having one hell of a time getting CRA to work with yarn workspaces.
With the following setup:
packages/graphql - all my graphql fragments, queries, mutations, subscriptions
packages/controllers - shared JSX code between web/native
packages/web - cra 2.0
packages/native - react native

CRA says, oh you want to do graphql, just use babel-plugin-macros with graphql.macro, except that library only supports relative path imports, not an import like @monorepo/graphql/queries/user.graphql. CRA also doesn't support monorepos, so I have to set up webpack, parcel, or something similar to bundle my JSX, but also to run the babel macros and have it all compiled so react can import it. It just seems to defeat the purpose of CRA. I want my imported code to run under the same config options that CRA has and not have to duplicate it all and run a separate bundler in packages/controllers. If I don't use a monorepo to share my code, then I am publishing a private npm package every time I make a change.

@miraage I know it was intentional, and there were reasons. I'd like to see those reasons revisited given the inordinate amount of hacks it requires to work with multiple projects, which I presume was not fully appreciated at the time the decision was made. A config option to let projects opt out of this feature would be sufficient, where the majority of people still get the same behavior but power users do not have arbitrary barriers to good hygiene put in their way.

If I don't use a monorepo to share my code, then I am publishing a private npm package every time I make a change.

You don't have to publish the package. In your package.json just specify the local path to the module.

You don't have to publish the package.
In your package.json just specify the local path to the module.

So I currently have a package.json everywhere, but because these are located outside the cra folder root, they aren't compiled, so I am currently running babel src --watch --out-dir dist in my other packages and using graphql-tag.macro to compile graphql.

OK, here's some recent activity. This feature still makes sense and prevents clean usage of more than one CRA project per code repo. Any movement on this?

Maybe take a look at my PR https://github.com/facebook/create-react-app/pull/6056 Any improvements or suggestions in here?

@shaneosullivan This worked for me. Given this structure:

▽ monorepo
  ▽ common
      thing.js
  ▷ cra
  ▷ other

You can add a .env file to the cra folder, setting NODE_PATH='./../'. Now you can import common stuff in the cra code: import Thing from 'common/thing'; It's still a work-around, but it's the cleanest solution I've managed so far.

@humancatfood amazing, that works!! Thanks so much!

What about just using npm link? I have an external library that I'd like to actively develop inside the consuming CRA, but I have to restart the app every time to see the changes. Angular CLI has had a flag when running in dev mode for a while... --follow-symlinks

@humancatfood, @shaneosullivan How can you convert these outside components to ES5?

@ianqueue I'm sure you can get something more scalable working with npm links, but I never managed to get my head around those and the NODE_PATH solution is good enough for my purposes. @sezeregrek I don't know.. same as everything else?

Hi again @ianqueue, so I did get my head around this in the meantime (or at least I think I did..)
You don't need to npm link anything at all: in your cra folder, simply do npm install <path/to/your/external/module>. This should create an entry in package.json like this:

"dependencies": {
  "your-external-packages-name": "file:<path/to/your/external/module>",
  // other deps
}

You can now import it in your cra code: import bla from 'your-external-packages-name'; If you make changes to your external package, it should propagate to the cra app without you having to reload.

Does the method described by @humancatfood require that code in the common and other folders be pre-compiled, or will CRA compile them with webpack?

Nope, it'll all get compiled together by CRA. What I suggested simply lets you import stuff from another location than before; it doesn't change anything about what happens to that stuff after that, i.e. compiling, babel-ing, etc.

@humancatfood could you provide a dummy repository example of your setup? yarn add path/to/ui-lib for example, where lib is a shared components library, and then import Component from 'ui-lib' sure gives me compiler errors.

@bjerkins It's not like I don't have an actual job that needs doing, but ok, here you go: https://github.com/humancatfood/outside-imports The crucial step is in this commit: https://github.com/humancatfood/outside-imports/commit/a956f08fdac1b1f86dbc2be0459e333825cac0df And here it is deployed: https://humancatfood.github.io/outside-imports/ After playing around with this for a bit longer, I think using npm link is actually better though.

@humancatfood right. Thanks for taking the time to show what you meant.

Does the method described by @humancatfood require that code in the common and other folders be pre-compiled or will CRA compile them with webpack?

If you're importing e.g. shared React components, then yes. @humancatfood's example doesn't handle that, unfortunately.

@waynebloss Apologies, I was thinking of minification, etc. If you want to import .jsx, then yes, you need to precompile it.
Or eject and define your own loaders. Or write React in standard js of course: https://github.com/humancatfood/outside-imports/commit/9589f66b53fca9c3688376735aa6e561452a4976
Error code 0x80072f76 - 0x20016 is a Windows Update error code. This error code occurs when the Windows Update service is not running or is not working properly. This can happen if the Windows Update service is not started, is not set to start automatically, or if the service is disabled. 1. Check your internet connection and try again - Open the Network and Sharing Center. - Click on the "View network status and tasks" link. - On the "Status" tab, under "Network Connections", you will see your current internet connection status. - If you are connected to the internet, you will see a green check mark next to your connection. - If you are not connected to the internet, you will see a yellow question mark next to your connection. - Click on the "See all problems" link. - On the "Problems" tab, under "Network Connections", you will see a list of error codes. - If you are having trouble connecting to the internet, try one of the following solutions: - Make sure your internet connection is active and working. - Try connecting to a different internet service provider. - Reset your internet connection by clicking on the "Reset" button. - Try connecting to the internet using a different computer. 2. Check for updates and install any that are available - On your computer, open Windows Update. - If there are updates available, they will be listed. - Click on the “Update” button. - In the “Update Options” window, click on the “See Also” tab. - If there are updates available for your computer that fix the error code 0x80072f76 - 0x20016, they will be listed. - Click on the update to install it. - Once the update is installed, restart your computer. - If the error code 0x80072f76 - 0x20016 is still being displayed, you will need to contact Microsoft support. 3. Try the Windows Store app again - Open the Start screen and search for "Windows Store". - When the Windows Store app appears on the Start screen, right-click it and select "Open." 
- On the app's page, click the "Settings" link in the upper-right corner. - In the Settings window, click the "Apps & Features" tab. - Under the "Apps & Features" tab, click the "Windows Store" link. - On the Windows Store page, under the "Store apps" heading, select the "Try again" link. - On the "Try again" screen, click the "OK" button. - If the error code 0x80072f76 - 0x20016 still appears, repeat the steps listed in this article. 4. Reset the Windows Store app Reset the Windows Store app: - Open the Start menu and click on the "Settings" button. - Click on the "Apps & features" button and then on the "Windows Store" app. - On the "Windows Store" app, click on the "Reset" button. - On the "Reset Windows Store" dialog box, click on the "Reset" button. - After the reset process is complete, click on the "OK" button. Some users might also have success with: - Restart your device and try again. - Re-register the Windows Store app. - Run the Windows Store apps troubleshooter. - Contact Microsoft support.
The high tech patent game continues as AOL sells its patents to Microsoft (MSFT) for $1.1 Billion. This all started when Google (GOOG) developed its Android application. Competitors complained that Android programming “borrowed” from existing and previously patented technology. Google’s response was to buy Motorola’s cellphone business outright. The Google purchase of Motorola made sense for both parties. Motorola had been unable to successfully monetize its extensive list of patents and Google needed protection from an increasing number of patent lawsuits. Microsoft is now also taking advantage of someone else’s work and beefing up its patent portfolio as AOL sells its patents to Microsoft. According to AOL the patents include applications for social networking, generation of content, advertising, mapping, and, of course, internet searches. When AOL extracted itself from the disastrous merger with Time Warner it wisely kept its patent portfolio. Now, as AOL sells its patents to Microsoft, its stock rises in response to a billion dollar cash infusion. As AOL sells its patents to Microsoft a couple of thoughts come to mind. An interesting part of the current surge of patent acquisitions is that it is more effective than buying an entire company. In the world of mergers and acquisitions one company commonly buys another, in return for its own shares, in order to obtain patent rights, a market, or a product that it prefers to buy instead of develop. Mergers and acquisitions commonly include layoffs, sales of “excess” assets, and a lot of reorganization. Buying patents, as AOL sells its patents to Microsoft, may well be more cost effective than when Google picked up Motorola Mobility. Microsoft is not picking up a new business to manage. It is simply making good use of its cash trove and bypassing years of R&D in order to have in hand the rights to intellectual property necessary for its own search engine, advertising, and content generation business.
For the investor interested in either of these companies, which becomes a good stock investment based upon this deal? Obviously, AOL was a great stock to invest in last week, if one had known about the deal. As AOL sells its patents to Microsoft its stock price has risen consistent with its new pile of cash. We have written before about investing in Microsoft patents. Microsoft has lots of cash so if a billion or so is missing no one will really notice. What is pertinent here is that Microsoft is both gaining patents and limiting the rights of other companies to use the same ideas. The problem for other companies who want to develop similar applications is this. They need to develop from scratch. They also need to research the patents of Microsoft, Google, and others in order not to inadvertently use a “too similar” solution of a programming problem. If they do they are likely to be sued by the likes of Microsoft, Google, or others. In a world where Motorola sells out to Google and AOL sells its patents to Microsoft, the possession of patents amounts to an income stream as other software developers opt to purchase patent rights instead of trying to develop new software for old applications using laborious “work arounds” in order to avoid expensive and potentially ruinous lawsuits in the future.
26-Feb-2015
· When creating new tags, triggers, or variables, the name field is now at the top of the form. You will be prompted to rename the tag in the final step if the default name is not modified.
· Various bug fixes.

18-Feb-2015
· [New Feature] Added ability to use CSS selectors as operators when setting up triggers.

12-Feb-2015
· Mobile containers now supported in V2.
· Floodlight integration and approvals now supported in V2.
· Ability to see and restore deleted container versions.
· Consolidated several fields into Fields to Set, and added a drop-down to allow users to select the field name.
· Bug fix for version notes.

05-Feb-2015
· [New Feature] Tables can now be sorted in V2.
· Improved error messages in V2.

28-Jan-2015
· Minor bug fixes and UX improvements.

22-Jan-2015
· [New Feature] Localization added to V2. Ability to select a language preference in V2 (Gear menu > Settings).
· Various additional bug fixes.

14-Jan-2015
· Changes to accounts screen in V2.
· Bug fix for pages served with XML media types.
· Various additional bug fixes.

08-Jan-2015
· Share Preview support added to V2.
· Various bug fixes.

10-Dec-2014
· Container Import/Export: Export format has been modified to match the JSON format used in the external API.
· Various bug fixes.
· [New Feature] Google Trusted Stores Tag adds fields for badge position and locale. IDFA collection is now available for Universal Analytics on iOS. Use the “Enable Advertising Features” checkbox on the Universal Analytics tag.
· Bug fix for Referrer macro for when the referrer field was empty and the macro was based on a component of the URL.
· Bug fix for Debug Mode, addressing behavior for URLs ending in a hash.

21-Nov-2014
· Locale field in the Trusted Stores tag is now required.
· Added API support for built-in variables.

12-Nov-2014
· V2: Tag, Trigger and Variable lists are now sorted by name alphabetically.
· V2: Timer trigger event name fixed.
· V2: User settings page added.
· V2: Bug fix for save button on the tag page for “Some Pages” interactions.
· API validation bug fixes.

29-Oct-2014
· Various bug fixes to Preview Mode, ComScore tag, Google Trusted Stores tag and the API.

15-Oct-2014
· [New Feature] Version 2 beta now available! Includes major revision to the user interface and new workflows. Learn more.
· [New Feature] Launched API that allows you to control your accounts and containers programmatically. Learn more.
· [New Feature] Container import/export is now available to all users. Learn more.

02-Oct-2014
· [New Feature] Google Trusted Stores is now available for users in the United Kingdom, France, Germany, Australia, and Japan.

17-Sep-2014
· [New Feature] AdWords Conversion Tracking: Conversions will now appear in AdWords if you have AdWords tags in Android containers. Republish your app for this to take effect.

05-Sep-2014
· Improvements to Debug Mode stability:
· Nested values that had circular references now handle that gracefully by displaying the keyName.
· Events pushed on the data layer by macros are no longer displayed in debug mode (but still work in live mode).

22-Aug-2014
· AdWords Conversion Tracking Tag: Conversion Value is now an optional field.
· AdWords Conversion Tracking Tag: New field for Currency Code.
· [New Feature] Floodlight Sales Tag: Product reporting now supported.

31-Jul-2014
· [New Feature] Implemented Universal Analytics enhanced ecommerce support for iOS.
· Enhanced control of dispatching for iOS.

25-Jul-2014
· Container Version Number Macro now available for mobile containers.

05-Jul-2014
· Various fixes for Debug Mode.

01-Jul-2014
· [New Feature] Launched improved Preview and Debug Mode.
· [New Feature] Launched Tag Firing Priority feature.
· Fixed issue with Mediaplex Master Client Tag (MCT) tag on SSL pages.

04-Jun-2014
· [New Feature] Support for new Universal Analytics “Enhanced Ecommerce” plug-in. Allows Universal Analytics tag users to track purchases, refunds, product impressions, etc. with GTM. Refer to Ecommerce Tracking (Universal Analytics) for more information.

15-May-2014
· Bug fix for <area> tags for auto-event tracking (will now be tracked by Link Click Listener).
· Bug fix for Universal Analytics tag in Internet Explorer. In certain circumstances, the first pixel sent by this tag was dropped in IE.

06-May-2014
· New AdWords Dynamic Remarketing guide available in the Help Center.

29-Apr-2014
· Bug fix to the Universal Analytics tag: The legacyHistoryImport field now works correctly on “Fields to Set”.

22-Apr-2014
· Additional improvements to URL macros: Added ability to grab fragments or hostnames from arbitrary URLs.

15-Apr-2014
· Improvements to URL macros: Added ability to fetch specific parts of the referring URL and the auto-event variable “Click URL”.
· Added Display Advertising Features to the Universal Analytics tag, enabling features such as Demographics and Interest Reports, Remarketing with Google Analytics, and DCM Integration.

08-Apr-2014
· Universal Analytics is out of beta with all features fully launched.
· Fixed issue in which the gtm.dom event would fire early in IE8 for large, complex pages.
· Improved instructions for finding tracking code for Google Analytics.

18-Mar-2014
· Constant string macro: Limit increased to 1024 characters.
· Lookup table macro: Fixed UI so that when Lookup Table is selected, the header of the second column is updated to properly include the macro name.
· Form submit listener: Fix to issue when form has an input named “action”.
· Content experiments for mobile apps: New feature adds the ability to run content experiments directly from within Google Tag Manager.

11-Mar-2014
· Minor UI changes to tag/rule/macro edit pages: Removed Create Version / Publish toolbar.

04-Mar-2014
· Auto Event History Listener: Similar to the other auto-event tracking tags (e.g. Click Listener, Form Listener), we’ve added a new tag type under “Event Listeners” called the “Browser History Listener”. Once executed, this tag will listen for changes to the page’s history. These history events typically happen when the URL fragment (hash) changes in an Ajax app, or when a site is using the HTML5 pushState APIs. This event listener is useful for tracking virtual pageviews.
A common feature of a website where you can do something is to have accounts for your users. The traditional way is letting the user provide a username and password which are stored in your database. The next generation of accounts is federated, where an identity provider like Google or Facebook stores the information for you, and you can use that account to log in to other websites. What would accounts look like when you use Verifiable Credentials? Looking at Trinsic.id Trinsic.id provides a system where you can create an account and log in later by scanning a QR-code. When you register at Trinsic.id, you provide your name and email address. At that moment you are immediately logged-in. In your mail you receive a QR-code. When you scan the QR-code with your wallet app, a credential is added to your wallet. When you log out and want to log in again, you need to scan a QR-code with the Trinsic.id Wallet App. What is the technology behind it? When you enter your name and email address, a user is created for you in the database. It contains your name, email, and a randomly generated UUID. At that moment, a mail will be sent to your email address. The email contains a QR-code. When you scan the QR-code with a barcode scanner, you can see what data it contains. In this case it’s a URL: When you follow the link, you get forwarded to a URL that looks like: The value of the query parameter d_m is a base64 encoded value. When you decode that value, it looks like: As you can see, this is a credential offer that contains the information you entered while registering. Note here that the @type fields are version 1.0 types, for example: did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/issue-credential/1.0/offer-credential. This means that Trinsic.id uses the version 1.0 protocols. Usually, a credential would be offered using an existing connection. You don’t have a connection yet, so in this case a ~service block is attached to the offer. 
The ~service block contains information for your wallet app to know where it can connect to. You receive your QR-code on your email address. If you are able to scan the QR-code, you must have access to that email address. Therefore, this is a way to verify your email address. The offers~attach.data field contains a base64 encoded value again. Decoding that gives you: This is the actual credential offer. It contains references to the schema and credential definition used to generate the offer. When you are logged out and want to log in, you need to scan a QR-code again. This QR-code contains a URL again: When you follow the URL, you get redirected to a URL that looks like: Base64 decoding the d_m parameter gives you: It is a presentation request! Let’s take a look at the request_presentations~attach.data field and decode it: Trinsic.id has sent you a request to prove that you own a Login credential which has been issued by them to your email address. Your wallet will look for a credential which has been made with their schema and their credential definition and construct a proof with it. When Trinsic.id verifies the proof, you will be logged in to your account on the website. Implementing with ACA-py V1 protocols When you want to recreate this flow with ACA-py, you have to clear a couple of hurdles. As noted before, Trinsic.id uses the version 1.0 protocols as defined in Aries RFC0160, RFC0023 and RFC0037. The V2 protocols are defined in RFC0434, RFC0453 and RFC0454. ACA-py has implemented the first two and is working on the last one. V1 Connection-less credential offer Unfortunately, ACA-py does not expose an API to create a connection-less credential (the QR-code in your email). In the issue credential V1 endpoints there is a function called credential_exchange_create_free_offer which will add an oob_url to a credential offer. However, this function is not exposed via any Admin endpoint, so there is no way of getting to it.
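The decode-and-inspect steps described above can be sketched in a few lines of Python. The host name and the payload below are illustrative stand-ins based on the flow in this post, not actual Trinsic.id data; only the d_m parameter name comes from the URLs shown above:

```python
import base64
import json
from urllib.parse import urlparse, parse_qs

def decode_d_m(redirect_url: str) -> dict:
    """Pull the d_m query parameter out of the redirect URL and
    base64-decode it back into the JSON message it carries."""
    encoded = parse_qs(urlparse(redirect_url).query)["d_m"][0]
    # Re-pad to a multiple of 4 in case the encoder stripped the '=' padding.
    padded = encoded + "=" * (-len(encoded) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Illustrative payload; a real offer carries far more fields.
offer = {
    "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/issue-credential/1.0/offer-credential"
}
encoded = base64.urlsafe_b64encode(json.dumps(offer).encode()).decode()
url = "https://example.org/link/?d_m=" + encoded  # hypothetical host
print(decode_d_m(url)["@type"])
```

The same helper works for the login flow's presentation request, since both messages are packed into d_m the same way.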
Issue credential V1 contains a similar function which is also not exposed via an endpoint. However, we can create the offer ourselves. Let’s start with creating a schema: Then let’s create a credential definition: Now we need to create a credential exchange record using /issue-credential/create. I know the description in the Swagger docs says Send holder a credential, automating entire flow but that is a lie. It is a copy-paste mistake from the /issue-credential/send endpoint :) Second, create a new connection invitation using To construct the connection-less credential, you should create this structure: From the credential exchange record copy From the connection invitation copy Great, now we have a connection-less credential. How do we get it to our users? QR-codes and links As explored before, you can make a QR-code from the connection-less credential and try to scan it with the wallet app. Unfortunately there is a limit to the amount of data that can be encoded in a QR-code, and a basic connection-less credential goes over that limit. This is why Trinsic.id is not creating a QR-code directly from the credential. Instead, the credential should be stored in a database and be accessible via a randomly generated identifier. This is what a URL like https://trinsic.studio/url/dc50d919-0e20-41f6-b015-... represents. When the Trinsic.id wallet scans the QR-code, it follows the URL. The URL gets redirected to a URL in the form of https://trinsic.studio/link/?d_m=eyJyZXF1Z... where the d_m parameter is the base64 encoded version of the connection-less credential that is stored in the database. The Trinsic.id wallet reads the URL and looks for the d_m query parameter. It base64-decodes the parameter and adds the credential to the wallet. Using a connection-less credential with ACA-py Unfortunately, there is no endpoint to receive a connection-less credential with ACA-py.
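Because the full offer is too large for a QR-code, the store-under-a-random-id-and-redirect pattern described above can be sketched as follows. The in-memory dict, the function names, and the example.org URLs are all invented for illustration; a real implementation would use a database and a web framework:

```python
import base64
import json
import uuid

# In-memory stand-in for the database the article mentions.
STORE = {}

def shorten(connectionless_offer: dict, base: str = "https://example.org") -> str:
    """Store the (too large for a QR-code) offer under a random id and
    return a short URL suitable for encoding in a QR-code."""
    key = str(uuid.uuid4())
    STORE[key] = connectionless_offer
    return f"{base}/url/{key}"

def resolve(short_url: str, base: str = "https://example.org") -> str:
    """What the redirect endpoint would return: a /link/?d_m=... URL with
    the stored offer base64-encoded into the d_m parameter."""
    key = short_url.rsplit("/", 1)[-1]
    encoded = base64.urlsafe_b64encode(json.dumps(STORE[key]).encode()).decode()
    return f"{base}/link/?d_m={encoded}"

# Minimal illustrative offer with an attached ~service block.
offer = {"@type": "...", "~service": {"serviceEndpoint": "https://example.org/agent"}}
short = shorten(offer)
print(resolve(short))
```

The wallet then follows the short URL, receives the long /link/?d_m=... URL, and decodes d_m as shown earlier.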
This means that, in order to test if your credential works, you need to deploy your ACA-py instance to a publicly available location where the Trinsic.id wallet app can connect with the ACA-py instance that created the credential. If you do this, don’t forget to switch your Trinsic.id wallet network to the ledger that you used when issuing the credential. This means that you need to have your ACA-py set up to run against the publicly available development ledgers that are supported by the Trinsic.id wallet. A connection-less proof request Similarly, a connection-less proof request can be constructed. First, create a proof request: For the connection-less proof request, create a structure like: The ~service properties can be copied in the same way as we did for the connection-less credential. The request_presentations~attach field should be filled with the presentation_request field from the Proof Request Record. Again, there is no endpoint for receiving a connection-less presentation proof. Implementing with ACA-py V2 protocols Issuing credentials or presenting a proof without a connection means that this communication is out-of-band: there is no prior connection between the issuer and holder, or the holder and verifier. With the introduction of the next iteration of protocols, RFC0434 came to life. It describes the Out-of-Band communication of invitations as well as issuing credentials and presenting proofs. Creating an Out-of-Band credential-offer or presentation-request should become a lot easier than it was in V1. However, receiving a credential offer is not yet supported in the Out-of-Band endpoint, and wallet apps do not support the new protocols yet. It looks like connection-less credentials and proofs are a bit messy to implement using the V1 protocols. Testing them with a wallet app is even more of a challenge due to the required infrastructure. You can expect a follow-up post whenever Out-of-Band implements full support.
This article explains how to view & manage resource mapping between the project plan resources and the project team resources in EPM Live. The Resource Mapping window will appear during the publish process, and can also be accessed manually by opening Resource Mapping from the Project Options menu. 1. Open the Publisher Menu 2. Select Project Options 3. Select Resource Mapping 3.1 Map Resources To map resources, you must have resources assigned to tasks or have resources in the Resource Sheet in your project plan. You must also have resources on your project team. For organizations NOT using the Build Team functionality, all resources in the EPM Live Resource Pool will show instead of the project team. Note: You may use named resources and generic resources. 1. Select the name of the resource in both the left and right columns. 2. Click Map>>. 3. The name of the resource on the right (project plan) will update to be the full name as shown on the EPM Live team and will include the email address. 4. Map a project team member to a generic placeholder. For example, if you entered Developer as a generic placeholder in your project plan for all the development tasks, and upon mapping, you want to assign Jude to all the development tasks, map Jude with Developer. 3.2 Un-map Resources To un-map any mapped resources, select the two resources (one from each column) you’d like to un-map. Then, click <<Unmap. 3.3 Add Project Team Resources to Your Project Plan In the scenario that you have already added resources to your project team, but you haven’t yet assigned any resources to tasks, or you haven’t populated your Resource Sheet, you can add the project team resources to your project plan. These resources have not yet been added to your Microsoft Project schedule. Project Publisher will import your EPM Live project team resources into your project plan for you. 1.
The Resource Map window shows the project team from EPM Live (“SharePoint Resources”). 2. Select the resources on the left side. 3. Click Add>> to import the resources into your project plan. 4. The added resources now show on the right side under “Microsoft Project Resources.” The resource will also be mapped (shown by the green checkmarks on both sides). 5. Do this for all applicable resources that you want on your Resource Sheet in your project plan, for assigning tasks. Resources added to project plan Resource Sheet. 3.4 Add Resources You may be at the point of mapping resources, but realize you haven’t added all the necessary resources to your project team yet. All resources in your project plan will show under the column “Microsoft Project Resources.” 1. Click Add Resources. The Build Team window for your project will open in your EPM Live site. 2. Add new resources to your project team as needed. Save and close the Build Team window. 3. Return to your Project Publisher Resource Map window. Click Refresh. 4. The updated project team will refresh on the left side under the column “SharePoint Resources.” 3.5 Do not show resource map again If you have completed your resource mapping, and no longer want to be prompted to map resources when you publish your plan going forward, select the check box for Do not show resource map again. Should you want to re-enable the Resource Map window to appear when you next publish, go to the Project Options menu. Under Publish Options, deselect the check box for Hide resource map on publish.
You honestly would not believe the amount of effort and organisation that goes into putting on an event like tech.days. The team have been planning since December. Every Wednesday, 14 or so of us would have an hour-long meeting to discuss the logistics, content and video production. My involvement consisted of turning up for the meetings, briefing the agency that were designing the event website, creating the build scripts for putting the site on Azure and sorting out the content for the web day. The website was pretty much a walk in the park. I’d not run a major site on Azure before; it took a few days to get a process together and create an MSBuild script that automated it. I always recommend creating build scripts; it was a habit I developed in my previous job at Systemax before joining Microsoft. Not only did it save me time, it also provided a sort of documentation for the guy that had to take over the role when I left. My rule of thumb: on the second time of doing anything, automate it. Deciding on the web day content was much harder. I mean, Microsoft and the web can mean so many different things: WebForms, MVC, Silverlight, IE, HTML5… a day which covers them all could look disjointed. In January myself and Mark Quirk decided to focus on IE9 and MVC. Since starting as an Evangelist at MS I have tried to avoid producing content that focuses just on our own technologies, but instead takes a wider industry view on technology. So when it came to thinking about IE9 speakers I approached Bruce Lawson from Opera to talk about HTML5 (via Chris Mills, another great Opera Evangelist) and Rachel Andrew to talk about CSS3. Both of these speakers are industry heavyweights who have years of experience. I practically skipped around the room when they both confirmed. I will be eternally grateful to them both; neither had to do it and they asked for nothing in return… they are just the sort of people that are willing to share and truly believe in creating a better web.
For the other talks I looked closer to home. We are blessed in the UK to have Steve Sanderson, who works for the Microsoft Developer Division, living just down the road. He is an incredibly captivating speaker, funny but to the point. I met him for the first time last year over lunch and so cheekily asked him over email to speak about MVC. He agreed the same day; again, I was ecstatic to get such a high-calibre speaker. The rest of the sessions fell into place. The UK expert on site pinning is Stephen Kennedy, and since we worked together on the Gorillaz project earlier in the year, I called him up and sure enough he agreed on the spot. No arm twisting or bribery required. The keynote and the WebMatrix sessions were taken care of by my ubelly.com colleagues Dr Andrew Spooner and Sir Andy Robb, who can always be relied upon to deliver great talks. Come the end of the web day I was absolutely shattered. I was so nervous the night before that I hardly slept; I was running on just two hours’ sleep. I relied heavily on coffee and 4 Red Bulls to get me through (thanks @alexball for pushing the Red Bull). As the final session came to an end and the cinema emptied, I looked up at the 400 or so empty seats, now cloaked in darkness, and thought to myself… I have the best job ever!
Alternating and Augmenting Paths Graph matching algorithms often make use of specific properties in order to identify sub-optimal areas in a matching, where improvements can be made to reach a desired goal. Two famous properties are called augmenting paths and alternating paths, which are used to quickly determine whether a graph contains a maximum, or minimum, matching, and whether the matching can be further improved. Most algorithms begin by randomly creating a matching within a graph, and further refining the matching in order to attain the desired objective. An augmenting path, then, builds up on the definition of an alternating path to describe a path whose endpoints, the vertices at the start and the end of the path, are free, or unmatched, vertices: vertices not included in the matching. Finding augmenting paths in a graph signals the lack of a maximum matching. Try to draw out the alternating path and see what vertices the path starts and ends at. Augmenting paths in matching problems are closely related to augmenting paths in maximum flow problems, such as the max-flow min-cut algorithm, as both signal sub-optimality and room for further refinement. In max-flow problems, as in matching problems, augmenting paths are paths where the amount of flow between the source and the sink can be increased. Many real-life matching problems are much more complex than those presented above.
This added complexity often stems from graph labeling, where edges or vertices are labeled with quantitative attributes, such as weights, costs, preferences or any other specifications, which adds constraints to potential matches. A common characteristic investigated within a labeled graph is known as feasible labeling, where the label, or weight assigned to an edge, never surpasses in value the addition of the respective vertices’ weights. This property can be thought of as the triangle inequality. A feasible labeling acts opposite an augmenting path; namely, the presence of a feasible labeling implies a maximum-weighted matching, according to the Kuhn-Munkres Theorem. The Kuhn-Munkres Theorem When a graph labeling is feasible, yet vertices’ labels are exactly equal to the weight of the edges connecting them, the graph is said to be an equality graph. Equality graphs are helpful in order to solve problems by parts, as these can be found in subgraphs of the graph G, and lead one to the total maximum-weight matching within a graph. A variety of other graph labeling problems, and respective solutions, exist for specific configurations of graphs and labels; problems such as graceful labeling, proper labeling, lucky-labeling, or even the famous graph coloring problem. Hungarian Maximum Matching Algorithm The algorithm starts with any random matching, including an empty matching. It then constructs a tree using a breadth-first search in order to find an augmenting path. If the search finds an augmenting path, the matching gains one more edge.
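As a small illustration of the feasible-labeling condition described above, the check is a one-liner over the edge set; the vertex names and weights here are made up:

```python
def is_feasible(labels, weights):
    """A labeling is feasible when, for every edge (u, v),
    l(u) + l(v) >= w(u, v)."""
    return all(labels[u] + labels[v] >= w for (u, v), w in weights.items())

# A trivially feasible labeling: each left vertex gets its maximum
# incident edge weight, each right vertex gets 0.
weights = {("x1", "y1"): 3, ("x1", "y2"): 2, ("x2", "y1"): 4}
labels = {"x1": 3, "x2": 4, "y1": 0, "y2": 0}
print(is_feasible(labels, weights))  # True
```

Note that on the edges ("x1", "y1") and ("x2", "y1") the labels sum exactly to the edge weight, so those edges would belong to the equality graph.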
Once the matching is updated, the algorithm continues and searches again for a new augmenting path. If the search is unsuccessful, the algorithm terminates, as the current matching must be the largest-size matching possible. Unfortunately, not all graphs are solvable by the Hungarian matching algorithm, as a graph may contain cycles that create infinite alternating paths. In this specific scenario, the blossom algorithm can be utilized to find a maximum matching. Also known as the Edmonds’ matching algorithm, the blossom algorithm improves upon the Hungarian algorithm by shrinking odd-length cycles in the graph down to a single vertex in order to reveal augmenting paths, and then uses the Hungarian matching algorithm. The blossom algorithm works by running the Hungarian algorithm until it runs into a blossom, which it then shrinks down into a single vertex. Then it begins the Hungarian algorithm again. If another blossom is found, it shrinks the blossom and starts the Hungarian algorithm yet again, and so on until no more augmenting paths or cycles are found. The poor performance of the Hungarian matching algorithm in dense graphs, such as a social network, sometimes renders it less useful there. Improving upon the Hungarian matching algorithm is the Hopcroft-Karp algorithm, which takes a bipartite graph, G(E,V), and outputs a maximum matching. The time complexity of this algorithm is O(|E| √|V|). The Hopcroft-Karp algorithm uses techniques similar to those used in the Hungarian algorithm and the Edmonds’ blossom algorithm.
Hopcroft-Karp works by repeatedly increasing the size of a partial matching via augmenting paths. Unlike the Hungarian matching algorithm, which finds one augmenting path and increases the size of the matching by 1 on each iteration, the Hopcroft-Karp algorithm finds a maximal set of shortest augmenting paths during each iteration, allowing it to increase the size of the matching by increments larger than 1. In practice, researchers have found that Hopcroft-Karp is not as good as the theory suggests; it is often outperformed by breadth-first and depth-first approaches to finding augmenting paths.
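The augmenting-path idea described above can be sketched for the simplest case, an unweighted bipartite graph, using Kuhn's algorithm: repeatedly search for an augmenting path from each free left-side vertex, and flip it to grow the matching by one edge. This is an illustrative sketch (names like `adjacency` and `match_of` are ours, not from any particular library), not the Hungarian, blossom, or Hopcroft-Karp variants discussed in the text.

```python
# Sketch of augmenting-path matching on an unweighted bipartite graph
# (Kuhn's algorithm). Each successful augmenting path grows the
# matching by exactly one edge, as described above.

def max_bipartite_matching(adjacency, n_left, n_right):
    """adjacency[u] lists the right-side vertices adjacent to left vertex u."""
    match_of = [-1] * n_right  # match_of[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adjacency[u]:
            if not visited[v]:
                visited[v] = True
                # v is free, or v's current partner can be rematched elsewhere:
                if match_of[v] == -1 or try_augment(match_of[v], visited):
                    match_of[v] = u
                    return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            size += 1  # one more edge added to the matching
    return size
```

Hopcroft-Karp improves on this by finding a maximal set of shortest augmenting paths per phase instead of one path at a time, which is where its O(|E| sqrt(|V|)) bound comes from.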
OPCFW_CODE
Test build failure on Mac

Recently our test builds keep failing on Mac. The error message is "failed to load onnx module" in check-onnx-backend. This error is NOT caused by the PR that upgraded onnx to v1.12.0. I have an old PR that passed all tests previously but failed on Mac today after I triggered another test. Any idea about this error? Did we make any change to the test script or machine?

@gongsu832 It certainly has something to do with updating onnx to v1.12.0 since the failures only started happening after that. If you look at the job log:

Successfully built onnx
Installing collected packages: typing-extensions, numpy, onnx
DEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621
Successfully installed numpy-1.23.4 onnx-1.12.0 typing-extensions-4.4.0

I'm guessing it might have something to do with the deprecation warning, although it does say onnx-1.12.0 has been successfully installed. The problem with the Mac build is that we have no direct control over it. The env is set up by GitHub Actions servers.
So it's very difficult to debug.

OK I see what's going on. Even though we force python 3.9 on the macOS build, somehow onnx-mlir started to detect python 3.11. As a result, onnx was built with 3.9 but onnx-mlir was built with 3.11. Frankly, I'm getting rather annoyed with GitHub Actions. The whole thing seems rather brittle.

-- Found Python3: /Library/Frameworks/Python.framework/Versions/3.11/bin/python3.11 (found version "3.11.0") found components: Interpreter Development Development.Module Development.Embed

@chentong319 I have tried several ways to convince onnx and onnx-mlir to find the same python3 on macOS with no luck. I gave up and I think the proper fix is for onnx to update their find_package command:

# find_package Python has replaced PythonInterp and PythonLibs since cmake 3.12
# Use the following command in the future; now this is only compatible with the latest pybind11
# find_package(Python ${PY_VERSION} COMPONENTS Interpreter Development REQUIRED)
find_package(PythonInterp ${PY_VERSION} REQUIRED)
find_package(PythonLibs ${PY_VERSION})

As you can see, they actually have a commented-out find_package which is the right command to use. What onnx-mlir uses is:

find_package(Python3 ${LLVM_MINIMUM_PYTHON_VERSION} REQUIRED COMPONENTS Interpreter Development)

So onnx should use:

find_package(Python3 ${PY_VERSION} COMPONENTS Interpreter Development REQUIRED)

@gongsu832 Thank you for the investigation to locate the source of the error. The comment in onnx says this is only compatible with the latest pybind11. Therefore, onnx will use this cmake until pybind11 is updated. Is there anything we can do now about this build error?

PR#1831 fixed this problem.
GITHUB_ARCHIVE
package woo.app.suppliers;

import pt.tecnico.po.ui.Command;
import pt.tecnico.po.ui.DialogException;
import pt.tecnico.po.ui.Input;
import woo.app.exception.DuplicateSupplierKeyException;
import woo.core.StoreManager;
import woo.core.exception.DuplicateSupplierException;

/**
 * Register supplier.
 */
public class DoRegisterSupplier extends Command<StoreManager> {

  private Input<String> _supplierKey;
  private Input<String> _supplierName;
  private Input<String> _supplierAddress;

  public DoRegisterSupplier(StoreManager receiver) {
    super(Label.REGISTER_SUPPLIER, receiver);
    _supplierKey = _form.addStringInput(Message.requestSupplierKey());
    _supplierName = _form.addStringInput(Message.requestSupplierName());
    _supplierAddress = _form.addStringInput(Message.requestSupplierAddress());
  }

  @Override
  public void execute() throws DialogException {
    _form.parse();
    try {
      _receiver.registerSupplier(_supplierKey.value(), _supplierName.value(), _supplierAddress.value());
    } catch (DuplicateSupplierException e) {
      throw new DuplicateSupplierKeyException(_supplierKey.value());
    }
  }

}
STACK_EDU
Project Jupyter’s Steering Council member, JupyterHub and mybinder.org Core Developer, co-editor of The Journal of Open Source Education (JOSE) and co-author of an open source book, Teaching and Learning with Jupyter. Jupyter notebooks have become the de-facto standard as a scientific and data science tool for producing computational narratives. Over five million Jupyter notebooks exist on GitHub today. Beyond the classic Jupyter notebook, Project Jupyter's tools have evolved to provide end-to-end workflows for research that enable scientists to prototype, collaborate, and scale with ease. JupyterLab, a web-based, extensible, next-generation interactive development environment, enables researchers to combine Jupyter notebooks, code and data to form computational narratives. JupyterHub brings the power of notebooks to groups of users. It gives users access to computational environments and resources without burdening the users with installation and maintenance tasks. Binder builds upon JupyterHub and provides free, sharable, interactive computing environments to people all around the world.

Kyle is the host of the Data Skeptic podcast, a weekly interview program covering topics related to data science, artificial intelligence, machine learning, statistics, and cloud computing. Data Skeptic celebrated its 5th birthday this year. As principal architect at Data Skeptic Labs, he leads a team that builds bespoke machine learning and data solutions at scale in industries including aerospace, fraud prevention, retail, insurance, consumer packaged goods, and ad-tech. Kyle also serves as an advisor to several small and medium-sized companies. Data Skeptic Labs released its first official product (a chatbot platform) in 2019. Serverless computing, edge computing, and cloud computing are distinct paradigms in which Python has been an almost uniquely successful language.
Use cases and a few opinionated design philosophies that work especially well in Python will be discussed as a live Data Skeptic episode is recorded exploring these topics. The session will include an interview guest doing a technical deep dive in the style the Data Skeptic podcast is known for, as well as an exclusive look at a Python-based project being secretly developed at Data Skeptic Labs. Milana (Rabkin) Lewis is the co-founder and CEO of Stem, a financial platform that simplifies payments for musicians and content creators. Prior to founding Stem, Milana spent five years as a Digital Media Agent at the premier global talent and literary agency, United Talent Agency (UTA). She helped build UTA’s digital offerings by advising the agency’s individual and corporate clients on emerging distribution platforms, digitally-driven fundraising and monetization opportunities. Milana represented a roster of digital creators, ranging from top YouTube and Vine stars to prominent bloggers and social media personalities, and helped grow their social channels into sustainable and profitable careers. In addition to this work, Milana sourced investment opportunities for UTA’s then newly-formed venture capital division. Despite the abundance of data in today's digital age, not all data is either clear or actionable. Stem's mission addresses these two shortcomings by advocating clarity over transparency by providing actionable insights from data to both empower and enable artist driven businesses to make better informed decisions. In this talk, Milana will discuss how Stem utilizes data both internally and externally in ways that help drive growth for Stem's business, and its clients. Milana will be in conversation with Sylvia Tran, organizer of PyLadies Los Angeles. Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine (UCI). 
He is working on robustness and interpretability of machine learning algorithms, along with models that reason with text and structure for natural language processing. Sameer was a postdoctoral researcher at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs. His group has received funding from Allen Institute for AI, National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA), Adobe Research, and FICO. Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have been recently introduced. However, as a consequence of this complexity, machine learning essentially acts as a black-box as far as users are concerned, making it incredibly difficult to understand, predict, or detect bugs in their behavior. For example, determining when a machine learning model is “good enough” is challenging since held-out accuracy metrics significantly overestimate real-world performance. In this talk, I will describe our research on approaches that explain the predictions of any classifier in an interpretable and faithful manner, and automated techniques to detect bugs that can occur naturally when a model is deployed. In particular, these methods describe the relationship between the components of the input instance and the classifier’s prediction. I will cover various ways in which we summarize this relationship: as linear weights, as precise rules, and as counter-examples, and present experiments to contrast them and evaluate their utility in understanding, and debugging, black-box machine learning algorithms, on tabular, image, and text applications.
OPCFW_CODE
The nerd gland never rests

As a self-confessed and proud nerd, I cannot help but look at projects to work on. The Scuba Simulator is one such project. This exploration into modelling Scuba Diving's effect on the human body brings together my experience as a former PADI scuba instructor, and my love for just seeing if something can be done. Think of it as a sort of technological adventure. The model can be seen in action on scubasim.stulast.co.uk and will be updated as I continue to develop this app and add a nice front end to it. The first phase of developing the SCUBA simulator has been about finding the most appropriate languages and libraries to support the goal of providing a realistic simulation of the physical and biological effects of depth on the human body. Given that the main goal is to provide a readily accessible demonstration, a web-based solution seemed most appropriate. It means that I can get prototyping quickly, without the need to learn any new languages or IDEs, and can concentrate on understanding the intricacies of the model, rather than getting bogged down in code quandaries.

Putting together a toolbox

Aside from the actual modelling of the Scuba Simulator, I have had the chance to look deeper into how I set up my development toolkit and change management. Yes - it's all a little nerdy, but this is what years of working in the industry have taught me, and it kind of becomes second nature now. So here is the run down of my core development toolkit:

GIT Version Control. Without question, having a change management system is a lifesaver. Being able to go forward and back through changes, to branch out and try something different is great. I can experiment with an idea, safe in the knowledge that if I do balls it all up, I can just revert back, or switch to a different branch.

Bitbucket. Github is great, but there is a limit on how many private projects you can have. Bitbucket is just as great, but lets you have a whole load of private projects.
Not a functionality question, just a question of costs.

Vagrant Virtual Machine. For many years I used WAMP/XAMPP virtual machines in my web development work. Like many "preferences" it's down to what you first used and first became proficient with. The industry changes and advances so quickly, though, that it behoves us to keep up to date and, in that process, we may be lucky enough to find a new technology or method that gives us greater flexibility and productivity. I am looking at Docker as part of my foray into DevOps, but for most of my current projects, Vagrant is simple to use and quick to configure.

Visual Studio Code. Okay, so I have steered clear of Visual Studio for the longest time, basically due to its bulk. I have tangled with Eclipse, and even made heavy use of Netbeans. In the end, though, a text editor designed for coding is a saviour to the modern coder. There are many out there, including Notepad++ and Sublime Text. I had the opportunity to try out Visual Studio Code, the light-weight text/code editor, and you know what, I actually quite like it. The plugins can be a little annoying, but in its raw and uncluttered download state it's pretty useful. It also has some handy windows for terminal, output and debugging, which cuts down on a lot of switching between application windows just to do a commit or spin up a Vagrant box.

Node JS, SASS and Gulp. So the final piece of the jigsaw comes down to package management. Node JS provides a great library for this in the form of NPM. By throwing in SASS and scripting the build process in Gulp, I have saved hours of work, built consistency into my finished code, and compressed everything for web usage. I have to say this is my first real foray into creating Gulp scripts as opposed to just using them, but I have become a big fan.

So with a toolkit pulled together, the task of actually building a model is where the challenge actually lies, and is the main reason to use OOP.
There are so many interacting aspects of Scuba Diving that have an impact on the physiology of the human body. Air consumption increases with pressure, as does the nitrogen load, which must be dispersed at a safe rate during the ascent at the end of the dive. The buoyancy of the body changes with depth, changing the rate of ascent or descent, which must be kept within safe levels. Even light and sound change once submerged in water. Thus far, the model is capable of measuring descent/ascent rates and air consumption. Next on the hit list is the modelling of how Nitrogen, Oxygen and Helium affect the body. Both Nitrogen and Helium act as inert gases and are necessary to dilute the oxygen, which can be toxic at pressure. So much so that some gas mixes used at depth are so rarefied that they are unbreathable at the surface due to the lack of oxygen. Nitrogen itself can act as a narcotic, leading to some individuals reporting nitrogen narcosis; effectively being drunk at depth. Without doubt the model will build to be quite complex, with many objects interacting and affecting the diver object. But if it was a simple bit of code, it just wouldn't be any fun. This post will be updated as the development progresses.
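As a taste of the depth effects described above, here is a minimal sketch of two of them using standard dive-physics rules of thumb: ambient pressure grows by roughly 1 atm per 10 m of seawater, and air consumption scales linearly with ambient pressure. This is an illustration in Python (the actual simulator is web-based JavaScript), and the function names are mine, not the simulator's.

```python
# Rule-of-thumb dive physics: ambient pressure rises ~1 atm per 10 m of
# seawater, and the gas a diver breathes per minute scales with that
# pressure (Boyle's law applied to a fixed lung volume).

def ambient_pressure_ata(depth_m):
    """Absolute pressure in atmospheres at a given seawater depth."""
    return 1.0 + depth_m / 10.0

def air_consumption_at_depth(surface_rate_l_per_min, depth_m):
    """Litres of air used per minute at depth, given a surface rate."""
    return surface_rate_l_per_min * ambient_pressure_ata(depth_m)
```

For example, a diver breathing 20 L/min at the surface uses 80 L/min at 30 m, where the ambient pressure is 4 ata; this is exactly why air consumption modelling has to be tied to the depth profile.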
OPCFW_CODE
2 unstable releases
|0.9.0||May 13, 2021|
|0.1.1||Mar 27, 2021|
#8 in #journal

This tool will correlate your equipment choices with your KDA spreads.

I play Hunt: Showdown a lot. It's very fun. It's also insanely frustrating sometimes. The game has long matches, very frantic, quick battles, and a wide variety of meaningful character specialization and equipment options. It can take dozens of matches to determine if a loadout is worth it, and there are many loadouts, and a match takes an hour ... In short, it is very hard to get feedback on what equipment loadouts, tactics, or friends are useful. For that, I keep a journal of matches, and am writing this tool to output some insights on the data gathered.

To use it, you will have to write down match information. But matches last an hour, so that's not much overhead. Then, you'll have to use the tools this package provides:

kda-summary will summarize your K, D, and A values (and the usual KDA metric) over the entire journal.
kda-compare is the alpha (unstable, unreliable) version of some multi-variate hypothesis tests that will tell you if you're doing significantly differently with different loadouts.

Keep a match journal like this (fyi this is key value count format):

<date> [<items or friends initials>] [K|D|A|B]

For example, my Hunt diary looks a little like:

2021-03-12 BAR+Scope pistol K K B alone
2021-03-12 BAR+Scope pistol K D D jb
2021-03-12 Short-Rifle Short-Shotgun K D jb
2021-03-12 BAR+Scope pistol D jp+jb
2021-03-13 BAR+Scope pistol jp D
2021-03-13 BAR+Scope pistol jp B D D A A
2021-03-13 Shotgun pistol jp D
2021-03-13 BAR+Scope pistol jp K
2021-03-14 Short-Rifle akimbo alone
2021-03-17 LAR Sil pistol alone
2021-03-17 pistol-stock akimbo alone
2021-03-17 Short-Shotgun pistol-stock alone

You can use whatever you want to denote loadouts or friends ... it'll just run multi-variate regression on all of them, with the important parts: K, D, A or B. For example ...
Contents of journal.txt:

K K Sniper
K D Shotgun
K K JP Sniper
K D B Shotgun JB
K D B Sniper JB

is 5 matches:
- two kills with a sniper loadout
- a kill and a death with a shotgun loadout
- two kills with a sniper loadout and team-mate "JP"
- a kill, a death, a bounty with Shotguns and team-mate "JB"
- Same, but with a Sniper loadout

Let's see the summary over time:

$ <journal.txt kda-summary
n Date K D A B KDA sK sD sA sB mKDA mK mD mA mB
1 1 2 0 0 0 2.00 2 0 0 0 2.00 2.00 0.00 0.00 0.00
2 2 1 1 0 0 1.00 3 1 0 0 3.00 1.50 0.50 0.00 0.00
3 3 2 0 0 0 2.00 5 1 0 0 5.00 1.67 0.33 0.00 0.00
4 4 1 1 0 1 1.00 6 2 0 1 3.00 1.50 0.50 0.00 0.25
5 5 1 1 0 1 1.00 7 3 0 2 2.33 1.40 0.60 0.00 0.40

Not bad. Notice, kda-summary requires the use of the tags K for kills, D for deaths, B for bounties, and A for assists. It outputs your per-match stats, the KDA value of (K+A)/D, the sums of K, D, A and B, and the mean (avg / match) of KDA, K, D, A, and B.

Date field. If you put dates of the form YYYY-MM-DD somewhere per line in the journal, it will populate that field. See the example above or kvc.

KDA-Explore (formerly kda-compare)

The semantics of 'K' vs 'k' vs "kill" is irrelevant. We explore the data by asking it to analyze variables by name. For example, in the data above, to see kills "K" per match with Sniper and without, you form the "experiment" denoted as "K:Sniper" and ask kda-explore to run that experiment by:

kda-explore -c "K : Sniper"

You can run many experiments separated by 'vs' (this will change), against many output variables ... All are valid:

kda-explore "K D : Sniper vs Shotgun" to see which is better
kda-explore "D : K" to see if you die more when you kill stuff or not
kda-explore "Sniper : JB" to see if you play sniper more or less when you're with JB
kda-explore K:all to see kill spreads and sorted rate comparisons for all variables

and so on ... each "tag" (item on a line in a journal) is a valid input or output depending on your determination of experiments.
$ <journal.txt kda-explore "K D : Sniper vs Shotgun"
Processed. Read: 5 rows and 7 variables
K Sniper Shotgun D JP JB B
Debug: processing: K: Sniper vs Shotgun
K:( Sniper ) 5.00/3 = 1.67 vs 2.00/2 = 1.00 Rates are same with p=0.373
K:( Shotgun ) 2.00/2 = 1.00 vs 5.00/3 = 1.67 Rates are same with p=0.373

We note that the rates of kills with shotgun exactly equal the rates of kills without sniper, so the test results are the same.

Warning, this functionality will change rapidly prior to the 1.0 release

$ kda-explore -h
USAGE:
    kda-explore <command>
FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information
ARGS:
    <command>    The A/B comparison to run, of the form '<some variables> : <other variables>'. e.g., 'K: pistol' will check kills with and without pistols [default: K D A : all]

One way to interpret this is "This doesn't make sense". That's true, it's primitive still, and mostly a toy for my own use.

Get for debian / WSL

For now, just grab one of the test debs in releases/

sudo dpkg -i kda-tools_0.5.0_amd64.deb

You can use the example above as-is.

- document match journal format better. see: github.com/jodavaho/kvc.git
- improve match journal to allow :count. (see kvc again)
- provide linter for match journal
- tool to create / factor matrices in a format amenable to third-party analysis (e.g., R)
- Perform power tests / experiment design
- remove '-c' as mandatory switch ... obsolete when baseline '_' was removed
- Provide a library version in C
- If you have an item you use every game, then you have insufficient data: for every test of the form k:A there must be at least one match without A occurring. It's ok if there's no kills (
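To make the journal format concrete, here is a small Python sketch of how the K/D/A/B tags on each line roll up into the summary numbers (this is an illustration of the format, not the crate's actual Rust implementation; the function name `summarize` is ours).

```python
# Illustrative parser for the match-journal format: each line holds
# optional dates and loadout/friend tags plus K/D/A/B event tokens.
# Every K, D, A, or B token counts as one event; everything else is a tag.

def summarize(journal_text):
    totals = {"K": 0, "D": 0, "A": 0, "B": 0}
    for line in journal_text.strip().splitlines():
        for token in line.split():
            if token in totals:
                totals[token] += 1
    # The KDA metric used by kda-summary: (kills + assists) / deaths
    deaths = max(totals["D"], 1)  # avoid division by zero on a clean sheet
    totals["KDA"] = (totals["K"] + totals["A"]) / deaths
    return totals
```

Run on the five-match journal.txt example above, this yields 7 kills, 3 deaths, 2 bounties, and a cumulative KDA of 7/3 = 2.33, matching the last row of the kda-summary output.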
OPCFW_CODE
This project is a personal project that I had been thinking about for a long time, as a vague idea, and that I just now managed to put into an actual shape I can show. I believe in the power of words. To me, written words have the power to move someone, to make them feel something. One of the most powerful ways of sharing is using words, and through depicting an image, conveying a feeling or an emotion. I have been writing poetry for a few years, and have recently started to explore how I could add context to very abstract and personal words that might only have true meaning to myself, and make them more universal. This project is still a work in progress, as I keep adding new poems while exploring new means of creation.

The newest iteration of this project is an Instagram account I created called @VisualPoetrybyJulie. I use the rows of 3 images that are characteristic of Instagram as a canvas to create a visual story to illustrate a poem. In this layout, I also include the title of the poem and the poem itself. It is a great opportunity to explore how fonts impact the meaning of words and how to match the text's location and design to the mood of the artwork. I have been using both Adobe Illustrator and Adobe Photoshop to create the visuals. This project is also a way for me to further my knowledge, explore new features of these software packages and see what I can create with them. I try to create one every week (trying is really the word here!). Here is a sneak peek of my favorites so far:

As part of a course I was taking on Visual Storytelling, I developed a 360 visual interpretation of a haiku by Kobayashi Issa.

Winter seclusion -
Listening, that evening,
To the rain in the mountain.
- Kobayashi Issa

Haikus are Japanese short poems. In only three lines, they capture the essence and intensity of a moment, underline the ephemerality of time, and describe an everyday scene.
Mirroring the minimalism of haikus, this chapter of the project explores how a new layer of the story can be told by adding visuals to accompany simple words. Using graphic elements such as lines and dots, as well as constraining the color palette to black and white, the 360 visuals of this chapter create a new world around this already powerful story. I have been using Adobe Illustrator to make the visuals and Adobe After Effects to animate them. I started by storyboarding where I wanted to put elements in my 360 video and how I wanted the animation to evolve. In parallel, I explored how to make 360 videos using After Effects and what the constraints were (distortion of the graphics, mainly). I started by working on the last verse of the poem (which is the most advanced piece of my prototype) and then moved on to the first one.

This iteration is the oldest one, my first dabble at creating something around poetry. I started by exploring how sounds could portray a very singular atmosphere and how I could use that to deliver my own interpretation of a poem. It is very interesting to explore how, in both visuals and sound, the actual words can be used to fit the whole creation. The poem I chose for this chapter of the project is from one of my favorite poets, Charles Baudelaire. It is in French and it's called "Le Spleen". It is about how sometimes you go to a dark place, and get very moody, but can't really grasp or comprehend what is happening, except that it is dark. The atmosphere of the poem is very gloomy and melancholic, and I wanted to enhance that sensation. The poem is actually whispered and the sound elements are very echoey.
OPCFW_CODE
OPL VST plugin

This VST instrument provides an emulated OPL sound chip. It provides all features of the OPL2, and some features of the OPL3. See here for binaries, screenshots etc: http://bsutherland.github.io/JuceOPLVSTi/

What's an OPL? The OPL is a digital sound synthesis chip developed by Yamaha in the mid 1980s. Among other products, it was used in sound cards for PC, including the Ad Lib card and early Sound Blaster series. At a technical level: the emulator has channels composed of 2 oscillators each. Each pair of oscillators is usually combined via phase modulation (basically frequency modulation). Each oscillator can produce one of eight waveforms (sine, half sine, absolute sine, quarter sine, alternating sine, camel sine, square, logarithmic sawtooth), and has an ADSR envelope controlling its amplitude. The unusual waveforms give it a characteristic sound.

Caveats and Limitations

Before I wrote this, I didn't know much about VST or the OPL at a technical level. This is the first VST plugin I've written. In hindsight I would have implemented things a bit differently, but it all basically works, and is now reasonably well tested. One thing I have learned is that all VST hosts are not created equal. I only really work with Renoise. Your mileage may vary. Note that I started and work on this project for fun. I'm not accepting donations, but any contributions in the form of code, SBI files, links to music you've created etc are very welcome and help keep me motivated. Please understand that I also write software full time for a living and have a life outside of software development.

How do I use it? Each instance of the plugin emulates an entire OPL chip, but with this plugin, essentially you are just working with two operators: the carrier and modulator.
Some documentation which may be useful:
- Introduction to FM Synthesis (not specific to the OPL, but a great primer)
- OPL2 on Wikipedia
- Original Yamaha datasheet
- AdLib programming guide (dates back to 1991!)
- Another programming guide (this one is for the OPL3, but most of the information still applies)

What can it do? Here are some examples:
- Demo showing how parameters affect sound (thanks estonoesunusuario!)
- Dune 2 music reproduced using the plugin (great work by Ion Claudiu Van Damme)
- Tyrian remix by Block35 Music
- Syndicate theme demo I created for the first release
- (your link here...)

SBI files are an instrument file format developed by Creative Labs back in the day for the Sound Blaster. Essentially they work as presets for this plugin. Just drag and drop them into the plugin window! I've collected a bunch of presets in this repository. I've also added support for saving SBI files. Please contribute!

Percussion mode is now supported! This mode is not very well documented, even in the original Yamaha documentation. Here are some tips on using it based on experimentation and looking at the DOSBox source code.
- Bass drum: Uses both operators. Essentially just doubles output amplitude?
- Snare: Uses carrier settings. Abs-sine waveform recommended.
- Tom: Uses modulator settings. Sine waveform recommended.
- Cymbal: Uses carrier settings. Half-sine recommended.
- Hi-hat: Uses modulator settings. Half-sine recommended.

Also, some much more detailed notes on percussion mode based on experimentation with real hardware!

How did you create the instrument programs? To figure out the parameters used by the original games, I just added a printf to the DOSBox OPL emulator, compiled DOSBox, ran the games, and captured their output as raw register writes with timestamps. I hacked together a Python script which parses the raw output, identifying unique instruments and outputting the parameter values.

How did you create this? The emulation (ie, the hard part!)
is taken straight from the excellent DOSBox project. I also used a function from libgamemusic by Adam Nielsen for converting frequencies from Hertz into the "FNUM" values used by the OPL. The VST was written using Juce, a cross-platform C++ library inspired by the Java JDK. Among other things, Juce provides a GUI for generating boilerplate for audio plugins. The code I wrote is essentially a device driver for the emulated OPL, implementing the VST interface and providing a UI. So far I've only built under Windows. Thanks to the hard work of Jeff Russ, there is also an OSX build, but I currently have no way to build it myself on OSX.

Windows Build Instructions
- Download Juce (http://www.juce.com/)
- Download the VST SDK (http://www.steinberg.net/en/company/developer.html)
- Run "The Projucer" executable included in Juce.
- Open JuceOPLVSTi.jucer
- Make any changes to the GUI layout and components here (PluginEditor.cpp).
- Save PluginEditor.cpp if modified
- Hit "Save Project and Open in Visual Studio". I use Visual Studio Express 2013.
- (For Windows XP compatibility) In the project's properties, set platform toolset to Windows XP (Configuration Properties > General).
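The Hertz-to-FNUM conversion mentioned above can be sketched from the commonly cited OPL2 relation f = 49716 · FNUM · 2^(block − 20), where FNUM is a 10-bit value and "block" selects the octave. This is an illustrative Python sketch of that formula, not the libgamemusic code itself.

```python
# Sketch of the Hz -> FNUM conversion for the OPL2, based on the widely
# documented relation  f = 49716 * fnum * 2^(block - 20).
# Function names here are illustrative, not libgamemusic's API.

OPL_CLOCK_HZ = 49716  # effective frequency constant of the OPL2

def hz_to_fnum(freq_hz, block):
    """F-Number (0..1023) for a frequency at a given block (octave)."""
    fnum = round(freq_hz * 2 ** (20 - block) / OPL_CLOCK_HZ)
    if not 0 <= fnum < 1024:
        raise ValueError("frequency out of range for this block")
    return fnum

def fnum_to_hz(fnum, block):
    """Inverse conversion, handy for checking round-trips."""
    return OPL_CLOCK_HZ * fnum * 2 ** (block - 20)
```

For example, A440 at block 4 lands on FNUM 580, and converting back gives approximately 440 Hz; picking a higher block halves the FNUM resolution but extends the range upward.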
OPCFW_CODE
Before Jupyter Notebook there was IPython. Well, it's actually the precursor and the concepts are one and the same, but this is a great quiz for the newbs that are getting introduced to Data Science. But the real question we're postulating in this article is whether Jupyter Notebook is the new Excel. I argue that Microsoft Excel seems like it will almost never disappear from financial analysis. It was there at the start of the modern era of financial analysis and computing and it will likely be around for much longer. However, the question has been raised in the financial analysis and general analyst community whether they even need to learn Python or have a Jupyter Notebook instance available to them to handle part of their workload. I'd like to talk about three different angles on what we see in the real world working with FP&A teams, customer engagement analysts, investment banking, and other groups poised to use data to the extreme.

Excel has lots of Add-Ins (Advanced Analytics, Solver, etc.)

When you see someone work in Excel that really knows what they are doing, they don't even use the mouse more often than they have to. It is really a privilege to watch a financial analyst do their thing, crunching numbers in an Excel spreadsheet database with models and formulas that are mind blowing. When it gets data-science-like, the more advanced users know how to use the MS Excel Analysis ToolPak (https://support.office.com/en-us/article/load-the-analysis-toolpak-in-excel-6a63e598-cd6d-42e3-9317-6b40ba1a66b4) and really create some amazing algorithmic value on their data. Or they've had a chance to work with Solver (https://www.solver.com/) and make use of those incredible features. All of which require no Python notebooks or really any coding skills to get some amazing compute and analytical power where computer vision, object detection, and to some degree SQL are not necessary.

SAS has been around for a long time too!
Getting involved in Data Science today, or graduating university with a data science slant, it is possible to not even be familiar with SAS. I wouldn't be surprised if the youngest generation exiting university with a Data Science degree were completely unfamiliar with how profound SAS or MATLAB were prior to the amazing recent breakthroughs in Python, Jupyter, and the libraries upon libraries which are empowering most of the modern data sciences. But those systems are in fact near ubiquitous and very much still in use. So, even as an alternative, though we see most SAS users or consumers just getting SAS exports to Excel at the end of the day, this is a system that already has a deeply skilled workforce working with Excel as an output and not necessarily needing any logic from Python or Jupyter notebooks.

Python libraries are the real quest for power

But the real deal is not necessarily about Python itself as a programming language. Python is a programming language that appeared almost 30 years ago. That's a long time. And it has slowly risen to fame with its easy coding style and lack of a compile step, which give it a lower barrier to entry than, say, Java or C++. That being said, it is not just the programming language at its core but the myriad of Python libraries for Data Science that make Python data science superiority attainable, even if it is still a ladder to climb rung by rung. The libraries in Python for data science, used stand-alone or combined, allow amazing discoveries and dissections of data to take place. This is both the beauty and the beast of the discipline: depending on the initiative or the data science task one is charged with, it could mean a mediocre solution, or, in the hands of the right expert, an amazing insight that truly adds to the business's value. The question will often be, in the end, if the effort was unguided: "could this have been done in Excel?"
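To make the comparison concrete, here is the kind of quick summary an Analysis ToolPak user might produce, done in a few lines of Python in a notebook cell (the revenue figures are made up for illustration):

```python
import statistics

# Hypothetical monthly revenue figures: the kind of column an analyst
# would normally summarize with the Analysis ToolPak's descriptive statistics
revenue = [120_000, 135_000, 128_000, 150_000, 162_000, 158_000]

mean = statistics.mean(revenue)
stdev = statistics.stdev(revenue)  # sample standard deviation
growth = (revenue[-1] - revenue[0]) / revenue[0]

print(f"mean: {mean:.0f}, stdev: {stdev:.0f}, growth: {growth:.1%}")
```

Neither approach is "wrong" here; the point is that the same descriptive statistics are a menu click in Excel and a handful of lines in a notebook, and the trade-off only tips toward Python as the analysis grows past what a worksheet comfortably holds.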
Launchpad Pro Mk3 with Bitwig on Linux

For my birthday this year, I got a Launchpad Mini Mk3. It's the first time I've used what is called a grid controller and I found the workflow incredibly intuitive. Thanks to the really good key bindings, navigating Bitwig felt way more direct. However, I noticed quite soon that I often tried to play chords, melodies and beats with the pads on the Launchpad Mini. Emphasis should probably be on "tried" here: the pads of the Launchpad Mini are definitely not made for actually playing music on them. So although the Launchpad Mini is great for triggering scenes in Bitwig, playing with some of the effects and generally navigating the interface, the lack of velocity-sensitive pads seriously limits it as a musical instrument, and I soon began eyeing the bigger siblings of the Launchpad Mini: the Launchpad X and the Launchpad Pro Mk3. I ended up purchasing the "Pro" version; those additional buttons seemed quite tempting. Unfortunately, getting this to work wasn't really a plug-and-play experience. It turned out that not all steps are well documented (or documented at all) online. Here is how I ended up getting it to work (and yes, it's been great since then).

Step 1: Mossgraber's controller scripts

The first step in the process was to download Jürgen Mossgraber's most recent version of his controller scripts from here. I already knew these key bindings from the Launchpad Mini and they seem to be kind of the go-to solution for all Launchpads. They are really well documented and the documentation is super easy to follow. Unfortunately, the Launchpad didn't work after installing the scripts. Bitwig reported that "my controller" was now "ready to use", but in fact that was about it. Lights on the Launchpad worked, but it seemed to be in the documented standalone mode. Although that's certainly a nice feature of the Launchpad Pro, I wanted this thing to work with Bitwig on my computer.
I tried all kinds of things, but there seemed to be no way to get it to communicate with my computer.

Step 2: Upgrade firmware

This Reddit post was the key to getting it to work. mmntmtrnstn reports that updating the Launchpad's firmware solved the problem for them. Too bad they didn't say how they upgraded the firmware. After a lot of looking around, I found this website that uploads a "hacked" firmware to the Launchpad. The website uses Chrome's Web MIDI support to flash the Launchpad (so obviously it doesn't work with Firefox). The website asks you to put the Launchpad in "Bootloader" mode. To get there, you need to press the "Setup" button on the Launchpad while connecting the device. After that, you can simply flash the new firmware. Of course, my first attempt ended up uploading an incomplete firmware version, and then the Launchpad wasn't working at all: neither in standalone mode nor with Bitwig. There seemed to be no way back, so I chose the way forward and just tried installing again. And this time it worked! And with that, interaction with Bitwig also worked flawlessly.

UPDATE: mmntmtrnstn has in the meantime pointed out that Novation has a similar website where you can do an official firmware update. I haven't tried Novation's firmware update and can't comment on it, but I'm confident that it will work too.

Did you have a similar experience using a Launchpad Pro with Linux? How did it go? Was this post useful to get you started? Please let me know in the comments section.
from lxml import etree
from estnltk import synthesize


def synthesize_forms(headword, wordtype):
    forms = []
    forms_S = ['sg ab', 'sg abl', 'sg ad', 'sg adt', 'sg all', 'sg el', 'sg es',
               'sg g', 'sg ill', 'sg in', 'sg kom', 'sg p', 'sg pl', 'sg sg',
               'sg ter', 'sg tr', 'pl ab', 'pl abl', 'pl ad', 'pl adt', 'pl all',
               'pl el', 'pl es', 'pl g', 'pl ill', 'pl in', 'pl kom', 'pl n',
               'pl p', 'pl pl', 'pl sg', 'pl ter', 'pl tr']
    forms_V = ['b', 'd', 'da', 'des', 'ge', 'gem', 'gu', 'ks', 'ksid', 'ksime',
               'ksin', 'ksite', 'ma', 'maks', 'mas', 'mast', 'mata', 'me', 'n',
               'neg ge', 'neg gem', 'neg gu', 'neg ks', 'neg nud', 'neg nuks',
               'neg o', 'neg vat', 'nud', 'nuks', 'nuksid', 'nuksime', 'nuksin',
               'nuksite', 'nuvat', 'o', 's', 'sid', 'sime', 'sin', 'site', 'ta',
               'tagu', 'taks', 'takse', 'tama', 'tav', 'tavat', 'te', 'ti',
               'tud', 'tuks', 'tuvat', 'v', 'vad', 'vat']
    if wordtype == 'V':
        # Need to deal with verbs with extra words ("meelde tuletama").
        # We take the word that needs conjugating here...
        if ' ' in headword:
            extra = headword.rsplit(' ', 1)[0]
            headword = headword.rsplit(' ', 1)[1]
        else:
            extra = None
        for form in forms_V:
            form_results = synthesize(headword, form)
            for result in form_results:
                # ...and we put them back together here. Won't help all the
                # time; often real usage is "PART1 blah blah PART2".
                if extra is not None:
                    result2 = result + ' ' + extra
                    forms.append(result2)
                    result = extra + ' ' + result
                forms.append(result)
        # TODO: figure out if we can get comparative forms of adjectives
    else:
        for form in forms_S:
            form_results = synthesize(headword, form)
            for result in form_results:
                forms.append(result)
    # We only need unique forms, since we just want this to redirect
    # to the dictionary entries
    formSet = list(set(forms))
    return formSet


# The EKI files include their own bold and emphasis sections - we need to
# deal with them
def unescape_definition(definition):
    definition = definition.replace('&ema;', '<em>')
    definition = definition.replace('&eml;', '</em>')
    definition = definition.replace('&ba;', '<b>')
    definition = definition.replace('&bl;', '</b>')
    definition = definition.replace('&supa;', '<sup>')
    definition = definition.replace('&supl;', '</sup>')
    return definition


def process_eki_dictionary(file):
    tree = etree.parse(file)
    dictionaryXML = tree.getroot()
    dictionary = []
    for word in dictionaryXML:
        entry = {'definitions': [], 'forms': []}
        headword = word.find('c:P/c:mg/c:m', dictionaryXML.nsmap).text
        entry['headword'] = headword
        try:
            wordtype = word.find('c:P/c:mg/c:sl', dictionaryXML.nsmap).text
        except AttributeError:
            wordtype = None
        entry['wordtype'] = wordtype
        wordDefinitions = word.findall('c:S/c:tp', dictionaryXML.nsmap)
        for definition in wordDefinitions:
            definitionEntry = {'definitionTexts': [], 'definitionExamples': []}
            definitionTexts = definition.findall('c:tg/c:dg/c:d', dictionaryXML.nsmap)
            for definitionText in definitionTexts:
                definitionEntry['definitionTexts'].append(
                    unescape_definition(definitionText.text))
            definitionExamples = definition.findall('c:tg/c:ng/c:n', dictionaryXML.nsmap)
            for definitionExample in definitionExamples:
                definitionEntry['definitionExamples'].append(
                    unescape_definition(definitionExample.text))
            entry['definitions'].append(definitionEntry)
        if wordtype is not None:
            entry['forms'] = synthesize_forms(headword, wordtype)
        dictionary.append(entry)
    return dictionary


def build_dictionary(processed_dictionary, destination_file):
    NSMAP = {"mbp": 'https://kindlegen.s3.amazonaws.com/AmazonKindlePublishingGuidelines.pdf',
             "idx": 'http://www.mobipocket.com/idx'}
    page = etree.Element('html', lang="et", nsmap=NSMAP)
    dictionary = etree.ElementTree(page)
    headElt = etree.SubElement(page, 'head')
    bodyElt = etree.SubElement(page, 'body')
    metaElt = etree.SubElement(headElt, 'meta', charset='UTF-8')
    framesetElt = etree.SubElement(
        bodyElt,
        '{https://kindlegen.s3.amazonaws.com/AmazonKindlePublishingGuidelines.pdf}frameset')
    for entry in processed_dictionary:
        entryElt = etree.SubElement(framesetElt, '{http://www.mobipocket.com/idx}entry')
        shortElt = etree.SubElement(entryElt, '{http://www.mobipocket.com/idx}short')
        orthElt = etree.SubElement(shortElt, '{http://www.mobipocket.com/idx}orth',
                                   value=entry['headword'])
        headwordElt = etree.SubElement(orthElt, 'b')
        headwordElt.text = entry['headword']
        if entry['wordtype'] is not None:
            headwordElt.tail = " (" + entry['wordtype'] + ")"
        if entry['forms']:
            inflElt = etree.SubElement(orthElt, '{http://www.mobipocket.com/idx}infl')
            for form in entry['forms']:
                iformElt = etree.SubElement(inflElt, '{http://www.mobipocket.com/idx}iform',
                                            value=form, exact="yes")
        # We load the definitions and examples as strings to get around the
        # issues with the embedded bold / emphasis
        for definition in entry['definitions']:
            divElt = etree.SubElement(shortElt, 'div')
            for definitionText in definition['definitionTexts']:
                definitionString = "<p>" + definitionText + "</p>"
                definitionElt = etree.fromstring(definitionString)
                divElt.append(definitionElt)
            listElt = etree.SubElement(divElt, 'ul')
            for exampleText in definition['definitionExamples']:
                exampleString = "<li>" + exampleText + "</li>"
                exampleElt = etree.fromstring(exampleString)
                listElt.append(exampleElt)
        hrElt = etree.SubElement(framesetElt, 'hr')
    dictionary.write(destination_file)


dictionary = process_eki_dictionary('psv_EKI_CCBY40.xml')
build_dictionary(dictionary, 'dictionary.html')
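The multi-word-verb handling in synthesize_forms can be illustrated in isolation. This is a hypothetical helper of my own, with the synthesized forms passed in directly instead of coming from estnltk's synthesize:

```python
def recombine(headword, synthesized_forms):
    """Reattach the non-conjugated part of a multi-word verb
    (e.g. "meelde tuletama") to each synthesized form, in both orders."""
    extra = headword.rsplit(' ', 1)[0] if ' ' in headword else None
    forms = []
    for result in synthesized_forms:
        if extra is not None:
            forms.append(result + ' ' + extra)
            result = extra + ' ' + result
        forms.append(result)
    return forms

print(recombine('meelde tuletama', ['tuletan']))  # → ['tuletan meelde', 'meelde tuletan']
```

Emitting both word orders is deliberate: either order can appear in running text, and since the forms only feed the Kindle lookup index, false extras are harmless.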
Pirate Blast Gun Bundle (Wii) - $9.95 SHIPPED, used, GameDealDaily.com 9/13 Posted 12 September 2010 - 02:53 PM Posted 12 September 2010 - 03:18 PM Posted 12 September 2010 - 03:24 PM So is it 11.95 or 12.95, make up your mind If you clicked the link you would find out for yourself it's 11.95. Posted 12 September 2010 - 03:31 PM Posted 12 September 2010 - 04:12 PM If it helps, I've ordered one or two things from GDD and never been disappointed. I don't really understand the hate but, again, I probably haven't done as much business with GDD as you all have. The problem is, the internet is full of people who love nothing more than to buy into the mob mentality. If you look at a lot of the "complaints," they often come from people who didn't even buy anything from him. They read something from someone else in the thread...and that person did the same thing...and so forth and so on. It's like the game telephone that you play as a kid. Everything gets distorted down the line, until no one knows what the hell they are talking about anymore. That doesn't stop them from jumping to conclusions, though. As I said earlier, I don't think there is any denying that his customer service leaves something to be desired. There have been way too many legitimate complaints made to ignore. If someone wants to avoid doing business with him over that, I don't think anyone can blame them. Yeah, getting great deals on an item is awesome, but if it's broke or the wrong item or any myriad of problems that can often arise, you'd like to know you can talk to someone about it and get it resolved. Now, I missed out on the hard drives and the knock-off controllers, so I cannot comment on that stuff. But everything else I ever seen him sell is definitely legit, and you see very few complaints about their quality (which, of course, some complaints are obviously going to happen. Even the greatest companies have problems). 
Like you, I have bought a few things from over the last 2 years or so, and they all came pretty quickly and in the condition described. Posted 12 September 2010 - 04:25 PM Posted 12 September 2010 - 05:29 PM Posted 12 September 2010 - 05:35 PM Posted 12 September 2010 - 08:20 PM 8 bucks + 5 bucks for shipping for an old PS2 game? lmao The title of the thread says '$7.94 NEW SHIPPED'. The $7.94 includes shipping. Posted 12 September 2010 - 08:33 PM Edit: As far as complaints about the site or the items sold on the site , I've never had an issue with getting faulty products , products that weren't what he advertised or bootlegs of any kind. Regarding shipping though , my problems have never been with shipping being too slow (usually) but instead with the items never arriving. Now whether this is bad luck on my part or poor choice of shipping methods on his part , he's always managed to make things right , whether by issuing a refund or sending me a replacement product. Edit 2 : PM sent. Posted 12 September 2010 - 09:48 PM Posted 13 September 2010 - 04:19 AM I also got a PS2 grabbag: Crash of the Titans Spongebob Squarepants Revenge of the Flying Dutchman Capcom Collection Volume 2 Grand Theft Auto 3 Midnight Club 3 Dub Remix 4x4 Masters of Metal I kept GTA3 and Capcom Collections (been wanting this) and traded the others to gamestop for roughly $14 gs credit so that was a win/win order. Posted 13 September 2010 - 04:03 PM Posted 13 September 2010 - 09:34 PM Posted 13 September 2010 - 09:40 PM Posted 13 September 2010 - 09:54 PM Posted 13 September 2010 - 09:55 PM i thought this said "Grabass",i havent played a good game of grabass since i was 10 years old with my uncle and his parole officer behind the local El Pollo Loco! Good Times! 
Posted 13 September 2010 - 09:59 PM Posted 13 September 2010 - 10:13 PM Posted 14 September 2010 - 02:26 AM Posted 14 September 2010 - 02:32 AM If their trade-in value is worth a minimum $50 and its like 11 something after shipping, why not just buy a ton of them and make a fuckton of money trading into gamestop? I'd do it myself but sounds super risky depending on the games you get. You misread. He basically stated that the games in each bag would cost you at least $50 if you bought them together from gamestop. He wasn't talking about TIV, especially since gamestop doesn't accept Xbox games anymore. I'd be tempted to get one after a positive PS2 grabbag experience, but I have almost every Xbox game I want atm. Posted 14 September 2010 - 05:51 AM Posted 14 September 2010 - 06:14 AM Posted 14 September 2010 - 07:15 AM Also, any chance of a batch of cracked discs anytime soon? I had some pretty good luck with the PS2 ones the last time I remember you doing it. Posted 14 September 2010 - 04:42 PM Posted 14 September 2010 - 07:35 PM Posted 14 September 2010 - 07:45 PM Posted 14 September 2010 - 07:46 PM Hrm, already have a huge backlog. May have to pass this time, but both times I've gotten the grab bags in the past were pretty good deals. Now if this was a Gamecube grab bag, I'd have to jump on it. I'd love to see a Gamecube grab bag too. I still need several titles for my collection.
When troubleshooting connectivity problems between hosts, one of the most effective tools is the packet capture. Software products like Wireshark and Microsoft's Network Monitor can quickly point you in the direction of a resolution in a relatively short time, but security-conscious IT organizations across a multitude of industries and markets are systematically putting the "kibosh" on installing packet capture software on servers...which is typically where it needs to go.

Getting Around Without It

There are a couple of ways to get around the "no packet capture on my server" problem. The more involved approach is to create a SPAN port configuration on a Cisco switch and attach a workstation with capture software to the SPAN port. But this requires getting network people involved, complicated or untimely request approvals, and other very inefficient activities. So, on to the leaner, meaner, more efficient way: netsh.

Using Netsh for Packet Capture

Netsh is a powerful command line utility that was introduced by Microsoft with Windows 2000 (where it was part of the Resource Kit; it has since just been installed with the operating system). At its most basic, netsh lets you look at network configurations on both local and remote systems, but it also lets you reconfigure, back up, and restore configurations as well. It also lets you capture traffic, and that's what we're going to learn about today. If you go to Google and just type "using netsh" you will find a host of returns that showcase the utility's interface and Windows Firewall management capabilities, but very few of them talk about Trace. To begin interacting with Trace, go into an administrative command prompt (or PowerShell prompt) and type netsh followed by the ENTER key. Once in the netsh shell, type trace and then the ENTER key to go into the Trace context. Once inside, you can start to set up your trace.
In general, it is bad form (unless you just need to snag a capture and get out of there) to capture everything. Typically, you only need to see the packets from either a specific host or a small group of hosts, or across a specific TCP or UDP port. There are several capture filters available, and all of them can be reviewed by typing show capturefilterhelp in the Trace context. A helpful starter command within Trace is:

netsh trace start capture=yes IPv4.Address=192.168.122.2 protocol=6 tracefile=c:\temp\tcptrace.etl

This trace has two filters:
- IPv4.Address. Specifies the IP address for capture: 192.168.122.2. Specified this way, the IP address can be either the Sender or the Receiver.
- Protocol. Specifies the protocols to be present in the capture. "6" indicates all TCP protocol traffic. Popular protocol types are 1 (ICMP), 6 (TCP), and 17 (UDP). Note that Trace Capture does not filter by port.

It also directs the trace output to a file, tcptrace.etl, in C:\Temp. The directory structure (not the file) must be present prior to running the command, or it will not capture to file. If you need to continue the trace between sessions, use the persistent=yes parameter in the trace command. This will persist tracing even upon reboot of the host, which is helpful when attempting to troubleshoot things like DHCP addressing problems. To close the capture, type:

netsh trace stop

Advanced: Capturing on a Remote Host

Sadly, there is no way to use netsh's "remote" (-r hostname) option to capture traffic on the server from the safety and security of your Windows desktop machine. If you enable the remote context, typing "trace" will just give you an error. But with a little tenacity, there is always a way, and that way is to use Sysinternals' (a tiny little division of Microsoft) PsExec utility, part of the PsTools suite.
Once downloaded, the basic syntax from the local computer is:

psexec \\hostname netsh trace start capture=yes tracefile=c:\temp\tcptrace.etl

On the local computer, you will get a message saying that the netsh command exited with "error code 0", which means that it exited successfully. When the operation you're hoping to review is completed, type:

psexec \\hostname netsh trace stop

The log file will be saved to C:\Temp on the remote system.

Analyzing the Trace File

To read the produced ETL file, you can simply use Microsoft Message Analyzer or Microsoft Network Monitor. If you use Network Monitor, you must go to Tools > Options and set the Windows parser as "active". Then, just use those tools' options to further filter (for example, looking for TCP 515 traffic) and see where the problem is coming from. If you want to use Wireshark, you have to convert the ETL file to the CAP format. You can do this in PowerShell:

$s = New-PefTraceSession -Path "C:\output\path\spec\OutFile.Cap" -SaveOnStop
$s | Add-PefMessageProvider -Provider "C:\input\path\spec\Input.etl"
$s | Start-PefTraceSession

and then open the CAP file in Wireshark. And that's it! Happy capturing! As a reference, and a launching point for more "Fun With Trace", start at Netsh Commands for Network Trace.
Capital and non-capital letters in the Greek alphabet

Is there a reason why only some of the capital and non-capital letters of the Greek alphabet are different?

It depends how hard they are to write with a pen! The "capital" letters are based on ancient inscriptional forms, the way they were carved into monuments. This is why they're made of straight lines and simple curves. Nice and easy to carve. The "lowercase" letters, on the other hand, are based on manuscript forms, which evolved from the inscriptional ones as they were written with ink on papyrus and parchment over the generations. That's why the lowercase ones are mostly designed to be written with a single penstroke each.

How did they end up combined? Mostly through the influence of the Latin manuscript tradition, where the first letter of a section would be written in a special way. With the invention of the printing press, this was eventually codified into the modern rules for "capital" and "lowercase" letters, now well established for both Latin and Greek (and, eventually, Cyrillic too). So, the differences between capital and lowercase letters come down to how much adaptation was needed to write them with a pen. Iota and omicron are easy to draw with a single stroke, and don't need much modification at all. Xi, on the other hand, needs its "tiers" connected if you want to write it without lifting your pen from the paper, and alpha becomes a lot curvier when you write it in a single movement.

It's also worth noting that what counts as "the same" is not completely clear cut. Does a difference in size count (e.g. C & c)? British English joined-up handwriting typically teaches not to join capital letters up (unlike US cursive); does that count as a difference (this would also distinguish C & c)? Does raising or lowering count as a difference (e.g. Ψ & ψ)? Does drawing it a little more curvily count (e.g. Ρ & ρ)?
Etc etc etc.

I might add that the capitals are not only easier to carve on stone; they are also easier to scratch on waxed tablets, on shards of pottery (the "ballots" of their world), sheets of lead, etc... anything but papyrus or vellum. Think runes.

@Draconis Many thanks for your informative answer. Has research been done on the sequence of adaptation, for example the progression from Σ to σ, as I would imagine it never happened overnight?

@Farcher In that particular case the missing link is the lunate sigma Ϲϲ, which appeared in both carved inscriptions and written papyri throughout the Hellenistic & Byzantine periods. The modern lower-case sigma evolved from this by closing the loop (or adding a serif down at the end of a word), whilst the modern upper-case sigma is an archaism.
Recently I was assigned to do data migration from Magento 1 to Magento 2.1.0. In this article, I will share some points you must look out for when migrating to Magento 2. There are several things you must be aware of before you start migration:
- Magento 2 is a memory hog, so the suggested 2GB RAM is just a bare minimum to run Magento 2. In most cases 2GB RAM will not be enough if you decide to do migration. If you are limited in RAM you can add swap space.
- When installing the Data Migration Tool you will need your authentication keys. Your public key is your username; your private key is your password.
- If your Magento 1 store has a lot of products you will have to increase the value of max_allowed_packet in the MySQL configuration file. I set it to 32 MB.
- Be sure to install the Magento 2 Data Migration Tool of the exact same version as your Magento 2 website.

Before I started migration, I replicated the source Magento 1 database on my local server. After installing the Data Migration Tool I copied config.xml.dist (in my case in the vendor/magento/data-migration-tool/etc/ce-to-ce folder, under the directory named for my Magento 1 version) to config.xml and added the source and destination database MySQL credentials as described in the Magento Migration Guide. When you start the migration tool, you will get a lot of errors such as "Source fields not mapped", "Source documents not mapped", etc. You will have to ignore those database tables and columns in the corresponding mapping files. You will be working a lot with the map.xml.dist file. Since data migration is done in several steps, you will also be working with mapping files located one directory above (ce-to-ce). In my case I had to change map-eav.xml.dist, map-customer.xml.dist, eav-attribute-groups.xml.dist, and class-map.xml.dist. I also ran into a problem where, after I migrated the website, product categories returned 404 Not Found when clicked. I figured out it had to do with the Url Rewrite step in the data migration process.
After playing around with the Version191to2000.php file (this is where the core_url_rewrite table from Magento 1 is translated to the url_rewrite table in Magento 2) I did not have much luck. The better solution was to simply skip the Url Rewrite step (I just commented it out in the config.xml file) and run re-indexing after migration, so the URL rewrites were rebuilt automatically. If your Magento 1 website has an SSL certificate installed, you will not be able to get to your Magento 2 admin login page if your Magento 2 website does not have SSL. (By the way, your login, password and admin login page do not change after migration. Data migration does not migrate admin accounts, though; you will have to recreate them manually.) In order to fix that, you need to set the value of "web/secure/use_in_adminhtml" to 0 in the core_config_data table in the Magento 2 database. After doing so and cleaning the Magento cache from the command line you should be able to get to your Magento 2 admin login page. After migration you may also run into a problem where you cannot edit your customers or products. One possible cause is that you have data for your Magento 1 plugins/modules in the database, but those plugins/modules are not installed on the Magento 2 website. To fix this, check your eav_attribute (customer_eav_attribute) tables, locate the data that the old plugins/modules were using and remove those rows from the table. These are a few notes that I wanted to share after I did a Magento 2 data migration.
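For illustration, the core_config_data fix boils down to a single UPDATE statement. This sketch runs it against an in-memory SQLite stand-in; the real table lives in your Magento 2 MySQL database, where you would run the same UPDATE:

```python
import sqlite3

# SQLite stands in for MySQL here, purely so the statement can be shown end-to-end
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE core_config_data (path TEXT, value TEXT)")
conn.execute("INSERT INTO core_config_data VALUES ('web/secure/use_in_adminhtml', '1')")

# The fix: turn off 'use secure URLs in admin' so the login page loads without SSL
conn.execute(
    "UPDATE core_config_data SET value = '0' "
    "WHERE path = 'web/secure/use_in_adminhtml'"
)

row = conn.execute(
    "SELECT value FROM core_config_data WHERE path = 'web/secure/use_in_adminhtml'"
).fetchone()
print(row[0])  # → 0
```

Remember that Magento caches configuration, so the change only takes effect after flushing the cache as described above.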
<?php /** * MvcCore * * This source file is subject to the BSD 3 License * For the full copyright and license information, please view * the LICENSE.md file that are distributed with this source code. * * @copyright Copyright (c) 2016 Tom Flidr (https://github.com/mvccore) * @license https://mvccore.github.io/docs/mvccore/5.0.0/LICENSE.md */ namespace MvcCore\Ext\Routers\Media; use MvcCore\Ext\Routers; /** * @mixin \MvcCore\Ext\Routers\Media */ trait PropsGettersSetters { /************************************************************************************* * Configurable Properties * ************************************************************************************/ /** * Url prefixes prepended before request URL path to describe media site version in url. * Keys are media site version values and values in array are URL prefixes, how * to describe media site version in url. * Full version with possible empty string prefix is necessary to put as last item. * If you do not want to use rewrite routes, just put under your allowed keys any values. * @var array */ protected $allowedMediaVersionsAndUrlValues = [ Routers\IMedia::MEDIA_VERSION_MOBILE => 'm', Routers\IMedia::MEDIA_VERSION_TABLET => 't', Routers\IMedia::MEDIA_VERSION_FULL => '', ]; /*************************************************************************** * Internal Properties * **************************************************************************/ /** * Media site version to switch user into. * Default value is `NULL`. If there is any media site version * under key `media_version` in `$_GET` array, this property * has string with this value. * @var string|NULL */ protected $switchUriParamMediaSiteVersion = NULL; /** * Resolved media site version, used in `\MvcCore\Request` * object and possible to use in controller or view. * Possible values are always: `"full" | "tablet" | "mobile" | NULL`. * @var string|NULL */ protected $mediaSiteVersion = NULL; /** * Media site version founded in session. 
	 * @var string|NULL
	 */
	protected $sessionMediaSiteVersion = NULL;

	/**
	 * Requested media site version.
	 * @var string|NULL
	 */
	protected $requestMediaSiteVersion = NULL;

	/**
	 * If `NULL`, request was not first, there was something in session stored by
	 * previous requests.
	 * If `TRUE`, request was first, nothing was in session from previous requests
	 * and detected version is the same as requested media site version.
	 * If `FALSE`, request was first, nothing was in session from previous requests
	 * and detected version is different from requested media site version.
	 * In that case it is necessary to redirect the user from the first request to
	 * the detected version.
	 * @var bool|NULL
	 */
	protected $firstRequestMediaDetection = NULL;

	/***************************************************************************
	 *                              Public Methods                             *
	 **************************************************************************/

	/**
	 * Get resolved media site version, used in `\MvcCore\Request`
	 * object and possible to use in controller or view.
	 * Possible values are always: `"full" | "tablet" | "mobile" | NULL`.
	 * @return string|NULL
	 */
	public function GetMediaSiteVersion () {
		return $this->mediaSiteVersion;
	}

	/**
	 * Set media site version, used in `\MvcCore\Request`
	 * object and possible to use in controller or view.
	 * Possible values are always: `"full" | "tablet" | "mobile" | NULL`.
	 * @param string|NULL $mediaSiteVersion
	 * @return \MvcCore\Ext\Routers\Media
	 */
	public function SetMediaSiteVersion ($mediaSiteVersion) {
		$this->mediaSiteVersion = $mediaSiteVersion;
		return $this;
	}

	/**
	 * Get URL prefixes prepended before the request URL path to describe the
	 * media site version in the URL. Keys are media site version values and
	 * values in the array are URL prefixes describing the media site version
	 * in the URL. The full version, with a possibly empty string prefix, must
	 * be the last item. If you do not want to use rewrite routes, just put any
	 * values under your allowed keys.
	 * Example:
	 * ```
	 * [
	 *     'mobile' => 'm', // to have the `/m` substring at the start of every mobile URL.
	 *     'full'   => '',  // to have nothing extra in the URL for the full site version.
	 * ];
	 * ```
	 * @return array
	 */
	public function GetAllowedMediaVersionsAndUrlValues () {
		return $this->allowedMediaVersionsAndUrlValues;
	}

	/**
	 * Set URL prefixes prepended before the request URL path to describe the
	 * media site version in the URL. Keys are media site version values and
	 * values in the array are URL prefixes describing the media site version
	 * in the URL. The full version, with a possibly empty string prefix, must
	 * be the last item. If you do not want to use rewrite routes, just put any
	 * values under your allowed keys.
	 * Example:
	 * ```
	 * \MvcCore\Ext\Routers\Media::GetInstance()->SetAllowedMediaVersionsAndUrlValues([
	 *     'mobile' => 'm', // to have the `/m` substring at the start of every mobile URL.
	 *     'full'   => '',  // to have nothing extra in the URL for the full site version.
	 * ]);
	 * ```
	 * @param array $allowedMediaVersionsAndUrlValues
	 * @return \MvcCore\Ext\Routers\Media
	 */
	public function SetAllowedMediaVersionsAndUrlValues ($allowedMediaVersionsAndUrlValues = []) {
		$this->allowedMediaVersionsAndUrlValues = $allowedMediaVersionsAndUrlValues;
		return $this;
	}

	/***************************************************************************
	 *                             Protected Methods                           *
	 **************************************************************************/

	/**
	 * Return the media site version string value for the redirection URL. If
	 * the media site version is defined by a `GET` query string param, return
	 * `NULL` and set the target media site version string into the `GET`
	 * params, to complete the query string params in the redirect URL later.
	 * But if the target media site version string is the same as the full
	 * media site version (the default value), unset this param from the `GET`
	 * params array and return `NULL` as well.
	 * @param string $targetMediaSiteVersion Media site version string.
	 * @return string|NULL
	 */
	protected function redirectMediaGetUrlValueAndUnsetGet ($targetMediaSiteVersion) {
		$mediaVersionUrlParam = static::URL_PARAM_MEDIA_VERSION;
		if (isset($this->requestGlobalGet[$mediaVersionUrlParam])) {
			if ($targetMediaSiteVersion === static::MEDIA_VERSION_FULL) {
				unset($this->requestGlobalGet[$mediaVersionUrlParam]);
			} else {
				$this->requestGlobalGet[$mediaVersionUrlParam] = $targetMediaSiteVersion;
			}
			$targetMediaUrlValue = NULL;
		} else {
			$targetMediaUrlValue = $this->allowedMediaVersionsAndUrlValues[$targetMediaSiteVersion];
		}
		return $targetMediaUrlValue;
	}
}
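Stripped of the PHP plumbing, the redirect helper makes one small decision: if the media version arrived as a `GET` param, rewrite or drop that param and signal "no URL prefix" by returning `NULL`; otherwise look the prefix up in the allowed-versions map. A language-neutral sketch of that same logic in Python (the param name and the allowed-versions map are illustrative stand-ins for the class constants and configuration):

```python
MEDIA_VERSION_FULL = "full"
URL_PARAM_MEDIA_VERSION = "media_version"  # stand-in for the class constant

# Mirrors GetAllowedMediaVersionsAndUrlValues(): version -> URL prefix,
# full version last with an empty prefix.
ALLOWED_VERSIONS = {"mobile": "m", "full": ""}

def redirect_media_url_value(get_params, target_version):
    """Return the URL prefix for the redirect, or None when the version
    is carried in the query string instead (mutating get_params)."""
    if URL_PARAM_MEDIA_VERSION in get_params:
        if target_version == MEDIA_VERSION_FULL:
            # Full is the default: drop the param entirely.
            del get_params[URL_PARAM_MEDIA_VERSION]
        else:
            get_params[URL_PARAM_MEDIA_VERSION] = target_version
        return None
    return ALLOWED_VERSIONS[target_version]

params = {"media_version": "mobile"}
print(redirect_media_url_value(params, "full"), params)  # None {}
print(redirect_media_url_value({}, "mobile"))            # m
```

The `NULL`/`None` return doubles as a flag telling the caller that the version lives in the query string, so no path prefix should be emitted.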
IT & Software Online Course by Udemy, On Sale Here

Practice exams to test your knowledge and help you pass your real MS-100 exam on the first attempt (includes case study questions). An excellent training for the IT certification MS-100: Microsoft 365 Identity and Services: Tests 2021.

Welcome to the practice tests for MS-100: Microsoft 365 Identity and Services. The exam targets Microsoft 365 Enterprise Administrators who take part in evaluating, planning, migrating, deploying, and managing Microsoft 365 services. They perform Microsoft 365 tenant-management tasks for an enterprise, including its identities, security, compliance, and supporting technologies. This practice test will help you prepare for the real Microsoft official exam environment.

The Microsoft MS-100: Microsoft 365 Identity and Services exam measures the following skills:

Design and implement Microsoft 365 services (25-30%)
- Manage domains: add and configure additional domains; configure user identities for new domain name; configure workloads for new domain name; design domain name configuration; set primary domain name; verify custom domain
- Plan a Microsoft 365 implementation: plan for Microsoft 365 on-premises infrastructure; plan identity and authentication solution
- Set up Microsoft 365 tenancy and subscription: configure subscription and tenant roles and workload settings; evaluate Microsoft 365 for organization; plan and create tenant; upgrade existing subscriptions to Microsoft 365; monitor license allocations
- Manage Microsoft 365 subscription and tenant health: manage service health alerts; create & manage service requests; create internal service health response plan; monitor service health; configure and review reports, including BI, OMS, and Microsoft 365 reporting; schedule and review security and compliance reports; schedule and review usage metrics
- Plan migration of users and data: identify data to be migrated and method; identify users and mailboxes to be migrated and method; plan migration of on-prem users and groups; import PST files

Manage user identity and roles (35-40%)
- Design identity strategy: evaluate requirements and solution for synchronization; evaluate requirements and solution for identity management; evaluate requirements and solution for authentication
- Plan identity synchronization by using Azure AD Connect: design directory synchronization; implement directory synchronization with directory services, federation services, and Azure endpoints
- Manage identity synchronization by using Azure AD Connect: monitor Azure AD Connect Health; manage Azure AD Connect synchronization; configure object filters; configure password sync; implement multi-forest AD Connect scenarios
- Manage Azure AD identities: plan Azure AD identities; implement and manage Azure AD self-service password reset; manage access reviews; manage groups; manage passwords; manage product licenses; manage users; perform bulk user management
- Manage user roles: plan user roles; allocate roles in workloads; configure administrative accounts; configure RBAC within Azure AD; delegate admin rights; manage admin roles; manage role allocations by using Azure AD; plan security and compliance roles for Microsoft 365

Manage access and authentication (20-25%)
- Manage authentication: design authentication method; configure authentication; implement authentication method; manage authentication; monitor authentication
- Implement Multi-Factor Authentication (MFA): design an MFA solution; configure MFA for apps or users; administer MFA users; report MFA utilization
- Configure application access: configure application registration in Azure AD; configure Azure AD application proxy; publish enterprise apps in Azure AD
- Implement access for external users of Microsoft 365 workloads: create B2B accounts; create guest accounts; design solutions for external access

Plan Office 365 workloads and applications (10-15%)
- Plan for Office 365 workload deployment: identify hybrid requirements; plan connectivity and data flow for each workload; plan for Microsoft 365 workload connectivity; plan migration strategy for workloads
- Plan Office 365 applications deployment: manage Office 365 software downloads; plan for Office 365 apps; plan for Office 365 ProPlus apps updates; plan for Office 365 ProPlus connectivity; plan for Office Online; plan Office 365 ProPlus deployment

"This is an unofficial course and is not affiliated, licensed or trademarked with Microsoft in any way."

Best of luck!

Udemy is the leading global marketplace for learning and instruction. By connecting students all over the world to the best instructors, Udemy is helping individuals reach their goals and pursue their dreams. Study anytime, anywhere.
Can we uncheckout elements on a hierarchy basis? I would like to uncheckout a list of elements in a hierarchical manner. I.e., let's say folder1, folder2 and folder3 are arranged hierarchically: folder3 is the child, folder2 is the parent, folder1 is the grandparent. I would like to uncheckout the child first, then the parent, then the grandparent. How can I find out which one is the parent and which one is the grandparent? Any leads are welcome.

You would need to write a script which walks all elements of each folder, starting with the one having the most depth: "F3". The script would then go one level up (possibly F2) and repeat the process for all elements, except subfolders. And then one level up, until you get to F1. In each folder, you can combine cleartool find -exec with the unco command in order to automate the ct unco step. See "How do I perform a recursive checkout using ClearCase?" (except in your case, it is unco, with files first, folders second). Reminder: unco means unco files first, and only then unco folders.

I have a problem with the logic, gentlemen; I can't solve it. I have tried two logics but can't fix it. Since F3 is the last grandchild, I have to uncheckout that first, then F2, then F1. Here F1, F2, F3 will be in a tree, so I can take one element and check whether it has any parent node with a specific key, but this logic doesn't work in the expected manner.

@ArockiaJegan That would be best addressed in a separate question where you can illustrate what exact command you have executed, and the cleartool status which would show it did not work as intended.

https://stackoverflow.com/q/71143019/7567396 … Hi @VonC, please refer to this post and let me know if you can help; I feel you may know. The reason we need to do this is that if we uncheckout the parent first, all the other directories will get corrupted. As we don't have many resources for this kind of problem, I got stuck.

@ArockiaJegan What programming language or scripting language are you using for that?

I am using XML scripting which has OOP concepts; it's called Linked modular Object Oriented XML.

@ArockiaJegan Interesting. Do mention that clearly in your question, and someone familiar with Linked modular Object Oriented XML might answer.

That's not possible, gentlemen. That term is proprietary to my office, so no one will know it. I just want to know the logic on a tree basis. I have been struggling here for more than two weeks, and it's still blocking me.
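The core of the "children before parents" ordering the answer describes can be reduced to sorting paths by depth, deepest first, so every element is processed before the folder that contains it. A minimal Python sketch (the paths are hypothetical; the actual uncheckout of each element would be a `cleartool unco` call, which is outside this sketch):

```python
from pathlib import PurePosixPath

def deepest_first(paths):
    """Order paths so that children come before their parents
    (files before their containing folders) -- the order needed
    for a recursive unco."""
    return sorted(paths, key=lambda p: len(PurePosixPath(p).parts), reverse=True)

checked_out = ["/vob/f1", "/vob/f1/f2", "/vob/f1/f2/f3", "/vob/f1/f2/f3/a.c"]
for element in deepest_first(checked_out):
    # here you would run: cleartool unco -rm <element>
    print(element)
```

Sorting by path-component count means you never need to know explicitly which node is the parent or grandparent; the depth ordering encodes it.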
[scribus-dev] Tables GSoC - Weekly Report #10
elvstone at gmail.com
Tue Aug 9 19:39:11 UTC 2011

I think you're all pretty much up to date with what I've been doing, but here's a short report for week #10 of the project. I'm also including a little screencast showing some of the new features. Some of it you've probably seen before through my blog or through IRC pastes.

= Tables GSoC - Weekly Report #10 =

== Work Report ==

* Fixed a bug in the painting of page grid, baseline grid, guides and margins. The old code assumed that ScPainter would stroke even though stroke mode was set to None. Unrelated to the table project but was uncovered by a change I made to ScPainter.
* Simplified the table hit test a bit.
* Display tooltips with width/height during resizing of rows/columns/table.
* Gave the table-related canvas gestures a common base class to avoid some code duplication.
* Added support for selecting cells. Individual cells may be selected by clicking them, rectangular areas by click-dragging. The selection is painted as a blue-tinted overlay. The new API for selecting cells on the table is:
** void selectCell(int row, int column);
** void selectCells(int startRow, int startColumn, int endRow, int endColumn);
** QSet<TableCell> selection() const;
** void clearSelection();
* Added an optimization to the painting of the table outline during resizing. I'd say it's now ~10x faster.
* Added support for resizing rows/columns by moving the boundaries between them independently. This mode is entered by holding down Shift while resizing a row/column.
* Added padding properties to cell and cell style, and scripting methods to set them.
* Started working on cell text content. It's now possible to set the content of a cell using setText() (scripter API also available). The text of the cell is represented by a PageItem_TextFrame, and is painted as part of normal table painting.
The calculation of the content rectangle occupied by the text frame takes all the borders along the cell edges as well as paddings into account, and is updated if any borders/paddings are changed, if a style is changed or if a style is set/unset.

== Project Status ==

I don't really feel there's a reason to bring up what was in my schedule for this past week, but for the record, here it goes anyway:

* Scripting API.
* CSV and/or ODT import.

Funny how "Scripting API" is in there, since it has been a cornerstone for me when it comes to testing throughout the project :) Jokes aside, I feel pretty good about the project status. I was pleasantly surprised that getting the text frames painted went so fast (except that overflowing problem of course). Now it remains to be seen what kind of problems I'll hit with editing. But I'm staying positive. There are only a couple of weeks left until the official pencils down date, and after that I will work another week full time before the firm pencils down date.

== Problems / Questions ==

Nothing at the moment. I thought I had run into a wall with that overflowing problem, but it turned out to be just a one-liner fix after all.

== Next Week ==

On the schedule for week 11 (this week) is:

* Wrapping up for final evaluation.
* Fix bugs/finish up what I have so far. This means I'll try to do as many of the following tasks as possible, in order of priority:
* Basic cell text editing.
* Context menu.
* PP UI.
* SM UI.
* Saving / Loading.
* Painting for print output.
- TensorFlow and Paperspace
- Why Train TensorFlow Models on Paperspace?
- How to Train TensorFlow Models on Paperspace
- Tips for Training TensorFlow Models on Paperspace
- Best Practices for Training TensorFlow Models on Paperspace
- FAQs about Training TensorFlow Models on Paperspace
- Additional Resources for Training TensorFlow Models on Paperspace
- About the Author
- Connect with the Author

If you're looking to speed up the training of your TensorFlow models, Paperspace is a great option. In this blog post, we'll show you how to set up a Paperspace machine and train your models on it. Check out this video:

TensorFlow and Paperspace

TensorFlow is a powerful tool for machine learning, but training models can be computationally intensive. Paperspace provides a simple, cost-effective way to train your models in the cloud.

To get started, sign up for a free account at paperspace.com. Once you have an account, you can create a new workspace in the Paperspace Console. Choose the SDK and framework you want to use from the list of available options; for this guide, we will use TensorFlow with Python 3.6. Once your workspace has been created, you can clone the TensorFlow tutorial repository from GitHub and follow the instructions in the README file to train your model.

Why Train TensorFlow Models on Paperspace?

There are many reasons to train TensorFlow models on Paperspace. One reason is that it can save you time and money. Another is that it can be more convenient than training on your local machine. Some of the benefits of training on Paperspace include:

– accelerated training times
– increased flexibility
– access to powerful GPUs
– a fully managed environment

How to Train TensorFlow Models on Paperspace

TensorFlow is a powerful open-source software library for data analysis and machine learning. With TensorFlow, you can train your own machine learning models to improve your app or website.
In this tutorial, you will learn how to use Paperspace to train TensorFlow models. Paperspace is a cloud computing platform that offers free GPU machines for machine learning.

To use Paperspace, you will need to sign up for a free account. Once you have an account, you can create a new "machine" for training your TensorFlow model.

Creating a new machine on Paperspace is simple. First, log in to your Paperspace account and click the "Machines" button on the left sidebar. Then, click the "Create" button and select "TensorFlow Machine."

You will be prompted to choose a machine type. For this tutorial, we recommend using the "P4000" machine type, which offers 4GB of GPU memory and is ideal for training small- to medium-sized TensorFlow models. You can also use the "P5000" or "P6000" machine types if you need more GPU power.

Once you have selected a machine type, you will be prompted to choose an operating system. For this tutorial, we recommend using the latest version of Ubuntu 16.04 LTS (64-bit). Once you have selected an operating system, click the "Create Machine" button and wait for your machine to be created.

Once your machine has been created, click the "Machines" button on the left sidebar and select your new machine from the list. Then, click the "Connect" button and follow the instructions to connect to your machine using SSH.

Once you are connected to your Paperspace machine, it's time to install TensorFlow. First, update your package manager:

```bash
$ sudo apt-get update
```

Next, install pip (a tool for installing Python packages):

```bash
$ sudo apt-get install python-pip
```

Finally, use pip to install TensorFlow:

```bash
$ sudo pip install tensorflow
```

You can now close the SSH connection to your Paperspace machine; we won't need it anymore for this tutorial.

In order to train your TensorFlow model on Paperspace, you will need two things: 1) training data and 2) a training script.
The training data can be any file that contains information TensorFlow can be trained on. This could be a dataset in CSV format or even just a plain text file. Paperspace provides tooling that makes it easy to manage these resources, called Datasets.

The next step is writing a script which trains (and evaluates) models. This could also live in many places, but we recommend storing it in an input/output storage container.

After provisioning these three things: 1) data, 2) code, and 3) a container with libraries, we're ready to run our job!

Tips for Training TensorFlow Models on Paperspace

If you're planning to train a TensorFlow model, there are a few things you'll need to keep in mind. Here are some tips to help you get the most out of your training on Paperspace:

– Choose the right instance type. If you're training a small model, a CPU instance will probably be sufficient. For larger models, however, you'll need a GPU instance.
– Make sure your training data is well-organized and easily accessible. If your data is spread out across multiple locations, it will be more difficult to train your model effectively.
– Make use of Paperspace's built-in features, such as our data ingestion and preprocessing tools. These can save you a lot of time and effort when it comes to preparing your data for training.
– Utilize our mastery feature to keep track of your training progress and ensure that your models are converging as expected.

Best Practices for Training TensorFlow Models on Paperspace

Training a TensorFlow model on Paperspace can be a great way to get the most out of your resources and minimize training time. However, there are a few things to keep in mind in order to get the most out of your training session.

One of the most important things to consider is the number of workers you use for training. Too few workers can lead to longer training times, while too many workers can lead to poor resource utilization and decreased performance.
The best way to determine the optimal number of workers is to experiment with different values and see what works best for your training data and model.

Another important consideration is the type of machine you use for training. TensorFlow models can be very resource-intensive, so it's important to choose a machine that has enough power to handle your training requirements. Paperspace offers a variety of machines, so be sure to choose one that will be able to handle your training load.

Finally, keep in mind that training a TensorFlow model can take a long time. Be patient and give your model time to converge before interrupting the training process.

FAQs about Training TensorFlow Models on Paperspace

Q: How do I train a TensorFlow model on Paperspace?
A: You can train a TensorFlow model on Paperspace by using the TensorFlow-GPU Docker container. This container includes all of the necessary dependencies to run TensorFlow on a GPU, so you can train your model with maximum performance.

Q: What are the benefits of training my TensorFlow model on Paperspace?
A: Training your TensorFlow model on Paperspace allows you to take advantage of our high-performance GPU machines, which can significantly speed up training time. In addition, Paperspace provides a fully managed environment for training your models, so you don't have to worry about setting up and maintaining your own infrastructure.

Q: What are the requirements for training a TensorFlow model on Paperspace?
A: To train a TensorFlow model on Paperspace, you will need to sign up for an account and create a machine with a GPU. You can find more information about how to do this in our documentation.

Additional Resources for Training TensorFlow Models on Paperspace

If you're looking for additional resources for training your TensorFlow models on Paperspace, check out our blog post on the subject. In it, we provide some tips and tricks for getting the most out of your training session.
We also have a helpful tutorial that walks you through the process of training a simple TensorFlow model on Paperspace. If you're new to TensorFlow or machine learning in general, this tutorial is a great place to start. This tutorial showed you how to train a TensorFlow model on Paperspace: you configured a Gradient job using the Paperspace Gradient Jobs API, created a model using TensorFlow, and finally trained the model on the MNIST dataset.

About the Author

Paperspace is an AI development company that offers a platform for training machine learning models. The company was founded in 2016 by three veterans of the tech industry: AdamHTML, DevenH, and KelseyT. Paperspace's mission is to make it easy for developers to train machine learning models and deploy them in the cloud. The company offers a variety of services, including a platform for training models, a managed service for deploying models, and a hosted service for running inference. Paperspace's platform is used by some of the world's leading machine learning experts, including Y Combinator CEO Sam Altman and Google Brain co-founder Andrew Ng.

Connect with the Author

If you're looking to learn more about training TensorFlow models on Paperspace, be sure to check out the tutorial by our very own Dylan Silva! You can connect with him on GitHub and Twitter.
from datetime import datetime

from controller import db
import model.drop_point


class Capacity(db.Model):
    """The capacity of a drop point at some point in time.

    The capacity of drop points may change over time as empty crates
    can and will be added or removed on demand. Like the location, the
    capacity is tracked over time to allow for analysis and
    optimization after an event.

    Each capacity has a start time indicating the presence of a
    particular number of empty crates at the drop point at that time.
    If crates are added or removed, a new capacity with the respective
    start time is added. If the start time is null, the crates have
    been there forever. If the number of crates is null, the drop
    point only consists of a sign on the wall but no crates at all.
    """

    default_crate_count = 1

    cap_id = db.Column(db.Integer, primary_key=True)
    dp_id = db.Column(
        db.Integer, db.ForeignKey("drop_point.number"), nullable=False
    )
    dp = db.relationship("DropPoint")
    time = db.Column(db.DateTime)
    crates = db.Column(db.Integer, default=default_crate_count)

    def __init__(self, dp, time=None, crates=default_crate_count):
        errors = []

        if not isinstance(dp, model.drop_point.DropPoint):
            errors.append({"Capacity": "Not given a drop point object."})
            raise ValueError(errors)

        self.dp = dp

        if time and not isinstance(time, datetime):
            errors.append({"Capacity": "Start time not a datetime object."})

        if isinstance(time, datetime) and time > datetime.today():
            errors.append({"Capacity": "Start time in the future."})

        if dp.capacities and isinstance(time, datetime) and \
                time < dp.capacities[-1].time:
            errors.append({"Capacity": "Capacity older than current."})

        self.time = time if time else datetime.today()

        if crates is None:
            self.crates = self.default_crate_count
        else:
            try:
                self.crates = int(crates)
            except (TypeError, ValueError):
                errors.append({"crates": "Crate count is not a number."})
            else:
                if self.crates < 0:
                    errors.append({"crates": "Crate count is not positive."})

        if errors:
            raise ValueError(*errors)

        db.session.add(self)

    def __repr__(self):
        return "Capacity %s of drop point %s (%s crates since %s)" % (
            self.cap_id, self.dp_id, self.crates, self.time
        )

# vim: set expandtab ts=4 sw=4:
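The crate-count rules in `__init__` can be exercised in isolation, without SQLAlchemy. A minimal sketch (`validate_crates` is a hypothetical helper that mirrors those checks, not part of the model):

```python
def validate_crates(crates, default=1):
    """Mirror of Capacity's crate-count rules: None falls back to the
    default, non-numeric values are rejected, negative values are rejected."""
    if crates is None:
        return default
    try:
        value = int(crates)
    except (TypeError, ValueError):
        raise ValueError({"crates": "Crate count is not a number."})
    if value < 0:
        raise ValueError({"crates": "Crate count is not positive."})
    return value

print(validate_crates(None))  # falls back to the default of 1
print(validate_crates("3"))   # numeric strings are accepted
```

Note that, as in the model, a count of zero passes: a drop point that is just a sign on the wall is legal.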
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# @Author: José Sánchez-Gallego (gallegoj@uw.edu)
# @Date: 2022-09-01
# @Filename: store.py
# @License: BSD 3-clause (http://www.opensource.org/licenses/BSD-3-Clause)

from __future__ import annotations

from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import TYPE_CHECKING, Any


if TYPE_CHECKING:
    from .base import BaseActor, Reply


__all__ = ["KeywordStore", "KeywordOutput"]


class KeywordStore(defaultdict):
    """Stores the keywords output by an actor.

    Parameters
    ----------
    actor
        The actor to which this store is attached.
    filter
        A list of keyword names to filter. If provided, only those
        keywords will be tracked.

    """

    def __init__(self, actor: BaseActor, filter: list[str] | None = None):
        self.actor = actor
        self.name = self.actor.name
        self.filter = filter

        defaultdict.__init__(self, list)

    def add_reply(self, reply: Reply):
        """Processes a reply and adds new entries to the store.

        Parameters
        ----------
        reply
            The `.Reply` object containing the keywords output in a
            message from the actor.

        """
        for keyword, value in reply.message.items():
            if self.filter is not None and keyword not in self.filter:
                continue

            key_out = KeywordOutput(keyword, reply.message_code, datetime.now(), value)

            if keyword in self:
                self[keyword].append(key_out)
            else:
                self[keyword] = [key_out]

    def head(self, keyword: str, n: int = 1):
        """Returns the first N output values of a keyword.

        Parameters
        ----------
        keyword
            The name of the keyword to search for.
        n
            Return the first ``n`` times the keyword was output.

        """
        return self[keyword][:n]

    def tail(self, keyword: str, n: int = 1):
        """Returns the last N output values of a keyword.

        Parameters
        ----------
        keyword
            The name of the keyword to search for.
        n
            Return the last ``n`` times the keyword was output.

        """
        return self[keyword][-n:]


@dataclass
class KeywordOutput:
    """Records a single output of a keyword.

    Parameters
    ----------
    name
        The name of the keyword.
    message_code
        The message code with which the keyword was output.
    date
        A `.datetime` object with the date-time at which the keyword was
        output this time.
    value
        The value of the keyword when it was output.

    """

    name: str
    message_code: Any
    date: datetime
    value: Any
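To show how the store behaves without the actor plumbing, here is a condensed, self-contained re-statement of its logic (the `add_message` method stands in for `add_reply`, since the real `Reply` class is not available here):

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import Any

@dataclass
class KeywordOutput:
    name: str
    message_code: Any
    date: datetime
    value: Any

class KeywordStore(defaultdict):
    """Condensed re-statement of the store above, minus the actor plumbing."""

    def __init__(self, filter=None):
        self.filter = filter
        super().__init__(list)

    def add_message(self, message_code, message: dict):
        # Mirrors add_reply: record every keyword in the message,
        # skipping keywords not in the filter (when a filter is set).
        for keyword, value in message.items():
            if self.filter is not None and keyword not in self.filter:
                continue
            self[keyword].append(
                KeywordOutput(keyword, message_code, datetime.now(), value)
            )

    def head(self, keyword, n=1):
        return self[keyword][:n]

    def tail(self, keyword, n=1):
        return self[keyword][-n:]

store = KeywordStore(filter=["temperature"])
store.add_message("i", {"temperature": 20.1, "humidity": 0.5})  # humidity is filtered out
store.add_message("i", {"temperature": 20.4})
print([out.value for out in store.tail("temperature")])  # prints [20.4]
```

Because the store subclasses `defaultdict(list)`, looking up an unseen keyword simply yields an empty history rather than raising.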
// Sensor field (solves collide problem)
// I've set a key-less child sprite to the player in the same size and set the anchor
// depending on the direction the player looks (0.5 +/- 0.25, 0.5 +/- 0.25), to let it
// leap a bit off. With this "sensor field" I can work with overlap to pass the object
// for evaluation.
// Battle plan for sensor field:
// - Create a sprite inside the player file, otherwise create a separate file
// - On player move - fix it to the player location
// - Move it ahead a little bit, so that it can overlap with others
// - Update it depending on which direction the player sprite faces

import { Grow } from './../index'
import store from '../../../index'

export default class Player extends Phaser.Physics.Arcade.Sprite {
    constructor(config) {
        super(config.scene, config.x, config.y, config.key);
        this.scene = config.scene;
        let scene = this.scene;

        this.actionCounter = 0; // probably not necessary
        this.inAction = false;
        this.inDialogue = false;
        this.contactWithCharacter = false;
        this.characterLastContacted = null;
        this.isAllowedToMove = true;
        this.characterInteraction = [];

        // Cursors registered by Phaser!
        this.cursors = scene.input.keyboard.createCursorKeys();

        scene.physics.world.enable(this);

        // Later on, spawn character from this position on login
        /*this.lastPosition = {
            x: x,
            y: y,
            emittedOn: Date.now()
        };*/

        // Create a sprite with physics enabled via the physics system. The image used
        // for the sprite has a bit of whitespace, so I'm using setSize & setOffset to
        // control the size of the player's body.
        this.setSize(30, 40);
        this.setOffset(0, 24);

        // UNFINISHED: Make the collision area around the sprite smaller
        //this.frame.centerX = 0;

        if (scene.anims.anims.entries["misa-left-walk"] === undefined) {
            // pretty good place to put things that should only be created once :p
            scene.anims.create({
                key: "misa-left-walk",
                frames: scene.anims.generateFrameNames("atlas", {
                    prefix: "misa-left-walk.", start: 0, end: 3, zeroPad: 3
                }),
                frameRate: 10,
                repeat: -1
            });
            scene.anims.create({
                key: "misa-right-walk",
                frames: scene.anims.generateFrameNames("atlas", {
                    prefix: "misa-right-walk.", start: 0, end: 3, zeroPad: 3
                }),
                frameRate: 10,
                repeat: -1
            });
            scene.anims.create({
                key: "misa-front-walk",
                frames: scene.anims.generateFrameNames("atlas", {
                    prefix: "misa-front-walk.", start: 0, end: 3, zeroPad: 3
                }),
                frameRate: 10,
                repeat: -1
            });
            scene.anims.create({
                key: "misa-back-walk",
                frames: scene.anims.generateFrameNames("atlas", {
                    prefix: "misa-back-walk.", start: 0, end: 3, zeroPad: 3
                }),
                frameRate: 10,
                repeat: -1
            });
        }

        config.scene.add.existing(this);

        // Sensor field!
        this.sensorField = scene.physics.add.sprite(this.x, this.y);
        this.sensorField.frame.width = 30;
        this.sensorField.frame.height = 40;
    } // End of constructor

    move(time, delta) {
        // Movement
        if (this.isAllowedToMove === true) {
            const speed = 150;
            this.prevVelocity = this.body.velocity.clone();

            // Current movement pattern:
            // register that a key is down --> .isDown === true;
            // there is no function / event listener to set .isDown === false.
            // Idea: listen for how long a key was pressed beforehand and set it to
            // false if it wasn't pressed in the last 100 ms.
            // 2nd idea: make the character always move a full field, like in Pokémon.

            // Stop any previous movement from the last frame
            this.body.setVelocity(0);

            if (this.cursors.left.isDown) {
                this.body.setVelocityX(-speed);
                this.sensorField.x = this.x - 13;
                this.sensorField.y = this.y + 15;
            } else if (this.cursors.right.isDown) {
                this.body.setVelocityX(speed);
                this.sensorField.x = this.x + 10;
                this.sensorField.y = this.y + 15;
            } else if (this.cursors.up.isDown) {
                this.body.setVelocityY(-speed);
                this.sensorField.x = this.x;
                this.sensorField.y = this.y - 5;
            } else if (this.cursors.down.isDown) {
                this.body.setVelocityY(speed);
                this.sensorField.x = this.x;
                this.sensorField.y = this.y + 28;
            }

            // Update the animation last and give left/right animations precedence
            // over up/down animations. Should be added to the movement function above?
            if (this.cursors.left.isDown) {
                this.anims.play("misa-left-walk", true);
            } else if (this.cursors.right.isDown) {
                this.anims.play("misa-right-walk", true);
            } else if (this.cursors.up.isDown) {
                this.anims.play("misa-back-walk", true);
            } else if (this.cursors.down.isDown) {
                this.anims.play("misa-front-walk", true);
            } else {
                this.anims.stop();

                // If we were moving, pick an idle frame to use
                if (this.prevVelocity.x < 0) this.setTexture("atlas", "misa-left");
                else if (this.prevVelocity.x > 0) this.setTexture("atlas", "misa-right");
                else if (this.prevVelocity.y < 0) this.setTexture("atlas", "misa-back");
                else if (this.prevVelocity.y > 0) this.setTexture("atlas", "misa-front");
            }

            /*if (this.input.keyboard.isDown(Phaser.Keyboard.LEFT)) {
                console.log('yeeehaaa')
            }*/

            // I don't know yet how to stop the animation after keypress; this should
            // happen automatically, but this is currently not the case. I think it is
            // because of how we installed the thing; maybe it will work if we run it
            // like a Vue app?
            /*this.cursors.left.isDown = false;
            this.cursors.right.isDown = false;
            this.cursors.up.isDown = false;
            this.cursors.down.isDown = false;*/
        } // End of "player is allowed to move" check

        // Player functions
    }
} // End of export
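The hard-coded sensor positions in the movement branches above amount to a lookup from facing direction to an (x, y) displacement from the player. A small Python sketch of that mapping (the offset values are copied from the code; the function name is invented):

```python
# Offsets copied from the movement code: where the sensor field sits
# relative to the player for each facing direction.
SENSOR_OFFSETS = {
    "left":  (-13, 15),
    "right": (10, 15),
    "up":    (0, -5),
    "down":  (0, 28),
}

def sensor_position(player_x, player_y, direction):
    """Return the sensor-field position for a player at (x, y) facing `direction`."""
    dx, dy = SENSOR_OFFSETS[direction]
    return player_x + dx, player_y + dy

print(sensor_position(100, 200, "down"))  # (100, 228)
```

Expressing the four branches as a table like this would also let the JS code update the sensor field in one place instead of once per cursor branch.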
I had made an entry (because it asked me to give a name) in boot manager as “Windows 7 Enterprise” at the time of installation…there was also another entry “system reserved” This “system reserved” entry itself when I select, boots the Windows 7 But when I select the “windows 7 enterprise”…it says not found Now, I want to delete this useless entry “windows 7 Enterprise” When I was going through the user guide gives a Location: /boot/system/apps/BootManager But, I do not find this on my system… Not only this, even for Mail application, the paths / directorys / files/ mentioned in the guide do not exist on the system But, I see that on every upgrade of nightly, there is an upgrade of user-guide… What are the things regularly updated in this user-guide? Somebody please clarify… Haiku has to be running for you to see things. Unlike on linux based system, packages are not extracted; they are mounted. That’s why if Haiku is not running, paths are indeed different; you will only see packages and things that are in writable directories. So boot on Haiku and run BootManager. Select, or deselect, the partitions has you need and overwrite boot sector. I have checked in the system when haiku has booted up… With Haiku running, I checked the files in system with what os mentioned in user guide…could not find it… Most likely Tracker is showing you translated names for apps and folders. You can either use the Terminal app to start the boot manager (Open the Terminal app, type BootManager, and hit enter), or you can open the “Locale” preference app, go to the Formatting tab, and un-check the “Translate application and folder names”. (No idea how that looks in the language you use, sorry). I guess that if the user-guide in your language does not matches with what you see when using that “translated names” setting… you should file a ticket, so they get fixed/syncronized. 
Ha Ha… my system is installed in English language…I am reffering to the user guide in English language I will try from Terminal…as you suggested I just checked, and in beta4, Tracker clearly shows BootManager under /boot/system/apps/ (using English in the GUI) for me. No idea what’s wrong with your system then Are you sure you look into /boot/system/apps/ and not into the system folder of some other Haiku installation you’re not currently booted into? There has to be some misunderstanding on your or our side… You could post a screenshot of your /boot/system/apps/ folder and/or paste the output of ls /boot/system/apps in Terminal. This command shows as below…but when I click on the ‘Home’ drive and try to see, it does not show AboutSystem DriveSetup MediaPlayer SoundRecorder ActivityMonitor Expander MidiPlayer StyledEdit AutoRaise FilWip NetworkStatus Terminal Beam GLInfo Pe TextSearch BePDF HaikuDepot People TV BootManager Icon-O-Matic PoorMan Vision CharacterMap Installer PowerStatus VLC CodyCam LaunchBox ProcessController WebPositive Debugger LegacyPackageInstaller RemoteDesktop WonderBrush DeskCalc LibreOffice Screenshot Workspaces Devices Magnify SerialConnect DiskProbe Mail ShowImage DiskUsage MediaConverter SoftwareUpdater ‘Un-check’ ing the translate application has no effect, since my installation is in English itself I think you confuse the Home with the System hierarchy. All apps are installed in /boot/system/apps. There’s no /boot/home/apps. The “Home” on the Desktop links to /boot/home/ To change the entries in the bootloader, which file is to be edited?..Pl. clarify Just re-run BootManager and only set the checkmarks on the partitions containing an OS you need to boot. Like the “system reserved” and not “windows 7 enterprise” in your case, and edit the labels as you wish them to appear in the boot manager. 
Yes, but this screen is truncated: only the soft tab "Previous" is visible; the soft tab "Next" is not, the screen space is shortened, and no scrollbar appears. How do I get the full view?

There are some known layout issues. For me, it works to resize the window. You may also try to decrease your system font size in the Appearance preferences before launching BootManager.

Thank you very much for your suggestions; the issue is resolved.
"By a month" Vs. "in a month" What's the difference between these two sentence? He's sure that he'll speak French fluently in a month. Vs. He's sure that he'll speak French fluently by a month. "by a month" is simply not idiomatic at all in English here. For time, one does things or things are done in a month, week, year, day, hour etc. That is the period of time that will elapse. He beat the deadline by a month. There, by a month is used to measure the number of months he won by. Some project has to be handed in in six months. The guy hands the project in in five months. He has beat the deadline by a month. For time that will elapse (go by) we use in: in a month. To measure some amount of time in relation to a set time, we use by: He beat me by ten minutes. They beat us (in the sailing race) by a month. @Perplexed folks We use by to specify a time or deadline, as in by tonight, by tomorrow evening, by the end of the week. @RonalsSole So both can be correct in context. Isn't it? @Perplexedfolks Yes, we use in for a duration (a day, a week, a month, a year) and by for a (rough) point in time (5pm, the end of the month etc) @Perplexedfolks You mean to ask: Both can be correct in context, can't they? Answer: No, by a month can never be right in this context in English. By a month: He beat the deadline by a month. [another meaning] @LucianSava No, that is not right. Our contract can be terminated at a month's notice or with a month's notice. @RonaldSole The OP is repeating the same question again. The answer is no. They are not both idiomatic for what the OP wants to say. The primary difference is that one of those two sentences is grammatical, and the other is not. "He's sure that he'll speak French fluently in a month" is correct. It is equivalent to "He's sure that he'll speak French fluently within a month", or "He's sure that before a month has passed, he'll speak French fluently". 
In this context, "[with]in [some amount of time]" means "before [that much time] has elapsed". "He's sure that he'll speak French fluently by a month" is not grammatical, because in this context "by [something]" means "before [something] has happened". "In" needs a span - a length of time, which might begin now, or at some other point already established by context ("once he starts the course, he's sure that he'll speak French fluently in a month" would mean the month started when the course did, at some point in the future, rather than right now). "By" needs an point in time: "a month" isn't a point, but "next month" could be (in this context it would be taken to mean "the start of next month"). Hence you might say "he's sure that he'll speak French fluently by summer", because "[the start of] summer" is a point. So is "dinner", though expecting someone to gain fluency between now and dinner is probably unreasonable. You can use "by the time [something happens]" in a similar way, so you could say "by the time a month is up", where "the time a month is up" means the point in time at which one complete month has passed. Or you could say "I'm sure she'll be fluent by dinner time" or "I'm sure he'll be fluent by Summer." (which is probably a little more realistic ;) ) @ColleenV Thanks; I was blanking on good events to use as examples but those are much better and I shall edit accordingly. "By" does not "need an event". "By" needs a deadline for doing something i.e. a date, a time, specified or implied. He will be home by dinnertime. "Can you finish this job by 3:00 o'clock?" @Lambie you are correct; "event" is not quite the right word, but I was focusing on distinguishing points in time from spans of time, and when I wrote the answer I felt the phrasing I've used in this comment might be confusing. I don't remember why I thought that, only that I did. @Darael why not to use "a point of time" instead of an "event"? 
@Perplexedfolks I'm not entirely sure what past-me was thinking, but at a guess, I was trying to avoid using the word "time" so as to make the distinction between a point and a span clearer. Clearly it didn't have that effect, though. The last couple of comments came in while I was in bed, but I have now edited my answer to talk about points in time.

"In a month" is correct. It means that it will take one month before he'll speak fluent French. "By" is used to indicate the end point of an event. For example, you can say "He'll speak fluent French by February". February is the deadline by which the activity will be completed.

Thank you for the answer (+1), but I really don't understand the difference between "in" and "by" in these sentences. You explained that "by" indicates the end point of an event. For example, "He'll be there in a month" = he'll be there after one month from now. It is the same as "by a month", isn't it?

@Perplexedfolks It's not the same, because "a month" is a span of time, and "by" (in this context) takes a point in time. Hence you can say "by February" (because "February" is acting as shorthand for "the start of February", which is a point in time), but not "by a month" (because "a month" isn't a point, but a distance).

@Perplexedfolks In terms of a specific point in time and the use of by, you would say by next month. (Note the lack of a.) Further, it would not be natural to say in next month.

@Perplexedfolks I will finish in ten minutes. Same thing. When you are referring to an amount of time, use in: to start in ten minutes, to start in two years. For deadlines, use by: by six o'clock, by 2021.
This is one of the two papers the Salesforce Einstein lab published last week. Both of them require an understanding of MT, NNMT and purely attention-based NNMT. Since this first one is not too difficult to understand, I will just give you some background on NNMT first.

When NNMT was first conceived, the original form started with an Encoder of the text, converting it to what is usually known as a "thought vector". The thought vector is then decoded by the Decoder. In the original setting, both Encoder and Decoder are usually LSTMs.

Then there is the idea of attention. You can think of it as an extra layer on top of the thought vector on the decoder side. Its goal is to decide how much attention you want to pay to the thought vector.

Now of course, people have since played with various architectures for this Enc-Dec structure. The first thing to notice is that such structures usually contain a giant LSTM or CNN, and no one really likes either: LSTMs are hard to parallelize and CNNs can consume a lot of memory. That makes Google's work from the middle of this year, "Attention Is All You Need", a stunning and useful result. What the authors propose is to use only the idea of attention to create a system, which they call the Transformer. There are multiple tricks to get it to work, but perhaps the most important one is "multi-head attention". In a way this is like the concept of channels in a convnet: instead of doing one single attention, we now attend in multiple places, and each head will learn to attend differently. Naturally the method is fast because you can parallelize it, but Google's researchers also found it to be better in BLEU score. That's why top houses are switching to purely attention-based methods these days.

Now finally I can talk about what the Salesforce paper is about.
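As a rough illustration of the multi-head idea described above, here is a minimal numpy sketch. The shapes, names and random projections are my own for illustration; the real Transformer learns these projection matrices and adds an output projection:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: weight values by query-key similarity.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

def multi_head(X, n_heads, rng):
    # Each head gets its own projection, so each can attend differently
    # (here the projections are merely random, not learned).
    d = X.shape[-1]
    d_head = d // n_heads
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, d_head)) for _ in range(3))
        heads.append(attention(X @ Wq, X @ Wk, X @ Wv))
    # The original Transformer simply concatenates the heads;
    # the Weighted Transformer discussed below re-weights them instead.
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))          # 5 tokens, model width 8
out = multi_head(X, n_heads=2, rng=rng)
print(out.shape)                         # (5, 8)
```

Because each head is an independent matrix product, the heads can be computed in parallel, which is exactly the property that makes the architecture faster than an LSTM.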
In the original Google paper, the representations learned by the multiple attention heads are simply concatenated with each other to form one "supervector". The authors of this paper instead decided to use an additional set of learned weightings. This, again, further improves the performance on WMT14 by 0.4 BLEU, which is quite significant.

This is the second of the two papers from Salesforce, "Non-Autoregressive Neural Machine Translation". Unlike the "Weighted Transformer", I don't believe it improves SOTA results, but it introduces a cute idea into purely attention-based NNMT. I would suggest you read my previous post before you read on.

Okay. The key idea introduced in the paper is fertility. This addresses one of the issues of a purely attention-based model as introduced in "Attention Is All You Need": when you are translating, a source word can 1) be expanded to multiple words, or 2) move to a totally different word location. In the older world of statistical machine translation, or what we called the IBM models, the latter is handled by "Model 2", which decides the "absolute alignment" of a source/target language pair. The former is handled by the fertility model, or "Model 3". Of course, in the world of NNMT, these two models were thought to be obsolete: why not just use an RNN in the Encoder/Decoder structure to solve the problem? (By the way, there are five IBM models in total. If you are into SMT, you should probably read up on them.)

But in the world of purely attention-based NNMT, ideas such as absolute alignment and fertility become important again, because you don't have memory within your model. So in the original "Attention Is All You Need" paper, there is already the idea of "positional encoding", which models absolute alignment. The new Salesforce paper actually introduces another layer which brings back fertility, instead of just feeding the output of the encoder directly into the decoder.
The encoder output will first go through a fertility layer, which decides the fertility of each source word: a fertility of 2 means that the word should be copied twice, and 0 means the word shouldn't be copied at all.

I think the cute thing about the paper is two-fold. One is that it is an obvious expansion of the whole idea of attention-based NNMT. The other is that Socher's group is reintroducing classical SMT ideas back into NNMT. The results, though, do not work as well as standard NNMT: as you can see in Table 1, there is still some degradation with the non-autoregressive approach. That's perhaps why, when the Google Research Blog mentioned the Salesforce results, it said "towards non-autoregressive translation", implying that the results are not yet satisfying.
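The fertility mechanic itself is easy to illustrate. This is a toy sketch of the copying step only, with made-up tokens and fertility values, not the paper's learned model:

```python
def apply_fertility(source_tokens, fertilities):
    # Copy each source token `fertility` times; a fertility of 0 drops it.
    out = []
    for tok, f in zip(source_tokens, fertilities):
        out.extend([tok] * f)
    return out

# "not" gets fertility 2 (it might expand to two target words, like
# French "ne ... pas"), while "do" gets fertility 0 and is dropped.
print(apply_fertility(["I", "do", "not", "speak"], [1, 0, 2, 1]))
# ['I', 'not', 'not', 'speak']
```

In the actual paper the fertilities are predicted by the model, and the copied sequence is what the non-autoregressive decoder attends over, so all target positions can be produced in parallel.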
In the old days — we are talking like the 1960s and 1970s — computers were often built for very specific purposes using either discrete logic or "bit slice" chips. Either way, more bits meant more money, so frequently these computers were made with just enough bits to meet a required precision. We don't think that was what was on [Mad Ned's] mind, though, when he decided to implement a 9-bit CPU called QIXOTE-1 on an FPGA.

Like many hobby projects, this one started with an FPGA board in search of a problem. At first, [Ned] had a plan to create a custom computer along with a custom language to then produce a video game. A quick search on the Internet led to that being a common enough project, with one guy that we've talked about here on Hackaday before knocking it out of the park. [Ned] then thought about just doing a no-software video game. Too late to be the first to do that. Not to be deterred, he decided to duplicate the PDP-8. Whoops. That's been done before, too.

Wanting something original, he finally decided on a custom CPU. Since bytes are usually — if not technically — 8 bits, this CPU calls its 9-bit words nonads and uses octal, which maps nicely to three digits per nonad. This first post talks about the story behind the CPU and gives a short overview of its capabilities, but we are waiting for future posts to show more of what's behind the curtain in what [Ned] calls "Holy Nonads, Part 010."

The downside to doing a custom CPU is you have to build your own tools. You can always, of course, duplicate something and steal your toolchain. Or go universal.

18 thoughts on "Tilting At Windmills Nine Bits At A Time"

I once made an FPGA CPU using 18-bit opcodes, but still using a more traditional 32-bit data path. The 9th (or 18th) bit is a parity bit that comes free with the memory, allowing a nice addition to the opcode space.

FPGA is never the answer.
FPGA is the answer for very specific applications, for example if you have a lot of data (as in, many gigabits per second) to move or manipulate with relatively simple operations.

Hey SparkyGSX! Nice to see you here. :-) I was going to say the same, but had a different example in mind. Point remains: for specific cases, it can be very useful. One of the conditions is "low volume", when an ASIC doesn't pay off. Sometimes the reconfigurability is important. And sometimes we need some custom circuitry, and too much of it to do with discrete LSI chips (that's how it started/grew from PALs and GALs). The other use case is interfacing with custom ASICs. Old manufacturing processes are still useful in analog designs (you can easily do GHz stuff in quarter micron), so you do the analog in a custom ASIC and offload the digital stuff to an FPGA. Super common in scientific experiments. Of course, Xilinx also now has chips which integrate ADCs/DACs which blow the cost/power ratios of discrete Gsps ADCs out of the water, so the use case for those is huge.

FPGAs have a few downsides compared to an ASIC, though an FPGA also has a few upsides compared to an ASIC. It depends on what one needs it for. For very low volume applications where power efficiency isn't a concern, FPGAs are actually rather decent, especially if one wants to fiddle with a lot of stuff in parallel, or use higher-speed protocols that one typically won't find elsewhere. Though in a lot of applications the parallel nature of FPGAs can be replaced by a multi-core microcontroller instead, usually at a fraction of the price and with superior power efficiency. But multi-core micros are usually a bit "odd" in their layout, making them at times a bit inept… The high-speed buses FPGAs offer are, though, typically not found on micros or even some SoCs. So there FPGAs can be competitive if one only needs a few hundred/thousand units.
If one however makes enough products, a dedicated ASIC will be more cost effective, if one can live up to a "few" requisites. Usually that one knows what one is going to need out of the chip, and that one doesn't fudge up the design both from a logic standpoint, but also electrically (here FPGAs are pointless for simulation).

(Using an FPGA for logic simulation is this interesting thing that sometimes is super logical, and other times is pointless. Downsides with FPGAs as simulation tools: 1. The design might not fit on the FPGA. 2. Routing out the design and pushing it over to the FPGA takes time; a software simulation has a decent head start. 3. Software simulation can log anything. 4. Software simulation can simulate the final chip on an electrical level if one has the tools.)

Though, we can peer into the auto industry and notice that even there FPGAs are used, despite sales figures in the hundreds of thousands, sometimes millions of units per year. So why aren't they using ASICs, one might ask? And the answer is reconfigurability: if one has made a mistake, one can just fix it with the next firmware update. One doesn't have to manufacture another ASIC and replace it in all units already sold. And this is a huge advantage for FPGAs.

But if power efficiency or peak performance are more important factors, and one is experienced in chip design, then an FPGA sometimes isn't even worth considering (unless one can't get access to a fab in time, which is why some companies book their fab space before designing the chip).

But in the end, FPGAs usually get glorified as this "magical silver bullet" that through some magical means is supposed to have more performance, cost effectiveness and power efficiency than anything else. Even though it needs more transistor resources to be reconfigurable, adding both a higher power demand and larger latency into the logic, while also requiring a bigger chip with potentially lower yield.
(Not to mention that FPGAs also usually are stuffed with extra features one likely never will need.) Though, if all one has access to is a 400nm fab, and the FPGA is made at 22nm, then the FPGA might actually be better in all regards, as long as one doesn't fiddle with purely analog/RF stuff… But for the hobby bench, FPGAs are actually not that bad (if it weren't for the lack of easily acquired development tools for these chips…).

Luckily there's a fully open-source toolchain that can generate bitstreams for some of the lighter Lattice offerings. Of course, the generated bitstreams are not as performant as ones produced by the OEM tools, but those aren't the most performant FPGAs anyway.

The performance gap is closing. The open source tools are really developing. One vendor has already made them the official software for their FPGAs.

I worked on a commercial product with FPGA "glue logic" that actually turned out to be very useful, because we could upgrade the FPGA in the field too, and improve and fix the hardware remotely. Agreed, it's expensive, but sometimes you don't have enough quantity to spin an inflexible ASIC.

We use them with great success in satellite modems. Unit cost is not very important, but the ability to upgrade the code is a big plus. For cost-sensitive products we do use a custom ASIC; as these are typically mass-market things, they would not be upgraded anyway.

"Since bytes are usually — if not technically — 8 bits, this CPU calls its 9-bit words nonads and uses octal which maps nicely to three digits per nonad." is an interesting statement. Though, usually a byte is just the smallest directly addressable unit in memory (parity bits not included, since they aren't technically data). So if the processor's smallest addressable unit is 9 bits, then its byte contains 9 bits. There were also architectures that used 4, 12, 16, 18, etc. bits in their bytes. This is supposedly one of the reasons network speeds are given in bits/second.
Since bytes/second just didn't make any sense. (Though, by the time the internet started becoming "common", I guess there were very very few systems in the world not using 8 bits as their smallest addressable unit. Though, some people might still be running some old 36-bit mainframe in their basement…)

Ahh, the good old days of yore, before the computing world settled on 8-bit bytes. Control Data was still producing CPUs with 60-bit words, each of which could hold ten 6-bit characters (or five 12-bit characters, if you wanted full upper & lower case support). Since those machines were essentially hand-woven from discrete components, every extra bit and every extra word counted towards the price. And the fun instruction set they supported was practically a national security issue. Those CPUs had a built-in instruction to count the number of "1" bits that were set in a register. This was an almost useless operation for the normal business and scientific computing of the era, but was a critical performance enhancement for code breaking.

That's why I said "if not technically", and why network docs call them octets.

"Since bytes are usually — if not technically — 8 bits," Reading it aloud makes it sound like one concludes that a byte often is 8 bits long, and that one suspects it to be 8 bits by definition. It feels like one of these observational statements that adds in an "and that is likely a fact" statement in a way that still makes a clear reservation for potentially being wrong. And the thing that makes me look at it that way is that it continues in the same fashion: "this CPU calls its 9-bit words [not bytes?] nonads and uses octal which maps nicely to three digits per nonad." (Though, bytes and words are more or less interchangeable terms as far as architecture design goes… (I though prefer to always use byte when it refers to the smallest addressable segment of memory.
Though, this becomes interesting when most memory management systems handling cache tend to just work with whole cache lines instead… Sometimes to the point that memory allocation even works based on cache lines instead of bytes.))

If one however had written: "A byte is usually 8 bits, but can technically be any number of bits long. This CPU has however gone with the name nonads instead for its 9-bit bytes. And it can neatly map 3 octals for each nonad." Then it would have been clear that a byte is just a name for a somewhat arbitrary unit in computing. One could add why a byte isn't always 8 bits by just saying that "a byte is by definition the smallest addressable segment of memory; how many bits that is exactly becomes very architecture dependent."

My car's trunk-mounted CD changer apparently uses a 9-bit communication protocol…

Okay, but the real question (and I can't believe I'm the first person to ask this): can it run Eunux?

I believe the old HP41 calculators used 10 (ten) bit instructions. IIRC the code, when released, used octal (1333) while enthusiasts used hexadecimal (244).

In order to handle 4- and 8-bit RGB values, I would probably look at 12- and 24-bit data values. But that's just me.

Honeywell systems had either 8-bit or 9-bit bytes.
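The nonad/octal mapping from the article, and the population-count instruction mentioned in the comments, are both easy to demonstrate. A toy Python sketch (the value 0o527 is an arbitrary example of mine, not from the QIXOTE-1 design):

```python
# A 9-bit "nonad" maps exactly onto three octal digits (3 bits each),
# just as an 8-bit byte maps onto two hexadecimal digits.
nonad = 0o527                  # any value in the range 0..511 fits in 9 bits
assert 0 <= nonad < 2 ** 9

print(oct(nonad))              # 0o527 -- three octal digits, one per 3-bit group

# The CDC-style population count: the number of set bits in a word.
print(bin(nonad).count("1"))   # 6
```

Modern CPUs expose the same operation directly (e.g. x86 `POPCNT`), and Python 3.10+ offers `int.bit_count()` as a built-in equivalent.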
The time sequence of adding posts to a blog gives a simple structure. But for readers of this and similar blogs, this sequential structure is irrelevant: structuring the content of the posts is more important. The structure sketched here is not offered as a comprehensive map of the whole field of software engineering: it is merely the structure of this blog—that is, of the themes and topics on which I intend to post. The content structure is visible in two mechanisms. First, directional links between posts help navigation: ↑ or ↓ to posts offering a larger view or more detail; and ← or → to a closely preceding or succeeding subject respectively. Second, posts are formed into clusters by labels: clicking a label in the sidebar retrieves and displays all posts bearing that label. These labels are names of four general themes and their sixteen constituent topics, listed here with brief comments: each individual post is labelled with one or more of these names. Labels in square brackets—for example [Feynman]—cluster posts citing the same sources.

CPS: Cyber-physical system: a system in which execution of computer software governs the physical world. A CPS is bipartite: two obvious parts are its physical World, and its Machine that executes the software. Data, reified and managed by the Machine, is a physical part of many large systems. The System Behaviour is the key property of a CPS.

World: The physical world of a CPS, comprising: parts—including human participants—whose behaviour is governed—that is, partially controlled—by the Machine; parts in which required consequences and effects of this behaviour are located; and the all-enclosing physical environment in which the system is designed to operate.

Machine: The computing part of a CPS executing the software. In deployment the machine will occupy all or part of many physical computers; but treating it as a single unified machine is a valuable abstraction for developing system behaviour.
Data: The physically realised data of a CPS—for example, in a database—created, maintained and read by its machine, and forming a further part of its world.

Behaviour: The physical behaviour of a CPS, evoked by interaction of the machine and the world. The system behaviour is the fundamental property of a CPS: success is a consequence of desired behaviour, as failure is of undesired behaviour.

SE: Software engineering for cyber-physical systems. SE tasks include designing behaviour in the world to satisfy requirements; developing software; and proving system properties.

Processes: System behaviour is the emergent result of machine and world interaction: the SE task includes design of processes both in the machine and in the governed world.

Models: Developing models specifically, but not only, of the world: sufficiently faithful models are essential to dependable behaviour of a CPS.

Programs: Developing the software to be executed by the machine. In developing behaviour, software architecture should reflect the system behaviour structure. Later, for most systems, software transformation is necessary for efficient deployment and execution in system operation.

Requirements: Requirements are the qualities and properties a system should possess. The most important requirements are desired consequences and effects of the system behaviour: only these requirements are seriously considered in this blog.

HUMAN: Human aspects of software engineering, based on group and personal traits and intellectual powers, in social contexts large and small.

Traits: Personal characteristics, especially those that affect software engineering practice. For example: easier comprehensibility of some descriptive techniques; personal preferences for particular tasks; relegation and undervaluation of less favoured tasks; preferences for greater or lesser reliance on formality.

Practices: Personal and group practices and behaviours exercised within a single development project.
For example: project documentation; development product reviews.

Community: Human factors across multiple organisations and development projects in what are or should be activities of an established community. Normal design is the product of an engineering community.

Profession: Large concerns within engineering more generally. For example: development of specialisations in SE; relationship of SE to established engineering branches; formation of groups by preferred SE practices.

METHOD: Development method: intellectual aspects of development method, including choice of languages, structuring techniques, process structure, and general principles.

Thought: The intellectual activities necessary to software engineering. These include design, description and analysis, and should be guided by coherent principles and disciplines.

Technique: Development techniques such as: behaviour structuring and combination; model structuring and design; descriptive techniques and principles for modelling; program transformation; addressing failure concerns.

Structure: Structure is the basic tool for managing and mastering complexity. Structure is needed for: the development process; bounding and clarifying models; recognising the nature and shape of dependability; specific development tasks at every level.

Semantics: Choosing the subjects and physical semantic content of models; choosing modelling languages; formalising model and behaviour designs; clarity in addressing the bipartite nature of a CPS, avoiding confusion between the world and the machine.
For those that aren't aware, I've been inactive on the forum for a while. I'll hopefully be getting the ball rolling again with some cool stuff. Got to get back into game dev after a long hiatus. Also, keep an eye out for a shop I'm opening in the forums. Soon, no ETA.

After a much-needed mental hiatus, I am back on track with my Zombie Survival Project that I've mentioned in previous entries. The game is set within Eden City, a once beautiful town that has become a zombie-filled hellhole. Now you, the player, must take command of a group of survivors to find hope in these dark times. The player will get to design their own avatar, a basic staple for RPGing. However, I need help deciding the rest of the party...

1. The player gets to pick 3 survivors from a list. That way you can build your starting team according to how you'd like to play the game.
2. The player is assigned 3 survivors randomly. This adds variety for every play-through, since the chances of getting the same team twice are low.
3. The player designs 3 survivors to join them. This combines the concepts of play variety and player choice.
4. The player starts solo and must actively seek out allies throughout the city. Gameplay may be harder as a result.

Note that any survivors I've pre-made that can be recruited each have a personal backstory, regardless of the options chosen above. These are my main ideas I'm looking at implementing, and I'd love to hear some opinions. Please discuss below. Also feel free to ask me questions that may be relevant to this topic!

My dear old zombie game keeps getting postponed. Too many decisions, not enough decisiveness. That project has hit the backburner for a minute until I decide whether I should develop in MV instead of Ace. Meanwhile I've got me an ambitious Ace game in the works. I'm way more driven to make this at the moment, while my zombie-based ideas have come to a halt. All while I'm deciding whether or not I should take a promotion at work. More money, more stress.
Ah, life, you fudging beach.

So in essence, my project is a zombie survival game where you control a randomly determined team of survivors. Some major aspects of the survival gameplay are collecting resources such as food, water and fuel. I've decided I can handle this survival aspect in two ways:

Camping: This mechanic would allow players to set up multiple camps throughout the city. This would require the player to manage resources at each camp, as well as which team members are where and what each camp can accomplish.

Base: This mechanic allows a single home-base area for the player to return to constantly. This lets the player focus on protecting and fortifying their base while also managing survival tasks for their party of survivors.

So I pose the question: if you were in a zombie apocalypse, would you prefer to constantly move from place to place? Or would you rather fortify a single position and use it as a home base? Feel free to discuss your opinions in the comments.

So if you've seen my Sneak Peek entry, you know my project in the works is centered around zombies. It also happens to be using the awesome POP! Horror City style. I'm anxious to share more details with you all, but I'll restrain myself. Please feel free to prod me with any questions. In the meantime, enjoy this massive custom piece of pixel art I've made for my game.

So in my years of being on this forum, I decided to make a blog. Took me long enough. I'll be documenting my RPG Making shenanigans from here on out. I have at least one major project that is slowly coming to fruition. Stay tuned for more.
#!/usr/bin/env bash
# This script executes the selected matrix_vector multiplication implementation
# with a variety of OpenMP configurations to analyse the threading scalability.
# The configurable parameters are:
#   - $v:       matrix_vector implementation
#   - $ps:      problem size (horizontal)
#   - $omp_s:   OpenMP implementation
#   - $omp_sch: OpenMP scheduler
#   - $omp_aff: OpenMP affinity
#   - $smt:     number of SMT threads per core
#   - $cores:   number of cores
# Update the for statements to configure each parameter search space.

output=scalability.txt
run_log=run.out   # per-run driver output, parsed below (renamed from the
                  # literal file "output" to avoid confusion with $output)

echo "Scalability plot" | tee "$output"

#for v in `make versions`; do
for v in nlayersf; do
  for ps in 32; do
    for omp_s in omp-locking colouring colouring2 colouring-rows; do
      for omp_sch in static dynamic guided; do
        for omp_aff in none compact scatter; do
          for smt in 1 2 4; do
            for cores in 1 4 8 `seq 16 16 64`; do
              echo $v $omp_s $smt $cores
              OMP_NUM_THREADS=$(( cores * smt )) \
              KMP_HW_SUBSET=${cores}c,${smt}t \
              KMP_AFFINITY=$omp_aff OMP_SCHEDULE=$omp_sch \
                ./kdriver.avx2.$v -g $ps 16 -t $omp_s > "$run_log"
              t=$(grep "Loop time" "$run_log" | awk '{print $5}')
              check=$(grep "Reduction value" "$run_log" | awk '{print $3}')
              echo $v $ps $omp_s $omp_sch $omp_aff $cores $smt $t $check >> "$output"
            done
          done
        done
      done
    done
  done
done
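Once the sweep finishes, the results file is just whitespace-separated columns in the order written by the final echo of the script. A small Python helper to load them for plotting or tabulating (the function and column names are mine, not part of the original script):

```python
# Column names matching the order of the echo line that appends to
# scalability.txt: version, problem size, OpenMP impl, schedule,
# affinity, cores, SMT, loop time, reduction check value.
COLS = ["v", "ps", "omp_s", "omp_sch", "omp_aff", "cores", "smt", "t", "check"]

def read_results(path="scalability.txt"):
    """Parse the scalability file into a list of dicts, one per run."""
    rows = []
    with open(path) as f:
        next(f)  # skip the "Scalability plot" header line
        for line in f:
            parts = line.split()
            if len(parts) == len(COLS):      # ignore malformed/partial lines
                rows.append(dict(zip(COLS, parts)))
    return rows
```

From there it is straightforward to, say, group rows by `cores` and plot loop time against thread count for each scheduler.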