


Latest Terms in the Stimpunks Glossary


As we go about our work, we expand our glossary, which is currently at 341 terms.

Here are the latest entries.


When we successfully reframe public discourse, we change the way the public sees the world. We change what counts as common sense. Because language activates frames, new language is required for new frames. Thinking differently requires speaking differently.

The ALL NEW Don’t Think of an Elephant!: Know Your Values and Frame the Debate

Often when the radical voice speaks about domination we are speaking to those who dominate. Their presence changes the nature and direction of our words. Language is also a place of struggle. I was just a girl coming slowly into womanhood when I read Adrienne Rich’s words “this is the oppressor’s language, yet I need it to talk to you.” This language that enabled me to attend graduate school, to write a dissertation, to speak at job interviews carries the scent of oppression. Language is also a place of struggle.

Language is also a place of struggle. We are wedded in language, have our being in words. Language is also a place of struggle. Dare I speak to oppressed and oppressor in the same voice? Dare I speak to you in a language that will move beyond the boundaries of domination — a language that will not bind you, fence you in, or hold you. Language is also a place of struggle. The oppressed struggle in language to recover ourselves, to reconcile, to reunite, to renew. Our words are not without meaning, they are an action, resistance. Language is also a place of struggle.

Choosing the Margin as a Space of Radical Openness, bell hooks


#language



Latest Terms in the Stimpunks Glossary for July 2024


As we go about our work, we expand our glossary, which is currently at 360 terms. We added 20 new terms in the past month.

Here are the latest terms:

Previously:

#glossary #language



Latest Terms in the Stimpunks Glossary for August 2024


As we go about our work, we expand our glossary, which is currently at 375 terms. We added 15 new terms in the past month.

Several of these are for our “Systems of Power Learning Pathway”.

Here are the latest terms:

Previously:

#glossary #language



Latest Terms in the Stimpunks Glossary for September 2024


As we go about our work, we expand our glossary, which is currently at 385 terms. We added 10 new terms in the past month.

Here are the latest terms:

Previously:

#glossary



Latest Terms in the Stimpunks Glossary for October 2024


As we go about our work, we expand our glossary, which is currently at 370 terms in English and 405 terms across all languages. We added 6 new terms in the past month.

Here are the latest terms:

Previously:

#glossary



Latest Terms in the Stimpunks Glossary for November 2024


As we go about our work, we expand our glossary, which is currently at 378 terms in English and 413 terms across all languages. We added 8 new terms in the past month.

Latest Terms


Here are the latest terms:


Previously


#glossary




Wine Wayland Merge Request Opened For Clipboard Support





ope, kernel panic :/


I'm on Bazzite and was able to roll back and ~~boot~~ get to the desktop, but I'm not completely sure where to go from here. I think I need to pin my current deployment before doing anything else? Something like `sudo ostree admin pin 0`? (No, it's 1.)

Help is welcome but I've barely begun to troubleshoot and I just installed Discord to ask on there. Looking for sympathy I guess?

Edit: phrasing

EDIT2: Not just me – someone on the uBlue Discourse linked to this post, and other people have posted about it on the Bazzite Discord. Roll back and wait; I'm curious whether you're also on the Nvidia KDE build. I should probably be helpful and open an issue on GitHub. :/

Edit 3: devs are working on it :)

Final edit: the devs released a working update this afternoon, I just updated and it works. I kinda like this atomic distro thing, at no point in time was my computer unusable. 10/10 experience, still would prefer to avoid future kernel panic at the disco, lol.



YouTrack 2025 Roadmap



Our commitment to you


We remain fully committed, as we have been all these years, to developing YouTrack as a platform that evolves with your needs. Our promise to you is to ensure that YouTrack continues to be available in both Server and Cloud versions, giving you the freedom to choose the hosting option that best fits your organization and data governance policies.

Learning from experience as teams choose YouTrack


YouTrack is growing faster than ever, with a double-digit percentage increase in the customer base over the last year. We see adoption increasing in various team functions across small and medium enterprises. Together with our consulting partners, we work closely to help the largest enterprises with thousands of employees migrate their processes to YouTrack.

We know how tough it is to find the right project management tools – so many are out there! Teams often spend months exploring solutions to find the one that fits their needs.

Many teams switch to YouTrack to save costs without losing functionality or to find a server solution suitable for teams of a few to several hundred employees. Others transition from lightweight tools or development-specific issue-tracking platforms to gain more flexibility for growing teams and to help various departments work together. Some are just beginning their journey, moving from email or chat-based coordination to smart solutions like YouTrack that can support their existing business flows.

After adopting YouTrack, many teams have shared invaluable feedback, shaping our immediate priorities for 2025 and our roadmap for the future. Still, we remain flexible with our planning, leaving room to respond to your needs while staying true to our long-term vision and commitment to YouTrack as a flexible and powerful solution.

A bold new design for YouTrack

The big step in 2025


We’ll start 2025 with a bold new look and feel for YouTrack, introducing changes that reflect years of learning and exploration into how teams work on projects.

New navigation panel

The main menu will move from the top to the left side of the screen, accommodating the growing number of features and pages in YouTrack. We aim to make all of YouTrack's key sections more accessible, reducing clicks and making navigation faster for all users. Starting in early 2025, the new navigation panel will be enabled by default for all users. While you will temporarily have the option to switch back to the old one in version 2025.1, we recommend embracing the new interface, as we hope it will become the standard by midyear.

Project hierarchy path for tasks, tickets, and articles

This project-centric hierarchy will provide clickable breadcrumb paths for tasks, tickets, and articles, allowing you to navigate through project-related information effortlessly. For example, if you’re part of a design team contributing to a specific project, you will only ever be a couple of clicks away from any of the tasks and articles related to your design project.

Redesigned Issues page

One of the main differences between YouTrack Lite and Classic has been the layout of the Issues page. This meant that if you had a layout preference, your choice between YouTrack Lite and Classic was restricted. You don’t need to worry about that anymore! The unified Issues page will eliminate the need to choose between Classic and Lite, combining their capabilities into a unified experience. It will include customizable settings, allowing you to tailor your task list to a table or list view with as much or as little detail as you need. We’re also working to improve the way you can search and filter tasks. Once implemented, this redesign will be the default, and there will not be an option to revert it.

Further UX improvements for everyone


Later in 2025 and beyond, we will focus on expanding and enriching the new design with:

  • Revamped project pages for teams to stay connected to their work context.
  • Advanced project-centric navigation to keep you in your project’s context when needed.
  • A My Work page with a personalized view designed to help you focus on your tasks and priorities.
  • Onboarding for new users to guide them more easily through YouTrack’s features and help them be productive from day one.


AI assistance for team members and managers


We’re dedicating resources to delivering AI automation features for team members and their project managers. The free AI assistance that comes out of the box in YouTrack will be enhanced with options that are useful in daily work.

  • Let AI complete content for you. We are improving AI’s ability to suggest text completions, making drafting tasks or updates faster and more intuitive.
  • Task field suggestions from AI. Intelligent suggestions for completing fields in your tasks can streamline work and help you provide critical details effortlessly in no time.
  • Future AI capabilities. Looking ahead, we are working to take AI-based automation of routine work to a new level. We want to make it possible for you to start your day by reviewing the work AI agents have done for you and approving suggestions for moving forward with your projects, either in YouTrack or in other connected systems.


Expanding Helpdesk for internal and B2B support teams


While our Helpdesk solution currently allows standard users to be contributors when they are involved in helping your support teams with customer tickets, our customer feedback has highlighted another important scenario – internal helpdesk projects. In these cases, standard users also need to be able to act as reporters in internal helpdesk or service desk setups. To address this, we will also focus on improving the user interface for tickets submitted by internal reporters, improving email notifications for internal reporters, ensuring proper visibility and comment settings for users, and resolving other issues related to ticket visibility and comments for such a scenario.

Our long-term goal with the Helpdesk solution is to introduce advanced capabilities for separating and managing client organizations within it. This will enable B2B support teams to provide tailored experiences for different clients.

More flexibility for project managers

Planning canvas


We’re working on introducing a planning canvas to make project planning more visual and collaborative. This feature will allow teams to start working using a whiteboard-style interface where ideas can be drafted and transformed into actionable tasks with just a few clicks. Teams will also be able to move existing tasks to the canvas for better visualization and interactive adjustments.

Customized naming for tasks


Would you like to call your tasks something other than “issues”? You may want to call them “documents” for legal projects, “purchase orders” for finance ones, or “jobs” or “employees” for HR and recruitment workflows. We’re continuing to work on a feature that will give you the flexibility to tailor the naming and structure of your entries when starting a specific project.

More apps for your teams


YouTrack apps already allow you to add significant functionality, including entire new pages. For example, the Diagram Editor app lets teams create and manage visual diagrams directly within YouTrack.

We are committed to further supporting our partners and customers in building apps. If you have an idea for a new app or feature, feel free to contact us – we’d love to collaborate and discuss how we can help.

Extended enterprise features


In relation to YouTrack Server, for our large enterprise customers, we plan to introduce more tools to help you monitor database performance and manage workloads effectively. These updates will help teams ensure stability and optimize resource usage as they scale.

We’ll prioritize Docker images as the primary method for installing and upgrading server instances, with .zip distribution support ending in early 2025. As containerized deployments offering secure and isolated environments become the IT standard, we aim to align with industry best practices by focusing exclusively on containerized solutions.

For YouTrack Cloud, we’re committed to providing a guaranteed cloud service uptime of 99.99% and confirming this in our terms of service.

We’re also expanding single sign-on (SSO) support to include automated user provisioning and deprovisioning. Okta and Entra ID integration improvements are coming in early 2025, with additional updates to follow. To further enhance user management for enterprise teams, we’re introducing:

  • SCIM 2.0 protocol support to further enhance user management capabilities.
  • OIDC protocol support to make it possible to sign in with even more identity providers.


Let’s shape the future of YouTrack together


We’d love to hear from you! Your feedback shapes YouTrack’s future, and we’re always open to ideas, suggestions, and insights. Whether you want to share a feature request, an improvement suggestion, or just your thoughts, get in touch with us by commenting on this blog or using our public project tracker.

Thank you for being a part of the YouTrack community. Together, we’re building a more powerful YouTrack for 2025 and beyond.

Your YouTrack team,
Celebrating 15 years with you!




Top Java Conferences and Events in 2025



Planning your 2025 tech calendar? Java conferences offer more than just technical sessions – they’re your gateway to connecting with a vibrant community of professionals and passionate Java fans, exploring new cities, and finding fresh inspiration.

As the IDE of choice for professional development in Java and Kotlin, IntelliJ IDEA is built to support developers at every stage of their journey. That’s why our team will be at these events – whether at a booth where you can meet us or through expert talks sharing insights from the cutting edge of development.

We’ve put together this guide to help you choose the events that best match your interests and goals.

Java conferences and events to attend in 2025

Devnexus


📍Atlanta, Georgia, USA

📅 March 4–6

💡JetBrains booth: +

Devnexus is the longest-running and largest Java ecosystem conference globally, bringing together developers, architects, and tech enthusiasts to share knowledge and explore the latest advancements in the Java world.

Devnexus 2025 is packed with three days of workshops, sessions, and full-day training seminars to boost your skills. With 14 tracks, over 160 expert speakers, and a celebration of Java’s 30th anniversary, it’s the ultimate event for anyone passionate about Java and development.

JavaOne


📍Redwood Shores, California, USA

📅March 18–20

Since 1996, JavaOne has been a popular gathering spot for developers worldwide to connect, learn, and celebrate everything Java. Designed by developers for developers, this one-of-a-kind annual event is hosted by Oracle’s Java organization.

JavaOne 2025 will celebrate two big milestones: the launch of Java 24 and Java’s 30th anniversary. You don’t want to miss those! Learn from expert-led sessions and keynotes, meet the stars of the Java community, and pitch your ideas at the JavaOne Unconference.

Devoxx


Devoxx is a series of global community-driven conferences organized by developers who understand what other developers truly need. With events in Belgium, France, the UK, Poland, and more, Devoxx keeps its global appeal while reflecting the unique culture and tech trends of its region.

Here are the must-visit events for 2025 from the Devoxx series:

  • Devoxx Belgium

📍Antwerp, Belgium

📅October 6-10

💡JetBrains booth: +

  • Devoxx France

📍Paris, France

📅April 16-18

💡JetBrains booth: +

  • Devoxx Poland

📍Krakow, Poland

📅June 11-13

  • Devoxx UK

📍London, United Kingdom

📅May 7-9

JavaLand


📍Nürburgring, Germany

📅 April 1-3

JavaLand features 148 presentations carefully selected from nearly 470 submissions, ensuring a diverse, high-quality program. The event also includes a dedicated training day on April 3, hands-on workshops, and countless opportunities to engage with industry experts and fellow developers.

The conference also offers the NextGen program, which allows students and trainees to attend JavaLand 2025 for free.

Spring I/O


📍Barcelona, Spain

📅May 22-23

💡JetBrains booth: +

Spring I/O is a popular conference for the Spring Framework ecosystem, hosting over 1,200 guests annually. You’ll start with a full day of workshops and then enjoy two days of expert-led sessions on Spring Framework updates, microservices architecture, reactive programming, cloud-native development, Kubernetes integration, and more.

While in Barcelona, attendees can also explore iconic landmarks such as the Sagrada Família, Park Güell, and the Gothic Quarter. Don’t miss this chance to enhance your skills and stay ahead in Spring technologies! See you there!

SpringOne


📍The USA

📅 August

💡JetBrains booth: +

SpringOne is the premier Spring conference, with over 2,500 attendees. It offers both virtual and in-person attendance options. Attendees have the opportunity to engage with the latest Spring innovations and technical content presented by community members and the Spring development team.

While this year’s event hasn’t been announced yet, you can check out the session recordings from last year’s event on this YouTube channel to get a general idea of what talks to expect. For the most current information, please visit the official SpringOne website.

JavaZone


📍Lillestrøm, Norway

📅September 3-4

JavaZone, one of Europe’s largest Java conferences, is gearing up for its 24th edition in 2025. Organized by javaBin, the Norwegian Java User Group, this annual event gathers over 3,000 developers, architects, and tech enthusiasts. The 2025 speaker lineup is pending, but past editions have showcased international and local experts talking passionately about all things Java.

J-Fall


📍The Netherlands

📅November

Though the exact dates and venue are not announced yet, get ready to team up with 1,800+ Java people from the Dutch Java User Group (NLJUG) in November.

You’ll hear from amazing speakers, including Java Champions, and you’ll get a chance to participate in hands-on workshops and deep-dive sessions. It’s more than just talks – J-Fall celebrates everything Java, with the NLJUG Innovation Award highlighting the most groundbreaking Java project of the year. Plus, explore the buzzing market floor where companies showcase their Java-powered innovations.

J-Fall is exclusively for NLJUG members and always sells out fast – last year, it was sold out in just an hour! Want in? Once the sales are open, secure your spot!

Several speakers from JetBrains will be there, so feel free to join their sessions and network.

See you there!


We hope this guide helps you plan a year of great networking and new knowledge. Our team will be happy to meet you!




Faster Debugging for Massive C++ Projects in Rider



If you work on large Unreal Engine projects in Rider, you’ve likely experienced the dreaded debug-step delay – hit F10 for Step Over, contemplate making coffee, then finally see the next line execute.

These delays occur in LLDB Debugger, the engine behind Rider’s debugging capabilities for C++ and other native languages. After months of focused optimization work, we’ve made significant improvements to our custom version of LLDB on Windows. As a result, Step Over has gotten a major speed boost – up to 50 times faster in certain cases. These improvements will debut in Rider 2025.1 and the EAP builds leading up to it.

Download Rider 2025.1 EAP

In this blog post, we’ll tell you about the game-changing optimizations that got us there.

Creating a benchmarking environment


Our initial challenge was to reproduce the slow stepping behavior users report. Standard Unreal Engine demos weren’t showing the full extent of the problem – the difference between a 10ms and 100ms step isn’t immediately obvious. The real issues emerge in massive projects with gigabytes of debug symbols.

To properly assess the situation and measure our improvements, we generated an extreme test case – a C++ project with a 1 GB binary and 8 GB of debug symbols to amplify the stepping delays our users might encounter.

A look under the hood


To better understand our approach to tackling the performance issues, let’s look at how stepping actually works in Rider.

LLDB Debugger is part of the LLVM compiler infrastructure project, designed specifically for C++ and other native languages. As part of the LLVM project, it provides low-level debugging capabilities essential for complex codebases. Rider maintains its own customized version of LLDB to ensure an optimal debugging experience for C++ projects, particularly with Unreal Engine development.

When you press F10, the IDE sends a stepping request to LLDB, which must:

  • Find the next stop location (the instruction address).
  • Place a software breakpoint there and resume the debugging process.
  • After stopping at the instruction address, check if the step is really finished. If not, try again.
  • Once the step is finished, restore the call stacks.
  • Report to the IDE that the step has finished.
  • The IDE requests all call stacks for all threads.
  • The IDE requests the visualization of local variables for the current frame from LLDB, which triggers dozens of expression evaluations in LLDB.

This process involves resolving symbols by addresses and names, as well as examining assembly instructions. The larger the debugging program and the more debug symbols it has, the slower these operations become.

Here are some specifics on the optimizations we implemented to address this.

Our optimization strategy

Negative caching


During stepping, LLDB makes numerous requests to debug symbols (PDB), most commonly to use addresses to resolve symbols. For example, when LLDB restores the call stack, it resolves every frame address using debug symbols from the PDB. Very often, the root (the bottom) of the call stack consists of functions like invoke_main(), __scrt_common_main_seh(), __scrt_common_main(), and mainCRTStartup(). These functions are compiled into the user application, but the compiler doesn’t provide debug symbols for them. As a result, LLDB cannot find any debug information for these addresses.

Although LLDB was already caching successful lookups internally, we discovered that it was ignoring failed lookups, which turned out to be surprisingly expensive operations. Our implementation of caching for these failed results gave us an immediate performance boost.
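The idea can be sketched in a few lines of C++. This is a hypothetical illustration rather than Rider's actual code – `SymbolCache`, `Resolve`, and `SlowPdbLookup` are invented names – but it shows the key point: a miss (`std::nullopt`) is stored alongside hits, so repeated lookups of symbol-less addresses like the CRT startup frames never touch the PDB again.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Sketch: cache both hits and misses of an expensive address->symbol lookup.
// A cached std::nullopt means "we already know the PDB has nothing for this
// address", so the slow path is skipped on every subsequent request.
class SymbolCache {
public:
    std::optional<std::string> Resolve(uint64_t addr) {
        auto it = cache_.find(addr);
        if (it != cache_.end())
            return it->second;             // cached hit OR remembered miss
        std::optional<std::string> result = SlowPdbLookup(addr);
        cache_.emplace(addr, result);      // negative results are cached too
        return result;
    }
    int slow_lookups = 0;                  // instrumentation for this sketch

private:
    std::optional<std::string> SlowPdbLookup(uint64_t addr) {
        ++slow_lookups;
        // Stand-in for a real PDB query; startup frames resolve to nothing.
        if (addr == 0x1000) return "main";
        return std::nullopt;
    }
    std::unordered_map<uint64_t, std::optional<std::string>> cache_;
};
```

With this in place, resolving the same symbol-less frame address twice performs only one slow lookup.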

Looking beyond the mutex


Because LLDB is multithreaded, certain operations require mutex locks to ensure thread synchronization. For one, access to debug symbols is mutex-protected, which means we need to be extra cautious with these blocking operations in performance-heavy scenarios. Any slow operation with debug symbols inherently affects the entire stepping process. One such bottleneck turned out to be searching for template function symbols by name.

Why do we need to search for template functions at all? After each step, Rider asks LLDB to show you what’s in each variable in the current frame. To do this, LLDB uses Natvis, which might need dozens of expression evaluations for just one variable. Each evaluation means LLDB has to resolve names in the debug symbols, including trying to match type names against function names.

The core problem lies in how MSVC’s PDB files store template function names. Take test<const char *, int> as an example – LLDB won’t find it because MSVC expects test<char const *,int> (notice how const moved and the spaces disappeared). Previously, Rider’s LLDB would search for test<*> to try to find all possible matches. This is a seemingly simple solution, until you consider how many variations of std::unique_ptr<*> exist in a real project. This wildcard search became painfully slow with thousands of template instantiations.

We solved this by transforming the template name into MSVC’s exact format before searching. While LLDB still adds wildcards, the more specific name pattern dramatically reduces the search scope. This optimization not only significantly reduces template function search times but also fixes incorrect cast expression parsing.
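A toy version of that transformation might look like the following. This is a deliberately simplified sketch with invented function names – it ignores nested templates whose arguments contain commas, `const` written after the type, and many other cases a real normalizer must handle – but it reproduces the example above: leading `const` moves behind the base type and spaces after commas are dropped.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Move a leading "const" behind the base type token: "const char *" -> "char const *".
std::string NormalizeTemplateArg(std::string arg) {
    const std::string kConst = "const ";
    if (arg.rfind(kConst, 0) == 0) {
        arg.erase(0, kConst.size());            // "char *"
        size_t cut = arg.find_first_of(" *");   // end of the base type token
        if (cut == std::string::npos) cut = arg.size();
        arg.insert(cut, " const");              // "char const *"
    }
    return arg;
}

// Rewrite "name<args>" toward the MSVC PDB spelling of its arguments.
std::string NormalizeTemplateName(const std::string& name) {
    size_t lt = name.find('<');
    size_t gt = name.rfind('>');
    if (lt == std::string::npos || gt == std::string::npos) return name;
    std::string out = name.substr(0, lt + 1);
    std::stringstream args(name.substr(lt + 1, gt - lt - 1));
    std::string arg;
    bool first = true;
    while (std::getline(args, arg, ',')) {      // naive split; breaks on nesting
        while (!arg.empty() && arg.front() == ' ') arg.erase(arg.begin());
        if (!first) out += ',';                 // MSVC uses no space after ','
        out += NormalizeTemplateArg(arg);
        first = false;
    }
    out += '>';
    return out;
}
```

Running it on the article's example turns `test<const char *, int>` into `test<char const *,int>`, the form the PDB actually stores.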

Next line, please!


Rider uses custom scripted thread plans rather than native LLDB stepping, which has Windows-specific issues. But how exactly does stepping work? When you invoke Step Over, you’re telling the debugger to advance to the next line – but finding that “next line” is trickier than it sounds.

The thread plan scans instructions starting from the program counter (RIP in x86_64) until it finds an instruction on a different line or hits a branch instruction (like jmp, call, ret) – even if that branch is on the same line. To actually move to the chosen instruction, LLDB uses software breakpoints, overwriting the target instruction with a breakpoint instruction (int3 in x86_64). After hitting the breakpoint and stopping, LLDB restores the original instruction and checks if this is where we really want to stop.

Here’s where it gets interesting: sometimes that “next line” lands us inside an inlined function. While that’s fine for stepping in, it’s not what we want for stepping over. Previously, when the thread plan detected an inlined function, it had to repeat the whole process – find another instruction, set a breakpoint, and try again. In optimized builds with heavy inlining (very common in STL and Unreal Engine code), this could happen hundreds of times per step! Each round requires at least two memory writes, making it an expensive operation.

Our fix? We now check instructions in advance and simply skip them if they’re in an inlined function.

Is it a call?


Another key optimization focused on how thread plans identify call instructions. While LLDB’s SB API can check if an instruction is a branch, it lacks a direct method to identify calls specifically. This forced thread plans to check instruction mnemonics using the available method, which proved costly – resolving mnemonics requires parsing the complete instruction including operands and comments.

To solve this issue, we added a new method to the SB API that checks for calls by examining the instruction bytecode directly, bypassing the overhead of full instruction parsing.
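As a rough illustration of what a bytecode-level check can look like on x86-64 – this is not the actual SB API implementation, and a real decoder must handle the full prefix set and more encodings – the two common call encodings are `E8` (CALL rel32) and `FF /2` (indirect near CALL, where the ModRM reg field equals 2):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simplified x86-64 check: is this instruction a call?
// Handles CALL rel32 (0xE8) and CALL r/m64 (0xFF /2), skipping an
// optional REX prefix (0x40-0x4F). No full instruction parsing needed.
bool IsCallInstruction(const uint8_t* bytes, size_t len) {
    size_t i = 0;
    if (i < len && (bytes[i] & 0xF0) == 0x40)     // optional REX prefix
        ++i;
    if (i >= len) return false;
    if (bytes[i] == 0xE8)                         // CALL rel32
        return true;
    if (bytes[i] == 0xFF && i + 1 < len) {
        uint8_t reg = (bytes[i + 1] >> 3) & 0x7;  // ModRM.reg field
        return reg == 2;                          // /2 = near indirect CALL
    }
    return false;
}
```

Looking at one or two opcode bytes this way avoids resolving the mnemonic, which requires parsing the complete instruction including operands and comments.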

Measuring the improvements


We compared stepping performance between Rider 2024.3 and 2025.1 across several scenarios, measuring the time between sending a stepping request to LLDB and receiving the first frame stack:

Large C++ project built without optimizations:

[Bar chart: dramatic improvement in stepping time for large C++ projects built without optimizations]

Same project built with optimizations enabled:

[Bar chart: dramatic improvement in stepping time with optimizations enabled]

Same project built with optimizations, in specific scenarios where stepping operations previously caused 23-second delays:

[Bar chart: dramatic improvement in stepping time for scenarios that used to cause 23-second delays]

Real-world test using a sample game project built with Unreal Engine (320 MB binary, 2.5 GB debug symbols):

[Bar chart: dramatic improvement in stepping speed for game projects built with Unreal Engine]
The improvements deliver up to 50x faster stepping times, with most operations now completing in under 100ms. While these extreme test cases may not reflect every project, developers working with large C++ codebases, particularly Unreal Engine projects, should notice significantly smoother debugging sessions.

We need your help


These debugging improvements will be available in Rider 2025.1, as well as the EAP builds leading up to its release. As we continue to work on the performance of the debugger, we’ll be looking out for edge cases that might need additional optimization. Here’s where your input is invaluable!

We’ve put together a self-profiler for you to use whenever you’re dealing with slow stepping through C++ code. The Profile Native Debugger Process action can be invoked via Search Everywhere or from the Help menu.
[Screenshot: the Search Everywhere pop-up with "Profile Debugger" entered into the search]
When you trigger this action, the IDE will ask you to grant administrative privileges for profiling. Once you’re done profiling the problematic actions of the debugger, you’ll get a notification with a link to the resulting snapshot. This snapshot can then be shared with us on YouTrack and Zendesk and may be a crucial asset in future investigations. We appreciate your help in perfecting the debugger with us.

Download Rider 2025.1 EAP

Special thanks to:


  • Mikhail Zakharov for developing the self-profiler.
  • Aleksei Gusarov for thorough code reviews.





Global Developer Population Reaches 19.6 Million in 2024: Explore the Updates in Our Data Playground



A year ago, we launched the Developer Ecosystem: Data Playground, an interactive dashboard offering insights into the global developer landscape. Today, we’re excited to introduce a significant update that enhances the dashboard with refined data, salary ranges, and even more granular insights into developer demographics and market trends.

What’s New?


To make it easier to explore and analyze different aspects of the developer landscape, we have now separated the IT Salary Calculator from the Data Playground dashboard. This will allow users to focus specifically on salary insights or dive deeper into developer population estimates without overlap:

  • The IT Salary Calculator now stands on its own, allowing users to focus exclusively on salary trends. Our calculator provides clear salary ranges tailored by country, programming language, and total experience level.
  • The Data Playground has been refined with a new methodology and 2024 data for deeper insights. This dashboard offers an in-depth view of global and regional developer populations, distributions across age groups and experience levels, and the prevalence of various programming languages, OSs, technologies, and more.
“This separation ensures a more streamlined experience and makes it easier to find the exact insights you need.”



Irina Chichikova
Frontend Developer

How Many Developers Are There in the World? 2024 Update


Using our 2023 methodology, we estimated approximately 13.4 million professional developers worldwide. For 2024, our revised model puts the number closer to 19.6 million professional developers globally — a major increase driven by key updates to our methodology. To ensure accuracy, we have also revised historical estimates, aligning past data with our refined model.


Why Did the Numbers Change?


  1. Improved Data Sources

We’ve integrated new data from international labor organizations and the latest data from our 2024 Developer Ecosystem Survey, ensuring our estimates reflect the most up-to-date trends.

  2. Refined Methodology

Our model now better accounts for regional specifics and countries where developer growth has been accelerating. The five fastest-growing developer populations between 2019 and 2024 are:

  • India: 1.3M → 2.6M (+1.3M, +14% YoY)

  • United States: 1.7M → 2.9M (+1.2M, +11% YoY)

  • China: 3M → 3.9M (+0.9M, +5% YoY)

  • Brazil: 0.4M → 0.6M (+0.2M, +9% YoY)

  • Japan: 1.1M → 1.3M (+0.2M, +4% YoY)
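The quoted YoY rates are compound annual growth figures over the 2019–2024 span. As a quick sanity check (a sketch using only the rounded endpoint figures above, not the survey's actual estimation model), the rate can be recomputed like this:

```python
def yoy_growth(start_millions: float, end_millions: float, years: int) -> float:
    """Compound annual (year-over-year) growth rate from two endpoint headcounts."""
    return (end_millions / start_millions) ** (1 / years) - 1

# India, 2019 -> 2024, using the rounded figures from the post
rate = yoy_growth(1.3, 2.6, 5)
print(f"{rate:.1%}")  # prints "14.9%" -- consistent with the quoted +14% given endpoint rounding
```

The same check on the United States figures (1.7M → 2.9M) gives roughly 11%, matching the quoted rate.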

  • India’s developer workforce is growing 14% year over year, driven by the country’s booming IT sector.
  • The United States and China continue to dominate in total numbers of professional developers.
  3. Expanded Definition of Professional Developers

While we continue to focus on professional developers, we’ve refined our categories to include roles like Software Quality Assurance Analysts, Testers, and Data Scientists. These professions were previously grouped into broader categories that also included many non-coding specialists. With these roles now explicitly categorized in labor statistics, our estimates provide a more accurate picture of the developer workforce.

“Our Data Science team worked on updating the model to incorporate fresh labor statistics and better reflect the rapid growth of the ICT sector. We also focused on including more coding professions in estimation, such as Software Quality Assurance Analysts and Data Scientists, to ensure our estimates align more closely with the realities of the evolving developer landscape.”



Vasiliy Kaminskiy
Data Science Team Lead

“The global developer workforce is expanding faster than ever. Whether you’re hiring, planning a product launch, or exploring new markets, our updated Developer Ecosystem: Data Playground offers the insights you need to stay ahead.”



Nadia Lokot
Product Manager

Start Exploring Now

IT Salary Calculator: Now with Ranges


The IT Salary Calculator has been significantly enhanced to provide users with more accurate and actionable insights. The tool now offers salary ranges segmented by country, programming language, and total experience level, making it invaluable for job seekers, employers, and industry analysts.

“The salary data still comes from the Developer Ecosystem Survey but is now more precise. This year, respondents provided exact incomes (e.g., $4,200 monthly) instead of selecting from ranges (e.g., $4,000-$5,000). Additionally, we now display gross annual income – before taxes and including bonuses – for better comparability with other salary sources.”



Mikhail Tribunskiy
Data Scientist

Check salary distribution for your country and programming language with real data.

Try the IT Salary Calculator

Let us know your feedback as we continue to improve and expand this resource!

Go to Source



IntelliJ IDEA 2025.1 EAP 3: Kotlin K2 Mode Updates, Enhanced Logical Code Structure View, and More


The IntelliJ IDEA 2025.1 Early Access Program is in full swing, and build #3 is now available!

You can download this version from our website, update directly from within the IDE, use the free Toolbox App, or install it via snap packages for Ubuntu.

Download IntelliJ IDEA 2025.1 EAP 3

We’re covering all of the notable updates introduced in this Early Access Program in our dedicated 2025.1 EAP blog section. Below are the highlights from this week’s release.

Kotlin

Java-to-Kotlin auto-conversion with copy-paste in K2 mode


We are getting closer to reaching feature parity between K2 and K1 modes. K1 mode’s auto-conversion for Java code in Kotlin files is a popular feature that makes it easier to cross the barrier between the two languages. Now, in K2 mode, you can also paste Java code and have it automatically translated to Kotlin. If you want to add a Kotlin file to your Java project, Kotlin will also be configured automatically.

Frameworks and technologies

Support for Liquibase in the Logical code structure view


We’ve improved the Logical code structure view introduced in IntelliJ IDEA 2024.3, expanding it to support additional file types. With IntelliJ IDEA 2025.1 EAP 3, you can now enjoy a more meaningful structure representation and streamlined navigation tailored specifically for Liquibase change sets. Easily explore and manage your change sets with an intuitive overview that highlights their logical hierarchy, helping you stay organized and productive when working on database schema changes.

Code completion for nonexistent Spring Data repositories


IntelliJ IDEA now helps you write code with even fewer distractions by automatically creating Spring Data repositories for you. Simply start typing the entity name, and if the repository doesn’t exist, the IDE will suggest creating one. Choose the repository type and seamlessly continue your work by adding derived query methods and processing the extracted data.

These are this week’s most noteworthy updates. For the complete list of changes, check out the release notes.

We’d love to hear your thoughts – share your feedback in the comments below or on X, and report any bugs through our issue tracker.

Stay tuned for more news coming next week, and happy developing!

Go to Source



Python Developer Advocate – Will Vincent


Hi, I’m a new Python Developer Advocate, focusing especially on web development. If you have any questions or want to share ideas around JetBrains products, most notably PyCharm, say hi at william.vincent@jetbrains.com.

Background

I’m a former Django Board Member and the author of three books on Django. Since 2019, I’ve co-hosted the Django Chat podcast and co-written the weekly Django News newsletter while also maintaining several open-source projects, including the awesome-django and Lithium starter project repos. These days most of my new content is found on LearnDjango.com, which has a growing list of free tutorials and premium courses.

I love learning and teaching, so if you attend a Python conference, you might see me give a talk. Before focusing on web development in Python, I worked at multiple early-stage startups, including Quizlet, and taught a course on Web Development at Williams College.

What I’ll Be Doing

I’ll be attending a number of conferences this year, including DjangoCon Europe, DjangoCon US, PyCon US, EuroPython, and PyTorch. Come say hi if you see me there.

I’m working on several open tickets around core Django as well as its official documentation.

And I’m excited to learn more about data science this year, exploring ways that it can (or should) overlap with web development, so be on the lookout for future videos, blog posts, and more from me in this area.

What Interests/Excites Me About the Role

I’ve worked with JetBrains for years as a Django Board Member. Their annual fundraiser is the single biggest contributor to the Django Software Foundation Budget. And the annual Django Developers Survey, which I started up, is now run in conjunction with JetBrains, who helps with the formatting and data analysis. In short, JetBrains is a long-time partner in the open-source community and continues to be.

As a Developer Advocate, I will continue to teach what I learn about Python, web development, and data science to the community. Now is a particularly exciting moment as new AI features make their way into text editors and IDEs. There is a lot to figure out around how developers can properly benefit from these tools.

Fun Facts

  • I met my wife at the University of St. Andrews, where I lived in the same dorm as Prince William and Kate.
  • I was a book editor before switching into tech.
  • I’m a longtime Liverpool FC supporter.

Connect


Go to Source



JetBrains and LinkedIn Partner to Launch a Professional Certificate


JetBrains has partnered with LinkedIn to offer the Java Foundations Professional Certificate, exclusively available on LinkedIn Learning.

With over 1 billion members on the platform and 7 people hired every minute, LinkedIn provides unmatched access to professional growth opportunities. Together with our expertise in software development and education, we’re combining the best of both worlds: expert-led credentials and career-oriented networking.

START LEARNING

What’s a professional certificate?


LinkedIn Learning offers professional certificates from industry leaders, including Microsoft, Atlassian, GitHub, Adobe, and now JetBrains. Professional certificates make it easy to complete courses, take assessments, and share your credentials without leaving LinkedIn – great for making your profile stand out to recruiters.

With over 20 years of expertise in software development and the trust of 11.4 million developers, we can ensure you’re learning from the best. The Java Foundations Professional Certificate enhances your LinkedIn profile, validates your Java skills, and gives you real-world experience. You’ll work in IntelliJ IDEA, the industry’s leading IDE for Java development, gaining practical knowledge essential for your career.

Why Java?


With 78% of Java developers choosing IntelliJ IDEA and over a million learners on JetBrains Academy, the Java Foundations Professional Certificate on LinkedIn Learning was a natural progression for us. But in a world buzzing with AI, why is learning Java still important?

Java provides a strong foundation in programming concepts. Learning Java makes it easier to dive into specialized fields like AI later. Our Computer Science Learning Curve Survey 2024, conducted among more than 23,000 learners, shows that most of them begin their journey with foundational languages like Java.

Java remains one of the most widely used programming languages. It powers industries from mobile development to large-scale enterprise solutions. Its demand continues to grow, offering stability and opportunities in your career path.

What you’ll learn


The Java Foundations Professional Certificate consists of five courses that are designed to take you from zero experience to proficiency in Java. By the end of the series, you’ll have gained the skills required to apply for junior developer positions right on LinkedIn.

Syntax and Structure

Start your journey by learning how to install Java and IntelliJ IDEA and work with variables, data types, and key language components. You’ll also practice controlling program flow with conditional logic and loops, as well as using Java collections.

Objects and APIs

In this course, you’ll go through the fundamentals of objects and APIs in Java. You’ll learn about inheritance, polymorphism, abstraction, interfaces, and data structures, in addition to getting tips on handling exceptions, resolving errors, and preventing memory leaks.

Object-Oriented Programming (OOP)

This course covers the fundamentals of OOP in Java to help you write secure, scalable, and maintainable code. You’ll find out how object-oriented principles are built into the language with classes, instances, and constructors to represent real-world objects.

Data Structures

Data structures are crucial for nearly all applications. This course covers essential data structures and their uses, as well as considerations like speed and performance. You’ll learn about arrays, their benefits and drawbacks, and how to use them effectively. The course also delves into Java Collections, emphasizing the Collection interface, which underlies most Java data structures.

Java Database Connectivity (JDBC) API

Developers building mobile, web, or desktop apps often need to integrate relational databases. In this course, you’ll be introduced to the JDBC API and learn how to use it to manage data from databases like Postgres, Oracle, MySQL, and SQL Server in Java applications.

We can’t wait to see you getting started with the Java Foundations Professional Certificate and taking your first steps on the path to becoming a developer. Make sure you let us know how you get on, and don’t forget to share your certificate and tag us on LinkedIn!

The JetBrains team

Go to Source



9 Tips for Productive Java Development With Databases in IntelliJ IDEA


In this article, we’ll share nine time-saving ways IntelliJ IDEA can boost your productivity when developing Java applications with databases – whether you’re starting a new project or diving into an ongoing one.

Get IntelliJ IDEA Ultimate

Create data sources automatically from properties


IntelliJ IDEA makes it easy to create a data source for your Spring project right from the application.properties file – simply open it and click on a gutter icon next to the properties.

In the opened Data Sources and Drivers dialog, you’ll see a data source already assigned and the database-related fields prefilled – all you need to do is to test connectivity (just in case) and click OK. The data source will be created for you.

Test Spring Data JPA query methods without running the application


IntelliJ IDEA simplifies the verification of Spring Data JPA query methods! It provides autocompletion for query method names and lets you check the generated queries without running the application. Just click the dedicated gutter icon to execute repository methods directly in the JPQL console.

Review database schemas as diagrams


Database diagrams are great for quickly grasping the structure of databases and understanding the relationships between their various objects. IntelliJ IDEA can create detailed diagrams for data sources, schemas, or tables to help you analyze the data structure more effectively. To generate a diagram, right-click a database object in the Database tool window and select Diagrams | Show Diagram.

You can also assign colors to diagram objects to further enhance the way you interact with and comprehend your database structure.

Review query results right in the editor


IntelliJ IDEA provides a compact way to review query results right in the editor. To enable it, click the In-Editor Results button in the query console before running your query. This is especially useful for working with smaller datasets or data samples.

Modify query data in the results set view


When you need to make changes to cell values in IntelliJ IDEA, you don’t have to write and re-run queries! Simply click on a cell value that you want to edit, enter the new value, then click the Submit button (⬆) or ⌘↩/Ctrl+Enter to push changes to the database.

View query results as charts


Charts provide a powerful and user-friendly way to quickly gain actionable insights from your query results. This feature is particularly useful when analyzing large datasets, looking for patterns, or presenting trends in an easily comprehensible format.

To open chart settings, click the Switch to Chart icon on the data editor toolbar. You can choose from a wide range of chart types, including bar charts, pie charts, area charts, line charts, and more, depending on what best suits your needs.

When you need to present your findings or keep snapshots of data dynamics, you can export charts in .png format. To save a chart snapshot, simply click the Export to PNG button in Series Settings.

Profile your query with an execution plan


You can also visualize execution plans for queries, illustrating the set of steps that were used to access data in a database and the cost of each step – in other words, how long it takes to run the statement.

To open the execution plan, right-click an SQL statement, select Explain Plan | Explain Plan, and then click on the Show Diagram icon.

Use DB migration libraries to update application databases


Database schemas evolve over time as business requirements change, and database schema updates and migration can be tricky and error-prone when done manually. Instead, take advantage of IntelliJ IDEA’s built-in support for automatically generating migration scripts based on existing JPA entities. For more information, refer to this article.

Leverage AI Assistant


AI Assistant makes querying and managing data faster and more efficient. It helps speed up SQL query generation, provides explanations, suggests fixes, and can even generate test data tables!
AI Actions for SQL query in IntelliJ IDEA


By following these tips, you can optimize your workflow, save time, and make working with databases more productive and enjoyable. Check out this page to learn more about the database tools in IntelliJ IDEA.

Happy developing!

Go to Source



JITWatch4i: Analyzing IntelliJ IDEA’s Startup


Introduction


A typical Java or Kotlin programmer spends most of their productive time either creating application code in an editor or searching for bugs in a debugger. Occasionally, they might dive into a profiler when looking for places where the application spends too much time. However, they almost never venture into the Java C1 or C2 compilers and their resulting products – low-level assembly code. For the most part, the Java compilers are black boxes that typically remain closed under normal circumstances.

Some time ago, I tried to analyze the startup of larger projects, such as IntelliJ IDEA. When analyzing program startup in a JIT-based virtual machine, it’s essential to realize that standard profiling tools don’t provide an accurate picture of CPU load or the performance of individual methods. The final times are distorted because some code runs in a non-optimized form and some CPU capacity is used for compiling methods. This led me to search for a tool that could display compilation processes, and that’s how I found JITWatch.

This article introduces JITWatch4i, an IntelliJ IDEA plugin based on JITWatch, designed for analyzing and visualizing compilation processes directly within IntelliJ IDEA. After laying a theoretical foundation with a general overview of Java’s tiered compilation process, we’ll demonstrate the plugin in action, comparing IntelliJ IDEA’s startup speed under different values of the -XX:TieredOldPercentage parameter.

JITWatch


Developed by Chris Newland, JITWatch is a Java program that analyzes JVM (HotSpot) compilation logs and provides a detailed analysis of the behavior of both compilers in the Java Virtual Machine.

Why JITWatch4i?


From my point of view, the original JITWatch is undoubtedly a useful tool for analyzing JIT processes in the JVM. However, while using it, I encountered several issues that made my work more difficult:

  • Configuration requires you to set paths to source files, that is, to all modules containing code for the analyzed project. For large applications like IntelliJ IDEA, whose complete structure you may not even know, this is quite time-consuming.
  • Configuration requires you to set class locations, essentially duplicating the classpath of the analyzed program.
  • JITWatch uses JavaFX for its UI. It takes a long time to visualize a project with tens of thousands of compilations in JavaFX, and some charts are practically unusable.
  • Installing and running JITWatch isn’t complicated, but it also isn’t a one-click process.

When I used JITWatch, I particularly struggled with its lack of direct integration into a modern development environment – specifically the environment of the project I was working on. Eventually, I concluded that such an integration could bring new life to this great project, which led me to create JITWatch4i.

JITWatch4i is a plugin for IntelliJ IDEA that integrates the JIT analysis visualization features of the original JITWatch directly into the IDE. Integration with IntelliJ IDEA removes the need to manually configure paths to source code or compiled classes, as the IDE already has information about the structure of the currently open project and its dependencies, which the plugin can directly use.

Furthermore, the original JavaFX framework was replaced with the older but simpler Swing library, which is significantly optimized in the JetBrains Runtime. As a result, visualizing large projects is still reasonably fast, even when dealing with a large number of compilations.

Typical JITWatch use cases


According to the documentation of the original JITWatch project, this tool is useful in several key areas. It allows you to:

  • Verify whether methods you consider performance-critical were JIT-compiled during program execution.
  • Find out exactly when certain methods were compiled and better understand the impact of JVM compilation threshold settings.
  • Learn how long the compilation of individual methods took, which ones took the compiler the longest, or which generated the most native code.
  • Better understand how Java compilers work.
  • Track how your source code is translated into bytecode and ultimately into machine code.


Introduction to Java compilation


To delve further into this topic, it’s helpful to have a basic understanding of Java compilers.

Fundamentally, the JVM contains an interpreter, which is used for a limited number of initial method calls, and two main compilers:

  • C1, which is capable of quickly generating less-optimized native code. By default, C1 generates code that also collects profiling statistics later used by C2. This mode is called tiered compilation.
  • C2, although slower than C1, creates code that is significantly faster. C2 leverages statistics collected by the code compiled with C1 to decide how to optimize the code. Statistics for a given method are gathered while it is running in an interpreter or in code compiled with additional profile-gathering code.


Compilation levels


In this context, you’ll often hear about five compilation levels labeled L0–L4:

  • L0 – A term indicating that a method is executed in the interpreter, during which basic statistics, such as the number of calls and backward jumps, are collected.
  • L1 – C1 compilation that does not include profiling for C2. It provides the fastest possible output from C1 and is mainly used for trivial methods where deep optimization in C2 wouldn’t provide a significant benefit.
  • L2 – C1 compilation with limited profiling, with statistics collected on the number of method calls and backward jumps. This allows us to determine which methods are actively used so that their subsequent compilations can be planned. L2-compiled code is on average about 30% faster than L3-compiled code. At application startup, when the C2 compiler is overloaded, it’s more time-efficient to compile code using L2. If the method remains active, the scheduler’s decision mechanism will later choose to compile it at L4.
  • L3 – C1 compilation with full profiling. Unlike L2, this level also gathers statistics on conditional branches and information about which classes are used in the method. L3 code is the slowest compiled code produced by the C1 compiler. The compiler scheduler aims to minimize the time a method spends running in L3 code.
  • L4 – Compilation with C2, which leverages the statistics collected previously. This makes it possible to generate faster, more efficient native code.
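These levels can be observed at runtime: the JVM's standard -XX:+PrintCompilation flag logs each compilation event to the console, including the tier level for every compiled method. A minimal way to try it (HelloWorld here is a stand-in for any class of yours, not something from this article):

```shell
# Log every JIT compilation event; each line includes a timestamp,
# a compilation ID, attribute flags, the tier level (0-4), and the method name
java -XX:+PrintCompilation HelloWorld
```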


Compilation queue


The JVM uses a compilation queue to manage and prioritize tasks across compiler threads. Methods are queued based on a compilation policy that prioritizes those likely to benefit most from optimization, ensuring the efficient use of resources and delivering performance gains.

Compilation parameters


During its lifetime, a single method can run under 5 different compilation levels (L0–L4). Transitions between the per-method compilation levels (L1–L4) in the JVM are controlled by a set of key parameters. These parameters dictate when a particular method is promoted to a higher level of optimization, which in turn influences both startup speed and long-term performance. Below are the most important ones:

  • -XX:Tier3InvocationThreshold – The number of calls required to transition to L3. Default value: 2,000.
  • -XX:Tier4InvocationThreshold – The number of calls required to transition to L4 (C2). Default value: 15,000.
  • -XX:TieredOldPercentage – A somewhat mysterious parameter that significantly impacts startup speed. It specifies the percentage threshold after which a method is considered old and ceases to be prioritized, based on the length of the compilation queue. Default value: 1000.

These parameters influence how quickly methods transition between compilation levels. Compilation levels are normally upgraded, with the exceptions being cases of deoptimization or when the C2 compiler is overloaded. Lowering these parameters accelerates the progression of code through its compilation levels, effectively speeding up its “maturity.” However, this comes at the cost of increased overhead during application startup, as methods are compiled more frequently and at earlier stages.
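To make this concrete, here is what experimenting with these knobs might look like on the command line. This is an illustrative sketch only: the threshold values are arbitrary, and app.jar is a hypothetical application, not one discussed in this article.

```shell
# Lower invocation thresholds promote methods through the tiers sooner
# than the defaults (2,000 for Tier 3, 15,000 for Tier 4)
java -XX:Tier3InvocationThreshold=1000 \
     -XX:Tier4InvocationThreshold=8000 \
     -XX:TieredOldPercentage=1000 \
     -jar app.jar
```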

Analyzing the startup of IntelliJ IDEA


One use case for the JITWatch4i plugin is analyzing an application’s startup. Let’s demonstrate this with an example of IntelliJ IDEA’s startup under different values of the -XX:TieredOldPercentage parameter. For simplicity, we’ll compare two tests: one with -XX:TieredOldPercentage=100000, which is the default value in IntelliJ IDEA, and another with -XX:TieredOldPercentage=1000, which is the default value in the JVM.

To analyze the startup, we need to run IntelliJ IDEA with parameters that generate compilation logs, which we will then load into JITWatch4i.

For the first test, we set the following in idea64.vmoptions (the configuration of TieredOldPercentage is already in use):
-XX:TieredOldPercentage=100000
-XX:+UnlockDiagnosticVMOptions
-XX:+LogCompilation
-XX:LogFile=compilation_100k.log
For the second test, we set this in idea64.vmoptions:
-XX:TieredOldPercentage=1000
-XX:+UnlockDiagnosticVMOptions
-XX:+LogCompilation
-XX:LogFile=compilation_1k.log
For easier comparison, we use the following command to make IntelliJ IDEA run for the same amount of time in both cases:
timeout --kill-after=5 20 ./idea.sh
We load the compilation logs into JITWatch4i and compare them using the Timeline and Comp. Activity tabs.

Timeline


The graph on the Timeline tab illustrates the L1–L4 compilations over time, with each line color representing a specific compilation level:

  • Black – Total compilations
  • Blue – L1
  • Red – L2
  • Magenta – L3
  • Green – L4


-XX:TieredOldPercentage=100000 vs. -XX:TieredOldPercentage=1000
When comparing the charts, it’s clear that:

  • With -XX:TieredOldPercentage=100000, there are far more L2 (red line) compilations than with -XX:TieredOldPercentage=1000. Typically, a method follows the path L0 → L3 → L4 unless the C2 compiler is overloaded. When C2 is overloaded, a method may be compiled from L0 to L2 instead of to L3. Whether a method takes the L2 path or skips straight to L3 or L4 depends on how busy the C2 compiler is and whether the method is considered “old”.
  • If the compiler is overloaded, the method may go to L2.
  • If the method is old, L2 is skipped entirely and the method goes from L0 to L3 or L3 to L4.

The TieredOldPercentage parameter determines when a method is considered old by adjusting the JVM *Threshold parameters at which a method changes levels. Once a method is classified as old, the JVM stops routing it through L2. An issue arises, however, when the C2 compiler is overloaded and unable to accept new tasks, which means methods cannot graduate from L3 to L4, leaving them stuck at L3 for an extended period. This slows performance because L3 compilation involves collecting extensive statistics.

  • The charts show that for -XX:TieredOldPercentage=100000, the numbers of L3 and L4 compilations are almost the same. In this case, methods that do not progress to L4 remain in L2 instead of being promoted to or staying in the slower L3 code. L2 code is approximately 30% faster than L3 code, so this configuration avoids generating an excess of slower L3 methods. As a result, IntelliJ IDEA starts faster with this parameter.


Compilation queue

Compiler Queues charts: -XX:TieredOldPercentage=100000 (left) vs. -XX:TieredOldPercentage=1000 (right)
In the Compiler Queues chart on the Comp.Activity tab, you can see the length of the compiler queue over time. Comparing them reveals that the -XX:TieredOldPercentage=1000 queue is initially more overloaded than the -XX:TieredOldPercentage=100000 queue.

Compilation activity

Native Size charts: -XX:TieredOldPercentage=100000 (left) vs. -XX:TieredOldPercentage=1000 (right)
Let’s compare the lengths of individual method compilations in the Native Size chart on the Comp.Activity tab. The X-axis represents time, and the rectangles correspond to the compilation of individual methods. The height of a rectangle is proportional to the length of the resulting native method. It is apparent that some C2 compilations take a notably long time.

Further gains may be achievable by postponing the compilation of methods in C2 that take a long time. Initial experiments suggest this does yield results. Though the improvements have only been marginal so far, further tweaking the set of postponed methods could boost them.

Conclusion


JITWatch4i expands the capabilities of the original JITWatch through a plugin-based integration with IntelliJ IDEA, eliminating source-path setups and speeding up visualization for large projects. The example of IntelliJ IDEA’s startup under different -XX:TieredOldPercentage values shows how the JVM balances quick-to-compile L2 code versus slower but highly optimized L4 code. Using the analysis in JITWatch4i, these optimization steps become transparent, allowing you to understand the impact of your settings and ultimately leading to more informed performance tuning and faster startup times.




An Introduction to Django Views


Views are central to Django’s architecture pattern, and having a solid grasp of how to work with them is essential for any developer working with the framework. If you’re new to developing web apps with Django or just need a refresher on views, we’ve got you covered.

Gaining a better understanding of views will help you make faster progress in your Django project. Whether you’re working on an API backend or web UI flows, knowing how to use views is crucial.

Read on to discover what Django views are, their different types, best practices for working with them, and examples of use cases.

What are Django views?


Views are a core component of Django’s MTV (model-template-view) architecture pattern. They essentially act as middlemen between models and templates, processing user requests and returning responses.

You may have come across views in the MVC (model-view-controller) pattern. However, these are slightly different from views in Django and don’t translate exactly. Django views are essentially controllers in MVC, while Django templates roughly align with views in MVC. This makes understanding the nuances of Django views vital, even if you’re familiar with views in an MVC context.

Views are part of the user interface in Django, and they handle the logic and data processing for web requests made to your Django-powered apps and sites. They render your templates into what the user sees when they view your webpage. Each function-based or class-based view takes a user’s request, fetches the data from its models, applies business logic or data processing, and then prepares and returns an HTTP response to a template.

This response can be anything a web browser can display and is typically an HTML webpage. However, Django views can also return images, XML documents, redirects, error pages, and more.

Rendering and passing data to templates


Django provides the render() shortcut to make template rendering simple from within views. Using render() helps avoid the boilerplate of loading the template and creating the response manually.

PyCharm offers smart code completion that automatically suggests the render() function from django.shortcuts when you start typing it in your views. It also recognizes template names and provides autocompletion for template paths, helping you avoid typos and errors.

The user provides the request, the template name, and a context dictionary, which gives data for the template. Once the necessary data is obtained, the view passes it to the template, where it can be rendered and presented to the user.
from django.shortcuts import render

def my_view(request):
    # Some business logic to obtain data
    data_to_pass = {'variable1': 'value1', 'variable2': 'value2'}
    # Pass the data to the template
    return render(request, 'my_template.html', context=data_to_pass)
In this example, data_to_pass is a dictionary containing the data you want to send to the template. The render function is then used to render the template (my_template.html) with the provided context data.

Now, in your template (my_template.html), you can access and display the data.
<!DOCTYPE html>
<html>
<head>
    <title>My Template</title>
</head>
<body>
    <h1>{{ variable1 }}</h1>
    <p>{{ variable2 }}</p>
</body>
</html>
In the template, you use double curly braces ({{ }}) to indicate template variables. These will be replaced with the values from the context data passed by the view.

PyCharm offers completion and syntax highlighting for Django template tags, variables, and loops. It also provides in-editor linting for common mistakes. This allows you to focus on building views and handling logic, rather than spending time manually filling in template elements or debugging common errors.
PyCharm Django completion
Start with PyCharm Pro for free

Function-based views


Django has two types of views: function-based views and class-based views.

Function-based views are built using simple Python functions and generally fall into four basic categories: create, read, update, and delete (CRUD), the operations at the foundation of most web applications. They take in an HTTP request and return an HTTP response.
from django.shortcuts import render

def my_view(request):
    # View logic goes here
    context = {"message": "Hello world"}
    return render(request, "mytemplate.html", context)
This snippet handles the logic of the view, prepares a context dictionary for passing data to a template that is rendered, and returns the final template HTML in a response object.

Function-based views are simple and straightforward. The logic is contained in a single Python function instead of spread across methods in a class, making them most suited to use cases with minimal processing.

PyCharm allows you to automatically generate the def my_view(request) structure using live templates. Live templates are pre-defined code snippets that can be expanded into boilerplate code. This feature saves you time and ensures a consistent structure for your view definitions.

You can invoke live templates simply by pressing ⌘J, typing Listview, and pressing the tab key.
Moreover, PyCharm includes a Django Structure tool window, where you can see a list of all the views in your Django project, organized by app. This allows you to quickly locate views, navigate between them, and identify which file each view belongs to.

Class-based views


Django introduced class-based views so users wouldn’t need to write the same code repeatedly. They don’t replace function-based views but instead have certain applications and advantages, especially in cases where complex logic is required.

Class-based views in Django provide reusable parent classes that implement various patterns and functionality typically needed by web application views. You can derive your views from these parent classes to reduce boilerplate code.

Class-based views offer generic parent classes like:

  • ListView
  • DetailView
  • CreateView
  • And many more.

Below are two similar code snippets demonstrating a simple BookListView. The first shows a basic implementation using the default class-based conventions, while the second illustrates how you can customize the view by specifying additional parameters.

Basic implementation:
from django.views.generic import ListView
from .models import Book

class BookListView(ListView):
    model = Book
    # The template_name is omitted because Django defaults to 'book_list.html'
    # based on the convention of <model_name>_list.html for ListView.
When BookListView gets rendered, it automatically queries the Book records and passes them to book_list.html under the context variables object_list and book_list. This means you can create a view to list objects quickly without needing to rewrite the underlying logic.

Customized implementation:
from django.views.generic import ListView
from .models import Book

class BookListView(ListView):
    model = Book

    # You can customize the view further by adding additional attributes or methods
    def get_queryset(self):
        # Example of customizing the queryset to filter books
        return Book.objects.filter(is_available=True)
In the second snippet, we’ve introduced a custom get_queryset() method, allowing us to filter the records displayed in the view more precisely. This shows how class-based views can be extended beyond their default functionality to meet the needs of your application.

Class-based views also define methods that tie into key parts of the request and response lifecycle, such as:

  • get() – logic for GET requests.
  • post() – logic for POST requests.
  • dispatch() – inspects the HTTP method and routes the request to the matching handler, such as get() or post().

These types of views provide structure while offering customization where needed, making them well-suited to elaborate use cases.

PyCharm offers live templates for class-based views like ListView, DetailView, and TemplateView, allowing you to generate entire view classes in seconds, complete with boilerplate methods and docstrings.
Django live templates in PyCharm

Creating custom class-based views


You can also create your own view classes by subclassing Django’s generic ones and customizing them for your needs.

Some use cases where you might want to make your own classes include:

  • Adding business logic, such as complicated calculations.
  • Mixing multiple generic parents to blend functionality.
  • Managing sessions or state across multiple requests.
  • Optimizing database access with custom queries.
  • Reusing common rendering logic across different areas.

A custom class-based view could look like this:
from django.views.generic import View
from django.shortcuts import render
from . import models

class ProductSalesView(View):
    def get(self, request):
        # Custom data processing
        sales = get_sales_data()
        return render(request, "sales.html", {"sales": sales})

    def post(self, request):
        # Custom form handling
        form = SalesSearchForm(request.POST)
        if form.is_valid():
            results = models.Sale.objects.filter(date__gte=form.cleaned_data['start_date'])
            context = {"results": results}
            return render(request, "search_results.html", context)
        # Invalid form handling
        errors = form.errors
        return render(request, "sales.html", {"errors": errors})
Here, the custom get and post handlers let you implement distinct logic for each request method within a single view class.

When to use each view type


Function-based and class-based views can both be useful depending on the complexity and needs of the view logic.

The main differences are that class-based views:

  • Promote reuse via subclassing, with behavior inherited from parent classes.
  • Are ideal for state management between requests.
  • Provide more structure and enforced discipline.

You might use them working with:

  • Dashboard pages with complex rendering logic.
  • Public-facing pages that display dynamic data.
  • Admin portals for content management.
  • List or detail pages involving database models.

On the other hand, function-based views:

  • Are simpler and take less code to create.
  • Can be easier for Python developers to grasp.
  • Are highly flexible and have fewer constraints.

Their use cases include:

  • Prototyping ideas.
  • Simple CRUD or database views.
  • Landing or marketing pages.
  • API endpoints for serving web requests.

In short, function-based views are flexible, straightforward, and are easier to reason about. However, for more complex cases, you’ll need to create more code that you can’t reuse.

Class-based views in Django enforce structure and are reusable, but they can be more challenging to understand and implement, as well as harder to debug.

Views and URLs


As we’ve established, in Django, views are the functions or classes that determine how a template is rendered. Each view links to a specific URL pattern, guiding incoming requests to the right place.

Understanding the relationship between views and URLs is important for managing your application’s flow effectively.

Every view corresponds with a URL pattern defined in your Django app’s urls.py file. This URL mapping ensures that when a user navigates to a specific address in your application, Django knows exactly which view to invoke.

Let’s take a look at a simple URL configuration:
from django.urls import path
from .views import BookListView

urlpatterns = [
    path('books/', BookListView.as_view(), name='book-list'),
]
In this setup, when a user visits /books/, the BookListView kicks in to render the list of books. By clearly mapping URLs to views, you make your codebase easier to read and more organized.

Simplify URL management with PyCharm


Managing and visualizing endpoints in Django can become challenging as your application grows. PyCharm addresses this with its Endpoints tool window, which provides a centralized view of all your app’s URL patterns, linked views, and HTTP methods. This feature allows you to see a list of every endpoint in your project, making it easier to track which views are tied to specific URLs.

Instead of searching through multiple urls.py files, you can instantly locate and navigate to the corresponding views with just a click. This is especially useful for larger Django projects where URL configurations span multiple files or when working in teams where establishing context quickly is crucial.

Furthermore, the Endpoints tool window lets you visualize all endpoints in a table-like interface. Each row displays the URL path, the HTTP method (GET, POST, etc.), and the associated view function or class of a given endpoint.

This feature not only boosts productivity but also improves code navigation, allowing you to spot missing or duplicated URL patterns with ease. This level of visibility is invaluable for debugging routing issues or onboarding new developers to a project.

Check out this video for more information on the Endpoints tool window and how you can benefit from it.

youtube.com/embed/xanrdSKV1k4?…

Best practices for using Django views


Here are some guidelines that can help you create well-structured and maintainable views.

Keep views focused


Views should concentrate on handling requests, fetching data, passing data to templates, and controlling flow and redirects. Complicated business logic and complex processing should happen elsewhere, such as in model methods or dedicated service classes.

However, you should be mindful not to overload your models with too much logic, as this can lead to the “fat model” anti-pattern in Django. Django’s documentation on views provides more insights about structuring them properly.

Keep views and templates thin


It’s best to keep both views and templates slim. Views should handle request processing and data retrieval, while templates should focus on presentation with minimal logic.

Complex processing should be done in Python outside the templates to improve maintainability and testing. For more on this, check out the Django templates documentation.

Decouple database queries


Extracting database queries into separate model managers or repositories instead of placing them directly in views can help reduce duplication. Refer to the Django models documentation for guidance on managing database interactions effectively.

Use generic class-based views when possible


Django’s generic class-based views, like DetailView and ListView, provide reusability without requiring you to write much code. Opt for using them over reinventing the wheel to make better use of your time. The generic views documentation is an excellent resource for understanding these features.

Function-based views are OK for simple cases


For basic views like serving APIs, a function can be more effective than a class. Reserve complex class-based views for intricate UI flows. The writing views documentation page offers helpful examples.

Structure routes and URLs cleanly


Organize routes and view handlers by grouping them into apps by functionality. This makes it easier to find and navigate the source. Check out the Django URL dispatcher documentation for best practices in structuring your URL configurations.

Next steps


Now that you have a basic understanding of views in Django, you’ll want to dig deeper into the framework and other next steps.

  • Brush up on your Django knowledge with our How to Learn Django blog post, which is ideal for beginners or those looking to refresh their expertise.
  • Explore the state of Django to see the latest trends in Django development for further inspiration.


Django support in PyCharm


PyCharm Professional is the best-in-class IDE for Django development. It allows you to code faster with Django-specific code assistance, project-wide navigation and refactoring, and full support for Django templates. You can connect to your database in a single click and work on TypeScript, JavaScript, and frontend frameworks. PyCharm also supports Flask and FastAPI out of the box.

Create better applications and streamline your code. Get started with PyCharm now for an effortless Django development experience.

Start with PyCharm Pro for free

Go to Source
