Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

A bit off topic, but as a developer of one, DaniWeb started on SVN and eventually moved to Git when redoing the platform from scratch about a decade ago. I also really like being on GitHub, even though ours is a private repository, so I'm not taking advantage of the network effect.

Ulfson 0 Newbie Poster

May I ask what inspired the decision to host the project on Sourceforge instead of GitHub? I rarely see well-maintained projects on sourceforge these days, sadly.

JForum had already lived on SF for several years before I became involved. At this point there are just 2 committers, and while we talked about moving to Git, it never seemed to provide enough upside to justify the work. Personally, I was also involved in other projects on SF, and thus quite familiar with everything it offers.

It's true that SVN is unfashionable these days, and that it might be easier to attract outside help with Git. But given that feature work on JForum has largely come to a halt, due to forums becoming less fashionable themselves, I'm not sure that that would buy us much.

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

Congrats on the latest release of your software. May I ask what inspired the decision to host the project on Sourceforge instead of GitHub? I rarely see well-maintained projects on sourceforge these days, sadly.

Ulfson 0 Newbie Poster

Since this is a discussion forum, I dare use it to announce a new release of the best (IMO) Java open source discussion software, JForum. There isn't much new in terms of functionality or bug fixes, but it now runs natively on Jakarta EE 10 - meaning you need Tomcat 10.1 or comparable to run it. Previous versions ran fine using the webapps-javaee mechanism of Tomcat, but that's supposed to be something of a temporary hack. And for JForum, it's no longer needed now :-)

You can find a ready-to-run war file at https://sourceforge.net/projects/jforum2/, along with documentation.

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

If you have the enterprise plan you should have an emergency phone option under support. Cloudflare also has a 24/7 Emergency Hotline at +1 (866)-325-4810. There's a Live Chat option for Business. Call their sales number and get some answers. +1 (888) 99 FLARE

A disputed chargeback on a credit card always gets their attention, and billing would probably call you. I wouldn't play their game.

I don't have an enterprise plan. When I called their number last week, I got routed to the billing dept, which is just a robot saying to leave a support ticket online.

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

Status update: There was an update to my ticket saying that there is higher than normal demand for support and to keep holding. At least it's a response?

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

We make heavy use of PHP's Memcached::getDelayedByKey which requests multiple items all from a specific server (hashed on the server key, which is the Topic ID #). That ensures that when you pull up a topic, we can retrieve all items for that topic in one request, all from one server.
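The routing idea behind that call can be sketched in plain Python; the server names and hash choice below are placeholders, not DaniWeb's actual pool or the PHP extension's internal algorithm:

```python
import hashlib

# Hypothetical cache pool; the names are placeholders.
SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for_key(server_key: str) -> str:
    # Pick one server deterministically from the server key
    # (here, a topic ID), so every lookup for that key agrees.
    digest = hashlib.md5(server_key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

def keys_for_topic(topic_id: int, post_ids):
    # All cache keys for one topic share the same server key, so a
    # single multi-get request can be sent to a single server.
    server = server_for_key(str(topic_id))
    keys = [f"topic:{topic_id}:post:{p}" for p in post_ids]
    return server, keys
```

The point is that the server is chosen from the topic ID, not from each item's own key, which is what lets one round trip fetch everything a topic page needs.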

toneewa 115 Junior Poster

If you have the enterprise plan you should have an emergency phone option under support. Cloudflare also has a 24/7 Emergency Hotline at +1 (866)-325-4810. There's a Live Chat option for Business. Call their sales number and get some answers. +1 (888) 99 FLARE

A disputed chargeback on a credit card always gets their attention, and billing would probably call you. I wouldn't play their game.

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

Probably this is me just venting more than anything, but has anyone had success in getting in touch with someone from Cloudflare billing?

Cloudflare recently charged us a boatload of money for Argo accelerated traffic that was the result of a DDoS attack. Cloudflare was able to detect on their end there was a DDoS attack going on at the time. I tried submitting a billing ticket a week ago and there have been no responses to it as of yet. Pretty frustrating.

asadalig -8 Newbie Poster Banned

In 2025, cross-platform apps aren't inherently slower than native ones—it really depends on how they're built. Modern frameworks like Flutter, React Native, and Kotlin Multiplatform have closed the performance gap significantly. If developers optimize properly, most users won’t notice a difference. However, for extremely high-performance needs like advanced gaming or heavy AR, native still holds the edge. So, the idea that cross-platform is always slower is more myth than fact today.

Himadri_3 -4 Newbie Poster

It’s a question we still hear a lot—and honestly, the answer depends on what kind of app you're building.

A few years back, native apps definitely had the upper hand when it came to speed and smooth performance. They were built specifically for iOS or Android, so naturally they could tap into every feature and run super efficiently.

But now? Cross-platform tools like Flutter and React Native have come a long way. Most day-to-day apps—think eCommerce, service booking, productivity tools—run just as smoothly with cross-platform development. Unless you're building something like a heavy 3D game or complex AR experience, you probably won't even notice a difference.

In fact, businesses love cross-platform because it helps them launch faster, maintain easier, and save costs.

So… is the speed difference still a dealbreaker? Or is it time we let go of that old belief?

We'd love to hear your take. Have you worked with both? Noticed any real performance issues?

Reverend Jim 5,259 Hi, I'm Jim, one of DaniWeb's moderators. Moderator Featured Poster

If that is a demand then don't expect much in the way of positive feedback. If it is a request for help then a polite tone and much more information will likely lead to a better response.

Flaan4me 0 Newbie Poster

Convert website to apk

Miles_0 0 Newbie Poster

Thanks for the overwhelming response. How do you ensure efficient cache management with Memcached in such a distributed system, especially when fetching multiple items per page while minimizing cache misses?

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

AI generated content is very obvious to me, as well. However, DaniWeb gets a ton of AI generated submissions every day and I am looking for a reliable way of blocking it without having to delete the posts after-the-fact, and also without having to stick all new posts into a moderation queue.

Ja sa bong -4 Newbie Poster

I don't need any tool to detect AI-written content, because once you've seen enough of it, it's very easy to spot immediately. I'm not saying the tools that do the job aren't good, but what I'm trying to say is that I can do it without them, because AI-generated content is very obvious.

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

Good question. We have multiple web servers that are load balanced, and only 2 database servers in a master/slave setup. I know there are ways to shard the database so you could essentially put different records of a table on different servers, but I personally have no experience with that. I am pretty sure you can shard by row or by column.

We also have a Memcached pool (for caching) where we specify the server we want to use by passing in a server key into all gets and sets. We make sure that all cached items that need to be fetched on a particular page are fetched at once, and always from a single server. That seems to work well for us.
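Row-based (horizontal) sharding can be sketched in a few lines; the shard count and modulo routing below are illustrative assumptions, not how DaniWeb's databases are actually arranged:

```python
NUM_SHARDS = 4  # hypothetical shard count

def shard_for_row(record_id: int) -> int:
    # Horizontal (row) sharding: the primary key decides which server
    # holds the record. Vertical (column) sharding would instead split
    # a wide table's columns across servers.
    return record_id % NUM_SHARDS

def partition(records):
    # Group records by destination shard so each shard receives
    # one batch instead of many single-row writes.
    batches = {s: [] for s in range(NUM_SHARDS)}
    for rid, payload in records:
        batches[shard_for_row(rid)].append((rid, payload))
    return batches
```

Real systems usually use consistent hashing rather than a plain modulo, so that adding a shard doesn't remap nearly every key.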

rproffitt 2,706 https://5calls.org Moderator

From the web I'm reading "I see a lot of people mention they got a fat bill from cloudwatch logs. I'm about to head into production and I want to make sure I don't make that mistake."

So the integration is there and possibly a nice drain on your company's bottom line.

LiLo1001 0 Newbie Poster

How does Cloudwatch work and how does it integrate with other AWS services?

Miles_0 0 Newbie Poster

What are the best practices for ensuring data consistency and integrity when using cloud storage for large-scale applications? Specifically, how do different platforms handle issues like eventual consistency, data replication across regions, and conflict resolution in distributed environments? Are there any recommended tools or strategies to optimize these processes for high availability and reliability?

dexcowork 0 Newbie Poster

Your DNA file import may not be working due to file format issues, incompatible software versions, or missing dependencies. Make sure the file is formatted right, the software is up-to-date, and all the libraries are installed. The problem might be resolved by updating or reformatting the file.

Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

I can’t tell if every response in this topic is low quality AI generated or not. :(

rproffitt commented: Be-boop, click, buzz, error not found. +17
QuintinFields -24 Newbie Poster Banned

Make sure your Ubuntu system is updated. Install Apache, MariaDB, and PHP, since those are essential for Nextcloud. Once you've got those, secure your MariaDB installation and create a database for Nextcloud.

Download the Nextcloud package from their website, unzip it to your web directory, and make sure Apache has the right permissions. You'll need to configure Apache to serve Nextcloud by creating a new config file.

Finally, head to your server's IP address or domain in your browser and complete the setup through the Nextcloud web interface. If you want to keep things secure, setting up SSL is a good idea. Hope that helps
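For the Apache step, a minimal virtual host along the lines of the official Nextcloud examples might look like this (the domain and install path are placeholders; adjust them to your setup):

```apache
<VirtualHost *:80>
    # Placeholder domain and path; adjust to your install.
    ServerName cloud.example.com
    DocumentRoot /var/www/nextcloud

    <Directory /var/www/nextcloud/>
        Require all granted
        AllowOverride All
        Options FollowSymLinks MultiViews
    </Directory>
</VirtualHost>
```

`AllowOverride All` matters because Nextcloud ships an `.htaccess` file it relies on for pretty URLs and some security headers.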

Pelorus_1 -56 Newbie Poster

To improve Python's performance, you may want to utilize libraries like NumPy and pandas for effective data management. Profiling tools like cProfile can help pinpoint any bottlenecks in your code. Employing multiprocessing can make use of multiple CPU cores, and integrating Cython can accelerate vital parts of your Python code. Lastly, for high-performance tasks, incorporating C/C++ extensions can markedly boost performance.
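Of those suggestions, multiprocessing is the easiest to show self-contained. A minimal sketch, where `transform` is a toy stand-in for genuinely CPU-bound work (a real workload has to cost more per record than the inter-process overhead for this to pay off):

```python
from multiprocessing import Pool

def transform(record: str) -> str:
    # Toy stand-in for a CPU-bound per-record transformation.
    return record.strip().lower()

def transform_all(records, workers=4):
    # Spread the per-record work across CPU cores with a process pool.
    with Pool(processes=workers) as pool:
        return pool.map(transform, records)

if __name__ == "__main__":
    print(transform_all(["  Foo ", " BAR  "]))
```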

Salem commented: poster of chatgpt hand waving pith with no actionable substance -4
Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

Yes, it's a non-negotiable that you mass insert records into the database. In my situation, when I mentioned it was more performant to update records in PHP than MySQL, I was specifically trying to transform strings with regex and some very simple logic. Although there were many, many rows, it was more performant to update them one at a time in PHP than MySQL because MySQL's IF() and REGEXP() functions aren't anywhere as performant as PHP's. So, in my case, using the right tool for the job meant recognizing that MySQL was best as a datastore and PHP was best for handling all the business logic.
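As a rough illustration of that division of labor, here is the same pattern in Python with SQLite standing in for PHP and MySQL (the table and the scheme-upgrade transform are made up): read the rows out, apply a compiled regex in the application, and write the results back in one batch.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, "see http://old.example.com/a"),
                  (2, "plain text")])

pat = re.compile(r"http://")  # simple example transform: upgrade the scheme

# Transform in application code, then push all updates back in one batch.
rows = conn.execute("SELECT id, body FROM posts").fetchall()
updates = [(pat.sub("https://", body), rid) for rid, body in rows]
with conn:
    conn.executemany("UPDATE posts SET body = ? WHERE id = ?", updates)
```

The database stays the datastore; the string logic lives where the regex engine is fast.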

cored0mp commented: Thanks, mass records insert it is! +1
Salem 5,265 Posting Sage
  1. Get something working in Python if that's your comfort zone.
  2. Profile it to find out where the bottlenecks are (do not make guesses, find evidence)

For example, if the program is spending a lot of time waiting for I/O (aka the disk), then writing chunks of code in C isn't going to buy much (it will just wait more efficiently).

If it's spending a lot of time talking MySQL, then research alternative methods of driving MySQL. Driving the same methods in C won't fix anything.

How much faster could it be?
Say for example all the input conversion was zero cost. Time how long it takes to dump say 10K records into the MySQL database.
If that time is only marginally faster than doing all the work, nothing will be gained by rewriting chunks in C.
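That baseline measurement can be sketched in Python; SQLite's in-memory database stands in for MySQL here, so the absolute numbers are only illustrative:

```python
import sqlite3
import time

# Time the pure "dump 10K rows" step with all input conversion removed.
rows = [(i, f"record-{i}") for i in range(10_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER, payload TEXT)")

start = time.perf_counter()
with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO records VALUES (?, ?)", rows)
elapsed = time.perf_counter() - start
print(f"inserted {len(rows)} rows in {elapsed:.3f}s")
```

Compare that floor against the full pipeline's runtime; the gap between them is the most a rewrite of the conversion code could ever recover.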

How much faster could you make it?
Turning 1 hour into 55 minutes isn't going to change the way you use the program. You're still going to run it and then go to lunch.
You need an order of magnitude improvement (like hours into minutes) to make it worth doing. Shaving off a few % might be academically interesting, but if your time is someone's $, then think carefully.

Next, what's the payback period?
Say you spend a month (say 150 hours) carefully crafting, debugging and testing the best C version ever.
If you made it 10 minutes faster, you need to run it 1000 times before you see a return on …

cored0mp commented: Thank you, I realize that I must now profile my system. +1
Reverend Jim 5,259 Hi, I'm Jim, one of DaniWeb's moderators. Moderator Featured Poster

I can't speak on optimization in general without seeing the code. In my previous life I was a Windows SysAdmin/dbadmin as well as a digital plumber. I wrote many apps that had to move large quantities of data from place to place, for example, importing 16,000+ records into a SQL database every five minutes. I did all this with VBScript (today I would choose Python). The trick to processing that many records quickly was using VBScript to format the records, then using BULK INSERT to insert all of the records in one transaction. This drastically reduced the processing time by not having to submit each insert separately. The import load later grew to 16,000 records on the hour and 16,000 at five past the hour, plus the regular 16,000 every five minutes. Scripting easily handled the load. You could easily write the massaging code in C/C++ and compare it to the equivalent in Python. Considering the overhead for file I/O would be the same in both, I'd be surprised if the difference was significant.
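A rough sketch of that pattern, with Python and SQLite standing in for the scripting language and SQL Server (the table name and massage step are made up): format the records first, then insert the whole batch in one transaction instead of one round trip per row.

```python
import sqlite3

def bulk_insert(conn, records):
    # Massage step: clean each record before it touches the database.
    formatted = [(r["id"], r["name"].strip()) for r in records]
    # One transaction, one executemany call, instead of a commit per row.
    with conn:
        conn.executemany("INSERT INTO imports VALUES (?, ?)", formatted)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE imports (id INTEGER, name TEXT)")
bulk_insert(conn, [{"id": i, "name": f" row-{i} "} for i in range(16_000)])
```

With MySQL specifically, `LOAD DATA INFILE` or multi-row `INSERT` statements play the role BULK INSERT plays on SQL Server.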

cored0mp commented: Thank you. I have decided that I shall go for the bulk insert. +1
Dani 4,675 The Queen of DaniWeb Administrator Featured Poster Premium Member

I wish I could help but I do MySQL and PHP. I definitely agree with you that it makes sense to get data into the database and then process it and query it as much as you can from in there. However, it’s been my experience that, just because you can do it from within MySQL, doesn’t mean you should. MySQL is not always the best tool for the job, the same way Python or PHP are not always the best tools for the job. Sometimes it’s more efficient to do things, especially manipulating records, from outside the database.

cored0mp 54 Junior Poster in Training

Hey Gang!

I'm hitting a point with my (Python/MySQL/Linux) app for processing large amounts of network records where I need to make a design decision.
I definitely want my app to have high performance. Because optimization as a skill set is so rare, there is no reason not to employ it if you have it. No one can copy you, because innovation is not what most tech start-ups do, and they grew up coding on pay-per-flop architectures.
My methodology is to get the data into the database and to let the database do database things. I got very good advice here at DaniWeb to switch to MySQL. This has been a tremendous time saver!
And yet some things I can rely upon Python and Python alone to do.
I am trying to minimize those things. For example, I plan to hand-write warehousing drivers in ANSI C to get the data into the database without Python.
And yet not everything can be accomplished in C.
Do any of you have any general advice about Python optimization? I have tried all of the obvious things, like optimizing the interpreter, things that have always worked with Perl or Ruby. Python has been less than cooperative.

technocratsh -3 Newbie Poster

Yes, I can help with that! To self-host a Nextcloud server, you'll need to follow these steps:

  • Choose a server
  • Install an operating system
  • Install dependencies
  • Download Nextcloud
  • Set up your database
  • Configure Apache
  • Run the installer
  • Secure your installation
  • Set up backups

That's a high-level overview, but each step can have more details depending on your specific setup and needs. There are plenty of detailed guides online that can walk you through each step if you need more information. Good luck!

Pelorus_1 -56 Newbie Poster

To self-host your Nextcloud server on Ubuntu Linux, first install Apache, MariaDB, and PHP. Then, download Nextcloud from the official website and extract it to a web directory. Configure Apache to serve the Nextcloud directory and secure your server with SSL. Finally, complete the web-based setup by visiting your server's IP address in your browser.