toneewa 81 Junior Poster in Training

I added the ALTER TABLE index you mentioned. It took 4.375 secs in MySQL Workbench, and it did speed up the results. I'll have to try another method of importing to see if I can improve this. I also omitted the overhead of converting variables for display output from the measurements, since it had been included in the C++ times. Comparing the two, it is as follows:

Query (MWB times are Duration / Fetch):
C++:     0.30402 sec
MWB:     0.000 sec / 0.297 sec

WHERE:
C++:     0.018953 sec
MWB:     0.000 sec / 0.000 sec

HAVING:
C++:     0.0626546 sec
MWB:     0.062 sec / 0.000 sec

I don't consider this bad at all for connecting to it through localhost. Let's not talk about display times. :)

Biiim 182 Junior Poster

INSERT INTO electronics.products(ProductID, ProductName, Price)
VALUES
('1','capacitors', 2.50),
('2','resistors', 4.50),
('3','rectifiers', 7.50),
('4','diodes', 10.00),
('5','ICs', 25.00),
...
('50000','...', ...);

I don't see any mention of an index on it & your benchmark indicates you don't have one:

ALTER TABLE `products`
    ADD PRIMARY KEY(`ProductID`),
    ADD INDEX `Price` (`Price`);

You should also specify your hardware for a benchmark, as it makes quite an impact on the speed.

You can also fine-tune the MySQL system variables to better handle the queries you want to run. The query cache, for instance, will cache the most common queries so they don't have to be re-run when nothing has changed. You generally don't need to bother with this, though, as it should run fast enough for most purposes as it is.
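For illustration, inspecting and enabling the query cache might look like this. This is only a sketch for MySQL 5.7 / MariaDB (the query cache was removed in MySQL 8.0), and the size below is an arbitrary example value:

-- Check the current query cache settings (MySQL 5.7 / MariaDB)
SHOW VARIABLES LIKE 'query_cache%';

-- Give the cache 64 MB and turn it on; on MySQL, query_cache_type
-- may need to be set at server startup rather than at runtime
SET GLOBAL query_cache_size = 67108864;
SET GLOBAL query_cache_type = 1;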

There are also different table storage engines. InnoDB is transactional, so it is very robust; if you want something faster and are not too worried about corruption, you can use MyISAM. It lacks InnoDB's transaction and rollback functionality, but that makes it faster. I was using it for an email marketing DB ages ago (logging each sent email, failed, read, clicked, etc.). It was over 400,000 rows a day on an old HDD, on a pretty average Linux server.
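As a sketch, the engine is chosen per table; the table name and columns below are hypothetical stand-ins for the email log described above:

-- Fast, non-transactional logging table
CREATE TABLE email_log (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    status VARCHAR(16) NOT NULL,
    sent_at DATETIME NOT NULL
) ENGINE=MyISAM;

-- Convert later if robustness becomes more important than speed
ALTER TABLE email_log ENGINE=InnoDB;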

There is definitely something very wrong with your setup, because I import 10 million rows of 6 columns every week in less than 10 minutes, and that is slow as it is a LOAD DATA INFILE from a CSV (that's about 17,000 inserts a second). …
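For reference, a bulk CSV import along those lines might look like the following; the file path and CSV options are hypothetical, and the column list assumes the products table from earlier in the thread:

LOAD DATA INFILE '/tmp/products.csv'
INTO TABLE electronics.products
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(ProductID, ProductName, Price);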

toneewa 81 Junior Poster in Training

I wonder how these IP addresses are issued: static, dynamic, or is DHCP on? It reminds me of the time a network printer stopped working after the power went out. Other devices had connected to the network after it was set up. Then, after rebooting, the printer got a different IP, but the host still thought it was on the old one. I've seen the same thing when hosting game servers. I was just curious what each machine thinks its IP is. I have two UPSes that have prevented over 300 outages.

This video explains the setup of the Galera cluster, and maintaining high availability through an outage.

rproffitt commented: Good point. If it's that, it's not a bug at all. +17
toneewa 81 Junior Poster in Training

I know that when you don't use aggregate calculations, you should use WHERE; the same results can be produced either way. I added another test with MAX(Price). With a database of over 305K rows and 3 columns, MySQL Workbench was unstable importing; it took over 2 hours. WHERE is faster, and should be used when no aggregate functions are needed.

WHERE Query execution time: 0.160881 seconds
HAVING Query execution time: 0.245288 seconds

Same results:
    Query execution time: 0.302986 seconds
    49.76
    49.28
    49.2
    49.86
    49.34
    49.44
    49.88
    49.78
    49.74
    49.99
    49.52
    49.1
    49.51
    49.27
    49.13
    49.92
    49.45
    49.56
    49.89
    49.5
    49.06
    49.48
    49.18
    49.35
    49.19
    49.21
    49.17
    49.75
    49.72
    49.93
    49.09
    49.25
    49.11
    49.83
    49.01
    49.4
    49.36
    49.85
    49.81
    49.77
    49.32
    49.8
    49.69
    49.24
    49.15
    49.96
    49.38
    49.63
    49.84
    49.14
    49.08
    49.02
    50
    49.67
    49.97
    49.05
    49.62
    49.91
    49.82
    49.7
    49.9
    49.73
    49.58
    49.12
    49.95
    49.42
    49.79
    49.3
    49.23
    49.16
    49.64
    49.66
    49.04
    49.71
    49.94
    49.53
    49.03
    49.65
    49.41
    49.31
    49.29
    49.87
    49.55
    49.49
    49.68
    49.33
    49.46
    49.6
    49.47
    49.54
    49.26
    49.61
    49.59
    49.37
    49.07
    49.43
    49.39
    49.98
    49.22
    49.57
    WHERE Query execution time: 0.160881 seconds

    (same 100 rows as above)
    HAVING Query execution time: 0.245288 seconds

    Diode 24.1361
    Capacitor 24.1737
    Transistor 24.0247
    Resistor 24.1304
    Inductor 24.1018
    HAVING Query execution time: 0.436416 seconds

    Diode 24.1361
    Capacitor 24.1737
    Transistor 24.0247
    Resistor 24.1304
    Inductor 24.1018
    WHERE Query execution time: 0.432615 seconds
Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

I think you might be missing my point. If you're fetching different data, there's no way of saying whether WHERE or HAVING is faster. You would need to write queries that use each, but retrieve the same data, to see which is faster for a given use case. Even then, it totally depends on the data in the database itself. The same query can be fast for some datasets and slow for others.

toneewa 81 Junior Poster in Training

Correct. It wasn't about displaying the results, but about measuring the different clauses. I'm not impressed by only 95 INSERTs/sec and a maximum write speed of 175 KB/s for importing data. Increasing to 50K rows shows WHERE to be faster.

50K:

    WHERE Query execution time: 0.0599129 seconds
    HAVING Query execution time: 0.0621748 seconds

    HAVING Query execution time: 0.0629348 seconds
    WHERE Query execution time: 0.0627212 seconds

5:

    WHERE Query execution time: 0.0002878 seconds
    HAVING Query execution time: 0.0002313 seconds

    HAVING Query execution time: 0.0002674 seconds
    WHERE Query execution time: 0.0004905 seconds


INSERT INTO electronics.products(ProductID, ProductName, Price)
VALUES
('1','capacitors', 2.50),
('2','resistors', 4.50),
('3','rectifiers', 7.50),
('4','diodes', 10.00),
('5','ICs', 25.00),
...
('50000','...', ...);

If you have a test case you want to measure, please do share.

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

But you're not comparing apples to apples. Your WHERE query and your HAVING query perform different calculations and do different things. In one, you're plucking out all the rows with a price greater than $4, and then calculating an average price for each product. In the other, you're plucking out all rows, calculating an average price for each product, and then discarding the rows with a price less than or equal to $4. I'm not sure what your database looks like, but if there are lots of rows with products of all different price ranges, you're not going to end up with the same result set.
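Concretely, the two shapes being compared look roughly like this (a sketch reusing the products table from earlier in the thread; toneewa's exact queries are truncated above):

-- Filter rows first, then aggregate the survivors:
SELECT ProductName, AVG(Price) AS AvgPrice
FROM products
WHERE Price > 4
GROUP BY ProductName;

-- Aggregate all rows first, then discard groups:
SELECT ProductName, AVG(Price) AS AvgPrice
FROM products
GROUP BY ProductName
HAVING AvgPrice > 4;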

toneewa 81 Junior Poster in Training

I'm a little late to the party; however, I want to share my experience learning MySQL in the past day. I set up a server and a database, and wrote a C++ program to connect to it. It measures the times for three SELECTs: the whole product list, one using HAVING, and one using WHERE. I also tested reversing the order.

Query execution time: 0.0002336 seconds
WHERE Query execution time: 0.0002878 seconds
HAVING Query execution time: 0.0002313 seconds

Query execution time: 0.0001929 seconds
HAVING Query execution time: 0.0002674 seconds
WHERE Query execution time: 0.0004905 seconds

#include <iostream>
#include <mysql_driver.h>
#include <mysql_connection.h>
#include <cppconn/driver.h>
#include <cppconn/connection.h>
#include <cppconn/statement.h>
#include <cppconn/exception.h> 
#include <chrono>
#include <Windows.h>
using namespace std;
#pragma comment(lib, "libcrypto.lib")
#pragma comment(lib, "libssl.lib")
#pragma comment(lib, "mysqlcppconn.lib") // For MySQL Connector/C++ version 6
#pragma comment(lib, "mysqlcppconn8.lib") // For MySQL Connector/C++ version 8

sql::mysql::MySQL_Driver* driver;
sql::Connection* con;
sql::Statement* stmt;
int ct = 0;
int main(){
    while(ct!=5){
    try {
        driver = sql::mysql::get_mysql_driver_instance();

        con = driver->connect("tcp://127.0.0.1:3306", "root", "MySQLDani");
        con->setSchema("electronics");

        // Create a statement
        stmt = con->createStatement();
        auto start_time0 = std::chrono::high_resolution_clock::now();
        // SQL query
        stmt->execute("SELECT * FROM Products");

        sql::ResultSet* result = stmt->getResultSet();

        while (result->next()) {
            int id = result->getInt("ProductID");
            string name = result->getString("ProductName");
            string price = result->getString("Price");
            cout << id << " " << name << " " << price << endl;
        }
        auto end_time0 = std::chrono::high_resolution_clock::now();

        std::chrono::duration<double> elapsed_seconds0 = end_time0 - start_time0;
        std::cout << "Query execution time: " << elapsed_seconds0.count() << " seconds\n";

        auto start_time1 = std::chrono::high_resolution_clock::now();
        // Second query with WHERE
        stmt->execute("SELECT ProductName, AVG(Price) AS AvgPrice "
            "FROM …
rproffitt 2,580 "Nothing to see here." Moderator

I wonder if the latest stable releases show this issue? "Stable release: 11.3.2 / 16 February 2024; 46 days ago" or the most recent release of MariaDB 10.11: MariaDB 10.11.7 Stable (GA).

That is, many fixes don't get released for out-of-date versions; a new version is often the only way a fix is issued.

rproffitt 2,580 "Nothing to see here." Moderator

The problem will remain unresolved until the bug is fixed. Be sure to tell all involved that you don't accept this behavior and want a fix now.

rproffitt 2,580 "Nothing to see here." Moderator
mx_983 commented: This URL was proposed by me on another platform. The problem is still unresolved +0
mx_983 0 Newbie Poster

Basic background information
MariaDB Ver 15.1 Distrib 10.11.6-MariaDB, Galera cluster with three nodes:
Node1: 192.168.18.78
Node2: 192.168.18.79
Node3: 192.168.18.80

Among them, Node1 was restarted after a one-hour power outage, and after executing systemctl start mariadb, it was stuck for a long time (running for 6 hours) but still did not recover.

The Galera configuration is as follows:

[mysqld]
event_scheduler=ON
bind-address=0.0.0.0

# Galera provider configuration
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so

# Galera cluster configuration
wsrep_cluster_name="hy_galera_cluster"
wsrep_cluster_address="gcomm://192.168.18.78,192.168.18.79,192.168.18.80"

# Galera node configuration
wsrep_node_address="192.168.18.78"
wsrep_node_name="data-server"

# SST method selection
wsrep_sst_method=rsync

# InnoDB Configuration
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
binlog_format=ROW

The log output is as follows:

240403 05:05:09 mysqld_safe Starting mariadbd daemon with databases from /var/lib/mysql
240403 05:05:09 mysqld_safe WSREP: Running position recovery with --disable-log-error  --pid-file='/var/lib/mysql/data-server-recover.pid'
240403 05:05:09 mysqld_safe WSREP: Recovered position 20c1183c-e5c5-11ee-9129-97e9406cb3f8:7183126
2024-04-03  5:05:10 0 [Note] Starting MariaDB 10.11.6-MariaDB source revision fecd78b83785d5ae96f2c6ff340375be803cd299 as process 233407
2024-04-03  5:05:10 0 [Note] WSREP: Loading provider /usr/lib64/galera/libgalera_smm.so initial position: 20c1183c-e5c5-11ee-9129-97e9406cb3f8:7183126
2024-04-03  5:05:10 0 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
2024-04-03  5:05:10 0 [Note] WSREP: wsrep_load(): Galera 26.4.16(rXXXX) by Codership Oy <info@codership.com> loaded successfully.
2024-04-03  5:05:10 0 [Note] WSREP: Initializing allowlist service v1
2024-04-03  5:05:10 0 [Note] WSREP: CRC-32C: using 64-bit x86 acceleration.
2024-04-03  5:05:10 0 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1, safe_to_bootstrap: 0
2024-04-03  5:05:10 0 [Note] WSREP: GCache DEBUG: opened preamble:
Version: 2
UUID: 20c1183c-e5c5-11ee-9129-97e9406cb3f8
Seqno: -1 - -1
Offset: -1
Synced: 0
2024-04-03  5:05:10 0 [Note] WSREP: Recovering GCache ring buffer: version: 2, UUID: 20c1183c-e5c5-11ee-9129-97e9406cb3f8, offset: -1
2024-04-03  5:05:10 0 [Note] WSREP: GCache::RingBuffer initial scan...  0.0% …
Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

I know that, in my experience, ORs are very bad for speed. You can get away with a few, but they can get very bad when they stop the whole query from making use of one of the main indexes. You can usually speed it up by moving the ORs into the HAVING.

Thank you for that tip! I never thought of that, but it totally makes sense!!

Another option is to use AJAX to load the data after the page has loaded. I moved to AJAX & JavaScript websites 6 or 7 years ago: let the DOM load, and use JavaScript to put the data in after it has already rendered.

We do that with DaniWeb Connect business cards e.g. https://www.daniweb.com/connect/users/view/1 because figuring out the matching is super resource intensive.

I am currently looking at storing all the data in a Redis DB updated periodically from MariaDB, so the backend pushes updates instantly over SSE (server-sent events) and the React app stays up to date without needing to wait for data over the network.

DaniWeb uses Redis for a handful of things here, but most certainly not as sophisticated as storing the majority of our database and updating in realtime. (We also use Memcached, but I like Redis for the combination of performance and persistence.)

Another option is to switch your database server to an M.2 SSD if you haven't already; those things are insanely fast.

That's super …

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

Point #2 would apply if the improvement was noticeable, but I doubt anyone could seriously comment, "I think this page rendered 50ms faster than it used to", especially considering all the other things that affect timing, for example, the current load on my ISP's servers, anything else running on my computer or home network, etc.

Remember, we aren't shaving 50ms off of something that took 80ms to 250ms. That's how long it typically takes to generate the HTML from the PHP code and send it over the wire (e.g. including network latency). When I said I was able to shave off 50ms, we're talking strictly about the time to generate the HTML from the PHP code, which I would guesstimate is overall like 60ms-80ms (although I've never benchmarked it). So a huge win for an afternoon's worth of work.

I can see how saving 50ms in a process that took 80ms to 250ms is a big deal. I just wonder if it is a big deal that is perceptually visible to the user.

Well, when you consider that people tend to not even stick around if the HTML takes more than ~400ms to retrieve, then yeah :)

As far as saving resources on the hosting platform, while that's true, the other thing to consider here is the almighty SEO. Every website has a certain amount of "crawl budget" that is allocated to the domain based on its popularity, ranking, incoming links, etc. Crawl budget is essentially …

Biiim 182 Junior Poster

I don't have experience with MariaDB, but in MySQL, something like that will work as long as I do SELECT sum(points) AS total FROM ... HAVING total > 10. Is that what you were getting at?

Kind of. MariaDB is a fork of MySQL from around 2009 or something like that, at MySQL 5.*; the creator continued developing MariaDB while Oracle took MySQL. That's why they are very similar: in 2009 they were the same!

That said, I meant that the HAVING statement is like dumping your query result into a temp table and then running another query on it afterwards.
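As a rough sketch of that mental model, again using the products table from earlier in the thread:

-- HAVING filters the aggregated result set...
SELECT ProductName, AVG(Price) AS AvgPrice
FROM products
GROUP BY ProductName
HAVING AvgPrice > 4;

-- ...much like running a second query over a temporary result set:
SELECT *
FROM (
    SELECT ProductName, AVG(Price) AS AvgPrice
    FROM products
    GROUP BY ProductName
) AS tmp
WHERE AvgPrice > 4;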

As far as what I was trying to accomplish that provoked this question, I was working on a HAVING clause that was filtering recommended topics by a bunch of OR clauses. Nearly all of the filters could be accomplished in WHERE, but there were two (specifically, that looked at whether the end-user had posted in the topic, or had unread posts in the topic). At the time I was using a subquery in the SELECT clause, hence the need for HAVING. I switched to using JOINS, and then was able to use WHERE. And that's how I shaved nearly 50ms off of https://www.daniweb.com/programming/4 for logged in members.

I know that, in my experience, ORs are very bad for speed. You can get away with a few, but they can get very bad when they stop the whole query from making use of …
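For illustration, the pattern being described might look something like this; the table and column names are hypothetical, and it relies on MySQL/MariaDB letting HAVING reference select-list columns even without a GROUP BY:

-- The OR mix can stop the optimizer from driving the query off an index:
SELECT topic_id, last_post_id, is_sticky
FROM topics
WHERE forum_id = 4 AND (last_post_id > 1000 OR is_sticky = 1);

-- Keep the indexable predicate in WHERE and move the ORs into HAVING,
-- which then filters the much smaller intermediate result set:
SELECT topic_id, last_post_id, is_sticky
FROM topics
WHERE forum_id = 4
HAVING last_post_id > 1000 OR is_sticky = 1;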

Reverend Jim 4,780 Hi, I'm Jim, one of DaniWeb's moderators. Moderator Featured Poster

I suppose I am looking at it in terms of practicality. There might be several reasons to optimize:

  1. It saves resources on the hosting platform
  2. It improves the user experience
  3. It provides personal satisfaction

Point #1 would save you money if the savings were significant.

Point #2 would apply if the improvement was noticeable, but I doubt anyone could seriously comment, "I think this page rendered 50ms faster than it used to", especially considering all the other things that affect timing, for example, the current load on my ISP's servers, anything else running on my computer or home network, etc.

As for point #3, personal satisfaction is a big deal, but I could not have used that at my office to justify the time spent on improving a process by that little (which is why I used to sneak those changes in under the radar).

I can see how saving 50ms in a process that took 80ms to 250ms is a big deal. I just wonder if it is a big deal that is perceptually visible to the user.

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

Here are some articles that can explain it in greater depth:

The HTML page must be downloaded in its entirety before the web browser can begin loading anything else (CSS, JavaScript, images, etc.) and start rendering the page. CWV (Core Web Vitals) dictates that the entire page must be fully loaded, meaning CSS files downloaded and rendering the HTML, JS files downloaded and executed, etc., in 2s or less. That means the faster we can get that HTML over the wire to the user's browser, the sooner we can start doing any of those things.

And, with a serverside language, we have to interpret it to generate the HTML code before we can even start sending it over the wire. That means all PHP interpreted, SQL queries executed, etc. Everything we need to build the HTML.

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

I'm too exhausted for an in-depth explanation right now, but 90% of web development is optimizing for performance. The average DaniWeb page takes anywhere from 80ms up to 250ms to load the HTML (when dealing with low network latency), depending on the type of page, so shaving 50ms off of that is a huge win.

Reverend Jim 4,780 Hi, I'm Jim, one of DaniWeb's moderators. Moderator Featured Poster

I have to admit that 95% of my work from 1995 to 2008 was back-end stuff where I didn't have to worry about things like that. Digital plumbing and monitoring. The other 5% was single-user apps. Not counting the 20% which was pointless meetings. So if you don't mind explaining, I'm curious as to why 50ms would even be noticeable. I'm not asking just to be picky.

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

Now as far as whether going down that rabbit hole that day was worth the cost of losing AndreRet, then I'd have to give a resounding no.

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

In exchange for less than a day's worth of work? Of course!!!!!

I've spent a lot longer to shave off a lot less.

https://www.conductor.com/academy/page-speed-resources/faq/amazon-page-speed-study/

Reverend Jim 4,780 Hi, I'm Jim, one of DaniWeb's moderators. Moderator Featured Poster

Was it worth all that for 50ms?

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

SELECT sum(points) total FROM ... HAVING points > 10

I don't have experience with MariaDB, but in MySQL, something like that will work as long as I do SELECT sum(points) AS total FROM ... HAVING total > 10. Is that what you were getting at?

As far as what I was trying to accomplish that provoked this question, I was working on a HAVING clause that was filtering recommended topics by a bunch of OR clauses. Nearly all of the filters could be accomplished in WHERE, but there were two (specifically, that looked at whether the end-user had posted in the topic, or had unread posts in the topic). At the time I was using a subquery in the SELECT clause, hence the need for HAVING. I switched to using JOINS, and then was able to use WHERE. And that's how I shaved nearly 50ms off of https://www.daniweb.com/programming/4 for logged in members.
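A hypothetical sketch of that refactor, with table and column names invented for illustration:

-- Before: the flag comes from a correlated subquery in the SELECT clause,
-- so it can only be filtered with HAVING.
SELECT t.topic_id,
       (SELECT COUNT(*)
        FROM posts p
        WHERE p.topic_id = t.topic_id
          AND p.user_id = 123) AS has_posted
FROM topics t
HAVING has_posted > 0;

-- After: a JOIN expresses the same condition, so it can filter in WHERE
-- (here the join condition itself does the filtering).
SELECT DISTINCT t.topic_id
FROM topics t
JOIN posts p ON p.topic_id = t.topic_id
WHERE p.user_id = 123;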

Biiim 182 Junior Poster

I realise this has been marked as solved, but I wanted to make it known that the HAVING clause runs on the returned result set of your query, which, as you say, has no indexes on it, as it is just a temporarily created result set - but it has the benefit of allowing you to do some simple post-processing on that result set. (This is for MariaDB, at least.)

You can see it by making an alias from a column name, like SELECT sum(points) total FROM ... HAVING points > 10 (it will error because points doesn't exist in the temp result set, only total).

I only use HAVING for botch jobs where you just want some complicated data filtered but don't want to spend the time restructuring the sub-queries to produce it in the right format, or where it just isn't a problem that the query takes 5 minutes to run.

Also note that LEFT JOIN (SELECT * FROM tbl) tbl2 messes up indexing too, as that subquery loses its indexes. You need to restructure the query so it joins the table directly, like LEFT JOIN tbl2 AS tbl2 ON tbl1.idx = tbl2.idx AND tbl2.abc = 2.

Then you could add a composite index on tbl2 over the columns (idx, abc) so it can quickly filter those rows out and return them in 0.0001s, as sketched below.
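A minimal sketch of that, reusing the hypothetical tbl1/tbl2 names from above:

-- Composite index: the join key and the constant filter are both
-- satisfied straight from the index.
ALTER TABLE tbl2 ADD INDEX idx_idx_abc (idx, abc);

SELECT tbl1.*
FROM tbl1
LEFT JOIN tbl2 ON tbl1.idx = tbl2.idx AND tbl2.abc = 2;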

Dani commented: Thank you for bringing this topic back on track! +34
Audun 0 Newbie Poster

I tried that using Thonny, and I'm getting the same result.

I tried the same in the "CMD" that opens when I press Python 3.12, and what happened was it just said nothing and went back to an empty "line", or whatever. I suppose that's a good sign? It says syntax error a lot on there too.

What sort of programs do you use for this?

Reverend Jim 4,780 Hi, I'm Jim, one of DaniWeb's moderators. Moderator Featured Poster

What happens if you open a Python shell and just type "import cv2"? As I said in your other thread, the problem might be with Python 3.12. It imports OK under 3.10.

Audun 0 Newbie Poster

Sorry for the weird formatting...

Audun 0 Newbie Poster

Thanks. That sorted it.

Now, I have a new problem...

I tried the two variations of quotation marks, without the backslash, and got this:

%Run 'open cv - tot.py'
Traceback (most recent call last):
File "C:\Users\Audun Nilsen\open cv - tot.py", line 8
image = cv2.imread("C:\Users\Audun Nilsen\Pictures\417507225_372771342183154_3253415116081493518_n.jpg")

SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape

Then I did the r before, and I got this:

%Run 'open cv - tot.py'
Traceback (most recent call last):
File "C:\Users\Audun Nilsen\open cv - tot.py", line 1, in <module>
import cv2 # OpenCV for image processing
ModuleNotFoundError: No module named 'cv2'

I double-checked to see if I had installed it, and ... it was fine.

This is the guide I'm trying to get through:

https://finnstats.com/2024/01/17/python-for-image-based-data-mining/

Reverend Jim 4,780 Hi, I'm Jim, one of DaniWeb's moderators. Moderator Featured Poster

We "bashed" viewpoints, and never each other

As it should be. I wish this applied to the real world on a broader scale (we all know who I am talking about).

I would hope that's the way Jim took it too

Absolutely. Dani is a respected friend and I always appreciate her viewpoint whether I agree or disagree with it. And I have been convinced to change my mind from time to time. That's the great thing about informed opinions.

Dani 4,084 The Queen of DaniWeb Administrator Featured Poster Premium Member

I see it very differently than you see it.

I consider Jim a real-world friend of mine, as is his son, Adam, whom I had actually invited to my wedding.

I didn't bash Jim with my moderator, admin, or forum owner hat on. He contributed a post where I disagree with his viewpoint. He disagrees with mine. We had a public debate about it. I took the time to elaborately explain my perspective and thought process. I voted down one of his posts, just as he voted down one of mine. In the end, we both agreed to disagree. Everyone was respectful. We "bashed" viewpoints, and never each other. There were no personal attacks. (After all, we're friends debating a controversial topic, and I would hope that's the way Jim took it too.) Isn't that the epitome of a healthy conversation in an online discussion forum?

If the forum was indeed "governed by a 'my way or the highway' mindset," then I would moderate posts by members that disagree with my perspective, or contain these types of discussions to behind-the-scenes forums.

As a side note, I can give my own opinions as to the demise of DaniWeb, but I'll spare you ;)